準確傳達網路風險,才能讓利害關係人採取可促進業務價值的行動 Facebook Google Plus Twitter LinkedIn YouTube RSS 功能表 搜尋 資源 - 部落格資源 - 網路研討會資源 - 報告資源 - 活動icons_066 icons_067icons_068icons_069icons_070

Why Google’s Warning Highlights Critical Risk of AI Context-Injection Attacks



Why Google’s Warning Highlights Critical Risk of AI Context-Injection Attacks

Google, with its unparalleled visibility into Gemini, recently alerted its legion of Gmail users about indirect prompt attacks, which exploit AI context sources like emails, calendar invites and files. Coming from a major AI vendor, the frank and direct public alert leaves no doubt that organizations face a tangible AI security threat from this type of attack against generative AI tools. 

While the risk from indirect prompt attacks has been discussed publicly for many months, we believe that a turning point in the conversation occurred when Google publicly and candidly warned its vast Gmail user base about this critical AI threat.

“As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,” Google alerted its users in the June blog “Mitigating prompt injection attacks with a layered defense strategy.

That blog has generated ample discussion, due to the significance of a major AI vendor acknowledging such a widespread risk and emphasizing that the threat has never been more urgent.

Why this matters to your company

Prompt injection is a type of attack where malicious instructions are inserted directly into an AI model’s input to override its intended behavior and achieve a malicious objective. A more manipulative variant, called indirect prompt injection, plants those instructions within external content - outside the user’s prompt such as files, emails, or web pages, that the model later uses as context. This allows attackers to manipulate the model’s outputs without the user’s awareness.

Anyone outside your organization can easily send an email, calendar invite, or file containing such adversarial prompts. Once processed by a generative AI tool such as Google’s Gemini AI, these malicious prompts can lead to data exfiltration, output manipulation, workflow hijacking, or even harmful content generation.

The most concerning aspect? Neither the user nor security teams may realize it’s happening. This is amplified by the fact that very few organizations have any semblance of AI security controls in place, so the industry is mostly relying on people and process to detect, prevent and address this type of attack.

What could happen

Imagine this: a malicious calendar invite arrives from outside the organization. Hidden in the invite is a prompt injection directing the LLM to falsify the company’s revenue, multiplying it by 10x every time it’s queried.

Now imagine the C-suite, finance, and accounting teams relying on Gemini or a similar tool to prepare quarterly reports - a growing reality among Fortune 500 companies eager to accelerate operations. 其結果是?The company could inadvertently publish fraudulent revenue numbers, triggering disastrous financial and reputational consequences.

This threat extends beyond Google

This is not limited to Gemini. Any context-aware AI system is vulnerable.

An Outlook email may look like a standard push notification, but invisible characters could jailbreak the AI model when Microsoft Copilot references the email. The jailbreak could force the model to recommend a specific tool, regardless of its original purpose. That tool could then be abused to transfer money from a company account to an attacker’s account.

This is not just phishing. It’s the 2025 evolution of ransomware: AI-driven, invisible and context-based. This makes it critical to understand which AI systems are in your environment, which data is connected to them, what types of user interactions they involve, and which actions they can take. 

Why this threat class is so hard to detect

Context injection is uniquely challenging for several reasons:

  • Answers seem legitimate: Users often believe they received the correct output.
  • Agent actions are overlooked: Automated workflows appear normal and are rarely audited for the actual outcomes.
  • Techniques vary widely: Prompt injections and jailbreaks evolve constantly, making pattern-based detection unreliable.
  • Context dependency: Attacks adapt to the specific input data and desired outcome, making them even harder to anticipate. 

Responsibility starts here, but protection goes further

Google took a responsible step by warning its users about this threat, surfacing context-poisoning techniques and outlining concrete steps it’s taking to harden Gemini’s security. 

But history shows us that simpler adversarial techniques have repeatedly bypassed vendor guardrails. That’s where specialized solutions come in.

With Tenable AI Exposure, part of the Tenable One Exposure Management Platform, we scan AI context sources directly -- identifying injected data before it ever reaches the model. By preventing poisoned context from being processed in the first place, we stop severe security incidents before they happen.

Learn more about how Tenable protects your organization from context injection attacks.


您可以利用的網路安全最新消息

輸入您的電子郵件,就不會錯過來自 Tenable 專家提供的及時警示與安全指引。

× 聯絡我們的銷售團隊