A web page tried to make an AI agent send $5,000 through PayPal. The instruction was buried in the page's content, invisible to the human reading it but readable by any AI the browser had been given permission to use.
That is one of ten confirmed payloads documented by Forcepoint X-Labs, the threat research unit at the security firm Forcepoint, in an active hunt across live web infrastructure. The payloads do not require the AI to be broken or misconfigured. They work by exploiting what the AI is supposed to do: follow instructions embedded in the pages it reads. Google Threat Intelligence, meanwhile, scanned Common Crawl, a corpus of roughly 2 to 3 billion pages, and found a 32 percent relative increase in malicious indirect prompt injections between November 2025 and February 2026.
The Forcepoint payloads are the more alarming half of the picture. Beyond the PayPal transaction, structured with exact amount, exact URL, and exact steps, indicating a weapon meant to run immediately rather than be tested, the list includes a deletion command targeting AI coding assistants: a hidden Unix instruction, sudo rm -rf, aimed at GitHub Copilot, Claude Code, and Cursor. A third payload exfiltrated API keys by redirecting a browsing AI to a private endpoint. Fake copyright claims designed to suppress content, and SEO hijacking via invisible text, also appeared, all in Forcepoint's report.
What makes these attacks invisible to existing security tooling is not their sophistication but their mechanism. Google's post notes that standard tools flag suspicious network traffic, malware signatures, and login anomalies. An indirect prompt injection generates none of those. When the AI executes the injected command, it uses credentials it legitimately holds and permissions it was explicitly given. The AI was told it could read the HR database and send emails, and it reads the HR database and sends emails. The security stack registers authorized activity.
The impact scales with what the AI is allowed to do. As Forcepoint wrote, a browser AI limited to summarizing pages has little to repurpose. An agentic AI that can send emails, run terminal commands, or process payments is a high-value target. The more capabilities the user has handed to the AI, the more an attacker can subvert through a poisoned page.
The problem itself is not new. OWASP's LLM Top 10, Lakera AI, and a 2025 Google DeepMind paper all treated indirect prompt injection as a core model-security risk before these live payloads surfaced. DeepMind also described its safeguard work in a companion blog post. What is new is the evidence that attackers are already planting instructions on real sites, not just in lab demos.
Google's researchers found shared injection templates across multiple domains, suggesting organized tooling rather than isolated experiments. Secondary coverage from Help Net Security and WebProNews focused on the rise in poisoned pages, but the Forcepoint payloads are what turn the warning into an enterprise deployment problem.
Standard prompt injection defenses, such as filtering training data or clamping model outputs, do not apply here. The malicious instruction arrives at inference time inside a page the AI has been asked to read. The attack surface is the browser tab, not the model. Security teams have not built detection for it because the behavior looks identical to authorized activity.
What to watch next: whether major AI providers change how their agents process instructions embedded in third-party content, and whether enterprise security vendors treat this as a product gap worth closing or a class of risk they leave to customers to sort out themselves.