The open-source supply chain that the AI industry built its production infrastructure on has a mutable-tag problem. Until that problem is fixed, compromised packages are a when, not an if. The TeamPCP campaign, which on March 24, 2026 poisoned two versions of the LiteLLM Python package on PyPI with a credential-harvesting payload, is a concrete example of that structural risk. It is not an outlier.
The attack chain is documented in detail by Snyk. On March 19, TeamPCP rewrote GitHub Action tags for Trivy, a popular security scanning tool, to point to a malicious release. That gave the attackers a way to harvest credentials from LiteLLM's continuous integration and deployment pipeline. With those credentials, they published litellm versions 1.82.7 and 1.82.8 to PyPI. The exfiltration domain, models.litellm.cloud, was registered on March 23, the day before the malicious packages went live. The payload sent environment variables, including API keys, credentials, and tokens, to that domain.
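The mutable-tag step is worth dwelling on, because Git makes it trivial. The sketch below, run in a throwaway local repository, shows a tag being silently repointed; in a real attack the equivalent final step would be a force-push of the moved tag (`git push -f origin v1`) on the action's repository.

```shell
# Demonstration in a throwaway local repository: a Git tag is a movable
# pointer, not an integrity guarantee.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "benign release"
git tag v1                     # consumers pin to "v1"
benign=$(git rev-parse v1)
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "malicious release"
git tag -f v1                  # same name, different code; no log, no warning
moved=$(git rev-parse v1)
[ "$benign" != "$moved" ] && echo "tag v1 silently moved"
```

Anyone who resolves `v1` after the force-push gets the new commit; nothing in the tag itself records that it ever pointed elsewhere.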
Lapsus$, the group that claimed credit, said it stole 4 terabytes of data from Mercor, including 939 gigabytes of source code, and published what it claimed were internal communications and customer information tied to OpenAI and Anthropic. Mercor, founded in 2023, works with both companies to train AI models by contracting specialized domain experts such as scientists, doctors, and lawyers, and was valued at $10 billion following a $350 million Series C led by Felicis Ventures in October 2025. The company says it facilitates more than $2 million in daily payouts to those contractors, a figure that has not been independently verified. The exposure of that contractor data is a harm distinct from the source code theft.
The attack was straightforward in concept: harvest CI/CD credentials through a compromised dependency, publish a poisoned package to PyPI, and wait for downstream ingestion. The enabling condition was GitHub Actions' reliance on mutable tags without transparency logs, combined with the pull_request_target trigger, which runs workflows with the base repository's secrets and becomes dangerous the moment a workflow checks out untrusted pull request code. Chainguard CEO Dan Lorenc called the design "plain irresponsible today" in a SANS interview, noting that it ignores a decade of supply chain security work in other ecosystems.
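The risky pattern is easy to reproduce. The workflow below is a schematic, not LiteLLM's actual configuration; the script path and secret name are illustrative.

```yaml
# Risky pattern: pull_request_target runs with the base repository's secrets.
on: pull_request_target
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      # Checking out the PR head hands untrusted code a privileged context.
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # A build script modified in the PR can now read these secrets.
      - run: ./ci/build.sh
        env:
          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
```

Pinning third-party actions to a full commit SHA rather than a tag removes the mutable-tag vector; avoiding pull_request_target, or never checking out PR code under it, removes the privileged-context one.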
LiteLLM is an open-source proxy and gateway that routes requests to over 100 LLM providers, including OpenAI, Anthropic, Azure OpenAI, Google Vertex AI, and AWS Bedrock. It has over 40,000 GitHub stars and roughly 95 million monthly downloads on PyPI, making it one of the most widely deployed libraries in the AI infrastructure stack. Wiz, which monitors cloud environments for enterprise customers, found LiteLLM in 36 percent of the environments it tracks. Mandiant, Google's incident response subsidiary, said that as of early April more than 1,000 SaaS environments were actively dealing with cascading effects from the campaign. The compromise is tracked as CVE-2026-33634, with a CVSS score of 9.4 out of 10. TeamPCP is also known as PCPcat, Persy_PCP, ShellForce, and DeadCatx3, per the Wiz Threat Center.
LiteLLM has responded by changing its compliance certification provider from Delve to Vanta, according to TechCrunch. That addresses a certification process. It does not address the structural vulnerability in how the AI industry consumes open-source infrastructure.
During the same window in which the LiteLLM packages were poisoned, a separate incident exposed Claude Code's source code via a misconfigured npm distribution. As we reported last week, approximately 500,000 lines from Anthropic's coding assistant leaked publicly via a source map. Security researcher Chaofan Shou was the first to flag it on X, where the post reached 28.8 million views. Both incidents landed in the final days of March 2026. Both involved human error in release pipelines. Both touched infrastructure that AI companies depend on. The timing is coincidental; the pattern is not.
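Source maps leak code by design: the standard version 3 format's "sourcesContent" field embeds the original, unminified sources verbatim so debuggers can display them. A minimal sketch, with a hypothetical file name and contents, shows how directly that recovery works:

```python
# Illustrative: a source map parsed as it would be from a published .map file.
# The file path and code below are hypothetical, not from Claude Code.
import json

source_map = json.loads("""{
  "version": 3,
  "sources": ["src/agent.ts"],
  "sourcesContent": ["export const INTERNAL_ROUTE = '/internal/v1';"],
  "mappings": "AAAA"
}""")

# Original sources come back verbatim; no reverse engineering required.
recovered = dict(zip(source_map["sources"], source_map["sourcesContent"]))
for path, code in recovered.items():
    print(f"recovered {path}: {len(code)} characters of original source")
```

Publishing the map alongside a minified bundle is therefore equivalent to publishing the source tree it was built from.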
The mutable-tag problem in GitHub Actions has been documented by security researchers for years. GitHub has taken steps, but the underlying architecture persists across thousands of repositories. Until immutable references and transparency logs are the default, a compromised dependency anywhere in the AI stack can propagate credentials across hundreds of production environments in hours. LiteLLM routes to 100+ model providers. That routing is the feature. It is also the blast radius.
For operators: pin your dependencies. Audit your GitHub Actions. Treat third-party CI/CD access as equivalent to production access.
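A sketch of that checklist in practice. The workflow file below is an inline demo, and the checkout SHA in it is illustrative; the pip commands are shown as comments because they operate on your own requirements files.

```shell
# Pin Python dependencies by hash, not just version; hash-locked installs
# fail closed if a published artifact changes:
#   pip-compile --generate-hashes requirements.in   # from pip-tools
#   pip install --require-hashes -r requirements.txt
#
# Audit GitHub Actions: flag `uses:` lines not pinned to a full 40-character
# commit SHA. Demo input is inline; run the grep against .github/workflows/.
cat > demo-workflow.yml <<'EOF'
      - uses: aquasecurity/trivy-action@master
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
EOF
grep -n 'uses:' demo-workflow.yml | grep -vE '@[0-9a-f]{40}\b'
```

The grep surfaces only the tag-pinned reference; the SHA-pinned one passes. Treating CI/CD tokens as production credentials means the same rotation, scoping, and monitoring you would apply to a production database password.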