PyPI pulled two malicious versions of LiteLLM from its registry on March 24, 2026, but not before the packages racked up some 46,996 downloads during a 46-minute window — creating what security researchers are calling one of the most structurally dangerous supply chain incidents in the AI tooling ecosystem's short history.
The technical payload was not subtle. Version 1.82.8, published at 10:52 UTC alongside 1.82.7, introduced a file called litellm_init.pth at the root of the package wheel. Python processes .pth files in site-packages on every interpreter startup — before any application code runs, before any import, before a developer's IDE even finishes indexing. You didn't have to import LiteLLM. You just had to have it installed — which in practice meant the payload ran every time any Python process started. The attack worked during pip install, inside IDE language servers, inside test runners. "Python .pth files placed in site-packages are processed automatically by site.py at interpreter startup," Endor Labs noted in its technical analysis, calling it "a documented but unpatched CPython risk" with its own tracked issue.
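The mechanism is easy to reproduce in miniature. The sketch below is my own illustration, not code from the incident — only the litellm_init.pth filename comes from the reporting. It writes a one-line .pth file into a temporary directory and asks the site module to process it, the same code path the interpreter runs against site-packages at startup:

```python
import os
import site
import tempfile

# Create a throwaway directory standing in for site-packages.
d = tempfile.mkdtemp()
pth_path = os.path.join(d, "litellm_init.pth")  # name mirrors the malicious file

# site.py exec()s any .pth line that begins with "import" -- nothing ever has
# to import the package itself for this line to run.
with open(pth_path, "w") as f:
    f.write("import os; os.environ['PTH_RAN'] = '1'\n")

# addsitedir() processes .pth files exactly as interpreter startup does.
site.addsitedir(d)
print(os.environ.get("PTH_RAN"))  # -> 1
```

A benign environment variable stands in for the payload here; in the real attack, that one exec'd line was the foothold for everything that followed.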
The payload, once triggered, harvested SSH keys, cloud credentials, Kubernetes secrets, .env files, and cryptocurrency wallet data, bundling everything into an encrypted archive named tpcp.tar.gz and exfiltrating it to models.litellm.cloud — a domain registered on March 23, one day before the packages went live. According to Simon Willison's analysis of PyPI download telemetry, 88 percent of the 2,337 packages depending on LiteLLM had no version pins — no upper-bound constraint that would have kept the malicious release out. The math was brutal: an attacker gets one shot at a fast-moving dependency and wins unless someone has explicitly locked their requirements.
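A toy illustration of why a floor-only requirement is enough to lose. This is not pip's actual resolver — real resolution follows PEP 440 via the `packaging` library — and the version list is illustrative apart from the two malicious releases named in the advisories:

```python
# Toy stand-in for pip's version selection: pick the newest release that
# satisfies the declared requirement.

def as_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

available = ["1.82.6", "1.82.7", "1.82.8"]  # .7 and .8 were the malicious pair

# A floor-only requirement like "litellm>=1.0" accepts anything newer,
# so the resolver happily picks a release published minutes ago.
floor = "1.0"
eligible = [v for v in available if as_tuple(v) >= as_tuple(floor)]
print(max(eligible, key=as_tuple))              # 1.82.8

# An exact pin ("litellm==1.82.6") only ever matches the vetted release.
print([v for v in available if v == "1.82.6"])  # ['1.82.6']
```

The asymmetry is the whole story: a floor constraint delegates the choice of version to whoever published most recently.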
The root cause was a supply chain compromise inside LiteLLM's own build pipeline. LiteLLM's security update acknowledges: "We believe that the compromise originated from the Trivy dependency used in our CI/CD security scanning workflow." Trivy, a widely-deployed container and dependency vulnerability scanner, had itself been backdoored in an earlier TeamPCP operation, allowing attackers to steal the PyPI publishing credentials LiteLLM used to push the malicious versions. The same RSA-4096 public key appears across the Trivy, KICS, and LiteLLM operations — the strongest attribution link tying all three incidents to the same actor.
TeamPCP — also tracked as PCPcat, Persy_PCP, ShellForce, and DeadCatx3 — is not a typical script kiddie operation. Snyk's analysis documents a campaign spanning at least five ecosystems since December 2025: GitHub Actions, Docker Hub, npm (via a separate worm called CanisterWorm), OpenVSX, and now PyPI. The group operates Telegram channels and publishes what appears to be internal tooling documentation. But what separates TeamPCP from typical package poisoners is their use of an AI agent.
StepSecurity documented an autonomous component called hackerbot-claw that, per its report, "used 5 different exploitation techniques and successfully exfiltrated a GitHub token with write permissions." The agent self-identifies in HTTP headers as "autonomous security research agent powered by claude-opus-4-5" and scans GitHub repositories for exploitable pull_request_target workflows, opening pull requests to trigger them. It compromised microsoft/ai-discovery-agent, DataDog, CNCF's project-akri, and aquasecurity/trivy — achieving remote code execution in at least four of seven targets over a 10-day period. The same AI model family powering defensive security research is being used offensively to find and exploit the pipelines that ship the tools developers depend on.
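The workflow class hackerbot-claw hunts for is a well-documented footgun: pull_request_target runs with the base repository's secrets, so a workflow that also checks out the pull request's own code hands those secrets to whatever the fork's build scripts do. A schematic of the vulnerable pattern — names are illustrative and not taken from any of the compromised repositories:

```yaml
name: ci
on: pull_request_target        # runs in the BASE repo's context, secrets included
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # ...but this checks out the FORK's code, which the attacker controls
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm install && npm test   # attacker-controlled scripts now run
                                       # with access to the repo's secrets
```

Opening a pull request is enough to trigger the job — exactly what the agent automates at scale.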
Wiz threat researchers told The Hacker News they are "seeing a dangerous convergence between supply chain attackers and high-profile extortion groups like LAPSUS$," with lead researcher Ben Read noting that the stolen credentials from tens of thousands of LiteLLM environments create a direct pathway to the next wave of compromises.
The discovery came from an unexpected direction. FutureSearch, a security tooling company, was testing a Cursor MCP plugin that pulled LiteLLM as a transitive dependency when the .pth file triggered a subprocess that re-triggered the same .pth — an unintentional fork bomb that crashed the engineer's machine and led directly to the malicious file. The malware had a bug that helped find it. The finder's blog post spread through r/LocalLLaMA, r/Python, and Hacker News within the hour.
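A defensive corollary, not drawn from the incident write-ups: because site.py only exec()s .pth lines that begin with "import", auditing an environment reduces to listing those lines. Note that legitimate packages (setuptools shims, editable installs) use the same trick, so hits warrant review, not automatic deletion:

```python
import os
import site
from pathlib import Path

def suspicious_pth_lines(paths=None):
    """Return (file, line number, line) for every executable .pth line."""
    if paths is None:
        paths = site.getsitepackages() + [site.getusersitepackages()]
    hits = []
    for d in paths:
        if not os.path.isdir(d):
            continue
        for pth in Path(d).glob("*.pth"):
            for n, line in enumerate(pth.read_text(errors="replace").splitlines(), 1):
                # site.py exec()s lines starting with "import " or "import\t";
                # all other lines are treated as plain sys.path entries.
                if line.startswith(("import ", "import\t")):
                    hits.append((str(pth), n, line.strip()))
    return hits

for path, lineno, line in suspicious_pth_lines():
    print(f"{path}:{lineno}: {line}")
```

Had something like this run routinely, litellm_init.pth would have stood out as a new executable line in an otherwise stable list.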
Not everyone was exposed. Users running the official LiteLLM Proxy Docker image were not affected — that deployment path pins dependencies in requirements.txt, a contrast so stark it amounts to free advice. LiteLLM has engaged Google's Mandiant forensic team for analysis of the build and publishing chain.
The version pinning problem is structural. LiteLLM sees roughly 3.4 million downloads per day, according to Snyk — present in an estimated 36 percent of all cloud environments — and the vast majority of its dependents did not constrain their resolver. DSPy, MLflow, OpenHands, CrewAI, and Arize Phoenix all filed emergency version-lock pull requests within hours of disclosure. The response was fast. The underlying condition that made the attack possible — pip's default willingness to resolve to whatever is newest — remains untouched.
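For dependents that want the Docker image's property everywhere, the mechanical fix is an exact pin, ideally with hash checking so even a re-published artifact under the same version number is rejected. An illustrative requirements.txt entry — the version and hash below are placeholders, not a vetted-release recommendation:

```
# Exact pin with hash-checking, as produced by `pip-compile --generate-hashes`.
# Placeholder values only: substitute the release you have actually vetted.
litellm==1.82.6 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With hashes present, pip runs in hash-checking mode and refuses any distribution whose digest doesn't match — closing both the "newest wins" gap and the re-upload gap at once.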
Gal Nagli of Wiz put it plainly: "The open source supply chain is collapsing in on itself. Trivy gets compromised → LiteLLM gets compromised → credentials from tens of thousands of environments end up in attacker hands → and those credentials lead to the next compromise." The chain is real. What changed in this case is that the chain ran through the tools developers use to build AI applications — and the attacker used an AI model to help build it.
The packages were live for approximately three hours before PyPI quarantined them. Mandiant's forensic work is ongoing.