Anthropic Is Funding the Foundations Whose Code Its Own AI Found Vulnerable
Financial regulators across four jurisdictions are now monitoring a single AI model. Anthropic built that model, sells access to banks, and funds the foundations whose code it found vulnerable.

Anthropic's Mythos AI, which autonomously discovers software vulnerabilities, has triggered unprecedented regulatory scrutiny from financial authorities across Asia-Pacific and a first-ever US supply chain risk designation for an American AI lab. The company's Project Glasswing arrangement creates a structural conflict of interest: Anthropic sells vulnerability detection tools to banks while simultaneously funding the open-source foundations (Apache, OpenSSF) in whose code Mythos found flaws, raising concerns about who controls remediation standards. Testing by the UK AI Security Institute found Mythos effective against corporate networks but ineffective against critical infrastructure control systems.
- Financial regulators in Singapore, Australia, Hong Kong, and South Korea are jointly monitoring Anthropic's Mythos, marking the first time a single AI model is treated as a systemic financial risk.
- Anthropic's $4 million funding of open-source foundations whose code Mythos identified as vulnerable creates a conflict of interest, as the company may influence what counts as "remediated."
- The US government designated Anthropic a supply chain risk, the first such designation for an American AI lab, after a federal appeals court rejected Anthropic's attempt to pause the classification.


