On Wednesday, Anthropic launched Claude Mythos Preview, a new cybersecurity AI model, though it is not available to the public.
In a blog post, the company said, "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities."
The company estimates global cybercrime costs at around $500 billion a year.
According to Anthropic, its launch group for Mythos Preview includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
More than 40 other organizations that build or maintain critical software also got access. Anthropic said it will provide up to $100 million in usage credits and $4 million in direct support for open source security groups.
In its press release, Anthropic claims that Mythos Preview has found thousands of high-severity vulnerabilities across every major operating system and every major web browser.
One example was a 27-year-old flaw in OpenBSD that could let an attacker remotely crash a machine simply by connecting to it. Another was a 16-year-old flaw in FFmpeg, hiding in code that automated tools had exercised five million times without catching the issue.
The model also found and chained several flaws in the Linux kernel so an attacker could move from ordinary user access to full control of a machine.
For other bugs, Anthropic said it plans to publish cryptographic hashes now and reveal details once fixes are in place. The company added that the model found nearly all of those vulnerabilities, and built many of the related exploits, on its own.
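Publishing a hash now and the details later is a standard commitment scheme: the digest proves the finding existed at publication time without disclosing it. A minimal sketch of the idea, with an invented report string (the example does not reflect Anthropic's actual format or tooling):

```python
import hashlib

# Invented vulnerability report, standing in for an undisclosed advisory.
report = b"Example advisory: out-of-bounds read in parser, fix pending."

# Publish only the SHA-256 digest today.
commitment = hashlib.sha256(report).hexdigest()
print(commitment)

# Later, once a fix ships, release the full report; anyone can verify
# it matches the digest that was published earlier.
assert hashlib.sha256(report).hexdigest() == commitment
```

Because the digest is practically impossible to invert or forge a matching report for, readers can later confirm the disclosed details are what was committed to.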
On CyberGym, Mythos Preview scored 83.1% on vulnerability reproduction, compared with 66.6% for Claude Opus 4.6. VentureBeat separately reported 93.9% on SWE-bench Verified, versus 80.8% for Opus 4.6.
Anthropic then explained that recent frontier systems have cut the cost, effort, and skill needed to find and exploit security holes.
Under Project Glasswing, partners will use Mythos Preview for defensive work on internal systems and open source code.
Anthropic said the work will include local vulnerability detection, black box testing of binaries, endpoint security, and penetration testing.
After the research preview, participants will be able to access the model through the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry at $25 per million input tokens and $125 per million output tokens.
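At those rates, a back-of-the-envelope cost estimate is straightforward. The token counts below are invented for illustration; only the per-million prices come from the announcement:

```python
# Listed rates: $25 per million input tokens, $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical job: 200k input tokens, 50k output tokens.
print(cost(200_000, 50_000))  # -> 11.25
```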
The company also said it gave $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, plus $1.5 million to the Apache Software Foundation.
AWS said it analyzes more than 400 trillion network flows a day. Microsoft said the model showed gains on CTI-REALM. CrowdStrike said the gap between finding a flaw and exploiting it has collapsed. Google said it will make the model available through Vertex AI, while Palo Alto Networks said defenders need these tools before attackers get them.
The New York Times reported that late last year, Anthropic said state-backed Chinese hackers used its AI to target about 30 companies and government agencies, with human operators doing only 10% to 20% of the work.
The report also said attackers are already using AI to draft phishing emails, write ransom notes, sort stolen data, and speed up breach sales.