Anthropic Rolls Out Limited Preview of New AI, Says It Found Thousands of Old Software Flaws
- Andrej Botka
- Apr 8
- 2 min read

Anthropic on Tuesday released a limited preview of a powerful new AI model, dubbed Mythos, and said it will place the system with a small group of industry partners to hunt for software weaknesses as part of a new program called Project Glasswing. The company said Mythos — which did not undergo special training for security tasks — has already flagged several thousand previously unknown vulnerabilities, including many the company described as critical and roughly 10 to 20 years old. Access will be restricted: 12 partner firms will participate directly and about 40 additional organizations will receive early access, Anthropic said.
Mythos is being positioned as a general-purpose upgrade to the company’s Claude lineup, with Anthropic portraying it as a stronger performer on coding and reasoning tasks than its public models. Company officials said they plan to use the model to scan both proprietary and open-source codebases, identifying and prioritizing flaws for remediation rather than exploitation. The preview will not be broadly released while Anthropic and its partners evaluate safety controls and operational limits.
The roster of participating partners includes several major technology and security companies. Anthropic said these collaborators will share lessons learned from their testing so that the wider software ecosystem can benefit, though the firm emphasized that the initial deployment is focused on protective work. Outside observers noted that the early sharing arrangement could speed up patching for widely used infrastructure, but it also leaves smaller vendors dependent on the pace and judgment of larger firms.
Anthropic’s announcement comes after the model’s existence was disclosed in an accidental data exposure last month, when internal documentation was left in an unsecured repository. The company acknowledged the exposure and said the incident stemmed from human error. Around the same time, Anthropic accidentally exposed portions of its codebase and triggered takedowns of repositories as it tried to remove leaked material, events that raised questions about its operational safeguards.
The rollout occurs amid heightened scrutiny from U.S. officials. Anthropic said it has been in talks with federal agencies about Mythos and its uses, even as the company contests a government determination that flagged it as a supply-chain risk over limits Anthropic set on certain military capabilities. Cybersecurity researchers say models with strong code-auditing abilities can be beneficial when directed toward fixes, but they also warn the same techniques could be misused to locate and weaponize bugs if controls fail. One security academic interviewed for this story said the key test will be governance: who gets access, how findings are handled, and whether disclosure policies are enforced.
For now, Anthropic’s approach is to keep Mythos in guarded hands while partners test its potential to harden software. That could produce faster identification of long-standing vulnerabilities that have eluded traditional scans, but experts caution that narrow distribution and complex oversight will determine whether the program improves security widely or concentrates risk among a handful of players.