Washington, D.C., April 16, 2026 — The White House is taking a significant step toward deploying one of the most powerful — and controversial — AI models ever developed. The U.S. government is preparing to make a version of Anthropic's new AI model, Mythos, available to major federal agencies, even as officials acknowledge concerns that the tool could sharply increase cybersecurity risk. The move marks a pivotal moment in the relationship between frontier AI development and national security.

Gregory Barbaccia, federal chief information officer of the White House Office of Management and Budget (OMB), sent an email to top technology and cybersecurity officials at multiple Cabinet departments — including Defense, Treasury, Commerce, Homeland Security, Justice, and State — informing them that OMB is setting up protections to allow their agencies to begin using Mythos. The email told agency chiefs to expect more details "in the coming weeks," though no firm timeline or deployment plan was confirmed.

Mythos is being deployed as part of Anthropic's "Project Glasswing," a tightly controlled initiative under which select organizations — including tech giants like Nvidia, Microsoft, Google, and Apple, as well as major financial institutions — have been granted limited access to a preview version of the model for defensive cybersecurity purposes.

The reason for such careful handling is the model's extraordinary capability. Mythos has already identified thousands of major vulnerabilities in operating systems, web browsers, and other software. Its advanced coding capabilities give it an unprecedented ability to detect cybersecurity weaknesses — and potentially devise ways to exploit them. Within Anthropic, company leaders grew alarmed when testers used Mythos to uncover the types of critical bugs that would normally require the world's best hackers to find — prompting the decision to severely restrict its release.

Officials briefed on the model have described it in striking terms. Among national defense officials, the introduction of Mythos has created profound uncertainty about how to evaluate cybersecurity risk — with one person familiar with the matter comparing it to turning a conventional soldier into a special forces operator. The Bank of England is also reported to be holding urgent internal discussions with cybersecurity experts after previewing the model.

The White House's move comes despite a turbulent backdrop in Anthropic's relationship with the federal government. The Pentagon had declared Anthropic a supply chain threat — a designation normally reserved for foreign adversaries — over a dispute about AI safeguards. Anthropic subsequently won a court order blocking a ban on government use of its technology, arguing the restriction could cost it billions in lost revenue. Despite this friction, Anthropic co-founder Jack Clark confirmed the company has continued discussions with the Trump administration about Mythos.

Barbaccia's email states: "We're working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place before potentially releasing a modified version of the model to agencies."

The Mythos situation encapsulates a defining tension of the AI era: the same capabilities that make a model extraordinarily valuable for defense are precisely what make it dangerous in the wrong hands. How the White House and Anthropic navigate this balance will likely set a precedent for how frontier AI models are governed — and deployed — at the highest levels of government for years to come.