US Weighs Risks And Benefits Of Anthropic Technology

By Cointribune EN
about 1 hour ago

Can a technology deemed too risky become a strategic asset overnight? In Washington, the Anthropic case illustrates this shift. After being excluded from federal systems, the AI company is back at the center of discussions, its capabilities drawing as much interest as concern. Between military requirements, cybersecurity stakes, and the ethical limits asserted by its leaders, the US administration is now seeking a middle path in a highly sensitive case.

In brief

  • The White House is reconsidering its stance after banning Anthropic over strategic security concerns.
  • The company is in conflict with the Pentagon over the military use of its AI models.
  • Washington is now considering a directive to allow a controlled return of Anthropic within federal agencies.
  • The Claude Mythos model is emerging as a key cybersecurity tool due to its ability to detect and exploit vulnerabilities.

A ban contested behind the scenes in Washington

In February, the administration of Donald Trump classified Anthropic as “a risk to the security of the supply chain,” ordering federal agencies to immediately cease any use of its technologies, with a six-month deadline to phase them out.

The decision followed CEO Dario Amodei’s refusal to grant unrestricted access to the models for military uses. The president justified the position by declaring: “The United States of America will never allow a radical left company, said to be woke, to dictate how our great army fights and wins its wars!”

Today, the White House is working on a directive likely to alter this trajectory. According to several sources, the objective is to let federal agencies resume using Anthropic’s solutions while avoiding a political about-face. The shift comes amid intensive discussions between political leaders and business figures, a sign that the stakes now extend beyond the purely technological.

Here are some key facts:
• The administration imposed an immediate ban, with a six-month deadline to abandon Anthropic;
• The conflict stems from the company’s refusal to open its models to unrestricted military use;
• A new directive would aim to bypass the “risk to the security of the supply chain” classification;
• Meetings involved key figures such as Susie Wiles and Scott Bessent, as well as major banks.

Mythos, an AI at the heart of security tensions

At the center of this potential reversal is Mythos, Anthropic’s most advanced model unveiled in March. Designed to detect and exploit software vulnerabilities, it is already attracting the attention of public and private institutions.

The AI Security Institute reports that Mythos Preview is the first model to succeed at “The Last Ones” (TLO), a 32-step network-attack simulation that normally requires about 20 hours of human work. Mozilla, for its part, says it has identified and fixed 271 flaws in its browser using the technology, describing a sense of vertigo at the scale of the discoveries: “many other teams now feel the same dizziness we did when these results began to clearly appear.”

These capabilities explain the growing interest of government agencies, despite the official restrictions. The National Security Agency is reportedly already using Claude Mythos Preview on classified networks, a sign of pragmatic adoption given the stakes. At the same time, major financial institutions such as Citigroup and Goldman Sachs have been brought into discussions on the risks posed by such an AI, capable of both securing and compromising critical systems.

The outcome of this case could redefine the relationship between technology companies and the state, as artificial intelligence capabilities outgrow traditional control frameworks. Between national security requirements and the ethical red lines set by developers, the compromise remains fragile. The next decisions in Washington will be closely watched, as they could shape the global governance of AI and its use in sensitive fields.
