Anthropic is urging the White House to take immediate action on AI security and its economic implications, warning that rapid advances in artificial intelligence demand stronger safeguards and enhanced monitoring. The company calls for a proactive approach, including classified information sharing, to ensure the U.S. maintains its technological leadership while mitigating potential risks.
The push for urgent action comes as the U.S. faces growing global competition for AI supremacy. Anthropic stresses that without decisive measures, the country risks losing its technological edge while critical national security vulnerabilities remain unaddressed. With AI poised to reshape entire industries, Anthropic argues that timely government intervention is essential to staying ahead.
What Anthropic wants from the White House on AI
In a detailed security response to the White House Office of Science and Technology Policy (OSTP), Anthropic has advocated for comprehensive testing protocols to evaluate AI systems for potential biosecurity and cybersecurity risks before deployment. The AI company warned that its own testing revealed disturbing improvements in Claude 3.7 Sonnet’s capacity to support aspects of biological weapons development.
This security-focused strategy comes as Anthropic projects powerful AI systems with “intellectual capabilities matching or exceeding Nobel Prize winners” could emerge as soon as 2026.
The company also highlighted pressing energy infrastructure challenges, projecting that by 2027, training a single advanced AI model will require approximately five gigawatts of power. Anthropic urged the U.S. administration to establish an ambitious national target to add 50 gigawatts of power dedicated to the artificial intelligence industry within three years.
The generative AI leader cautioned that neglecting these energy requirements could force U.S. AI developers to relocate operations overseas, potentially transferring the foundation of America’s AI economy to foreign competitors.
Classified AI sharing: Anthropic CEO’s security solution
To address potential national security threats from AI, Anthropic CEO Dario Amodei has recommended that the U.S. government establish classified communication channels between AI companies and intelligence agencies. Amodei emphasized the importance of sharing security information to prevent the misuse of powerful AI systems, which could be exploited to launch cyberattacks or to control physical systems such as lab equipment or manufacturing tools.
How Anthropic’s stance affects organizations
As Anthropic urges the White House to adopt tougher AI security measures, organizations must recognize the gravity of the situation. The growing risk of adversarial access to high-performance AI systems demands immediate, proactive steps. Preparing for stricter security protocols now could prevent costly disruptions later and safeguard enterprise infrastructure.