Claude, Anthropic’s AI chatbot, was facing a service disruption on Monday, according to Downdetector, which monitors such issues through user reports.
Is Claude down, and when will it be back up?
“We have identified that the Claude API is working as intended. The issues we are seeing are related to Claude.ai and with the login/logout paths. We are continuing to investigate this issue,” the message on Claude’s website said.
Users in several cities, including Delhi, Ahmedabad, Mumbai, and Hyderabad, reported issues, according to Downdetector.
The Trump administration on Friday directed all US government agencies to halt the use of Anthropic’s AI systems and imposed additional penalties, escalating a highly visible dispute between the government and the company over AI safety standards, according to PTI.
President Donald Trump, Defense Secretary Pete Hegseth and other officials criticised Anthropic on social media for not granting the military unrestricted access to its AI tools by the Friday deadline. They accused the company of putting national security at risk after CEO Dario Amodei declined to reverse his stance, citing concerns that the technology could be used in ways that breach its built-in safeguards.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote on social media.
Hegseth also labelled the company a “supply chain risk,” a term usually applied to foreign adversaries. The designation could jeopardise the firm’s key partnerships with other companies.
In a statement released Friday night, Anthropic said it would contest what it described as an unprecedented and legally flawed move “never before publicly applied to an American company.”
Anthropic had previously said it was seeking limited guarantees from the Pentagon that its AI chatbot, Claude, would not be employed for mass surveillance of Americans or in fully autonomous weapons systems. The Pentagon responded that it had no intention of using the technology in those ways and would apply it only within legal bounds, but it maintained that it required unrestricted access.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court,” the company said.
The government’s attempt to exert greater control over the company’s internal decision-making is unfolding amid a broader dispute over the role of artificial intelligence in national security, including worries about how increasingly advanced systems might be deployed in high-risk contexts involving lethal force, classified data or state surveillance.