News

Anthropic and the federal government will be checking to make sure you're not trying to build a nuclear bomb with Claude's ...
As part of its ongoing work with the National Nuclear Security Administration, the small but critical agency charged with ...
Artificial intelligence (AI) firm Anthropic has rolled out a tool to detect talk about nuclear weapons, the company said in a ...
Anthropic, an artificial intelligence (AI) start-up backed by Amazon and Google, has developed a new tool to stop its chatbot ...
Because savvy terrorists always use public internet services to plan their mischief, right? Anthropic says it has scanned an undisclosed portion of conversations with its Claude AI model to catch ...
With the US government’s help, Anthropic built a tool designed to prevent its AI models from being used to make nuclear weapons.
Anthropic's Claude AI now prohibits chats about nuclear and chemical weapons, reflecting the company's commitment to safety ...
Anthropic, in collaboration with the US government, has created an AI-powered classifier that detects and blocks nuclear weapons-related queries, aiming to prevent AI misuse in national security ...
Though we fortunately haven't seen any examples in the wild yet, many academic studies have demonstrated that it may be possible ...
Human judgement remains central to the launch of nuclear weapons. But experts say it’s a matter of when, not if, artificial ...
Amid growing scrutiny of AI safety, Anthropic has updated its usage policy for Claude, expanding restrictions on dangerous applications and reinforcing safeguards against misuse.