News

Anthropic and the federal government will be checking to make sure you're not trying to build a nuclear bomb with Claude's ...
Anthropic, an artificial intelligence (AI) start-up backed by Amazon and Google, has developed a new tool to stop its chatbot ...
Artificial intelligence (AI) firm Anthropic has rolled out a tool to detect talk about nuclear weapons, the company said in a ...
Because savvy terrorists always use public internet services to plan their mischief, right? Anthropic says it has scanned an ...
As part of its ongoing work with the National Nuclear Security Administration, the small but critical agency charged with ...
With the US government’s help, Anthropic built a tool designed to prevent its AI models from being used to make nuclear weapons.
Anthropic's Claude AI now prohibits chats about nuclear and chemical weapons, reflecting the company's commitment to safety ...
Anthropic, in collaboration with the US government, has created an AI-powered classifier that detects and blocks nuclear weapons-related queries, aiming to prevent AI misuse in national security ...
Anthropic, an AI start-up, has developed a tool for its chatbot Claude that prevents the AI from being used for harmful purposes such as creating nuclear weapons.
Though we fortunately haven't seen any examples in the wild yet, many academic studies have demonstrated it may be possible ...
Amid growing scrutiny of AI safety, Anthropic has updated its usage policy for Claude, expanding restrictions on dangerous applications and reinforcing safeguards against misuse.
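
Several of the snippets above describe the new safeguard as a classifier that screens queries before the model answers. As a purely illustrative sketch of that pattern (the function names, keyword list, and threshold below are all hypothetical; Anthropic has not published its classifier's design), a query-screening gate in a chat pipeline might look like:

# Hypothetical sketch of a classifier gate screening chat queries.
# None of this is Anthropic's actual classifier; a real system would
# call a trained model, not match keywords.

RISK_THRESHOLD = 0.9  # assumed: block only high-confidence matches to limit false positives

def score_nuclear_risk(message: str) -> float:
    """Stand-in scorer: returns a probability-like risk score that the
    message seeks nuclear-weapons assistance (toy keyword heuristic)."""
    suspicious = ("enrichment cascade", "weapons-grade", "implosion lens")
    hits = sum(term in message.lower() for term in suspicious)
    return min(1.0, hits / len(suspicious) + (0.5 if hits else 0.0))

def generate_reply(message: str) -> str:
    # Placeholder for the normal model call.
    return f"(model reply to: {message!r})"

def handle_chat(message: str) -> str:
    # Screen the query before it ever reaches the model.
    if score_nuclear_risk(message) >= RISK_THRESHOLD:
        return "This request can't be assisted with."
    return generate_reply(message)

if __name__ == "__main__":
    print(handle_chat("How do centrifuge enrichment cascades reach weapons-grade purity?"))  # blocked
    print(handle_chat("What is the history of the Nuclear Non-Proliferation Treaty?"))       # allowed

The design choice the coverage hints at is that the gate sits in front of the model rather than inside it, so flagged conversations can be blocked or escalated without retraining the chatbot itself.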