AI Firm Anthropic Seeks Weapons Expert to Prevent 'Catastrophic Misuse'

The U.S. artificial intelligence company Anthropic is seeking to hire a chemical weapons and high-yield explosives expert to prevent its AI tools from being misused in ways that could lead to catastrophic outcomes. The firm is concerned that its software might inadvertently provide instructions for creating chemical or radiological weapons and wants an expert to strengthen its safeguards.

In a LinkedIn recruitment post, Anthropic outlined the role, requiring candidates to have at least five years of experience in "chemical weapons and/or explosives defense" and knowledge of "radiological dispersal devices," commonly known as dirty bombs. The firm described the position as similar to roles it has already created in other sensitive areas.

Anthropic is not alone in this approach. OpenAI, the developer of ChatGPT, has also advertised a researcher position focused on "biological and chemical risks," offering a salary of up to $455,000, nearly double the figure offered by Anthropic.

However, some experts have raised concerns about the risks of this strategy, warning that it could expose AI systems to detailed information about weapons, even if the tools are instructed not to use it. Dr. Stephanie Hare, a tech researcher and co-presenter of the BBC's AI Decoded TV program, questioned the safety of using AI to handle sensitive information related to chemical and radiological weapons. She noted the absence of international regulations governing this type of work, emphasizing that the use of AI in these contexts is happening without oversight.

The AI industry has long warned about the potential existential threats posed by its technology, but efforts to slow its development have been limited. The urgency of the issue has increased as the U.S.
