Nvidia CEO Jensen Huang Condemns Comparison of China Chip Sales to Nuclear Arms Deals Nvidia’s chief executive, Jensen Huang, has dismissed the notion that selling advanced chips to China is akin to transferring nuclear weapons to North Korea, calling such comparisons “lunacy.” The remark came in response to Anthropic CEO Dario Amodei, who in a January essay likened the practice to “selling nuclear weapons to North Korea and then bragging that the missile casings are made by Boeing.” Huang’s defense of his company’s strategy to expand into the Chinese market has sparked heated debate in the tech industry, with critics warning of security risks and proponents emphasizing the economic opportunities. Amodei, a vocal opponent of U.S. companies selling advanced chips to China, argued in his essay that such sales would give China an unfair advantage during a critical period for its AI development. He warned that China’s capacity to produce frontier chips in large quantities lags behind the U.S., and that providing it with access to cutting-edge technology could accelerate its rise as a global AI power. “There is no reason to give a giant boost to their AI industry during this critical period,” Amodei wrote, highlighting concerns about the long-term implications for U.S. technological dominance. Huang, however, has consistently defended the decision to sell chips in China, framing it as a necessary step to maintain U.S. influence in the global AI landscape. During a recent episode of the Dwarkesh Podcast, he sharply rebuked Amodei’s analogy, emphasizing that chips are not equivalent to nuclear materials: “We’re not enriched uranium.” #dario_amodei #nvidia #jensen_huang #anthropic #us_china_relations

Anthropic Gains UK Support Amid Pentagon Dispute; London Mayor Backs AI Expansion The UK government has reportedly stepped in to support Anthropic, a US-based artificial intelligence company, amid escalating tensions with the US Department of Defense. This move comes after the Pentagon labeled Anthropic a supply-chain risk, prompting the company to face increased scrutiny from US authorities. London Mayor Sadiq Khan has sent a letter to Anthropic’s CEO, Dario Amodei, expressing confidence in the UK’s ability to provide a stable and innovation-friendly environment for the company’s growth. The outreach reflects broader efforts by the UK to position itself as a key player in the global AI landscape, offering regulatory stability and investment opportunities amid geopolitical tensions. The UK’s engagement with Anthropic is part of a larger strategy to bolster its “AI sovereignty” and reduce reliance on US and other foreign technology providers. Officials are exploring ways to expand the company’s presence in London, including potential measures such as enlarging its London office and pursuing a dual listing on UK and US stock exchanges. These plans are expected to be discussed during Amodei’s upcoming visit to the UK in late May, where he will meet with policymakers and stakeholders. The UK Prime Minister is also considering proposals to strengthen ties with Anthropic, which currently employs around 200 people in the country, including approximately 60 researchers. The UK’s push for collaboration with Anthropic aligns with its broader goals to foster domestic AI innovation. Last month, the government outlined plans for a £40 million state-backed research lab focused on “blue-sky” AI projects, aiming to leverage the country’s scientific expertise in fields such as healthcare and transportation. #dario_amodei #uk_government #anthropic #sadiq_khan #rishi_sunak

Nvidia CEO Disagrees With Anthropic CEO’s ‘Doomsday AI Layoffs’ Prediction Nvidia’s CEO, Jensen Huang, has rejected warnings from tech leaders like Anthropic CEO Dario Amodei and AI pioneer Geoffrey Hinton about a potential surge in AI-driven unemployment. Huang, speaking during a December interview with podcast host Joe Rogan, argued that while AI will reshape the job market, he does not foresee a sudden wave of layoffs. Instead, he emphasized that the technology will gradually transform industries rather than cause catastrophic disruptions. Huang acknowledged that routine and repetitive tasks are most vulnerable to automation. He used the example of a job consisting of manual labor, such as chopping vegetables, stating that automated appliances like a Cuisinart food processor could eventually replace such roles. However, he highlighted that positions demanding interpretation, judgment, or creativity, such as those of radiologists, will remain resilient. “If your job is just to chop vegetables, Cuisinart’s gonna replace you,” Huang remarked, underscoring the distinction between automated and human-centric roles. While dismissing “doomsday” scenarios, Huang also outlined his vision of a future where AI creates entirely new industries. He speculated about emerging roles like robot tailors, who would design clothing for AI-powered robots. “You’re gonna have robot apparel, so a whole industry of—isn’t that right? Because I want my robot to look different than your robot,” he said, illustrating how AI could spawn previously unimaginable job categories. Huang also mentioned the rise of roles focused on building and maintaining AI assistants, as well as industries that are currently hard to envision. He conceded that even the imagined role of a robot clothesmaker might itself be automated one day, noting simply, “Eventually.” #dario_amodei #jensen_huang #geoffrey_hinton #nvidia_ceo #anthropic_ceo

Nvidia CEO Jensen Huang Predicts Gradual AI Job Shift, Not Mass Layoffs Nvidia’s chief executive, Jensen Huang, has expressed cautious optimism about the future of employment in the age of artificial intelligence, stating that while AI will significantly reshape the job market, he does not anticipate a sudden surge in layoffs. Instead, Huang emphasized that the technology will likely create new roles and transform existing ones, rather than simply eliminating jobs. During a December interview with podcast host Joe Rogan, Huang highlighted the potential for AI to revolutionize industries while also acknowledging the challenges it poses for certain professions. Huang argued that jobs requiring non-routine tasks, such as those in healthcare or creative fields, are more likely to remain resilient against AI disruption. For example, he pointed out that radiologists are not merely tasked with analyzing medical scans but are essential for interpreting those images to diagnose diseases. “The image studying is simply a task in service of diagnosing the disease,” he explained. In contrast, jobs involving repetitive or routine tasks, such as chopping vegetables, may face greater risk, as AI-driven tools could eventually replace human labor in such roles. While Huang acknowledged that some positions will inevitably be displaced by AI, he avoided the alarmist language of critics like Geoffrey Hinton, known as “the Godfather of AI,” or Anthropic CEO Dario Amodei, who have previously warned of widespread unemployment due to AI advancements. Instead, Huang focused on the potential for AI to generate new opportunities. He speculated that the rise of AI-powered systems could create demand for technicians who specialize in building and maintaining these technologies. #dario_amodei #elon_musk #jensen_huang #geoffrey_hinton #mit

AI Safety Push: Anthropic Hires Manager to Handle Chemical, Explosive Threat Risks Anthropic, a U.S.-based artificial intelligence firm, has announced plans to hire a Policy Manager specializing in chemical weapons and high-yield explosives. The role involves designing and implementing evaluation methods to assess AI models’ capabilities related to chemical weapons, explosives synthesis, and energetic materials. The company emphasized that the position aims to shape how AI systems manage sensitive information about these materials. The recruitment post specifies that applicants should have at least five years of experience in chemical weapons or explosives defense, with expertise in energetic materials, chemical agents, or related fields. Anthropic is not alone in this effort; OpenAI previously posted a similar vacancy for a researcher focused on frontier biological and chemical risks. OpenAI’s Preparedness team is tasked with identifying and preparing for catastrophic risks posed by advanced AI models, ensuring the technology promotes positive outcomes. Experts caution that this approach could inadvertently provide AI tools with information about weapons, even if the systems are instructed not to use such data. The AI industry faces growing scrutiny over its potential role in existential threats, with the U.S. government recently involving AI firms in military operations, including conflicts in Iran and Venezuela. Anthropic has previously challenged the U.S. government’s designation of the company as a supply chain risk, arguing its systems should not be used for autonomous weapons or mass surveillance. Co-founder Dario Amodei warned in February that current AI technology is not yet advanced enough for such applications. However, reports indicate that Anthropic’s AI assistant, Claude, remains in use by the U.S. #iran #dario_amodei #anthropic #openai #claire

AI Firm Anthropic Seeks Weapons Expert to Prevent 'Catastrophic Misuse' The U.S. artificial intelligence company Anthropic is seeking to hire a chemical weapons and high-yield explosives expert to prevent its AI tools from being misused in ways that could lead to catastrophic outcomes. The firm is concerned that its software might inadvertently provide instructions for creating chemical or radioactive weapons and wants an expert to strengthen its safeguards. In a LinkedIn recruitment post, Anthropic outlined the role, requiring candidates to have at least five years of experience in "chemical weapons and/or explosives defense" and knowledge of "radiological dispersal devices," commonly known as dirty bombs. The firm described the position as similar to roles in other sensitive areas it has already created. Anthropic is not alone in this approach. OpenAI, the developer of ChatGPT, has also advertised a researcher position focused on "biological and chemical risks," offering a salary of up to $455,000, nearly double the amount offered by Anthropic. However, some experts have raised concerns about the risks of this strategy, warning that it could expose AI systems to information about weapons, even if the tools are instructed not to use it. Dr. Stephanie Hare, a tech researcher and co-presenter of the BBC’s AI Decoded TV program, questioned the safety of using AI to handle sensitive information related to chemical and radiological weapons. She noted the absence of international regulations governing this type of work, emphasizing that the use of AI in these contexts is happening without oversight. The AI industry has long warned about the potential existential threats posed by its technology, but efforts to slow its development have been limited. #dario_amodei #palantir #anthropic #openai #stephanie_hare

Atlassian CEO's Layoff Letter Is Good News for Graduates Atlassian’s CEO, Mike Cannon-Brookes, outlined three categories of employees the company prioritized retaining during recent layoffs, offering a positive outlook for recent graduates in the job market. The software firm announced it was cutting 1,600 jobs, or about 10% of its global workforce, to redirect resources toward its AI initiatives. Cannon-Brookes emphasized retaining high performers, employees with transferable skills, and graduates, signaling confidence in their value despite broader economic challenges. The decision contrasts with growing concerns about AI’s impact on entry-level roles. Recent studies suggest that younger workers, particularly those aged 22 to 25, face heightened risks as AI tools automate tasks traditionally handled by entry-level professionals. For instance, Stanford researchers noted a 16% relative employment decline for early-career workers in AI-exposed fields. Anthropic CEO Dario Amodei has also warned that up to half of entry-level white-collar jobs could be displaced by AI within the next 1 to 5 years. Despite these trends, Atlassian’s hiring practices suggest a different trajectory. Last October, Cannon-Brookes stated the company was increasing its recruitment of new graduates, citing the need for fresh talent to drive innovation in research and development. He argued that graduates bring a unique perspective to software development, capable of reshaping the industry. This stance aligns with the firm’s recent hiring numbers: 95 new graduates joined in February 2025, and 108 were set to start in February 2026. Cannon-Brookes’ letter to employees did not elaborate on the rationale for prioritizing graduates, but several possibilities exist. #dario_amodei #atlassian #mike_cannonbrookes #anthropic #stanford

Tesla's Former AI Director Andrej Karpathy Who Said He Feels Behind As Programmer Now Says Software Programming Has Changed Due To... Karpathy says programmers now manage AI agents instead of writing code line by line. This is a sharp evolution for the man who coined "vibe coding" just last year, when he described casually prompting AI and barely reviewing the output; that was fun for throwaway projects, he said at the time. Now, Karpathy is describing something far more structured: spinning up AI agents, assigning tasks in plain English, and reviewing their work in parallel. "You're not typing computer code into an editor like the way things were since computers were invented, that era is over," he wrote. The post comes weeks after Karpathy admitted he'd "never felt this much behind as a programmer," describing the profession as being "dramatically refactored." He spoke of needing to master a new layer of abstraction involving agents, subagents, prompts, memory modes, and MCP protocols, all while the underlying AI models keep changing. AI coding productivity remains a mixed bag despite the industry hype, and not everyone's sold on the revolution: a METR study from July found AI assistants actually decreased experienced developers' productivity by 19%, and Bain & Company called programming productivity gains "unremarkable." Yet Google CEO Sundar Pichai has said AI writes over 30% of new code at Google, and Anthropic CEO Dario Amodei claimed Claude was behind 90% of the company's code. Karpathy, for his part, isn't calling it magic. "It needs high-level direction, judgement, taste, oversight, iteration and hints and ideas," he wrote. But the leverage, he believes, is already enormous, and only growing. #andrej_karpathy #metr_study #sundar_pichai #dario_amodei #ai_coding
