OpenAI is set to significantly expand its workforce as it intensifies efforts to compete with Anthropic and Google. According to a report by the Financial Times, the company plans to grow from approximately 4,500 employees to around 8,000 by the end of the year, adding roughly 12 new hires daily. The hiring drive is part of a broader strategy to counter growing competition, particularly from Anthropic, which is gaining traction with business clients, and Google, which is challenging OpenAI in the consumer chatbot market. The new roles will focus on product development, engineering, research, and sales.

OpenAI is also prioritizing the recruitment of “technical ambassadors”, specialists embedded within businesses to help maximize the value of its tools. Both OpenAI and Anthropic are expanding their forward-deployed engineering teams to strengthen relationships with enterprise customers and ensure steady revenue. To support its growing workforce, OpenAI has signed a new office lease in San Francisco.

The hiring push comes amid heightened competition, with Anthropic reportedly outpacing OpenAI in business client acquisition. However, OpenAI has disputed data suggesting that first-time business buyers are three times more likely to choose Anthropic over its products; a company spokesperson criticized the methodology, comparing it to using a child’s lemonade stand sales to estimate global lemon demand. Internal pressure has also mounted at OpenAI. Last year, CEO Sam Altman issued a “code red” directive, urging employees to refocus on ChatGPT, the company’s core product, following Google’s success with Gemini 3.0.

#google #anthropic #openai #sam_altman #fidji_simo

Goodbye human coders? Sam Altman says thank you to developers as AI takes over

The rise of AI in software development has sparked debate about the future of human coders. OpenAI CEO Sam Altman recently acknowledged the critical role developers have played in shaping the digital world, while also highlighting how AI is transforming the field. His message, shared on X, emphasized the immense effort once required to build complex systems manually, a process that defined the profession. Altman expressed gratitude to developers who wrote code line by line, noting that the difficulty of such work is often overlooked. The sentiment comes as AI tools now automate tasks like writing code, fixing errors, and optimizing program structure, raising concerns about the impact on coding jobs, particularly for beginners.

While some fear AI could replace human coders, industry experts argue the profession is evolving rather than disappearing. Elon Musk’s AI chatbot Grok responded to Altman’s post by stating that software engineering is not dying but adapting. According to Grok, AI enhances productivity by handling routine tasks, allowing developers to focus on higher-level work such as system architecture, debugging, ethical considerations, and innovation. This aligns with what many developers report: AI streamlines repetitive tasks, but complex problem-solving and decision-making still require human expertise. Designing large-scale systems or addressing unforeseen technical challenges, for instance, remains a uniquely human endeavor.

A recent study by Anthropic further clarifies the relationship between AI and coding roles. The research analyzed how its AI model, Claude, is used in workplaces, finding that while AI could theoretically assist with nearly 94% of tasks in computer and math-related jobs, current adoption sits at only around 33%.

#elon_musk #anthropic #openai #sam_altman #grok

Sam Altman’s Gratitude to Coders Sparks Memes Amid Tech Layoffs

Sam Altman, CEO of OpenAI, sparked a wave of online reactions after posting a message on X expressing gratitude to software developers for their work. The post, shared on Tuesday, read: “I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.” The sentiment, while heartfelt, quickly became a focal point for critics and humorists against a backdrop of widespread layoffs in the tech industry.

The timing of Altman’s post coincided with a surge in reports of major companies cutting jobs in the name of advancing artificial intelligence. Amazon had laid off 16,000 workers, Block (formerly Square) had reduced its workforce by nearly half, and Atlassian had trimmed 10% of its staff. Meanwhile, Meta was reportedly considering another round of layoffs that could affect 20% of its employees. These moves, framed as necessary for AI development, have left many developers questioning the irony of their situation.

Altman’s company, OpenAI, has been central to the AI revolution, training its models on vast amounts of code written by developers using traditional methods. Critics argue that the very tools and systems developers built are now being used to justify their displacement. The post’s tone, which romanticizes the painstaking process of coding, has been read as dismissive of the challenges developers now face. Some see it as a eulogy for the profession, with one meme captioned: “Sam’s eulogy for software engineers.” The backlash has taken many forms, from sarcastic quips to satirical takes. One popular meme joked: “Dear devs, you will lose your jobs forever and be forced to work in the coal mines.”

#amazon #atlassian #block #openai #sam_altman

AI Safety Push: Anthropic Hires Manager to Handle Chemical, Explosive Threat Risks

Anthropic, a U.S.-based artificial intelligence firm, has announced plans to hire a Policy Manager specializing in chemical weapons and high-yield explosives. The role involves designing and implementing evaluation methods to assess AI models’ capabilities related to chemical weapons, explosives synthesis, and energetic materials. The company emphasized that the position aims to shape how AI systems handle sensitive information about these materials. The recruitment post specifies that applicants should have at least five years of experience in chemical weapons or explosives defense, with expertise in energetic materials, chemical agents, or related fields.

Anthropic is not alone in this effort: OpenAI previously posted a similar vacancy for a researcher focused on frontier biological and chemical risks. OpenAI’s Preparedness team is tasked with identifying and preparing for catastrophic risks posed by advanced AI models, ensuring the technology promotes positive outcomes. Experts caution that this approach could inadvertently provide AI tools with information about weapons, even if the systems are instructed not to use such data.

The AI industry faces growing scrutiny over its potential role in existential threats, with the U.S. government recently involving AI firms in military operations, including conflicts in Iran and Venezuela. Anthropic has previously challenged the U.S. government’s designation of the company as a supply chain risk, arguing its systems should not be used for autonomous weapons or mass surveillance. Co-founder Dario Amodei warned in February that current AI technology is not yet advanced enough for such applications. However, reports indicate that Anthropic’s AI assistant, Claude, remains in use by the U.S. military.

#iran #dario_amodei #anthropic #openai #claude

AI in Warfare Explained: OpenAI, Anthropic Move to Set Guardrails

The growing integration of artificial intelligence into modern warfare has prompted major tech companies like OpenAI and Anthropic to take proactive steps to establish ethical boundaries for their technologies. As reports surface of U.S. military operations in the Iran conflict potentially involving AI tools, both firms are expanding their recruitment efforts to include experts in chemical and biological risks. These hires aim to mitigate the potential for catastrophic misuse of their systems in military contexts.

Anthropic, which developed the AI model Claude, is seeking a specialist in chemical weapons and high-yield explosives to help design safeguards against its technology being weaponized. Similarly, OpenAI is pursuing researchers with expertise in biological and chemical risks. These moves come amid heightened scrutiny of AI’s role in warfare, particularly after leaked information suggested the U.S. military had used Claude during operations against Iran. The AI system is alleged to have been involved in tasks such as target identification, intelligence analysis, and simulating battlefield outcomes for airstrike planning.

The situation has intensified tensions between the Pentagon and Anthropic, which has long resisted military requests to remove ethical constraints on its AI. The dispute reached a critical point when the Pentagon labeled Anthropic a “supply chain risk,” urging federal agencies to phase out its technology within six months. This designation followed disagreements over how the military could use Claude, with Anthropic insisting on safeguards to prevent its AI from being used for mass domestic surveillance or autonomous weapons development.

#iran #pentagon #anthropic #openai #claude

AI Firm Anthropic Seeks Weapons Expert to Prevent 'Catastrophic Misuse'

The U.S. artificial intelligence company Anthropic is seeking to hire a chemical weapons and high-yield explosives expert to prevent its AI tools from being misused in ways that could lead to catastrophic outcomes. The firm is concerned that its software might inadvertently provide instructions for creating chemical or radiological weapons and wants an expert to strengthen its safeguards. In a LinkedIn recruitment post, Anthropic outlined the role, requiring candidates to have at least five years of experience in "chemical weapons and/or explosives defense" and knowledge of "radiological dispersal devices," commonly known as dirty bombs. The firm described the position as similar to roles it has already created in other sensitive areas.

Anthropic is not alone in this approach. OpenAI, the developer of ChatGPT, has also advertised a researcher position focused on "biological and chemical risks," offering a salary of up to $455,000, nearly double what Anthropic is offering. However, some experts have raised concerns about the risks of this strategy, warning that it could expose AI systems to information about weapons, even if the tools are instructed not to use it. Dr. Stephanie Hare, a tech researcher and co-presenter of the BBC’s AI Decoded TV program, questioned the safety of using AI to handle sensitive information related to chemical and radiological weapons. She noted the absence of international regulations governing this type of work, emphasizing that the use of AI in these contexts is happening without oversight.

The AI industry has long warned about the potential existential threats posed by its technology, but efforts to slow its development have been limited. The urgency of the issue has only increased as the U.S. government has begun involving AI firms in military operations.

#dario_amodei #palantir #anthropic #openai #stephanie_hare

GPT-5.4 Now Available in GitHub Copilot

OpenAI’s latest agentic coding model, GPT-5.4, is now rolling out in GitHub Copilot. Early testing of the model’s real-world capabilities in software development has shown higher success rates than previous versions. The model demonstrates improved logical reasoning and task execution, particularly in handling complex, multi-step processes that require the use of external tools.

The update is available to GitHub Copilot users on the Pro, Pro+, Business, and Enterprise plans, who can select GPT-5.4 through the model picker in the Copilot interface. While the model is accessible across all supported versions, upgrading to the most recent version of each product is recommended to ensure optimal performance with prompting and model parameters. On the Copilot Enterprise and Copilot Business plans, administrators must first enable GPT-5.4 in the Copilot settings by applying the appropriate policy.

Detailed information about the models available in GitHub Copilot can be found in the official documentation, which also provides guidance on getting started with the platform. Users are encouraged to join the GitHub Community to share feedback and insights about their experiences with GPT-5.4. The release marks a significant step in enhancing AI-driven coding tools, offering developers more efficient and reliable assistance in their workflows.

#software_development #github_copilot #openai #gpt_5_4 #github_community
OpenAI Held Early Talks With The Trade Desk to Sell Ads

OpenAI reportedly engaged in preliminary discussions with The Trade Desk, a major digital advertising platform, to explore potential advertising sales strategies. The talks are seen as a significant step in OpenAI's efforts to monetize its advanced language models, including the widely used GPT series. While details remain undisclosed, industry analysts suggest that such a partnership could open new revenue streams for OpenAI, allowing it to leverage its technology in the competitive digital advertising landscape. The move also highlights growing interest in AI-driven advertising, as companies seek to enhance targeting precision and user engagement through machine learning.

#openai #the_trade_desk #gpt_series #digital_advertising #ai_driven_advertising
Iran, Berkshire Hathaway earnings, OpenAI's Pentagon deal and more in Morning Squawk

The weekend saw significant geopolitical developments as U.S. and Israeli strikes targeted Iran’s leadership, resulting in the death of Supreme Leader Ayatollah Ali Khamenei and multiple casualties among Iranian citizens. The operation, dubbed "Operation Epic Fury," triggered immediate retaliation from Iran and raised concerns about a prolonged conflict. President Donald Trump vowed to "avenge" the deaths of three U.S. service members killed in the strikes, which he described as part of a military operation that was "ahead of schedule," and warned that the conflict could last up to four weeks and lead to further American casualties. The strikes also led to the closure of a large portion of the Middle East’s airspace, causing widespread flight cancellations and stranding travelers globally. U.S. crude oil prices surged in response, with investors speculating about a 1970s-style energy crisis.

Market reactions were swift and severe. Stock futures fell sharply in premarket trading as investors grappled with the geopolitical uncertainty. Gold futures rose as investors sought a safe haven, while Wall Street’s fear gauge hit its highest levels of 2026. Energy and defense stocks, however, gained amid heightened concerns about supply chain disruptions and military spending. The broader market was already in a precarious position, with the S&P 500 and Nasdaq Composite posting their worst months in nearly a year, though the Dow Jones Industrial Average managed a slight gain for February, marking its longest winning streak since 2018.

Berkshire Hathaway’s recent financial performance highlighted ongoing challenges for the conglomerate.

#iran #pentagon #berkshire_hathaway #openai #life_time