A federal courtroom in California became the latest battleground in the debate over artificial intelligence safety this week, as testimony in Elon Musk’s lawsuit against OpenAI raised new questions about whether the company had drifted from its original mission in pursuit of commercial growth.
The case centers on Musk’s claim that OpenAI transformed from a research-focused nonprofit dedicated to developing artificial general intelligence for the benefit of humanity into a profit-driven technology company. Testimony from former employees and board members offered a detailed look into internal disputes over safety oversight, governance and the rapid commercialization of advanced AI systems.
Rosie Campbell, a former member of OpenAI’s AGI readiness team, told the court that the organization changed significantly during her time there. According to her testimony, the company initially prioritized long-term research and discussions around AI safety. Over time, however, she said the culture became increasingly focused on launching products and expanding market presence.
Campbell said concerns emerged after Microsoft deployed a version of GPT-4 through its Bing platform in India before OpenAI’s internal Deployment Safety Board had completed its review. While she acknowledged the model itself did not appear highly dangerous, she argued that bypassing safety procedures risked weakening safeguards as AI systems become more powerful.
Under questioning from OpenAI’s attorneys, Campbell also conceded that she believed OpenAI still maintained stronger safety practices than xAI, the AI company founded by Musk. OpenAI has publicly released safety frameworks and evaluation reports for its models, though company representatives declined to discuss current AGI alignment strategies during the proceedings.
The trial also revisited the turmoil surrounding the brief ousting of CEO Sam Altman in 2023. Former OpenAI board member Tasha McCauley testified that members of the nonprofit board became increasingly concerned about whether they were receiving complete and accurate information from leadership. She described a breakdown in trust that undermined the board’s ability to oversee the company’s growing for-profit operations.
McCauley cited several examples that contributed to those concerns, including disputes involving board communications and the launch of ChatGPT. She said the board struggled to supervise the company’s direction effectively despite its formal authority under OpenAI’s unusual governance structure.
The conflict ultimately intensified when employees rallied behind Altman following his removal. Microsoft also backed efforts to restore him to leadership, prompting board members opposed to Altman to step down. The episode exposed deeper tensions between OpenAI’s nonprofit mission and the financial realities of competing in the rapidly expanding AI industry.
Legal experts testifying on Musk’s behalf argued that safety commitments lose meaning if companies fail to consistently follow their own review processes. The broader implications extend beyond OpenAI itself. As AI systems become embedded in global commerce, healthcare, defense and communications, questions about oversight and accountability are becoming increasingly urgent.
McCauley told the court that relying heavily on individual executives to make decisions about advanced AI development may not be sufficient given the potential consequences for society. Her remarks reflected a growing view among some policymakers and researchers that stronger external regulation of frontier AI systems may eventually become unavoidable.