Ex-OpenAI Employee Reveals Reasons Behind Firing
In a recent turn of events, a former OpenAI researcher, Leopold Aschenbrenner, has come forward to shed light on his dismissal from the company. Aschenbrenner, who previously worked on OpenAI’s superalignment team, shared details about his termination in an interview with podcaster Dwarkesh Patel.
Aschenbrenner explained that his firing stemmed from a memo he had written following a significant security incident at the company. In the memo, he raised concerns about the inadequacy of OpenAI’s security measures in safeguarding vital algorithmic secrets from potential foreign threats. Despite receiving positive feedback from colleagues with whom he had shared the memo, Aschenbrenner faced repercussions from the company.
According to Aschenbrenner, human resources at OpenAI reprimanded him for the memo, labeling it as “racist” and “unconstructive” due to his concerns about Chinese Communist Party espionage. Subsequently, an OpenAI lawyer probed Aschenbrenner about his views on artificial intelligence (AI) and artificial general intelligence (AGI), questioning his loyalty to the company and the superalignment team.
Following an investigation that involved scrutinizing his digital artifacts, Aschenbrenner was terminated on grounds of leaking confidential information and lack of cooperation during the inquiry. The alleged leak pertained to a document outlining safety and security measures needed for AGI, which Aschenbrenner had shared with external researchers for feedback.
Despite Aschenbrenner’s assertion that he had taken precautions to ensure the document contained nothing sensitive before sharing it, OpenAI considered certain details, such as a reference to planning for AGI by 2027 to 2028, to be confidential. The decision surprised Aschenbrenner, who believed CEO Sam Altman had already discussed that planning horizon publicly.
In response to Aschenbrenner’s claims, an OpenAI spokesperson stated that his concerns did not contribute to his dismissal and disputed his characterization of the company’s operations. Aschenbrenner joins a growing list of former employees who have raised safety concerns about OpenAI, underscoring calls for transparency and whistleblower protections at AI companies.