A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman
Recent weeks have not been kind to OpenAI. The release of the company’s latest model, GPT-4o, has been somewhat overshadowed by a series of accusations leveled at both the company and its CEO, Sam Altman.
Accusations range from prioritizing product development over safety to imitating actor Scarlett Johansson’s voice without her consent. This comes at the same time that several high-profile employees, including co-founder and chief scientist Ilya Sutskever, have chosen to leave the company.
This is not the first time the Silicon Valley startup has been embroiled in scandal. In November, Altman was briefly ousted from the company after the board found he had not been “consistently candid” with them. He returned five days after being removed, with the support of most staff. The board was subsequently reconstituted, and Altman was reappointed to it in March.
As AI systems of the sort developed by OpenAI become increasingly powerful, they carry the potential for both tremendous benefits and serious risks. For example, many experts believe that they could be used to enable large-scale criminal or terrorist activities. Meanwhile, OpenAI says its mission is to build “artificial general intelligence” (AGI)—a speculative technology that could perform almost all economically valuable tasks better than a human—in a way that benefits “all of humanity.”
Recent events have cast doubt on whether the company can be trusted to act responsibly in pursuit of this lofty goal, and have led many to underscore the need for regulation in the AI sector.
Here is a timeline of all the recent accusations leveled at OpenAI and Sam Altman.
May 17: Senior safety researcher Jan Leike criticizes OpenAI for prioritizing products over safety
After resigning from OpenAI on May 15, Jan Leike explained his decision in a post shared on X (formerly Twitter), writing “safety culture and processes have taken a backseat to shiny products.” Leike departed on the same day as OpenAI’s chief scientist Sutskever. The pair co-led the “superalignment” team, the newest of OpenAI’s three safety teams. While the company’s other safety groups focused on the risks posed by AI systems in the short-to-medium term, the superalignment team was established in July 2023 to devise ways of controlling hypothetical future AI systems.
Leike said that for months the team had been “sailing against the wind” and struggling to access the computing power needed for its research—despite OpenAI promising the team a fifth of its total computing resources. Days later, multiple sources confirmed to Fortune that OpenAI had failed to keep its commitment. The superalignment team had also lost at least three staff since March, two of whom were fired for allegedly leaking information. With co-leaders Leike and Sutskever no longer there to take the helm, the superalignment team was disbanded.
Altman responded by saying he was grateful for Leike’s contributions and acknowledging there was a lot more safety work to be done. In a follow-up post, Altman and OpenAI’s President, Greg Brockman, laid out their vision going forward, saying “we take our role here very seriously and carefully weigh feedback on our actions.” Leike has since announced that he has joined rival AI lab Anthropic to “continue the superalignment mission.”
May 17: OpenAI is criticized for silencing former staff with restrictive agreements
On May 17, Vox reported on the existence of “extremely restrictive offboarding agreements” used by OpenAI to stifle criticism. To retain the vested equity they had accrued, departing employees were reportedly required to sign an agreement containing both non-disparagement provisions that would permanently forbid them from criticizing their former employer, and non-disclosure provisions that prevented them from mentioning the agreement’s existence. This came to light after one former employee, Daniel Kokotajlo, posted to an online forum about his refusal to sign.
In a series of posts shared on X, Kokotajlo said: “I told OpenAI that I could not sign because I did not think the policy was ethical; they accepted my decision, and we parted ways.”
The next day, Altman took to X to deny any knowledge of these provisions, stating, “this is on me and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.” Altman also said in the same post that they had never clawed back anyone’s equity, and did not intend to.
OpenAI subsequently confirmed, in messages sent to current and former employees, that these provisions would no longer be enforced, and that they would remove the offending language from all exit paperwork going forward. The messages were reviewed by Bloomberg.
The credibility of Altman’s denial was called into question when, on May 22, leaked documents appeared to show his signature, as well as the signatures of other senior executives such as OpenAI’s chief strategy officer Jason Kwon and chief operating officer Brad Lightcap, on documents that explicitly authorized the provisions.
Read More: The Scarlett Johansson Dispute Erodes Public Trust In OpenAI
May 20: Scarlett Johansson criticizes OpenAI for imitating her voice without consent
When OpenAI demoed its latest model, GPT-4o, which responds to speech in real time with a human-like voice, many were struck by how familiar one of the voices (“Sky”) sounded. Listeners thought Sky’s voice was similar to that of Samantha, the AI assistant voiced by Johansson in the 2013 science-fiction film Her. A few days before the GPT-4o demo, Altman tweeted “her,” and on his personal blog he wrote that interacting with the model “feels like AI from the movies.”
On May 20, Johansson shared a statement saying that she was “shocked, angered, and in disbelief” that Altman would use a voice so similar to her own without her consent. She explained that Altman had approached her several times to hire her to voice GPT-4o, offers she declined. The actor says she was approached again two days before the demo’s release and asked to reconsider.
In a response posted to the OpenAI website, Altman stated: “The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”
The law is unclear on whether hiring an actor to imitate another actor’s voice amounts to copyright infringement.
May 22: AI policy researcher Gretchen Krueger explains her reasons for resignation
Adding to the voices explaining their reasons for resignation, on May 22 AI policy researcher Gretchen Krueger, who resigned on May 14, said that OpenAI “need[s] to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”
May 26: Former board members accuse Sam Altman of lying
When OpenAI’s board fired Altman last November, they offered a somewhat cryptic explanation of their decision, saying he was “not consistently candid in his communications with the board and the broader OpenAI team.” After more than 700 of OpenAI’s roughly 770 employees threatened to follow Altman to Microsoft if he was not reinstated, he was brought back as CEO, and two of the board members who called for his dismissal stepped down.
The law firm WilmerHale subsequently carried out an investigation into the circumstances and found “that his conduct did not mandate removal.” But the board members who were pushed out remained tight-lipped about their side of the story.
That changed on May 26, when former board members Helen Toner and Tasha McCauley published an op-ed in The Economist where they accused Altman of “lying” and engaging in “psychological abuse” against some employees, including senior-level ones who had gone to the board with their concerns.
OpenAI’s novel corporate structure—it is a capped for-profit company ultimately controlled by a nonprofit—was intended to enable the board to act decisively if they felt the company was failing to uphold its original mission of acting in the best interests of humanity. But according to Toner and McCauley, Altman made the board’s role difficult by neglecting to share key information.
Toner elaborated on this point on the TED AI Show, explaining that Altman had not disclosed that, until April, he legally controlled the OpenAI Startup Fund, instead repeatedly telling the public he had no financial stake in the company. The board was also not informed prior to the release of ChatGPT, which would go on to become the company’s flagship product. Toner also said Altman gave false information about the company’s formal safety processes on multiple occasions, and that he had tried to push her off the board after she published a research paper that he disagreed with. She concluded that OpenAI’s experiment in self-governance had failed, urging for regulatory oversight.
Four days later, two current OpenAI board members, Bret Taylor and Larry Summers, rejected the claims made by Toner and McCauley, describing Altman as “highly forthcoming.” They insisted OpenAI is a leader on safety, and cited WilmerHale’s internal investigation, which concluded that Altman’s dismissal was not over safety concerns.
Read More: Two Former OpenAI Employees On the Need for Whistleblower Protections
June 4: 13 current and former employees sign a letter criticizing advanced AI companies such as OpenAI and Google DeepMind for cultivating a reckless culture
On June 4, 13 current and former employees from advanced AI companies—11 from OpenAI, and two from Google DeepMind—published a letter decrying the lack of accountability in AI companies, and calling for stronger whistleblower protections.
The letter notes that “AI companies have strong financial incentives to avoid effective oversight,” and that its authors do not believe that “bespoke corporate structures” are sufficient to address this. It goes on to argue that, in the absence of effective government regulation, company employees are the only ones capable of holding these companies to account when they engage in risky behavior. The ability of employees to do this is hampered by the widespread use of confidentiality agreements in the industry.
One signatory to the letter, alignment researcher Carroll Wainwright, elaborated on X, saying: “I worry that the board will not be able to effectively control the for-profit subsidiary, and I worry that the for-profit subsidiary will not be able to effectively prioritize the mission when the incentive to maximize profits is so strong.”
The letter calls on AI companies to commit not to enter into legal agreements that restrict employees’ ability to criticize them; to facilitate the anonymous reporting of concerns to company boards, governments, or an appropriate independent organization; to support a culture of open criticism; and to commit not to retaliate against employees who share confidential information with the public after other reporting procedures have failed.
In a response sent to the New York Times, a spokesperson for OpenAI said: “We’re proud of our track record of providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.” Google DeepMind did not comment on the letter.
Some OpenAI employees have been critical of the letter. Joshua Achiam, a research scientist at the company, said on X that the letter “disrupts a delicate and important trust equilibrium that exists in the field and among AGI frontier lab staff today.” In particular, Achiam argues that giving safety researchers wide discretion to decide whether disclosing confidential information to the public is justified undermines trust within labs in a way that is ultimately bad for safety.
June 4: Former OpenAI safety researcher Leopold Aschenbrenner says he was fired for raising safety concerns to OpenAI’s board
In an interview on the Dwarkesh Podcast, former OpenAI safety researcher Leopold Aschenbrenner explained the events that led to his dismissal and criticized the company in the process.
He said he first received a formal warning for sharing an internal memo with OpenAI’s board, in which he criticized the security measures the company had taken to protect “model weights or key algorithmic secrets from foreign actors” as being insufficient.
The action that ultimately led to his dismissal was, Aschenbrenner claims, relatively innocuous. Per his account, he shared a safety document he had written, scrubbed of sensitive information, with external researchers for feedback. Aschenbrenner says this was “totally normal at OpenAI at the time.” However, OpenAI’s leadership felt that the document leaked confidential information pertaining to the company’s plan to develop AGI by 2027/2028. Aschenbrenner alleges that this information was already in the public domain; indeed, Altman gave TIME a similar timeframe in a 2023 interview.
Aschenbrenner was a researcher on OpenAI’s now-disbanded superalignment team. He was not the only superalignment team member to leave the company prior to co-leads Sutskever and Leike’s departure. Per an update shared on his LinkedIn profile in May, researcher Pavel Izmailov now works at AI lab Anthropic. Meanwhile, William Saunders resigned from OpenAI in February, and Ryan Lowe, another alignment researcher at the company, left in March. This suggests that friction between the superalignment team and OpenAI’s leadership had been brewing for months before reaching a breaking point in May, when Sutskever and Leike resigned.