

Scarlett Johansson expressed anger towards OpenAI for using her likeness without her consent.




Scarlett Johansson has expressed shock and outrage at OpenAI after discovering that Sky, a voice of the GPT-4o voice assistant, bears a striking resemblance to her own, despite her never giving consent. CNN quoted the actress as saying that upon learning of the uncanny similarity, she authorized her lawyer to take the matter up with OpenAI.

Johansson is famous for her role as the female lead in the 2013 science-fiction film “Her,” in which she voiced a virtual assistant. In the movie, her character becomes the object of affection for Joaquin Phoenix’s character but ultimately breaks his heart by admitting she loves hundreds of other users. Eventually, the AI assistant departs, becoming inaccessible.

On May 14, when GPT-4o was announced, OpenAI CEO Sam Altman also posted “her” on his X account, reminiscent of the iconic film.

Scarlett Johansson with the OpenAI logo behind her. Illustration: Newsx

OpenAI has since paused Sky on the new language model. In a post on X on May 20, the company stated, “We have received many inquiries about the voice selection in ChatGPT, especially Sky. The company is pausing Sky while addressing the issue.” Before this, many users had also noted that the voice in GPT-4o sounded rushed and unnatural.

Johansson revealed that in September last year, Sam Altman had approached her, inviting her to voice the AI the company was building. She declined for personal reasons. “Two days before GPT-4o was unveiled, Altman contacted my representative to ask me to reconsider the offer. But OpenAI announced the platform before any agreement could be reached,” the actress said.

Johansson stated that she had authorized her lawyer to act on her behalf, and that OpenAI “reluctantly agreed” to remove Sky after two letters were sent to CEO Altman.

“In an era where people are grappling with deepfakes and protecting their image, careers, and identities, I believe these are worthy questions to be clarified. I hope to receive transparent answers to ensure personal rights are legally protected,” Johansson emphasized.

In response, OpenAI asserted that the GPT-4o voice is not based on Johansson’s but belongs to “another professional actress,” and that it had used this person’s natural voice to train the AI. The company did not disclose the actress’s identity.

Internal Rifts within OpenAI

The legal troubles involving Scarlett Johansson are just part of the turmoil within the company under the leadership of Sam Altman.

Immediately after the launch of GPT-4o, Jan Leike, head of AI safety, and Ilya Sutskever, Chief Scientist of OpenAI, both resigned, announcing their departures on X. Leike went further, publicly criticizing OpenAI leadership for prioritizing “flashy products” over safety. Sam Altman shared Leike’s post and stated, “He is right, we have a lot of work to do. We are committed to doing that.”

According to CNBC, OpenAI last week disbanded the Superalignment team, established in 2023 to research the long-term risks of artificial intelligence. Meanwhile, The Information reported that two AI safety researchers, Leopold Aschenbrenner and Pavel Izmailov, were fired by OpenAI for leaking internal information. Cullen O’Keefe, head of policy research, left in April, according to his LinkedIn profile. Diane Yoon, Vice President of Human Resources, and Chris Clark, Director of Strategic Initiatives and Nonprofits, have also resigned from OpenAI.

Business Insider reported that the breakup of the AI safety team has raised many doubts about Sam Altman’s leadership. Altman himself admitted on Joe Rogan’s podcast last year, “Many of us are very concerned about AI safety. On the ‘not wiping out humanity’ front, we still have a lot to do.” However, what is happening at OpenAI is eroding public trust in Altman. Former employee Daniel Kokotajlo told Vox that he was “gradually losing confidence in OpenAI’s leadership and their responsible handling of AGI.”

Another personnel crisis at OpenAI is its “employee gag policy.” Vox reported that the company imposed strict agreements barring departing employees from sharing information about the company. On May 19, Sam Altman admitted on X that he felt “embarrassed” that OpenAI had operated this way. He said he had not known the clause applied to former employees and was working to fix it.

Analysts believe this is a rare instance of Altman admitting a mistake, contrary to the calm image he has been building amidst the chaos at OpenAI.