OpenAI CEO Sam Altman Warns of “Really Bad Stuff” from AI, Despite Releasing Potentially Harmful Sora Tool

October 26, 2025

San Francisco, CA – Sam Altman, CEO of OpenAI, the company behind the popular AI chatbot ChatGPT, has issued a stark warning about the potential dangers of AI, even as his company continues to release powerful new tools such as Sora, its AI video generator.

In a recent interview, Altman expressed concern that AI could lead to “some really bad stuff,” citing the rapid proliferation of deepfakes created with Sora as an example. These deepfakes, which are nearly indistinguishable from real videos, have already been used to depict public figures, including Martin Luther King Jr. and Altman himself, in compromising and criminal situations.

Despite these concerns, Altman defends the release of Sora, arguing that society needs a “test drive” to adapt to the technology’s capabilities. He believes that early exposure will allow communities to develop norms and safeguards before AI becomes even more powerful.

However, critics argue that OpenAI is moving too quickly, with insufficient protections against misuse. Reports have surfaced of Sora being used to create Holocaust-denial videos that garnered hundreds of thousands of views on social media. The Global Coalition Against Hate and Extremism argues that OpenAI’s usage policies lack specific prohibitions against hate speech, enabling the spread of extremist content.

Altman’s warnings extend beyond deepfakes, encompassing the broader societal impact of algorithms making decisions for billions of people. He fears that this could lead to unexpected and potentially disastrous chain reactions affecting information, politics, and public trust.

Despite these concerns, Altman opposes extensive regulation of AI, arguing that it could stifle innovation. He supports careful safety testing for “extremely superhuman” models but believes that society will ultimately develop its own guardrails.

The release of Sora and Altman’s conflicting statements have sparked a debate about the responsibility of AI developers and the need for proactive measures to mitigate the risks of this rapidly evolving technology. The stakes, as Altman acknowledges, include the erosion of trust in verifiable information, a critical foundation of a functioning society.