Technology reporters testing OpenAI’s new Sora video app have delivered harsh reviews, describing the AI-generated content as “disconcerting” and warning it could accelerate disinformation campaigns.
Mike Isaac and Eli Tan of The New York Times spent hours using the TikTok-style application, which creates realistic videos from uploaded face images.
The journalists ran into trouble immediately after generating hyper-realistic videos of themselves in various scenarios: colleagues who viewed the AI-generated clips found them “slightly disturbing,” according to their review.
The negative reaction extended to personal relationships. After Mike Isaac showed his partner an AI video of himself playing a psychopathic character from “No Country for Old Men,” she responded: “Please never, ever show me this kind of video again.”
The reviewers highlighted serious concerns about the app’s potential for misuse. They noted that early users immediately began creating videos featuring copyrighted characters from “Rick and Morty” and Pokémon, and warned that the technology could “pour gasoline on disinformation.”
Security expert Rachel Tobac echoed their concerns, telling the reporters: “It makes it really easy to create a believable deepfake in a way that we haven’t quite seen yet.”
The journalists were particularly worried that realistic video likenesses could be used to produce “clips of fake events that look so real that they might spur people into real-world action.” One viral example showed an artificial Sam Altman appearing to steal computer equipment in security-camera-style footage.
The review also raised concerns about “slop” – the growing volume of nonsensical AI-generated videos flooding social networks. The reporters suggested Sora could significantly worsen this problem by making video creation effortless for users regardless of skill level.