What OpenAI's Sora means for the future of truth

We may adapt to the forthcoming flood of AI videos faster and better than you think.
By Chris Taylor
[Image: A Sora video on a phone atop the OpenAI logo. Credit: CFOTO/Future Publishing via Getty Images]

There's a story from the earliest days of cinema that seems applicable to Sora, the text-to-video creation tool launched by OpenAI this week. And given that Sora's servers are struggling with demand, with many OpenAI subscribers still waiting to try it out, we've got time for stories.

You probably know of Arrival of a Train at La Ciotat Station (1896) by the Lumiere brothers, even if you've never seen it. Like Sora, the Lumieres created very short movies that showcased the latest tech. We're talking cinematograph rather than AI rendering, and a luxurious 50 seconds of film rather than the maximum 20 seconds allowed in Sora videos.

Still, it's the same principle: This was an early peek at a shockingly new form of entertainment. According to legend — a legend cemented in Martin Scorsese's charming movie about a boy in the Lumiere era, Hugo (2011) — Arrival of a Train audiences ran in terror from a steam engine that appeared to be heading straight for them.

A similar sense of panic clings to Sora — specifically, panic about what AI videos might do to further crack up our "post-truth" media landscape. The average viewer is already having a hard time judging what is real and what isn't, and the problem is worse if they're depressed. We're living in a golden age of conspiracy theories. The world's richest man already shared an AI deepfake video in order to help swing an election.

What happens when Sora can make any prompt look as real as something you might see on the evening news — ready-made to spread on social media?

OpenAI seems to think its watermarks, both visible and invisible, would prevent any shenanigans. But having downloaded dozens of Sora videos now, I can attest that the visible watermark is tiny, illegible, and fades into the background more often than not. It would be child's play for video editing software to clip it out altogether.

So a world of deliberate disinformation, either from bad political actors or influencers trying to gin up their engagement, is barreling down on us like a train. Right?

Wrong. Because as the actual story of the Lumiere movie tells us, humans are a lot smarter about new video entertainment than we give them credit for.

Here's the thing about Arrival of a Train: the legend is almost certainly wrong. We have zero first-hand evidence that audiences fled the cinema, or even flinched when they saw a train approaching in a 50-second clip.

Media studies professor Martin Loiperdinger calls the panic tale "cinema's founding myth," and notes it can be traced back to books written in the second half of the 20th century. It's possible that authors conflated it with the Lumieres' later experimental 3-D version of Arrival of a Train, which screened a handful of times in 1934 and was — like a lot of 3-D movies to come — a novelty, and a commercial failure.

So no, early audiences likely did not confuse a moving image of a train with a real train. Rather, they seem to have adapted to the whole concept of movies very quickly. Contemporary accounts of the Lumiere shorts (of which there were dozens; Arrival of a Train was not seen as a stand-out) are filled with excitement at the possibilities now unlocked.


"Why, if this continues," wrote one newspaper, Le Courier de Paris, in 1896, "we could almost overcome memory loss, almost put an end to separation, almost abolish death itself." (Spoiler alert: we did not, although that sounds like a great premise for a 19th century Black Mirror episode.)

Another periodical, La Science Francais, enthused about the "most unbelievably wonderful sorcery" that had created the cinematograph's "hallucinatory phantasmagoria." Even today's most tech-happy AI boosters would have a hard time endorsing Sora in the same terms.

Because like most AI, Sora is often "hallucinatory" — and not in a good way.

As I discovered in the moments that OpenAI servers weren't slammed, almost every Sora-generated video has some detail that looks wrong to human eyes. I typed a prompt for "journalist slams desk in frustration at not being able to access AI videos," then noticed a pen that appears and disappears in the journalist's hand.

The mistakes went on and on. The novelty factor diminished fast. Friends were amused and a little freaked out by the realness of the swag in "hip-hop artist models a cozy Christmas sweater" — until we spotted that the rapper's gold chain had become a gold ponytail at the back, and the reindeer on the sweater had eight legs.

Sora's response to "a funeral mass with circus clowns" pretty much nailed the prompt ... except that the colorful-wigged, red-nosed figure in the casket was missing his body.

That's not to say Sora won't have an immediate impact on the moving image industry. Given less outlandish prompts, it could certainly replace a lot of the generic B-roll often seen in YouTube explainers and corporate training videos. (That's assuming OpenAI isn't going to be forced to cease and desist training Sora on internet video footage without the makers' permission.)

It is to say that there's a significant barrier to entry when it comes to creating videos featuring anything unusual, anything you're trying to lie about, anything that Sora hasn't been specifically trained on. Rooting out all those mistakes, to the point where we won't immediately notice, can be an exercise in frustration.

And perhaps these early mistake-filled AI videos will serve as a kind of mass inoculation — a small dose of the post-truth disease, one that effectively gives our brains AI-resistant antibodies that can better prepare us for a future epidemic of visual fakes.

AI video needs to board the clue train

I'm certainly less impressed with AI after I prompted Sora for a new take on the Lumieres' Arrival of a Train. I asked for a video where a locomotive does actually break through the projection screen at the end, crushing the cinematograph audience.

But Sora couldn't even access the original 50-second short, which is way out of copyright and widely available online (including a version already upscaled by AI). It hallucinated a movie called "Arrival of a tal [sic] train," apparently released in the year "18965."

As for breaking a literal fourth wall, forget about it: despite multiple prompt-rewording attempts, Sora simply couldn't grok what I was asking. The projection screen remained intact.

Still, this version of Sora may be a harbinger of some terrifying visual fakery to come — perhaps when more robust AI video tech falls into the hands of a future D.W. Griffith.

Two decades passed between Arrival of a Train and Griffith's infamous movie The Birth of a Nation (1915) — the first real blockbuster, a landmark in the history of cinema, which also happened to be a skewed take on recent American history stuffed with racist lies.

Griffith's movie, protested at the time by the NAACP, was hugely influential in perpetuating segregation and reviving the Ku Klux Klan.

So yes, perhaps Sora's release is slowly nudging us further in the direction of a fragmented post-truth world. But even in an AI-dominated future, bad actors are going to have to work overtime if they want to do more damage to society than the cinematograph's most dangerous prompts.

Chris Taylor

Chris is a veteran tech, entertainment and culture journalist, author of 'How Star Wars Conquered the Universe,' and co-host of the Doctor Who podcast 'Pull to Open.' Hailing from the U.K., Chris got his start as a sub editor on national newspapers. He moved to the U.S. in 1996, and became senior news writer for Time.com a year later. In 2000, he was named San Francisco bureau chief for Time magazine. He has served as senior editor for Business 2.0, and West Coast editor for Fortune Small Business and Fast Company. Chris is a graduate of Merton College, Oxford and the Columbia University Graduate School of Journalism. He is also a long-time volunteer at 826 Valencia, the nationwide after-school program co-founded by author Dave Eggers. His book on the history of Star Wars is an international bestseller and has been translated into 11 languages.

