OpenAI Sora is restricting depictions of people due to safety concerns

Only a "subset of users" are allowed to upload images of people to Sora.
By Cecily Mauran
Why OpenAI is limiting the depiction of people in Sora videos. Credit: FilipArtLab / Shutterstock

OpenAI Sora is limiting depictions of real people and taking other strict safety measures to prevent misuse.

The video generator, which was announced on Monday as part of its 12 Days of OpenAI event, has all sorts of editing capabilities for users to create and customize AI-generated videos. But there are certain things you aren't allowed to do with Sora, as users soon discovered.

According to its system card, "the ability to upload images of people will be made available to a subset of users," meaning most users can't create videos of people based on an uploaded image. Those users are part of a "Likeness pilot" that OpenAI is testing with a select few. An OpenAI spokesperson said AI-generated videos of people are limited in order to "address concerns around misappropriation of likeness and deepfakes." OpenAI "will actively monitor patterns of misuse, and when we find it we will remove the content, take appropriate action with users, and use these early learnings to iterate on our approach to safety," the spokesperson continued.


Limiting the depiction of people in Sora videos makes sense from a liability standpoint. There are all sorts of ways the tool could be misused: non-consensual deepfakes, the depiction of minors, scams, and misinformation to name a few. To combat this, Sora has been trained to reject certain requests from text prompts or image uploads.

Sora will reject prompts for NSFW (Not Safe for Work) and NCII (non-consensual intimate imagery) content, as well as the generation of realistic depictions of children, although fictional ones are allowed. OpenAI has also added C2PA metadata to all Sora videos, made a visible watermark the default (though it can be removed), and implemented an internal reverse image search to assess a video's provenance.

Although many guardrails have been put in place to prevent misuse, it remains to be seen how Sora will hold up under mass stress-testing. For now, access to Sora is unavailable due to high demand.

Cecily Mauran

Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on Twitter at @cecily_mauran.


