9 October 2025 - Sora 2, OpenAI’s new AI video generator, puts a visible watermark on every video it generates.
But the little cartoon-eyed cloud logo meant to help people distinguish between reality and AI-generated bullshit is easy to remove, and half a dozen websites will help anyone do it in a few minutes.
And, no, I am not going to link to them. A simple search for “sora watermark” on any social media site returns links to scores of places where a user can upload a Sora 2 video and strip the watermark. I tested three of these websites, and all three seamlessly removed the watermark in a matter of seconds.
Hany Farid, a UC Berkeley professor and an expert on digitally manipulated images, was interviewed on CNN last night and he said he’s not shocked at how fast people were able to remove watermarks from Sora 2 videos:
“It was so predictable. Sora isn’t the first AI model to add visible watermarks and this isn’t the first time that within hours of these models being released, someone released code or a service to remove these watermarks. OpenAI just needs to pump out product. It does not care. The financial stress is enormous to produce, produce, produce."
Hours after its release on September 30th, Sora 2 emerged as a copyright-violation machine full of Nazi SpongeBobs and criminal Mickey Mouses. OpenAI has since tamped down on that kind of content, after the initial thrill of seeing Rick and Morty shill for crypto sent people scrambling to download the app.
But now that the novelty is wearing off, we’re grappling with the unpleasant fact that OpenAI’s new tool is very good at making videos that are almost impossible to distinguish from reality.
To "help" us all from going mad, Open AI said in its release of Sora 2 it had installed watermarks:
“At launch, all outputs carry a visible watermark. All Sora videos also embed C2PA metadata - an industry-standard signature - and we maintain internal reverse-image and audio search tools that can trace videos back to Sora with high accuracy, building on successful systems from ChatGPT image generation and Sora 1".
But experts say those safeguards fall far short and are easy to get around. Rachel Tobac, CEO of SocialProof Security, who knows exactly how these security features are supposed to work, said:
“A watermark (visual label) is never enough to prevent persistent nefarious users attempting to trick folks with AI generated content from Sora. It is a bandaid. This was tech security theatre - as always".
Tobac also said there are scores of tools out there that dismantle AI-generated metadata by altering the content’s hue and brightness:
“Unfortunately we are seeing these Watermark and Metadata Removal tools easily break that standard [the C2PA metadata standard]. This standard will still work for less persistent AI slop generators, but will not stop dedicated bad actors from tricking people.”
Note to readers: C2PA (the Coalition for Content Provenance and Authenticity) is an open, industry-wide standard for digital content metadata that provides a verifiable history of a file's origin and edits, using cryptographic signatures to make that information tamper-evident. The metadata, known as "Content Credentials," lets consumers verify the source and integrity of digital media, helping to combat misinformation by revealing details like creation time, author, and processing steps, even for AI-generated content. Nefarious users and AI slop generators know how to break it.
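For readers who want to see how thin that protection is, here is a minimal sketch. It assumes ffmpeg and the CAI's open-source c2patool are installed, and it uses a hypothetical local file name, sora_clip.mp4, standing in for a downloaded Sora 2 clip; the c2patool invocation is my assumption based on the tool's basic documented usage, not something OpenAI or Tobac describe.

```python
# Sketch: why metadata-based provenance ("Content Credentials") is fragile.
# Assumptions (not from the article): ffmpeg and c2patool are installed,
# and sora_clip.mp4 is a hypothetical local copy of a Sora 2 download.
import subprocess


def show_content_credentials(path: str) -> None:
    """Print whatever C2PA manifest c2patool finds in `path`."""
    result = subprocess.run(
        ["c2patool", path],  # basic invocation: dump the manifest store
        capture_output=True,
        text=True,
    )
    if result.returncode == 0 and result.stdout.strip():
        print(f"{path}: Content Credentials found")
        print(result.stdout[:500])
    else:
        print(f"{path}: no Content Credentials found")


# 1) The original download should carry the generator's C2PA manifest.
show_content_credentials("sora_clip.mp4")

# 2) One ordinary re-encode -- the kind every "watermark remover" site
#    performs anyway -- writes a new file that carries no manifest at all.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "sora_clip.mp4",
        "-map_metadata", "-1",  # drop container-level metadata
        "-c:v", "libx264",
        "-c:a", "aac",
        "laundered_clip.mp4",
    ],
    check=True,
)

# 3) The provenance trail is gone; nothing marks the copy as AI-generated.
show_content_credentials("laundered_clip.mp4")
```

The specific commands matter less than the takeaway: provenance stored as metadata disappears the moment a file is transcoded, which is exactly what the removal sites do to every upload.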
As an example of how much trouble we’re in, Tobac pointed to an AI-generated video she called “stranger husband train,” which went viral on TikTok over the weekend. In the video, a woman riding the subway cutely proposes marriage to a complete stranger sitting next to her. He accepts. One copy of the video has been liked almost 5 million times on TikTok. It didn’t have a watermark. She said:
“We're already seeing relatively harmless AI Sora slop confusing even the savviest of Gen Z and Millennial users. With many typically-savvy commenters naming how ‘cooked’ we are because they believed it was real.
This type of viral AI slop account will attempt to make as much money from the creator fund as possible before social media companies learn they need to invest in detecting and limiting AI slop, before their platform succumbs to the Slop Fest".
But it’s not just the slop. It’s also the scams. Tobac:
“At its most innocuous, AI generated content without watermarking and metadata accelerates the enshittification of the internet and tricks people with inflammatory content.
At its most malignant, AI generated content without watermarking and metadata could lead to everyday people losing their savings in scams, becoming even more disenfranchised during election season, could tank a stock price within a few hours, could increase the tension between differing groups of people, and could inspire violence, terrorism, stampede or panic amongst everyday folks".
Tobac showed our media partner, 404 Media, a few horrifying videos to illustrate her point.
- In one, a child pleads with their parents for bail money.
- In another, a woman tells the local news she’s going home after trying to vote because her polling place was shut down.
- In a third, Sam Altman tells a room that he can no longer keep OpenAI afloat because the copyright cases have become too much to handle.
Every single one of these videos looked real, and all of them were created on Sora. None of them had a watermark. Tobac said:
“All of these examples have one thing in common. They’re attempting to generate AI content for use off Sora 2’s platform on other social media to create mass or targeted confusion, harm, scams, dangerous action, or fear for everyday folk who don’t understand how believable AI can look now in 2025".
Matt Gault of 404 Media told us that Sora 2 isn’t uniquely dangerous; it’s just one model among many:
“It is part of a continuum of AI models being able to create images and video that are passing through the uncanny valley. Having said that, both Veo 3 and Sora 2 are big steps in our ability to create highly compelling visual videos.
But it seems the same types of abuses we’ve seen in the past will be supercharged by these new powerful tools".
Yes, OpenAI is "decent" at employing strategies like watermarks, content credentials, and semantic guardrails to manage malicious use.
But it doesn’t matter. It is just a matter of time before someone else releases a model without any safeguards.
And analysts say that the ease with which people can remove watermarks from AI-generated content isn’t a reason to stop using them. A watermark is the bare minimum for an organization attempting to minimize the harm its AI video and audio tools create.
But they think the companies need to go further - and won’t. One told me:
“You would need a broad partnership between AI and Social Media companies to build in detection for scams/harmful content and AI labeling not only on the AI generation side, but also on the upload side for social media platforms. Social Media companies would need to build large teams to manage the likely influx of AI generated social media video and audio content to detect and limit the reach for scammy and harmful content.
But we did not see it after the horrors Facebook unleashed, and we will not see it now. In fact, every Social Media company is decreasing its content/monitoring teams".
News flash! Tech companies have, historically, been a shitshow at that kind of moderation at scale.
OpenAI says it is "trying" to respond as people find ways around its safeguards. For instance, Sora now refuses prompts that reference Hitler by name.
But users just find workarounds by describing what Hitler looks like (e.g., black hair, a black military outfit and a Charlie Chaplin mustache) and voilà ... a Hitler video.
Will they adapt and strengthen their guardrails? No.
Will they ban users from their platforms? No.
As with all things AI, this is going to end badly for us all.