Generative AI is improving at an astonishing pace, to the point that many of us would struggle to distinguish whether we are talking to a human or an AI chatbot. AI-generated images and videos have also come on in leaps and bounds over the last few years, and the best AI photos and videos are almost indistinguishable from the real thing.
Almost, but not quite.
How to Spot AI-Generated Videos
While some people prefer to steer clear of text-to-video AI generation altogether, AI-generated videos are popping up all over the place these days, from YouTube to TikTok and every social media platform in between. That means we all need to learn how to tell whether a video was generated by artificial intelligence or not.
Why is this important? Videos are a key source of information in the modern era, so we need to know what's real and what isn't if we want to avoid being fed misinformation.
There are still some major giveaways that can clue us in on whether a video is AI-generated or not. None of these are absolute, and as AI continues to improve, some will cease to be reliable (as you'll see in the final section!). But for now, there are still telltale signs you should look for when trying to detect the use of AI.
1. Impossible Physics
One of the key giveaways that the video you're watching was created by AI is any action that violates the laws of physics. Sure, special effects mean that not every legitimate video adheres to those laws, but you can usually tell the difference between intentional special effects and anomalies included by mistake.
Examples of impossible physics include objects changing course without any external force being applied, someone jumping too high or otherwise looking like they belong in The Matrix, and liquids behaving like solids (or vice versa). These issues arise because AI models rely on patterns in their training data rather than any actual understanding of real-world physics.
2. Bad Transitions Between Scenes
If you have seen an example of a bad AI-generated video, then you'll be familiar with bad transitions between scenes. While these example videos usually highlight the comically bad transitions, even the more subtle ones can provide a clue. Look for figures or objects morphing into other things, something that wouldn't be possible in real life.
Obvious examples include nonsensical cuts between scenes that jar your brain. If a human is filming something, they'll ensure the cut makes some sort of narrative sense, whereas that is of no concern to a generative AI model, which makes it up as it goes. This is because AI generates videos frame by frame, with no narrative logic applied unless the prompts are detailed enough to enforce one.
3. Human Movements/Expressions
Human expressions are difficult to replicate, making them one of the best tells when verifying whether a video was created using AI. We look at each other constantly and watch expressions form on people's faces, so we know the subtle cues to look for. Generative AI has yet to fully grasp the nuances of human expressions and emotions, let alone develop the ability to replicate them.
In AI-generated videos, these issues reveal themselves in both overt and subtle ways. Movement is the most obvious: flailing arms or legs, for example, will instantly tell your brain that something is "off." Less obvious but still noticeable examples are twitchy mouths, unnatural blinking, and expressions that shift too suddenly from one extreme to another.
4. Visual Background Noise
While videos shot by real people capture a whole scene within the frame, AI will often overlook the background, focusing instead on getting the subject right. That gives us a simple way to detect whether a video was created using AI: look beyond the subject in the foreground and see what's happening behind it. The background may be outright nonsense, but it's just as likely to contain blurry textures, tearing, or artifacts that shouldn't be there.

While text-to-video models will do their best to conjure up appropriate backgrounds, they struggle to remove this visual noise, so you may see textures that flicker or trees that wobble. These telltale signs can also occur in some frames but not others.
5. Mismatches in Actions and Emotions
If the main subject of the video you're watching is a human, look for mismatches between what is being done (either by or to them) and the emotions they're displaying. Often the two will be subtly off; sometimes they'll be wildly off. For example, if someone is in peril yet has a nonchalant, goofy grin on their face, that's a total mismatch.
While AI copes well with generating people and objects in broad brushstrokes, once it tries to show someone's lips moving or expressions playing across their face, it's often left wanting. AI struggles with the minor details, and nailing human emotions is extremely tricky for models that rely on the data that's fed to them rather than any natural understanding of nuance and meaning.
6. Nonsense Sequences
Last but not least on the list of giveaways is the nonsense sequence: a narrative that makes zero sense. Again, these crop up in many of the poor AI videos shared far and wide on social media. If you're left asking why and how something in a video is happening, there's a good chance it's AI.
A classic example of a nonsense sequence is Will Smith eating spaghetti. While Smith may well eat spaghetti in real life, he's unlikely to gorge on it the way he's usually shown doing in these AI videos (though the gap between those early AI videos and clips from the latest models, like Veo 3, is startling). Another example I've seen multiple times is Gordon Ramsay cooking in the kitchen, where he cooks himself, sets light to everything around him, and does several other things that are complete and utter nonsense.
Overall: Trust Your Instincts
While the above are all specific things to look for, the biggest thing to do when trying to detect AI-generated video is to trust your instincts. Humans have superb instincts, as long as they choose to use them. In the same way that you shouldn't believe everything you read online, you shouldn't simply accept that a video is what it appears to be, because more and more of them are being generated by AI, thanks to some fantastic AI video generators and detailed prompts from skilled prompt engineers.
The rise of AI has unfortunately coincided with a reduction in attention spans among the general population, and I fear that many people won't pay enough attention to what they're watching to spot AI generation, which is a big problem. So pay attention to the videos you watch, and trust your instincts when you do. And always remember: if it looks too good to be true, it probably is.