Not all deepfakes are created equal, so no single detection technique works everywhere. Some carry tell-tale signs of being machine-made; others need careful observation, or even other AI tools, to tell apart.
Skin-tone mismatch is the first thing to take note of. The manipulated portion (usually the face) often differs slightly in skin color from the rest of the visible body.
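For the curious, this check can even be roughed out numerically. The sketch below (illustrative only; the region coordinates are made up, and this is not a tool the article uses) compares the average color of a face patch against a nearby patch of neck skin, where a large gap hints at a pasted-on face:

```python
import numpy as np

def region_color_gap(frame, face_box, neck_box):
    """Compare the mean color of two regions of a video frame.

    frame: H x W x 3 uint8 array; boxes are (top, bottom, left, right).
    Returns the Euclidean distance between the two mean RGB colors;
    a large gap can hint at a pasted-on face.
    """
    t, b, l, r = face_box
    face_mean = frame[t:b, l:r].reshape(-1, 3).mean(axis=0)
    t, b, l, r = neck_box
    neck_mean = frame[t:b, l:r].reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(face_mean - neck_mean))

# Toy frame: uniform skin tone except a slightly paler "face" patch.
frame = np.full((100, 100, 3), (200, 160, 140), dtype=np.uint8)
frame[10:50, 30:70] = (215, 175, 155)  # pasted-looking face region

gap = region_color_gap(frame, (10, 50, 30, 70), (60, 90, 30, 70))
print(round(gap, 1))  # clearly non-zero for the mismatched patch
```

In a real clip you would average the gap over many frames, since lighting changes alone can produce momentary differences.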
The left image above is from a fake video of the Ukrainian president, Volodymyr Zelenskyy, asking his forces and people to surrender to Russia.
But no one was fooled by this mediocre copy-paste job, and it was called out instantly as a deepfake.
Unnatural lip movement is the major giveaway, especially when amateurs masquerade as deepfake experts. Their subpar creations often show lip movement without natural human pauses, which helps identify them as fake.
However, some algorithms do make provisions for breathing pauses. Even then, the pauses tend to be identical, and the head movements follow repeating patterns, generally too mechanical to ignore.
Likewise, blinking is another area that gives deepfakes away. Synthetic blinking follows programmed cycles, which look far less human if you observe it alongside a real video of the same person.
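Blink regularity is actually measurable. A common approach in detection research (not necessarily what any given fact-checker used) is the eye aspect ratio (EAR), which collapses when the eye closes, so you can log when blinks start and check how metronome-like the spacing is. A minimal NumPy sketch with toy landmark coordinates:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered corner, top, top,
    corner, bottom, bottom; the ratio drops sharply when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical pair
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical pair
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal span
    return (v1 + v2) / (2.0 * h)

def blink_onsets(ear_series, closed_thresh=0.2):
    """Frame indices where the eye transitions from open to closed."""
    return [i for i in range(1, len(ear_series))
            if ear_series[i] < closed_thresh <= ear_series[i - 1]]

# Toy EAR series: eyes open (~0.33) with closures at frames 10, 40, 70.
series = [0.33] * 80
for i in (10, 40, 70):
    series[i] = 0.10

print(np.diff(blink_onsets(series)).tolist())  # perfectly even gaps look scripted
```

A real person's inter-blink gaps vary widely; a near-zero variance over a long clip is the kind of "as written in their code" regularity described above.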
Another thing you can use to spot deepfakes is eyeball movement. Lacking emotions and distractions, the AI subject often appears more fixedly focused than an average human while talking.
In short, deepfakes are fairly easy to spot unless they are made with (almost) perfect craftsmanship, like this one.
So, how can you tell that isn't the real Morgan Freeman?
The best thing about the preceding video is its high quality. Switch to the highest resolution (4K is available) to give yourself a chance to spot the artificial elements.
And the bigger the screen, the better. Alternatively, you can take a screenshot and zoom in to check whether computer work is hiding behind what looks convincing.
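If you want to script that screenshot-and-zoom step, plain pixel repetition is enough. This NumPy sketch (the box coordinates are hypothetical) uses nearest-neighbour enlargement, which keeps blocky masking and compression artifacts visible rather than smoothing them away as bilinear scaling would:

```python
import numpy as np

def zoom_region(img, box, factor=4):
    """Crop a (top, bottom, left, right) box from an H x W x 3 image
    and enlarge it by repeating each pixel `factor` times in both axes."""
    t, b, l, r = box
    crop = img[t:b, l:r]
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

# Example: blow up a 40 x 40 patch of a synthetic frame to 160 x 160.
img = np.zeros((100, 100, 3), dtype=np.uint8)
patch = zoom_region(img, (10, 50, 30, 70), factor=4)
print(patch.shape)  # (160, 160, 3)
```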
Can you see the artificial skin? Small details are where algorithms fail, no matter how sophisticated. The skin looks patchy, and the facial (and head) hair reproduction isn't natural; it looks glued on.
Keeping that in mind will help you better analyze the following:
I can clearly pick out the synthetic skin, most visible around the mustache, beard, and hairline.
Then there are the chin, eyebrows, face shape (the real one is slimmer), nose, and so on; compare them to the real one (on the right), and it becomes clear as day.
To reiterate, having the real footage side by side makes it easy to tell the fake apart.
There is a lot going on when we talk. Everyone has their own style, which leads the lips, tongue, chin, cheeks, etc., to move in a pattern unique to them.
Besides, deepfake tech has yet to master inside-the-mouth visuals while talking. For instance, you can't make out individual lower (mandibular) teeth in the Obama deepfake.
All we see is a white strip at the bottom, and there are no signs of tongue movement at all.
Watch any genuine Obama video and you'll see the man is much more expressive, with far more facial movement than this AI replica shows.
Besides, the video itself isn't very clear: it's low quality, compressed either to hide the manipulation or because of computing limitations.
One limitation of deepfake creation is frame-by-frame generation: every frame must be masked perfectly to keep the illusion intact.
Because of this, most convincing deepfake videos keep facial movement to a minimum. They show only frontal faces, with no side views, because side-to-front transitions can expose these creative bottlenecks.
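That frame-by-frame weakness can also be probed computationally. As a sketch (not an established detector), you can track how much the face region changes between consecutive frames; an isolated spike, say during a head turn, can betray a mask that momentarily slipped:

```python
import numpy as np

def flicker_scores(frames, box):
    """Mean absolute pixel change between consecutive frames inside a
    (top, bottom, left, right) face box. Steady footage scores near zero;
    an isolated spike can indicate a per-frame mask that briefly failed."""
    t, b, l, r = box
    crops = [np.asarray(f)[t:b, l:r].astype(np.int16) for f in frames]
    return [float(np.abs(c2 - c1).mean()) for c1, c2 in zip(crops, crops[1:])]

# Toy clip: three steady frames, then one where the face patch jumps.
steady = np.full((64, 64, 3), 120, dtype=np.uint8)
glitch = steady.copy()
glitch[16:48, 16:48] += 40  # sudden change inside the face box

scores = flicker_scores([steady, steady, steady, glitch], (16, 48, 16, 48))
print(scores)  # [0.0, 0.0, 40.0]
```

Real footage never scores exactly zero, so in practice you would look for outliers relative to the clip's own baseline rather than absolute values.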
Hitesh works as a senior writer at Geekflare and dabbles in cybersecurity, productivity, games, and marketing. He also holds a master's in transportation engineering.