Let’s start with this “special” address by “POTUS” Joe Biden:
This faintly resembles what a deepfake (voice) scam can sound like. In reality, it was made with a free AI voice-cloning tool: I spent a little over five minutes writing the script (with ChatGPT) and pasting the text in to convert it into "Biden's" speech.
However, real deepfake scams are more sophisticated than this. All you need to get started is a few high-quality photographs, videos, or audio clips of the subject (victim) and a powerful graphics card.
Instead, this article offers a brief introduction to deepfakes and highlights a few such scams you should beware of.
What are Deepfakes?
The term "deepfake" combines two words: deep (from deep learning, the technology used to create them) and fake (indicating the nature of the output).
In short, deep learning algorithms are trained on high-resolution original content and then used to generate convincing fake images, audio, or video.
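To make this less abstract: classic face-swap deepfakes are built around a pair of autoencoders that share one encoder. The shared encoder learns a generic "face code", while each decoder learns to render one specific person; swapping happens by encoding person A's frame and decoding it with person B's decoder. The NumPy sketch below is purely illustrative, with random untrained weights and made-up sizes, just to show the wiring (real systems train these networks on thousands of frames with a reconstruction loss):

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 32        # size of the shared latent "face code" (toy value)
PIXELS = 64 * 64   # a flattened 64x64 grayscale face (toy size)

# One shared encoder compresses any face into a latent code.
W_enc = rng.normal(0, 0.01, (PIXELS, LATENT))

# Two person-specific decoders reconstruct faces from that code.
W_dec_a = rng.normal(0, 0.01, (LATENT, PIXELS))  # would be trained on person A
W_dec_b = rng.normal(0, 0.01, (LATENT, PIXELS))  # would be trained on person B

def encode(face):
    # Project the face into the shared latent space.
    return np.tanh(face @ W_enc)

def decode(code, W_dec):
    # Render a face from the latent code with a person-specific decoder.
    return code @ W_dec

# The "swap": encode a frame of person A, but reconstruct it with
# person B's decoder, so B's appearance is rendered in A's pose.
frame_of_a = rng.normal(0, 1, (PIXELS,))
fake_frame = decode(encode(frame_of_a), W_dec_b)

print(fake_frame.shape)  # (4096,) -- same shape as the input frame
```

The key design point is the shared encoder: because both decoders read the same latent space, pose and expression transfer across identities, which is exactly why varied source footage makes the results so convincing.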
This is a classic deepfake example, and it still feels convincing to an extent. Yet it is five years old, and the technology is advancing at a rapid pace.
And deepfakes aren't always made for fun or to showcase someone's AI skills. One serious instance was a deepfake of Volodymyr Zelensky appearing to ask his forces to surrender in the ongoing Russia-Ukraine war.
Unfortunately, this is not an isolated incident. According to the Regula Forensics 2023 report, 37% of global organizations have faced deepfake voice fraud, 29% have encountered synthetic video scams, and 46% have experienced identity theft attempts.
Clearly, this isn't just about entertainment: deepfakes are widely used to spread misinformation and are misused in many other ways that can be dangerous and distressing for victims.
Moreover, combating deepfake content isn't easy. Deepfakes are a side effect of significant advances in AI and of the fact that such tools are now widely available to the public.
Therefore, the biggest and the most common defense we have is awareness.
So, let's build that awareness and take a look at the types of deepfake scams.
Types of Deepfake Scams
Based on the content and the target, deepfake scams can be divided into a few forms, such as:
- Deepfake video impersonation
- Fake news
- Fake job interviews
- Voice cloning
- Deepfake honeytraps
- Tech support scams
- Fake testimonials
The following sections discuss these in detail, along with some of their variations.
Deepfake Video Impersonation

This is a well-known deepfake scam in which a fabricated video impersonates a high-profile target. A typical victim could be a head of state (like the Obama deepfake), a popular Hollywood star, or the CEO of a well-known company.
What makes them easy targets is the public availability of their photos, speeches, and footage, because deepfake tools perform best with varied input covering different emotions, facial angles, lighting conditions, backgrounds, and more.
The intention behind such impersonation can be anything from simple fun to blackmail, political manipulation, or defamation.
In rare cases, one might see deepfake videos of family members asking for money or access to valuable belongings.
Fake News

The most common deepfake scam is fake news. Such online disinformation campaigns pose risks that can prove fatal in volatile situations like riots or wars.
Fake news can also malign public figures when their deepfake replicas make comments they never made in reality.
Social media and streaming platforms like YouTube fuel the propaganda, helping such fake news spread like wildfire. The result is an atmosphere in which ordinary users question the authenticity of everything they see.
Fake Job Interviews

Fake interviews using deepfake technology target fully remote, work-from-home positions. These scams deploy real-time deepfake video over conferencing programs such as Zoom.
In addition to hiring a subpar candidate, the target company risks exposing its assets to a fraudster. Depending on the company's business, this can damage not only its own resources but also its country's interests.
A simpler objective of a deepfake interview can be to get paid in a high-value currency.
On the flip side, scammers can charge innocent candidates an "interview fee" to put them in front of fake "industry tycoons" for that one big shot at their dream jobs.
Voice Cloning

Alongside deepfake videos and images, synthetic voice is another addition to the list of such technology-powered deceptions.
In one such case, Jennifer DeStefano, a resident of Arizona, heard her 15-year-old daughter sobbing on the phone while a male voice demanded $1 million for her release. The mother was convinced her child was in danger. Before she could act on the demand, a friend called her husband and confirmed that her daughter was safe.
Ms. DeStefano later recounted the ordeal, saying she fully believed the voice, not just by its sound but by the overall tonal delivery, which exactly matched how her daughter usually speaks.
This is voice cloning, and it is done using raw audio samples of the source.
Deepfake Honeytraps

In the traditional sense, a honeytrap is a scheme in which the subject is lured into a relationship to extort sensitive information. Honeytraps usually target high-profile individuals, such as members of the armed forces, top government officials, and serving politicians.
With deepfake technology, these honeytraps have moved mostly online and can be run by anyone at scale. The victim might receive relationship requests from attractive strangers on social media, which quickly turn intimate, with the end goal of financial or informational extortion.
Such deepfake scams are generally covert operations that run their due course before springing the final trap.
Tech Support Scams

Tech support scams are very common. You have probably heard of scams in which a fake "Microsoft" representative asks you to pay to fix your computer's "serious" issues.
Before deepfakes, the scammers' accents or unusual speech patterns often gave the game away. Now, armed with AI, they can easily fool unsuspecting users into believing they are talking to a legitimate company representative.
Fake Testimonials

Many of us read customer reviews before buying a product. Likewise, checking video testimonials is another shopping habit that has proved helpful until now.
This type of deepfake scam features multiple people vouching for a bad product, swaying purchasing decisions and wasting customers' money and time.
There are many more!
What do you see on the internet apart from text?
I would say videos, images, and audio. All of these can be manipulated with this AI technology, making the internet a breeding ground for fraud in the not-so-distant future. Moreover, deepfake creation is getting easier by the day, making it hard for the average internet user to stay accurately informed.
In a situation where there is little to stop the widespread misuse of this technology, the least we can do is stay informed.
PS: As for its fun applications, there are deepfake apps for creating some of the best memes to take social media by storm.
Hitesh works as a senior writer at Geekflare and dabbles in cybersecurity, productivity, games, and marketing. He also holds a master's in transportation engineering. His free time is mostly spent playing with his son and reading.
Narendra Mohan Mittal
Narendra Mohan Mittal is a Senior Digital Branding Strategist and Content Editor with over 12 years of versatile experience. He holds an M-Tech (Gold Medalist) and B-Tech (Gold Medalist) in Computer Science & Engineering.