Overview

The same kind of artificial intelligence that brings animated characters and digital assistants to life with natural‑sounding, expressive voices is now being used by scammers. Known as generative AI, this technology can clone voices, create convincing images, and produce realistic videos—tools crooks use to make their scams more believable. One of the most effective ways to protect yourself: always verify the identity of anyone or any company that asks for money or promises a deal that seems too good to be true.

Quick, a moment of honesty while no one else is around: How many minutes or hours a day do you spend scrolling through silly videos with goofy sounds and over‑the‑top voice‑overs?

Welcome to the world of user‑generated social media. Watching these clips—many created with the help of generative artificial intelligence (generative AI)—has become the modern version of channel surfing. Generative AI is artificial intelligence that creates (or “generates”) images, video, audio, text, code, or combinations of these.

Unfortunately, that same AI that keeps us entertained is also being used to scam people. Experts warn that this technology could be tied to tens of billions of dollars in fraud losses in the U.S. over the next few years.

In fact, the financial consulting group Deloitte estimates that generative AI could be behind more than $40 billion in fraud losses in the U.S. by 2027. And don’t be too quick to assume the firm is exaggerating: its experts based that staggering figure on 2023’s actual reported generative AI-related fraud losses of $12 billion.

On the good news front, as with most scams, when you know the latest tricks, tools, and tactics the crooks rely on, you stand the best chance of warding off their attacks.

How scammers are using AI

There’s no escaping the talk around AI in every industry. One of the hottest headlines circulating comes from the research organization RAND, which reported that just over half of the students it surveyed used some form of AI to do their schoolwork. Perhaps more shocking, 53 percent of the teachers surveyed admitted to using it, too.

But while numbers like those dominate the headlines, crooks are busy using AI to sharpen their attacks on the population as a whole. In some cases, generative AI creates a voice that regurgitates whatever is typed. Other times, it might fabricate a video or photo in which a person appears to do whatever the scammer wants.

Voice cloning and audio deepfakes

Voice cloning mimics the sound, intonation, accent, and emotion of a person’s voice.

In a voice-cloning scam, the scammer feeds a few seconds of someone’s voice into a generative AI app. They may harvest these short audio samples from social media posts, voicemail greetings, or even answered phone calls.

They then type out text for the app to read aloud in the cloned person’s voice. How do cybercrooks use these cloned voices? They may target the co-workers or family members of the person whose voice they copied.

The cloned voice—a.k.a. “deepfake”—might persuade an employee to share valuable information or transfer funds, or it might convince a loved one to give the caller money.

Deepfake videos

It used to be funny when we could plaster a photo of our face onto the body of a dancing elf and make it talk. But today’s AI tech makes those old videos as obsolete as cave drawings.

Now, a scammer simply uploads a handful of clips of a particular person, and AI churns out a video in which that person appears to do and say whatever the bad actor wishes.

The deepfake‑detection company Sensity has reported that deepfake videos are most often created using the likenesses of well‑known, high‑profile business leaders. Why? Sensity suggests it’s likely because individuals perceived as ultra‑wealthy or highly influential carry added credibility—making fake investment advice or financial claims more convincing to potential victims.

Tesla CEO Elon Musk is a prime example. One thing is for certain: because there is so much online material of Musk available for sampling, his deepfakes can be very convincing. Typically, these videos involve “him” recommending a bogus cryptocurrency for viewers to invest in.

When a victim does, their money or their personally identifiable information (PII)—or both—gets stolen.

Deepfake companies

Other kinds of generative AI create fake ads, fake online storefronts, fake company websites, fake social media profiles, fake reviews, and whatever else is needed to make consumers think a fake business is real.

Fake retail sites function much as authentic ones do, complete with shopping carts, tracking information, confirmation emails and texts, and even interactive chatbots.

By the time the delivery date comes and goes, the shopper has already been robbed and the scammer is long gone.

Deepfake photos

Upload a photo of your dog and ask AI to place them in Paris, and the results can look surprisingly real—enough to spark a little travel envy. But when similar technology is used to create deepfake images of people, the implications are far more serious.

Fake images can be used to sway public opinion about current events. And scammers can use them to “catfish” (or mislead) victims for financial gain.

In one current scam, an AI platform creates a fake photo of a real person alongside someone they’ve never met. The scammer then hacks into the real person’s social media account and posts the fabricated image along with a message about a fundraiser for their terminally ill (and entirely fictitious) pal. Any generous follower who clicks on the provided link actually hands over their money to a scammer and opens themselves up to identity theft, too.

Phishing texts

Phishing refers to a scam in which a fraudulent message is used to trick recipients into giving their PII or money to criminals. It used to be that scammers sending phishing texts, emails, social media direct messages, and letters might give themselves away by coming off as unprofessional.

Crummy spelling and grammar could be a sign that a con was afoot. But thanks to generative AI, phishing messages can now be letter-perfect. And, according to IBM’s X-Force Threat Intelligence Index, a phishing email that used to take a scammer sixteen hours to write now takes generative AI five minutes.

Besides creating convincing written messages, generative AI can help scammers with “spear-phishing” attacks—hyper-personalized messages aimed at carefully chosen individuals or groups.

In other words, not only is generative AI being used to create content, but it’s also being used to speed up that creation and supercharge the attacks.

How to fight back against AI scams

To protect yourself in this world of generative AI scams, first, brush up on the tried-and-true tips that pertain to fraud safety as a whole:

  • Don’t answer phone calls from numbers you don’t know.

  • Don’t open unsolicited emails, links, or attachments from people or companies you don’t recognize.

  • Don’t pay anyone who demands to be paid via gift cards, cash reload cards, money transfers, or cryptocurrency.

  • Don’t engage with anyone who urges you to act quickly or threatens you.

Then, before you trust what you’re seeing or hearing, pause and ask yourself a few quick questions to spot the signs of a deepfake scam:

  • Does this image or video look slightly off? Are there blurred or undefined edges, overly smooth skin, strange lighting, or unnatural movements?

  • Does the voice sound natural? Are there odd pauses, inconsistent accents, or emotions that don’t match what’s being said?

  • Does this seem believable? If you pause and think it through, does it align with real‑life situations?

  • Have I verified the source outside of this message? Did I contact the person or company directly using trusted contact information—not links, ads, or replies provided here?

A deepfake scam relies on speed, surprise, and trust. Taking a brief pause is often all it takes to protect yourself from acting on something that isn’t real.