Criminals are increasingly using generative artificial intelligence to craft sophisticated scams. To protect yourself from AI scams, tap into your intuition: if something feels off, it probably is. It’s also smart to double down on the foundational security practices that can shield against all scams, whether or not they’re AI-assisted.
Advancements in artificial intelligence, or AI, have been making headlines — and changing the landscape of identity security.
Though you're probably hearing more about it recently, AI is nothing new. Over the last decade, predictive artificial intelligence has been used to sort through massive amounts of data and generate predictions and recommendations.
In recent years, though, generative AI — or artificial intelligence that gives computers the ability to produce original content, audio, and imagery — has hit the mainstream, and scammers are taking note.
With software like ChatGPT and Bard, it’s now possible for fraudsters to create better, more convincing scams.
“AI has so many useful applications — but with the convenience comes security concerns,” says Doug Kaplan, Senior Vice President of Operations at Allstate Identity Protection.
Let’s take a closer look at the risks, and how to safeguard against them.
Common AI-assisted scams
It didn’t take long for bad actors to take advantage of recent AI breakthroughs.
For example, in March 2023, the Federal Trade Commission (FTC) warned of a startling new twist on phone scams: fraudsters can now use generative AI software to clone a person's voice and use it to run highly convincing imposter schemes.
In one common version, the scammer calls their target pretending to be a family member in distress. The victim believes they're sending money to help a relative in an emergency, but it's all a ruse, and the scammer pockets the funds.
In another increasingly common AI scam, bad actors use AI software to quickly craft more convincing phishing emails and text messages.
One of the biggest dangers of artificial intelligence is that AI-generated messages increasingly sound, and read, like they came from real people.
3 ways scammers misuse generative AI
Create highly convincing phishing messages in minutes
Generate sophisticated malicious code that infects devices
Clone a loved one’s voice to run a phone scam
“Historically, scammers have been vulnerable to human errors, such as misspelled words and flawed grammar,” explains Kaplan.
“But because AI is a learning technology, all of those red flags that we’re used to seeing in scams can now be hidden by the machine,” adds Kaplan.
“AI can learn to create such sophisticated messages that people may not know the difference between what is legitimate and not.”
So, if AI is helping fraudsters make better and more convincing scams, how do we fight back?
Combat AI scams by tapping into your intuition
Security experts like Kaplan stress that we still have the power of scrutiny and skepticism on our side.
“The best way to avoid an AI scam, or any scam, is to question everything that seems off,” says Kaplan. “Scrutinize any message you get that’s asking for data, or asking for anything in general.”
For example, if a loved one calls and asks you for personal information or money out of the blue, Kaplan suggests taking a moment and asking yourself if this seems normal. “Ask them questions about the situation and, if it feels off, call or FaceTime the person back to make sure it’s them.”
When it comes to phishing texts, emails, and messages sent via social media, Kaplan says the same fundamental security tips you’ve relied on for years will work against AI scammers too:
Never click on a suspicious link or unsolicited file. Phishing messages may be more believable than ever thanks to AI, but the bottom line stays the same: If you weren’t expecting a link or document, don’t click on it. If it came from someone you know, ask them about it (using the email or phone number you usually use to communicate with them) before responding.
Pay attention to the sites you visit. Remember that scammers can plant authentic-looking ads on search engines and social media. If you land on an unfamiliar site, check that the connection is secure (look for "https" and the padlock icon in the address bar), and never enter personal or financial information on an unsecured website.
Practice good password hygiene. “Create strong passwords that are difficult to guess, change those every few months, and utilize multi-factor authentication to protect your data,” says Kaplan. And whatever you do, don’t use the same password more than once. This will help keep your private data safe and minimize your risk of account takeover.
Keep your phone and computer software up to date. Regular device updates can be a minor inconvenience, but they have a major payoff. Built-in antivirus software relies on regular updates to stay effective; keeping your devices current can help ward off malware attacks.
How will AI impact the future of cybersecurity?
Though AI in the hands of a scammer is cause for concern, artificial intelligence can also be used to help people keep their identities safe.
The Allstate Digital Footprint® tool, for instance, uses machine learning to show you which companies store your information — and helps you send requests to delete it. By controlling what exists online, you can help prevent theft or misuse of your personal information.
While AI-generated content poses risks, it’s important to remember that artificial intelligence can be a valuable tool when used appropriately.
“AI is, and will continue to be, instrumental in preventing identity theft from occurring,” says Kaplan.