There used to be at least a few reliable red flags to help detect digital scammers: bad grammar, clumsy emails, money demands for something you never ordered.
But fraud has entered a new era, and consumers must learn a variety of ways to protect themselves, experts say.
It’s all due to the evolving tools of artificial intelligence, or AI, which may offer benefits to humanity but is also giving scammers new ways to ply their trade.
“I am concerned we’re going to be seeing a significant increase in the nefarious use of this technology,” says Eva Velasquez, president and CEO of the Identity Theft Resource Center in El Cajon, California. “We know scammers are always a few steps ahead of those of us on the fraud-fighting side. For the average person, they’re going to be even further ahead of them.”
“As much as this can be a gift to humanity,” she adds, referring to artificial intelligence, “this is going to be a gift to scammers.”
Who’s calling?
Several recent cases have shown how easy it has become to clone someone’s voice and use it to create scary — and threatening — scams.
For example, cyber experts say AI could be behind the case of a Houston-area couple who lost $5,000 earlier this year when someone sounding exactly like their son called to request bond money after his car supposedly struck a pregnant woman and the authorities put him in jail.
AI is also suspected in the case of a Phoenix-area woman who nearly lost $1 million when what sounded like her daughter’s frantic voice told her over the phone that she had been kidnapped and her captors were demanding a ransom.
In both cases, the “emergencies” were complete fabrications, with thieves suspected of using AI-powered voice-cloning technology to convince frightened relatives that loved ones truly were in peril.
Repeating this scam has become remarkably easy: research by McAfee, a global security software company, found that in one instance, voice-cloning software using AI needed only three seconds of an audio clip to reproduce a voice with 85% accuracy — good enough when crooks limit how long calls last and threaten imminent harm to the person allegedly in their clutches.
No more than the most basic of skills is needed to capture a snippet of the audio people post online, whether in YouTube or Instagram videos, podcasts or Facebook conversations.
“Who among us doesn’t have three seconds of audio online somewhere?” Velasquez says. “As the technology can do more with less and less … the return on investment for scammers is greater. That’s the danger that is posed.”
The so-called “grandparent scam” has been around for a long time, playing on a real fear that a relative is in a dire situation and needs immediate help. Its hallmarks are demands for quick, untraceable payment — by cryptocurrency, cash withdrawn from ATMs or the low-tech method of gift cards. There also will be a sense of urgency, requiring the victim to pay quickly without telling anyone else.
“The aim, most often, is to trick people out of hundreds, if not thousands of dollars,” according to a McAfee blog post. “This is yet one more way that AI has lowered the barrier to entry for cybercriminals.”
How you can fight back
Cybercrime experts say people can greatly reduce their chances of becoming a victim by taking the following precautions:
- Choose a family “safe” word or words that you share only with a select few. That way, even if the voice on the other end sounds like a relative, if he or she doesn’t know the safe word, that’s a red flag.
- No matter how convincing the call seems, distrust the voice and verify the story. Call the relative directly. If you don’t get a response, try another family member or friend who might know their whereabouts.
- If the person is allegedly in custody, call the police to confirm. Use a phone number you know to be legitimate — not a number provided by the person who claims to be holding your loved one.
Who’s texting?
Similarly, advanced technology has made phishing schemes more sophisticated.
That makes it harder to tell when a text or email is bogus or when someone assumes a fake persona aimed at persuading people to pay money, share personal information or click on a malicious link. The impostor might claim to be with the IRS, or a tech support company or PayPal or even from the local courthouse regarding a supposed failure to report for jury duty.
Such impostor scams caused nearly $2.7 billion in losses last year, up from $2.4 billion in 2021, according to the Federal Trade Commission’s annual Consumer Sentinel Network report. In terms of money lost, impostor scams trail only investment scams, which cost victims $3.8 billion in 2022.
The Identity Theft Resource Center, for one, has revised some of the advice it gives consumers in the face of artificial intelligence, quantum computing and other advanced tools.
Rather than relying on warning signs, it’s safer to take the extra step of independently verifying unsolicited contacts, Velasquez says.
“We’re making it very clear — it’s not about, ‘Can I spot it?’” she said. “Instead, if you are not the one who initiated contact, you must verify the source. I don’t care how legitimate it looks.”
For example, a common red flag has always been poor grammar or weird syntax. Now, AI tools can be used not only to create proper sentences, but also for more effective language in order to fool people.
But Amy Nofziger, director of victim support for the AARP Fraud Watch Network, says that while scamming tools have evolved, the ground rules for evading phishing and other fraudulent schemes remain the same.
Beware unsolicited contact
It often comes down to questioning who is asking for money and why, and looking upon an unsolicited contact with great skepticism, Nofziger says.
“Even if my child called me and said he was in trouble and I needed to send him prepaid gift cards? No. Nobody needs prepaid gift cards to be paid,” she says. “No matter how much I swear that was his voice, that is not normal.”
Following are frequently cited tips to avoid phishing scams:
- Refuse to be tempted by links that could unleash malware or lead to identity theft.
- Don’t provide any personal or financial information in response to an unexpected request. Legitimate organizations, businesses or government agencies won’t call, text or email to ask for a Social Security or credit card number.
- Even if the communication is from a company with which you do business, it’s still wise to avoid links and contact the organization directly. Find contact information independently without relying on the number you were given or that shows up on caller ID.
In general, don’t hesitate to get help or double-check something that doesn’t sound right. Do some research, bounce it off of a friend or relative or reach out to professionals to confirm what you are being told.
“This is a really complicated space,” Velasquez says. “It’s really hard to know what to do. Don’t be ashamed and embarrassed to get help.”
Here is where you can find help
- The Identity Theft Resource Center runs a free hotline at 888-400-5530. A live chat is available at its website.
- AARP operates a Fraud Watch Network Hotline at 877-908-3360.
Ellen Marks has been a journalist for more than three decades, including stints in Boise, Idaho, Seattle and Albuquerque, New Mexico. She retired from the Albuquerque Journal three years ago, but continues to do regular assignments for the newspaper.
This article is reprinted by permission from NextAvenue.org, ©2023 Twin Cities Public Television, Inc. All rights reserved.
This story originally appeared on MarketWatch.