Thursday, May 15, 2025
AI Scams To Watch Out For And How To Detect Fake Messages

Finances FYI Presented by JPMorgan Chase

Technology has rapidly evolved into the current artificial intelligence (AI) age. As individuals and businesses continue to grasp AI’s ever-changing impact on society, many are still wary of its pros and cons.

According to a 2024 Bentley University and Gallup report, 56% of Americans believe AI does equal amounts of harm and good.

Despite AI's many benefits, bad actors stoke public fear and concern by using AI tools to deploy sophisticated scams through emails, texts, and other forms of communication.

So, who is at risk for AI scams, and how can you detect an AI-generated text or email scam?

Who Is at Risk for an AI Scam?

The National Council on Aging reports that older adults are a common target for scammers. However, scammers target people of all ages and technical skill levels, including corporate executives.

Anyone can fall for a convincing, malicious AI-generated message. Today's fake communications are highly personalized, a far cry from the old phishing scams claiming you had won prize money and needed to respond to a foreign sender.

How Is AI Helping Scammers Trick People?

In the past, scammers filled messages with red flags like poor grammar, spelling errors, and awkward phrasing.

Currently, cybercriminals can use generative AI tools like ChatGPT to produce error-free, convincing messages. They can even clone the voice of a friend or loved one and make it sound like they are in distress and need money.

Ars Technica also notes that “AI bots can quickly ingest large quantities of data about the tone and style of a company or individual and replicate these features to craft a convincing scam.”

Bots can also scan someone’s online and social media footprint to figure out the topics to which they will likely respond.

Common AI Scams to Watch Out for

Cunning bad actors leverage powerful AI tools to defraud unsuspecting people and businesses. Large language models (LLMs) can follow incredibly detailed user prompts and instructions to generate humanlike content that specifically targets individuals and companies.

AI tools can also generate deepfake scam photos, videos, and audio clips to make it look like someone said or did something they never really did.

For example, film director Jordan Peele used AI to sync his own voice and facial movements with footage of President Barack Obama to create a public service video about misinformation. Here are some of the more common AI scams:

  • Phishing Email Attacks – Scammers use generative AI to create convincing phishing emails and fake websites. These messages may appear to come from your bank, a company's help desk, or a shopping website, among others. They often contain an urgent request or a phony account notification that directs you to a fake website, asks for personal information, or prompts you to download an attachment that installs information-stealing malware on your computer. Harvard Business Review predicts phishing will increase in both quality and quantity over the next few years.
  • Text (“Smishing”) Scams – Scammers can now leverage AI to send authentic-sounding text messages: a request to retrieve or reroute a lost or undeliverable package, a notice that payment for an online order was declined, an “HR recruiter” offering a job after viewing your LinkedIn profile, and more.
  • Vishing/Phone Scams – In vishing or phone scams, fraudulent callers use Voice over Internet Protocol (VoIP) technology to make their calls seem legitimate while impersonating government agencies, banks, and other companies.

How to Detect Scam AI Communications

With so many threats, it can feel overwhelming to determine whether a communication is genuine or an AI scam. Here are some signs a message, call, or video is a scam:

  • Urgent request or threat – Scammers often make “urgent” requests to “act now” or threaten dire consequences.
  • Bogus email address, caller ID, SMS sender, or voice – If you don't recognize an email address, phone number, sender, or voice, don't automatically trust it or follow its instructions. An AI-generated voice and its background noise may also sound unnatural or overly formal. Trust your instincts if a message looks or sounds suspicious.
  • Unusual request – If you receive a request to send money or share personal information like your physical address or account numbers, don’t comply. You should NEVER give private, personal, or financial information to an unknown person or corporation.
  • Odd payment requests – If a message asks for an odd form of payment, like cryptocurrency or foreign currency, chances are it’s a scam.

To protect yourself against AI scams:

  • Never give personal information over the phone or online.
  • Never send money to a stranger.
  • Limit the information you share on social media or online.
  • Verify strange messages or requests by asking the caller or sender a personal question that only the real person could answer.
  • Invest in online security tools.

Learning more about potential AI and online scams can help protect your information and save you from a malicious cyberattack.

Finances FYI is presented by JPMorgan Chase. JPMorgan Chase is making a $30 billion commitment over the next five years to address some of the largest drivers of the racial wealth divide.