AI Scams

01/06/2025

The clearest example of scammers adopting new technology is the explosion of artificial intelligence, and with it, AI-powered scams.

In December 2024, the FBI posted a public service announcement listing some of the ways criminals use generative AI to trick victims. The Global Anti-Scam Alliance (GASA) also highlighted the growing role of generative AI in scams around the world, noting that deepfake-related crime increased by more than 1,500% in the Asia-Pacific region from 2022 to 2023.

Generative AI tools are generally classified by the type of content they generate, such as text, images or videos. Scammers can use them to enhance several popular types of scams:

  • Phishing and smishing: Scammers can use AI to write more convincing and natural-sounding phishing emails and text messages.
  • AI images: Scammers can use AI-generated images to quickly create eye-catching websites, social media ads, fake identification documents, explicit photos and fake headshots for social media profiles.
  • Deepfake videos: AI-generated videos might be created to promote fake products, services or investments. Scammers also might use deepfake recordings or real-time face- and body-swapping tools to trick victims into thinking the scammer is someone else.
  • Fake and cloned voices: Scammers also use AI-generated or altered voices for their videos and for phone-based scams. Some AI tools can even mimic real accents.

The potential to create an image, video or voice of someone can make many existing scams more believable, and it opens up new opportunities for scammers.

Learn more >> Who Gets Scammed the Most?