Generative AI Is Fueling an Unprecedented Wave of Online Scams, Leaving Small Businesses Reeling

Generative AI, once hailed as a revolutionary tool for boosting productivity and creativity, is now becoming a nightmare for online businesses. Entrepreneurs like Ian Lamont, who runs a Boston-based how-to guide company, have found themselves blindsided by AI-driven scams. Lamont discovered a fake LinkedIn job post for his company and even a fabricated profile of a supposed “manager” with an AI-generated face. Despite his quick response, more than twenty people had already contacted him about the fraudulent job offer, and he suspects many more were misled.

This is far from an isolated incident. Experts are calling this an “industrial revolution for scams,” as AI enables fraud at unprecedented speed and scale. Microsoft reports blocking nearly 1.6 million bot-based signups every hour. Small business owners, from publishers to designers, find themselves constantly fending off impersonations, phishing attacks, and deepfake communications, often with limited resources and little protection.

Real-World Damage and Growing Sophistication

The consequences of AI-enhanced fraud can be catastrophic. A finance clerk at Arup, an engineering firm, was tricked into transferring $25 million after attending a video call with deepfake versions of his colleagues. Meanwhile, companies like Japanese knife retailer Oishya have had their brands cloned to trick customers into handing over credit card details for bogus giveaways.

In recruitment, AI-created fake candidates are now common. New York-based tech recruiter Tatiana Becker says she’s seen a surge in deepfake applicants who mimic real people in interviews. Similarly, PR executive Nicole Yelland fell prey to a fake job interview set up by scammers using AI-generated documents and visuals to lure her into sharing sensitive information.

Experts like Rob Duncan of Netcraft emphasize that low-cost, widely available AI tools now make it easy for anyone to impersonate brands or employees. From fake store replicas to counterfeit communications, the barrier to committing fraud has dropped dramatically. In response, companies are increasing cybersecurity budgets and even adapting hiring practices, such as requiring real-time ID checks and asking open-ended interview questions to detect fakery.

The Fallout and What Lies Ahead

Fake ads, AI-generated content, and fraudulent reviews are also polluting the broader digital landscape. Dr. Jonathan Shaw of Melbourne’s Baker Heart and Diabetes Institute was shocked to find his likeness used in a fake video urging patients to abandon legitimate medicine in favor of a scam supplement. On platforms like Pinterest, small businesses such as Cake Life Shop in Philadelphia are receiving custom cake requests based on AI-generated images of impossible designs, leading to customer confusion.

Publishing is also under attack. AI-generated travel guides have flooded Amazon, bolstered by fake reviews, and are hurting genuine publishers like Frommer Media. Although the FTC has banned fake reviews, comprehensive regulation of AI-generated content remains largely absent. Watermarking efforts from companies like Google and Pinterest are in place but far from foolproof, raising concerns about overreliance on imperfect detection systems.

Robin Pugh of Intelligence for Good urges small businesses to stay vigilant and verify all transactions and communications. As Cake Life’s Nima Etemadi puts it, “Doing business online gets more necessary and high risk every year. AI is just part of that.”
