How Scammers Poison AI Results With Fake Customer Support Numbers

How Scammers Poison AI Results With Fake Customer Support Numbers

How Scammers Poison AI Results With Fake Customer Support Numbers

Scammers love to seed the internet with fake customer service numbers in order to lure in unsuspecting victims who are just trying to fix something wrong in their life. Con artists have done it to Google Search for years, so it makes sense that they’ve moved on to the latest space where people are frequently searching for information: AI chatbots.

AI cybersecurity company Aurascape has a new report on how scammers are able to inject their own phone numbers into LLM-powered systems—resulting in scam numbers appearing as authoritative-sounding answers to requests for contact information in AI applications like Perplexity or Google AI Overviews. And when someone calls that number, they’re not talking with customer support from, say, Apple. They’re talking with the scammers.

According to Aurascape, the scammers are able to do this through a wide variety of different tactics. One way is by planting spam content on trusted websites, like government, university and high-profile sites that use WordPress. This method requires gaining access in ways that may be more difficult but aren’t impossible.

The easier version of this is planting the spam content on user-generated platforms like YouTube and Yelp or other sites that allow reviews. The scammers inject their phone numbers but include all of the likely search terms that would allow the number to find their intended target, such as “Delta Airlines customer support number” and countless variations.

All of that is normal for scammers trying to juice Google Search results. But Aurascape notes it’s the structure of the data that can set it apart for LLMs. By posting the likely search terms in the summarization formats that AI loves to deliver, it has a higher chance of success as these AI chatbots scour the internet for an answer.

The new report refers to Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) as distinct from SEO, coaxing the AI to retrieve the content and treat it as authoritative because of the way it’s being presented. To be clear, the scam outputs in the Aurascape report are simply being retrieved and reproduced in the context of an individual AI output, not corrupting the LLM itself.

Also Read  Tinder is Testing AI Features That Look Through Your Camera Roll

As detailed in the report, the scammers utilize GEO/AEO techniques in html and PDFs uploaded to high-trust sites by:

That Emirates reservation number is one of the many examples Aurascape uses in the report to show how Perplexity delivered a scam phone number during tests. Google’s AI Overviews feature also delivered fake numbers.

Gizmodo did some quick testing and wasn’t able to duplicate a fake customer support number. But it seems pretty clear that AI companies are starting to take notice, especially given the specific tests conducted for this Aurascape report.

When Gizmodo asked Perplexity for the customer support number for Emirates Airlines, the bot responded:

The safest way to reach Emirates for reservations is to use the official contact channels listed on the Emirates website rather than third‑party numbers that appear in search results or PDFs, which are often spam or agency lines misrepresented as “official.”

Gizmodo pressed for the bot to just give us the number, but it responded:

There are multiple conflicting “Emirates reservations” numbers online, and many of them are actually third‑party agencies rather than Emirates itself, so none of the +1‑(8xx) numbers shown on generic guides can be trusted as an official line.

The bot told us to visit emirates.com to find the number. And we guess that is one way to fight back against your AI chatbot spreading misinformation and spam. Just stop it from spreading specific types of information altogether.

Also Read  This awesome JBL Bluetooth speaker has crashed to an incredible low price ahead of Prime Day – and now I might have to buy it

Back in 2022, we wrote about the different scam websites that were successfully getting victims to download what they thought were Canon printer drivers. While the new report from Aurascape didn’t address downloadable drivers as a potential attack vector, we can imagine that would be something scammers are already trying.

After all, AI chatbots should only be trusted when they show their work. But the flip side of that is the necessity of the chatbot providing hyperlinks where information can be double checked. Or, in this hypothetical, where software could be downloaded. Just make sure you scrutinize that URL carefully. There’s a big difference between usa.canon.com and canon.com-ijsetup.com. The latter is a phishing website.



Source link

Back To Top