Whether you like it or not, artificial intelligence has become part of our lives, and many people have begun to place full trust in chatbots. Even traditional search engines like Google and Bing now blend AI-generated results into their pages, while newer products such as ChatGPT and Perplexity respond to users directly in a chatbot format.
However, a new Netcraft report argues that the trust placed in these AI tools can be misguided and can leave users exposed to phishing attacks. The researchers say these tools are prone to hallucinations and can serve up inaccurate URLs that open the door to large-scale phishing scams.
According to the report, OpenAI's GPT-4.1 family of models was asked for the login URLs of 50 different brands across industries such as finance, retail, technology and utilities. The chatbot returned the correct URL in 66% of cases and an incorrect one in 34% of cases, which the report says could send users to potentially harmful sites.
Additionally, the report points out that there are over 17,000 AI-written GitBook phishing pages targeting crypto users while posing as legitimate product documentation or support hubs. Notably, these sites are clean, fast-loading and linguistically tuned for AI consumption.
This is a potentially major vulnerability. Users trust the links AI chatbots give them, and attackers who are aware of this loophole can register the unclaimed domains these chatbots hallucinate and use them to run phishing scams.
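One practical mitigation is to never trust an AI-suggested login URL on its own: check it against a curated allowlist of known brand domains before visiting, and treat anything off the list as suspect. A minimal sketch in Python (the allowlist entries and the allowlist-first design are illustrative assumptions, not from the Netcraft report):

```python
import socket

# Illustrative allowlist of official login domains a checker might maintain.
# Real deployments would source this from a vetted, regularly updated feed.
OFFICIAL_DOMAINS = {
    "wellsfargo.com",
    "bankofamerica.com",
}

def domain_resolves(domain: str) -> bool:
    """Return True if the domain currently resolves in DNS."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

def is_trustworthy(domain: str) -> bool:
    """Trust only allowlisted domains that actually resolve.
    An AI-hallucinated domain that is not on the list is rejected
    outright, whether or not an attacker has registered it yet."""
    return domain in OFFICIAL_DOMAINS and domain_resolves(domain)
```

Checking the allowlist first matters: a hallucinated domain that is currently unregistered would still resolve tomorrow if an attacker claims it, so DNS resolution alone proves nothing about legitimacy.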
The report also highlights a real-world instance in which Perplexity AI suggested a phishing site when asked for the official Wells Fargo URL.
Smaller brands are said to be hit harder by this type of AI hallucination, since they are less likely to appear in LLM training data.
Attackers are actively trying to exploit AI
Netcraft has also uncovered another sophisticated campaign that aims to “poison” AI coding assistants. Attackers created fake APIs designed to spoof the legitimate Solana blockchain, and developers who unknowingly wired these malicious APIs into their projects fell into the trap: their transactions were routed directly to the attacker's wallet.
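A basic defense against this kind of endpoint spoofing is to pin the exact API hosts a project is allowed to contact and reject everything else, instead of pasting endpoints from tutorials or AI suggestions. A hedged sketch of that pattern (the pinned host is Solana's documented public mainnet RPC; the validation approach is a generic hardening pattern, not something described in the report):

```python
from urllib.parse import urlparse

# Pin the exact RPC hosts this project may contact.
ALLOWED_RPC_HOSTS = {"api.mainnet-beta.solana.com"}

def validated_endpoint(url: str) -> str:
    """Return the URL only if it uses HTTPS and its host is pinned;
    raise ValueError otherwise."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"insecure scheme in endpoint: {url!r}")
    if parsed.hostname not in ALLOWED_RPC_HOSTS:
        raise ValueError(f"unpinned RPC host: {parsed.hostname!r}")
    return url
```

A look-alike endpoint copied from a poisoned tutorial (say, a homoglyph of the real hostname) fails the pin check before any transaction is ever sent.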
In another case, attackers published blog tutorials, forum Q&As and dozens of GitHub repositories promoting a fake project called Moonshot-Volume-Bot, hoping to get it indexed by AI training pipelines.
