Deepfake AI used to scam ISP rate discounts via video calls



Deepfake tricks: AI’s role in tricking ISPs into lower rates

In an era where artificial intelligence blurs the line between reality and fiction, a new kind of consumer revolt is emerging. Frustrated by rising internet prices, some tech-savvy individuals are turning to deepfake technology to negotiate, or rather manipulate, lower rates. This isn’t simple haggling over the phone: it is a sophisticated ruse that uses AI-generated audio and video to impersonate executives and insiders, tricking customer service representatives into granting fraudulent discounts. A closer look at this phenomenon reveals that what started as a clever hack is now sounding the alarm about fraud, ethics, and the vulnerabilities of telecom giants.

The tactic gained notoriety through a personal account detailed in a recent Business Insider article. An anonymous user explained that he used an AI tool to create a deepfake video call imitating a high-ranking executive at an internet service provider (ISP). Posing as an internal authority, he convinced support agents to cut his monthly bill by nearly 40%. The case is not isolated: posts on X (formerly Twitter) by users like DANVZLA describe similar experiments, one linking directly to the Business Insider article, and point to a growing trend in 2025 in which AI lets ordinary consumers challenge companies’ pricing.

But how does it work? Deepfake technology, powered by generative AI models like those from OpenAI and dedicated tools like DeepFaceLab, lets users synthesize realistic audio and video. In rate reduction fraud, individuals record snippets of actual executive speech (often excerpts from earnings calls or public interviews) and feed them into AI software to generate custom scripts. The result? A convincing impersonation that can “approve” discounts, fee waivers, and even expired promotions during live interactions.

How it works

The process begins with scouting. Fraudsters research the ISP’s organizational structure through LinkedIn and company websites to identify key people, such as regional managers and billing supervisors. Using publicly available data, they create deepfakes that reproduce not only a target’s voice but also industry-specific mannerisms and terminology. According to a report from Veriff, such AI-powered fraud accounts for one in 20 failed identity verifications in 2025, and deepfakes are becoming cheaper and more accessible thanks to open-source tools.

Once the deepfake is ready, the execution phase involves contacting customer service through a video-enabled channel, the kind of “enhanced support” many ISPs now offer. The fake executive may claim a system error or a special loyalty program and instruct agents to apply reductions. In Business Insider’s account, the user said agents, overwhelmed and undertrained, complied without rigorous verification, highlighting significant gaps in verification protocols.

This is not without risk. While some frame it as a victimless backlash against monopolistic pricing (according to FCC data, U.S. internet prices have risen an average of 15% over the past year), the legal implications are serious. Impersonation for financial gain can bring charges of wire fraud and identity theft, carrying fines of more than $10,000. Still, the tactic’s appeal persists on online forums where users share tutorials and success stories.

Increasing risks in the AI era

The broader impact extends beyond individual savings. Cybersecurity experts warn that these consumer-level scams are a gateway to larger-scale fraud. A CNBC analysis earlier this year predicted that deepfake scams could rob companies around the world of billions of dollars, with the telecommunications industry particularly vulnerable due to its vast customer base and reliance on remote interactions. In 2025, advances in AI have evolved fraud from simple voice cloning to real-time video manipulation, making it nearly impossible to detect without advanced biometrics.

Posts on X amplify these concerns. Rod D. Martin’s thread, viewed more than 42,000 times, echoes the FBI’s warning about AI deepfakes impersonating officials, which have caused losses of more than $50 billion worldwide. Cyber Insurance News articles discuss enterprise tools such as Reality Defender’s Real Suite, which is designed to combat such deception by analyzing video for anomalies like uneven lighting and audio artifacts.
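Commercial detectors like Reality Defender’s are proprietary, and this article does not describe their internals. As a toy illustration of what “analyzing audio for artifacts” can mean in practice, the sketch below computes spectral flatness, one classic signal feature sometimes used (among many others) to characterize how noise-like or tonal an audio segment is; it is not a deepfake detector on its own.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Spectral flatness (Wiener entropy): the geometric mean of the
    power spectrum divided by its arithmetic mean. Values near 1.0
    indicate a noise-like spectrum; values near 0.0 indicate tonal
    content. Real detection pipelines combine many such features."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    power = power[power > 0]  # drop empty bins to keep the log defined
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)

# Toy demonstration: a pure tone is highly tonal, white noise is flat.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0          # 1 second at 16 kHz
tone = np.sin(2 * np.pi * 440 * t)      # 440 Hz sine wave
noise = rng.standard_normal(16000)
assert spectral_flatness(tone) < 0.01 < spectral_flatness(noise)
```

The point of the example is the workflow, not the feature itself: suspicious audio is transformed into the frequency domain and summarized by statistics that can be compared against what natural speech normally produces.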

ISPs are scrambling to respond. Major providers like Comcast and Verizon have begun implementing AI-driven verification systems, such as liveness detection, which requires real-time gestures to prove a caller is human. But as an ABC News article about deepfake scams involving politicians points out, the technology is getting cheaper, democratizing fraud and fueling calls for stricter regulation.
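The providers’ actual implementations are not public, but the principle behind liveness detection is challenge-response: issue an unpredictable prompt with a short expiry, so a pre-rendered deepfake cannot have the answer ready. The minimal sketch below illustrates only that flow; the gesture names, the `LivenessChallenge` class, and the idea that the observed gesture arrives as a string (rather than from a computer-vision model) are all hypothetical simplifications.

```python
import secrets
import time

# Hypothetical gesture prompts; a real system would draw from a larger,
# vision-verifiable set.
GESTURES = ["turn head left", "blink twice", "raise right hand", "smile"]

class LivenessChallenge:
    """Issue a random gesture challenge that expires quickly.

    A production system would verify the gesture with computer vision on
    the live video feed; here the observed gesture is passed in directly
    as a stand-in for that step."""

    def __init__(self, ttl_seconds: float = 10.0):
        self.ttl = ttl_seconds
        self.gesture = secrets.choice(GESTURES)  # unpredictable prompt
        self.issued_at = time.monotonic()

    def verify(self, observed_gesture: str) -> bool:
        fresh = (time.monotonic() - self.issued_at) <= self.ttl
        return fresh and observed_gesture == self.gesture

challenge = LivenessChallenge()
assert challenge.verify(challenge.gesture) is True   # correct, in time
assert challenge.verify("wave") is False             # wrong gesture
```

The randomness and the expiry window are the load-bearing parts: a looped or pre-recorded deepfake video cannot perform a gesture it did not know would be requested.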

Corporate countermeasures and ethical dilemmas

For those in the industry, deepfake billing fraud highlights a pivotal shift in the relationship with customers. Telecom executives, speaking anonymously, acknowledge that outdated training leaves agents susceptible to manipulation. An Incode report describes similar ISP vulnerabilities, including cases in which deepfakes tricked employees into making large transfers. In response, some companies are piloting blockchain-based identity verification to record and authenticate internal communications.
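The article does not say how those pilots are built, but the core property a blockchain-style record provides here is tamper evidence: each logged communication commits cryptographically to everything before it, so a retroactively altered “executive approval” is detectable. The sketch below shows that idea with a plain hash chain (the messages and ticket number are invented for illustration; a real deployment would add signatures and distributed replication).

```python
import hashlib
import json

def _entry_hash(message: str, prev_hash: str) -> str:
    payload = json.dumps({"message": message, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_entry(chain: list, message: str) -> dict:
    """Append a message to a hash-chained log. Each entry commits to the
    previous entry's hash, so any later edit breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"message": message, "prev_hash": prev_hash,
             "hash": _entry_hash(message, prev_hash)}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every hash from the start; any mismatch means tampering."""
    prev_hash = "0" * 64
    for entry in chain:
        if (entry["prev_hash"] != prev_hash
                or entry["hash"] != _entry_hash(entry["message"], prev_hash)):
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_entry(log, "Director approved 10% loyalty discount, ticket #1234")
append_entry(log, "Discount applied by agent 42")
assert verify_chain(log)
log[0]["message"] = "Director approved 40% discount"  # retroactive tampering
assert not verify_chain(log)
```

An agent asked to apply a discount “approved earlier” could then check that the approval actually exists in an unbroken chain, rather than taking a convincing face on a video call at its word.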

Ethically, this raises questions about power imbalances. Consumers argue that ISPs’ opaque pricing justifies creative negotiations, especially in markets with limited competition. But as JPMorgan’s AI fraud insights highlight, such tactics can erode trust and result in higher costs being passed on to all customers through increased security measures.

Looking ahead, experts predict escalation. A News Channel 3-12 article reports that AI, especially deepfakes, is facilitating financial fraud. For ISPs, investing in AI defenses is non-negotiable, but it also means rethinking customer service in an era when seeing is no longer believing.

Future-proofing against synthetic fraud

As 2025 progresses, further disruption is likely at the intersection of AI and consumer activism. Innovations like agentic AI, discussed at FinanceAsia’s Singapore FinTech Festival, could automate detection, but they also risk alienating legitimate users with overly stringent checks.

Regulators are stepping in. The FTC has strengthened its guidelines, urging companies to adopt multi-factor authentication for high-value transactions. Meanwhile, X posts by figures like Mario Nawfal detailing alarming deepfake scams, including a $25 million corporate heist, serve as a wake-up call for the telecom industry.
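The FTC guidance cited here does not prescribe a specific mechanism, but one widely deployed second factor is the one-time password: a code derived from a shared secret that a deepfaked face or voice cannot produce. A minimal standards-based sketch, implementing HOTP from RFC 4226 (TOTP, the familiar 30-second authenticator code from RFC 6238, is the same computation with a time-based counter):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226).

    TOTP (RFC 6238) is the same computation with
    counter = floor(unix_time / 30)."""
    # HMAC-SHA1 over the big-endian 8-byte counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Test vectors from RFC 4226, Appendix D
secret = b"12345678901234567890"
assert hotp(secret, 0) == "755224"
assert hotp(secret, 1) == "287082"
assert hotp(secret, 9) == "520489"
```

Requiring such a code before approving an unusual discount would have stopped the scam in the Business Insider account: the deepfake can imitate an executive’s face and voice, but not a secret held on the executive’s own device.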

Ultimately, deepfake billing fraud is a symptom of a broader technological upheaval: a reminder to consumers of the fine line between ingenuity and illegality, and a challenge to ISPs to innovate. As AI evolves, defenses must evolve with it, ensuring that the pursuit of cheaper connectivity does not undermine the trust fabric of digital interactions.

Balancing innovation and integrity

Industry leaders now advocate collaborative solutions. Partnerships with AI companies, such as those mentioned by ScamWatchHQ, which reported more than $200 million in deepfake losses this year, aim to develop open standards for fraud prevention.

Education also plays an important role. Training programs for customer service teams, such as the one proposed by CyberGuy, focus on skepticism and verification protocols.

In this cat-and-mouse game, those who adapt the fastest will be the winners, transforming AI from a tool of deception to a shield against it. For now, the deepfake ruse serves as a stark reminder that in the age of synthetic reality, any call can be a scam.


