Southeast Asia has become the global epicenter of cyber fraud, where tech scams meet human trafficking. In countries such as Cambodia and Myanmar, crime syndicates run industrial-scale "pig butchering" operations, using trafficked workers forced to scam victims in wealthy markets such as Singapore and Hong Kong.
The scale is staggering. One United Nations estimate puts global losses from these schemes at $37 billion. And it could soon get worse.
The rise of cybercrime in the region is already affecting politics and policy. Thailand reported a decline in Chinese visitors this year after a Chinese actor was lured to Thailand and forced to work in a Myanmar-based scam compound; Bangkok is now struggling to convince tourists that it is safe to visit. And Singapore passed an anti-scam law that allows law enforcement to freeze the bank accounts of scam victims.
Why has Asia become notorious for cybercrime? Ben Goodman, Okta's general manager for Asia-Pacific, notes that the region has some unique dynamics that make cyber scams easier to pull off. For example, the region is a "mobile-first market": popular mobile messaging platforms like WhatsApp, Line, and WeChat foster direct connections between scammers and victims.
AI is also helping scammers overcome Asia's linguistic diversity. Goodman says machine translation is a "wonderful use case for AI," but it also "makes it easier for people to be baited into clicking the wrong link or approving something."
Nation-states are also getting involved. Goodman points to allegations that North Korea plants fake employees at major tech companies to gather intelligence and bring much-needed cash into the isolated country.
A new risk: "Shadow" AI
Goodman is worried about a new AI-related risk in the workplace: "shadow" AI, or employees using personal accounts to access AI models without oversight. "That could be someone preparing a presentation for a business review, going into ChatGPT on their own personal account, and generating an image," he explains.
Employees may be unwittingly uploading confidential information to a public AI platform, creating "potentially a lot of risk in terms of information leakage."
Agentic AI could also blur the line between personal and professional identities. For example, an AI agent might be tied to a personal email address rather than a corporate one. "As a corporate user, my company gives me an application to use, and they want to govern how I use it," he explains.
However, he adds, "I never use my personal profile for a corporate service, and I never use my corporate profile for a personal service. The ability to delineate who you are, whether you're using your work services or your personal services, is how we think about customer identity versus corporate identity."
And for Goodman, this is where things get complicated. AI agents are empowered to make decisions on a user's behalf, which means it is important to define whether a user is acting in an individual capacity or a corporate one.
"If your human identity is ever stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much greater," Goodman warns.
