Agent AI rewrites the basics of BPO, but risks are lurking

AI Basics


When ChatGpt entered the scene in 2022, Alarm Bells rang across the BPO industry. Millions of people feared unemployment for automation, but most believed that only low-skilled, repetitive roles were at risk. Today, Agent AI appears to be poised to deal a much more devastating blow.

Late last year, Mumbai-based BPO giant WNS Global Services gave us a glimpse into what its future looks like. We have built an agent AI platform for our UK client Animal Friends Insurance (AFI).

The platform has nearly automated insurance processing, reducing time by over 65%. Considering YouTube videos posted by Indian IT lobby group Nasscom, the platform dramatically reduced processing times.

This is because this type of agent AIS reads the customer's claims, interprets fine print hidden in dense policy documents, reviews historical records, assesses validity, and makes final calls – either approved or rejected. And they don't stop there. It also triggers payments and sends its own customer notifications. From start to finish, Agent AI functions like a completely autonomous worker – an autonomous billing processor.

How does Agent AI deal with different things from BPOS?

Agent AI is not a GEN AI platform like ChatGPT waiting for a prompt. It is also not a robotic process automation that unconsciously follows pre-coded scripts. Agent AI is thinking. I'll decide. And do it yourself. In other words, it's like a smart, experienced employee.

Vinod Goje, VP of Bank of America, specializes in AI product development. His specialization also includes customer experiences.

“Agent AI has redrawn the blueprint for BPO. The once labor-intensive, layered support is becoming more and more autonomous,” said Vinod Goje, AI expert at Bank of America.

“The whole layer of human agents is being exchanged and not enhanced,” confirms Goje, dismissing the notion that AI simply helps humans to make their jobs better.

“At the voice-based center, we don't just look at the deflection of the call. We see full-resolution conversations processed end-to-end by AI agents who can think, make decisions and act.”

Dave Trier, Vice President of Products at Modelop, gives you a glimpse into how human roles evolve.

Based in Chicago, Illinois, Modelop offers AI lifecycle automation and AI governance software for large organizations.

When asked if Agent AI might look like a Chinese factory run by robots, Trier replied “Not at all.”

“Agent AI is an application, not hardware, so changes are more operational than visual.”

“The “robots” are invisible and will be dealing more and more with Tier-1 support, simple claims, and transaction processing without warning humans. ”

Unprecedented conversion

There is no doubt that the arrival of Agent AI has caused a paradigm shift in the BPO industry. What was once outsourced to human agents overseas may soon be “enclosed” through AI within the corporate walls.

“We're transitioning from outsourcing to AI-driven in-ceiling,” writes Souren Sarkar, CEO of Miami-based BPO company Nexval Group, on LinkedIn Post.

“(Agent)AI can scale operations exponentially without the need for the employment, training or management of an additional workforce, a field that BPO providers traditionally thriving.”

Sarkar adds another advantage: compliance. Traditional outsourcing often involved sharing sensitive data across borders. Agent AI, on the other hand, can be deployed locally within the client's infrastructure. Minimizes risk.

Given his arguments, large companies will likely develop and deploy their own agent AI platforms. But small businesses – those who can't afford such tools may rely on traditional outsourcing.

Hidden risks of AI agents

Despite all the promises, Agent AI carries serious risks. Especially because it acts based on what you learn and think.

Dave Trier is Modelop's Vice President of Products.

“Agent AI can take action based on what they learn. This means that flawed instructions or misunderstood feedback can create a cascade of errors,” Trier says.

“For example, if an agent finds out that escalates tickets is 'better', they may start to suppress the actual problem. It hurts customers and SLAs. ”

“In amateur terminology, giving agent AI the wrong rules is like sending a new employee to do their job with bad training, and then making decisions without supervision.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *