
As artificial intelligence (AI) becomes a central part of business operations, the cybersecurity and enterprise technology environment is changing rapidly. Companies in sectors such as manufacturing, logistics and telecoms face increasing complexity and risk, making the need for scalable, safe and efficient AI systems more urgent.
Verizon Business operates throughout the Asia-Pacific region and manages large-scale security operations. It uses AI for data analytics, to support threat detection, and to automate routine tasks within its security workflows.
In a conversation with TechCircle, Robert Le Busque, Vice President of Asia-Pacific at Verizon Business, outlines what it takes to help businesses integrate agentic AI into their infrastructure, cybersecurity and governance frameworks and scale AI adoption effectively. Edited excerpts:
From a global enterprise perspective, what are the most significant shifts agentic AI is already creating inside the enterprise?
When we talk to enterprise customers in the Asia-Pacific region, three main areas come into focus around AI, particularly when adopting agentic AI.

The first is the impact on existing technology infrastructure. During the training phase of AI adoption, data flows primarily from the organization to the AI platform, allowing the platform to learn from internal data. In the next stage, data begins to flow in both directions as the AI is used for inference, responding to queries or performing tasks. At this stage, performance improves as AI processing moves closer to the user, supporting faster and more responsive interactions. The final stage is automation, in which AI is integrated into existing systems and workflows, making decisions and taking action across a variety of software platforms.
Supporting these phases requires major changes to the network architecture. Companies need to treat AI as a continuous process rather than as a single platform, and plan their infrastructure accordingly.
The second area is security. AI introduces unique governance, risk and compliance (GRC) requirements that differ from traditional cybersecurity or application assurance models. Organizations need to re-evaluate their GRC frameworks specifically for AI use.

Additionally, data must be managed safely in both directions.
Traditionally, the focus has been on preventing data leakage. Now, businesses need users to interact safely with public large language models (LLMs) without exposing internal, personal or customer data. At the same time, they need to protect against harmful or untrusted data or code entering the organization through these models.
How does your company use agentic AI in areas beyond customer support, especially in cyber defense for large enterprises?
Verizon Business operates one of the world's largest enterprise-grade security operations, with nine security operations centers (SOCs) around the world. These centers process enormous amounts of data: each year, more than 29 trillion raw incident logs are analyzed for potential threats. From those logs, around 3.5 million alerts are generated, of which roughly 500,000 turn out to be real security incidents.

To manage this scale, we rely on a combination of people, processes and technology. On the technology side, AI and automation are critical. We use them in the early stages of data ingestion and analysis to filter out false positives and process routine alerts. This includes machine learning (ML) algorithms and AI platforms that help streamline the initial review of logs.
However, technology alone is not enough. Skilled analysts are essential. Context matters in security operations, and trained experts interpret and investigate the complex scenarios that AI alone cannot resolve. The combination of human expertise and automated systems, including LLMs and learning algorithms, allows us to track threats effectively and take action when needed.
In short, we use a variety of automation tools, ML and AI platforms tailored to our SOC environments to handle the volume of data and to identify and respond to real threats.
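To put that triage funnel in perspective, here is a back-of-the-envelope sketch using the figures quoted above; the variable names and the filter-rate arithmetic are illustrative, not Verizon's actual pipeline.

```python
# Rough view of the SOC triage funnel described in the interview.
# The three figures come from the text; everything else is illustrative.
raw_logs = 29_000_000_000_000   # ~29 trillion raw incident logs per year
alerts = 3_500_000              # ~3.5 million alerts generated from them
incidents = 500_000             # ~500,000 confirmed security incidents

# Fraction of raw logs that survive automated ingestion/ML filtering as alerts
alert_rate = alerts / raw_logs
# Fraction of alerts that analysts confirm as real incidents
confirmation_rate = incidents / alerts

print(f"Logs promoted to alerts: {alert_rate:.8%}")
print(f"Alerts confirmed as incidents: {confirmation_rate:.1%}")
```

The point of the arithmetic is the scale gap: automation has to discard all but about one log in eight million before a human ever sees an alert, while analysts still confirm roughly one in seven of the alerts that remain.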
Where does India stand in AI-driven cybersecurity? Are Indian SOCs ready for autonomous operations, or are they still working reactively?

India's core strength lies in the depth and capability of its analyst community. When it comes to adopting AI technologies or platforms, the process is similar to how organizations have previously adopted other technology stacks to automate, accelerate or improve operations. Success comes from experienced professionals working effectively with these tools to generate insights.
India's technology sector has a large, capable talent pool, particularly in cybersecurity, making it one of the strongest in the region. From our perspective, India is well positioned not only within the Asia-Pacific but also globally to develop new models for cybersecurity and AI adoption.
How do private 5G networks and agentic AI work together? Can manufacturing and logistics companies achieve real-time cyber defense from the edge to the core today?
The attack surface, meaning the set of points where attackers can access systems or data, is expanding rapidly. This growth is driven primarily by the increasing number of devices connected to the network. The most significant growth comes from Internet of Things (IoT) devices, especially in industrial, logistics and other built environments. These are automated, network-connected devices embedded in physical infrastructure, rather than traditional corporate systems or user endpoints.

Each new device adds to the attack surface, creating more logs, more data to analyze and more potential threats to detect.
Mature security operations, especially those already using ML and automation, are well suited to deal with this complexity. These capabilities help organizations ingest and correlate data faster and respond to threats more effectively.
Another important consideration when deploying next-generation networks such as private 5G is the network architecture. Specifically, it is recommended that customers implement segmentation or microsegmentation. This involves isolating parts of the network at the application, device, or workload level, so if one area is compromised, it can be isolated without affecting the rest of the network. This approach allows for faster and more accurate responses to incidents.
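As a rough illustration of the default-deny segmentation idea described above, here is a minimal sketch; the segment names and allowed flows are hypothetical examples, not any real network configuration.

```python
# Minimal sketch of a default-deny microsegmentation policy.
# Segment names and permitted flows are invented for illustration only.
ALLOWED_FLOWS = {
    ("plant-sensors", "historian"),  # IoT telemetry may reach the data historian
    ("historian", "analytics"),      # the historian feeds the analytics workload
    ("corp-it", "analytics"),        # corporate users may query analytics
}

def is_permitted(src_segment: str, dst_segment: str) -> bool:
    """Default-deny: traffic crosses segments only if explicitly allowed."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

# A compromised sensor segment cannot reach corporate IT directly,
# so an incident there stays contained:
print(is_permitted("plant-sensors", "historian"))  # True
print(is_permitted("plant-sensors", "corp-it"))    # False
```

The design choice being illustrated is that containment comes from the default: anything not on the allow-list is blocked, so a breached segment has no lateral paths unless one was deliberately granted.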

As the connected environment grows, so do the potential attack vectors. To address this, organizations need stronger automation and analytics to manage the growing volume of data, along with network designs that support segmentation so that threats can be contained quickly and effectively.
If bad actors use the same agentic AI tools, how do you prepare for AI-driven attacks?
We release a report called the Data Breach Investigations Report every year. It has been published for 18 years and provides a global overview of what actually happened in data breaches and cybersecurity incidents over the previous year. This year's report was released last month.
We are beginning to see some well-publicized cases of bad actors using AI, including technologies such as deepfakes. However, these still represent only a small portion of the total number of incidents we track. More commonly, we see growing use of AI, particularly large language models, in phishing and email scams. Attackers use these tools to craft more persuasive messages and lures, which leads to higher success rates in getting users to give up their credentials, in turn giving attackers access to networks and systems.
The use of AI is primarily focused on making existing attack methods more effective. This includes enhanced social engineering tactics and ransomware campaigns. The underlying methods remain unchanged, but AI helps attackers execute them more precisely.
The same security principles still apply. Organizations need to keep user training and breach simulations up to date, maintain strong threat monitoring, particularly for privileged users, and work proactively to defend against these evolving threats.
In an environment where AI acts autonomously, how do you build trust and what control systems or failsafes are essential for deployment?
The first consideration is the shift required in governance, risk and compliance when adopting agentic AI. The core question is how to assess the risks of autonomous decision-making in your workflows. What are the first-, second- and third-order impacts of those decisions? Do you understand these effects? And how do you quantify the risk if an outcome is undesirable?
This leads directly to how an organization manages AI adoption, particularly how policies and procedures are set up and enforced. A current example comes from this year's Data Breach Investigations Report: it found that approximately 15% of employees access external large language models from within the corporate network, and over 70% of those users do so without following internal policies. Some use personal devices on corporate networks. At its core, this is a question of governance: how does the organization control which platforms are used, so that it is not exposed to unnecessary risk? A thorough review of governance, risk and compliance frameworks for LLMs is therefore essential.
Another important focus is understanding which parts of the business hold the most valuable data: the data that underpins competitive advantage, customer interactions or intellectual property. It is important to identify sensitive and valuable data, control who has access to it, and track its movement within the organization. This is necessary to prevent leaks into public platforms and untrusted environments.
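One way to picture the kind of outbound safeguard described here is a simple screening check on prompts before they leave for a public LLM. The patterns and policy below are illustrative assumptions, not a production data-loss-prevention system.

```python
# Illustrative egress check that screens prompts before they are sent to a
# public LLM. The patterns are simplified stand-ins for real DLP rules.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),               # card-number-like digit runs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"(?i)\bconfidential\b"),     # labelled internal documents
]

def safe_to_send(prompt: str) -> bool:
    """Block a prompt if any sensitive pattern matches."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(safe_to_send("Summarise our public press release"))          # True
print(safe_to_send("Draft a mail to alice@example.com re: deal"))  # False
```

In practice such a check would sit at a gateway or proxy in front of approved LLM endpoints, which is also where usage could be logged against policy, per the governance concern raised above.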
These are the areas where we spend time with our customers, helping them adopt AI in a way that unlocks its benefits while maintaining the safeguards needed to protect their core operations.
When companies justify spending on agentic AI, what KPIs or business outcomes are most frequently used?
The business case for adopting AI, like any technology, varies from company to company. Whether the investment is a new network, an ERP deployment or AI integration, the justification depends on the specific needs and context of each organization.
An important factor in this work is the introduction of a financial governance model, which helps teams build clear, benefit-oriented use cases. That model should sit within an AI Center of Excellence or AI adoption office, so that teams considering AI for a product, service or feature can create an ROI model tailored to their initiative.
Organizations benefit from standardizing these financial models, which supports consistent assessment across use cases. This approach is not specific to AI; it applies to any kind of technology adoption. The business case must always stand up, and it must always be tailored to the organization.
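As a purely hypothetical illustration of what such a standardized financial model might look like, the sketch below computes a simple multi-year ROI; the formula choice and every figure are our assumptions, not a model Verizon describes.

```python
# Hypothetical sketch of a standardized ROI calculation that an AI Center of
# Excellence might require for each proposed use case. All inputs are invented.
def simple_roi(annual_benefit: float, annual_cost: float,
               one_off_cost: float, years: int = 3) -> float:
    """Net benefit over the period divided by total cost over the period."""
    total_benefit = annual_benefit * years
    total_cost = one_off_cost + annual_cost * years
    return (total_benefit - total_cost) / total_cost

# e.g. an alert-triage automation initiative: assume it saves $400k/yr,
# costs $150k/yr to run, and needs $200k of one-off setup.
print(f"3-year ROI: {simple_roi(400_000, 150_000, 200_000):.0%}")
```

Standardizing on one such function, rather than letting each team invent its own spreadsheet, is what makes use cases comparable across the portfolio, which is the point made above.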
