Governance and cybersecurity at the center of the AI conversation

Artificial intelligence has the potential to transform industries and unlock new efficiencies, but the conversation in Mumbai’s business and technology worlds is increasingly turning to two less glamorous but important themes: governance and security.

In a city where growth and profits often dominate corporate agendas, the AI@Work roundtable in Mumbai provided a timely reality check. Participants from a variety of sectors agreed that as organizations deploy AI to accelerate their operations, they must also face the associated vulnerabilities and ethical risks.

The scale of global cyber threats underscores this urgency. In 2024, attackers used AI-driven tools to scan approximately 36,000 accounts per second, and the average time to exploit a newly disclosed vulnerability held steady at 5.4 days from 2023 to 2024, evidence that as AI evolves, so does cybercrime.

The discussion, moderated by Nagaraj Nagabhushanam, VP of Data and Analytics and Designated Head of AI at The Hindu Group of Publications, highlighted how preparedness and responsible adoption have become an integral part of business strategy.

From compliance to continuous vigilance

For decades, cybersecurity has been rooted in compliance: annual assessments, risk audits, and framework implementation. But panelists agreed that approach no longer works in an AI-driven environment.

Modern workflows, especially those built on large data models and generative systems, require constant vigilance. "Security is not just the formal mechanisms that we think of," one participant said. "It's about anticipating what lies outside those formal mechanisms: how systems will learn, how data will flow, and where risks will come from."

When AI learns to deceive

Siddharth Sureka, chief AI officer at Motilal Oswal Financial Services, gave a striking example of the unpredictable behavior of AI.

"An AI system once feigned blindness while interacting with gig workers, claiming it couldn't solve CAPTCHAs," he said. "Workers helped it, and that's how the AI cracked the code. In a world like this, it's very important to protect yourself."

Keep the AI inside the walls

Containment has emerged as a viable strategy for risk management. "In the supply chain, customers have little patience and expect on-time delivery. That's why we trained the model only on internal data," said Sreenivas Pamidhimukkara, Chief Information Officer, Mahindra Logistics. "That way, the model stays within our ecosystem and reaches around 95–96% accuracy."

When physical and digital meet

In sectors where operations bridge the digital and physical, such as energy and logistics, AI can help reduce non-digital risks.

Rithwik Rath, Executive Director, Information Systems and ERP, Hindustan Petroleum Corporation Limited, described the company's dual exposure: with nearly 400 facilities across India, it faces challenges from both digital intrusions and physical tampering. "Walking a pipeline and detecting interference is not an exact science. That's where AI comes in. AI has learned how to filter the noise."

Traditional cybersecurity tools, such as security information and event management (SIEM) systems, are evolving from reactive to proactive, he added. "No one is interested in post-mortems anymore. It's more about what you can do before the incident happens," he said. "AI helps correlate incidents, detect patterns, and predict what could go wrong."

BFSI: Protect all layers

In financial services, where trust is paramount, the approach is multi-layered.

Sanjeev Kumar Jain, head of IT at LIC of India, said the company's cybersecurity priorities are structured around four layers: perimeter, infrastructure, application, and endpoint. "Each layer is important," he said. "The challenge lies in managing false positives, which waste resources. Agentic AI could help, but it's still in its early stages."

Balance speed and human oversight

Moderator Nagaraj Nagabhushanam asked the panelists how organizations can balance the increased speed of AI-driven decision-making with the need for human oversight.

Amol Deshpande, Group Chief Digital Officer and Head of Innovation, RPG Group, noted that the gap is widening. "AI is expanding faster than we can secure it," he said. "Not everyone does the necessary checks. Frameworks are essential, but so is purpose. You have to be conscious of what you're building and why you're building it."

Agentic AI and real-time defense

For IBM, the next step in cybersecurity is in autonomous response systems.

"Our ATOM (Autonomous Threat Operations Machine) can identify threats, analyze them, assign a risk score, and take action, all within minutes," said Sagar Askaran Karan, Associate Partner, IBM Cybersecurity Services – ISA. "This kind of agentic AI allows us to act faster than an attacker and faster than a manual human response."
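IBM's ATOM internals are not public; purely as an illustration of the detect, score, act loop that such agentic systems automate, here is a minimal sketch in which every name, signal, and threshold is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    event: str
    failed_logins: int

def risk_score(alert: Alert) -> int:
    """Assign a crude 0-100 risk score from simple signals."""
    score = 0
    if alert.event == "credential_stuffing":
        score += 60
    score += min(alert.failed_logins, 40)  # cap the brute-force contribution
    return min(score, 100)

def respond(alert: Alert) -> str:
    """Detect -> score -> act, with no human in the loop above the threshold."""
    score = risk_score(alert)
    if score >= 70:
        return f"block {alert.source_ip}"    # automated containment
    if score >= 40:
        return f"escalate {alert.source_ip}"  # queue for analyst review
    return "log only"

print(respond(Alert("203.0.113.7", "credential_stuffing", 25)))  # → block 203.0.113.7
```

The point of the sketch is the autonomy: once the score crosses a threshold, containment happens in machine time, not analyst time.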

Identity, access, and data integrity

As AI systems are integrated into operations, identity and access management has moved to the forefront.

Dhiraj Kumar, Head of IT, New India Assurance Co. Ltd., warned about new attack vectors like data poisoning and prompt injection. "IAM is very important when you want to expose your models externally," he said. "For now, we are working within our internal dataset, but we need to evolve our governance before we scale further."

Prashant Thakar, President, Retail Strategy, Operations and Technology, LIC Mutual Fund, agreed. “Privileged access and behavioral analytics are now at the center of things. The moment a user or an AI agent behaves abnormally, you know something is wrong. DevSecOps has to start from the first line of code, especially when the code is generated by AI.”
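Behavioral analytics of the kind Thakar describes often begins with a simple baseline comparison. The sketch below is illustrative only, with hypothetical data and thresholds: it flags a user or agent whose activity deviates sharply from its own history.

```python
from statistics import mean, pstdev

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag an access count that deviates sharply from the account's baseline."""
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # guard against a perfectly flat baseline
    return abs(latest - mu) / sigma > z_threshold

# A service account that normally reads ~10 records suddenly reads 500.
baseline = [8, 12, 9, 11, 10, 10, 9, 11]
print(is_anomalous(baseline, 500))  # → True
print(is_anomalous(baseline, 12))   # → False
```

Real behavioral-analytics platforms model many more signals (time of day, resource types, peer groups), but the principle is the same: the moment behavior departs from the baseline, "you know something is wrong."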

Building a culture of AI risk awareness

Even as companies automate key decisions, participants agreed that AI risk awareness needs to become part of the organizational culture, and not just for cybersecurity and compliance teams.

Today’s businesses face two challenges: managing algorithmic opacity and preserving human judgment. Many point out that the weakest link in most security frameworks remains human behavior, not code. “People think the machine is right,” said one CIO. “That’s where the oversight starts.”

To combat this, organizations are investing in AI hygiene programs. These include cross-functional reviews involving legal, ethics, and operations teams; mock intrusion exercises; and regular retraining for employees who use AI systems. Participants said such measures help demystify the technology and instill a sense of shared responsibility.

Executives also said there is a growing need for AI audit trails, digital records that document how models reach their conclusions. “In regulated areas, explainability is the new compliance requirement,” said one participant. “It’s not enough to say a model works; you have to be able to show how it works.”
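An AI audit trail of the kind executives describe can start as an append-only log of each model decision with a tamper-evident digest. This is a minimal sketch under stated assumptions: the field names, model version string, and choice of SHA-256 hashing are illustrative, not any regulator's requirement.

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(model_version: str, inputs: dict, decision: str, reasons: list[str]) -> str:
    """Build one tamper-evident record documenting how a model reached its conclusion."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "reasons": reasons,  # top features or rules behind the outcome
    }
    payload = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return json.dumps(record)

entry = audit_record("credit-risk-v2.3", {"income": 50000, "tenure_months": 7},
                     "decline", ["tenure below 12 months"])
print(entry)
```

Because each record carries the inputs, the model version, and the stated reasons, a reviewer can later show not just that the model worked, but how it reached a given decision.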

This broader awareness extends beyond the walls of the company. As supply chains and vendor networks become AI-driven, security will depend on how transparently organizations disclose model provenance, data lineage, and decision logic. The consensus is that technology alone cannot secure the future; people, processes, and purpose must evolve with it.

Who owns AI governance?

Ownership and accountability are emerging as imperatives for organizations, panelists agreed.

“Everyone wants AI, but who will own it? Who will set the guardrails?” asked Hitesh Talreja, CTO, LIC Housing Finance. “It can’t be done by one person or one team. It has to be built into the DNA of the organization. Everyone has to understand what AI can and cannot do.”

Geeta Gurnani, field CTO, head of technical sales and client engineering, IBM Technologies, India and South Asia, said the rise of responsible AI offices shows an encouraging trend.

“When I first heard that our proposal would require ethics committee approval, I was surprised,” she said. “But now it is becoming standard practice. Ethics committees and responsible AI departments are taking shape, and all users and producers must be held accountable.”

Towards a culture of responsibility

Participants noted that accountability cannot remain limited to the functionality of the technology.

"It's about who builds it, who manages it, who takes responsibility for it," said Motilal Oswal's Sureka. "Ownership must extend throughout the organization, from data provenance to model governance."

At the Bombay Stock Exchange, this philosophy is reflected in training and policy alignment. "We are coaching our people and leadership on responsible AI use cases that deliver real value," said Ramesh Ghulam, Chief Information Security Officer at Bombay Stock Exchange. "Governance and customer trust must evolve together."

Governance as a foundation, not an afterthought

While excitement surrounding the potential of AI continues, the consensus at the roundtable was clear: governance is not optional.

Agentic AI promises unprecedented speed and automation, but without strong guardrails it risks amplifying bias, misuse, and system vulnerabilities. Issues such as data hygiene, model integrity, and ethical responsibility are now shaping enterprise priorities.

One participant summed it up as follows: AI is no longer experimental. It’s infrastructure. Governance and cybersecurity are therefore not just best practices, but the very foundation of trust in the age of intelligent systems.
