LAS VEGAS, United States, Jan. 7 (Xinhua) – CEOs and experts from tech giants gathered at the 2026 Consumer Electronics Show (CES) discussed the reliability and safety of artificial intelligence (AI) in depth during a panel session on Monday.
AI is rapidly permeating daily tasks and operations, but its widespread adoption depends on trust that AI systems can operate reliably in high-risk environments, protect sensitive data, and stay within clear guardrails, the panelists said. The session on the real-world challenges of AI was held on the sidelines of CES, the world's largest and most influential technology trade show, which runs from Tuesday to Friday.
The panel brought together industry leaders including Ola Kallenius, CEO of Mercedes-Benz Group, Harjot Gill, CEO of CodeRabbit, Deepak Pathak of Skild AI, Sridhar Ramaswamy of Snowflake, and Shiv Rao of Abridge.
Panelists agreed that AI is no longer limited to chat-based applications, but is increasingly being used in business decision-making, medical documentation, software development, and areas that directly impact working physical systems.
Speakers described “guardrails” as practical controls that define what AI systems can do, how they can access data, and when human oversight is required. In the discussion, trust was closely related to whether AI systems can be measured, monitored, and constrained in real-world settings.
In the healthcare space, Abridge founder and CEO Rao said trust is a fundamental requirement in a high-risk and highly regulated sector. Abridge provides ambient listening technology for clinical documentation to more than 150 health systems and is valued at $5.3 billion.
He noted that the growing popularity of AI-powered clinical transcription software was raising concerns about patient privacy.
He said the challenges extend beyond privacy and security compliance to factors such as wait times and medical artifacts that occur in clinical settings.
As data sovereignty issues become more prominent in enterprise data systems, Snowflake CEO Ramaswamy explained trust from a more institutional perspective, focusing on data ownership and where information is processed.
He said customers want to know what happens to their data and who is in control of it. Ramaswamy said geographic deployment of computing infrastructure could help address sovereignty expectations, especially if customers want their data to stay within a particular region.
Kallenius said the safety requirements for AI in physical systems are significantly higher than those for consumer AI tools. He said achieving a "99 percent proof of concept" is relatively easy compared to managing the long tail of rare and dangerous scenarios, making safety a central concern.
Kallenius also discussed safety and reliability in industrial environments, where AI is deployed before vehicles even hit the road. He said manufacturers can build factories in a virtual environment, simulate production digitally and use AI to debug processes before starting physical construction.
In robotics, Pathak, co-founder and CEO of Skild AI, discussed data challenges that directly relate to safety. He said there is no “silver bullet” for robot data and outlined a training approach that starts with human videos, moves to simulations where large-scale failures can occur, and gradually incorporates real robot data.
The panel discussion took place against the backdrop of strong AI investment and ongoing debate about whether the market is in a “bubble.”
Citing a Bloomberg report that the word "bubble" appeared in some 12,000 articles in November, the moderator argued that the current infrastructure cycle differs from those of the past due to seamless deployment, heavy use of computing resources, and financing from companies with strong free cash flow.■
