Before introducing AI into your organization, there are many considerations to ensure your data is secure.
AI governance means protecting privacy, intellectual property, and the parts of your business you don’t want outside people or organizations to access, Brian Greene, chief AI officer at Health-Vision.AI, said at the 2026 Technical.ly Builders Conference.
This topic remains top of mind, as AI creates additional vulnerabilities that traditional cybersecurity measures may not be able to protect against.
“At some point, something goes wrong,” said Paul Wright, vice president of global ecosystem and AI at LiftForward. “If the workflow of agents talking to agents is getting longer and longer, where did it go wrong? And how do we know?”
Think about how your data is organized, where it’s stored, and the potential risks associated with AI tools, Wright said.
Organizations also need to consider the ethics and bias of the AI tools they use, Wright added, and should always make sure a human stays involved, especially when AI agents are talking to each other.
With so much to manage, Greene said, an AI task force or governance committee can be helpful.
Fred Wilf, managing partner at Wilftek, said these plans can also help organizations proactively avoid potential legal ramifications from using AI, especially if something goes wrong.
“We are also looking at making sure that we comply with all of the many laws and regulations to ensure that we manage all risks, including any legal risks,” Wilf said.
