As organizations adopt AI technologies, they face increasing challenges related to fairness, privacy, and unpredictable system behavior.
At Data Summit 2026, Nicole Janeway Bills, CEO and Founder of Data Strategy Professionals, led the session “AI Risks and Risk Mitigation Strategies,” using real-world examples to help attendees recognize the 10 critical AI risks, detect emerging issues early, and take practical steps to mitigate them while maximizing business value.
The Annual Data Summit Conference was held in Boston on May 6-7, 2026, with a pre-conference workshop held on May 5th.
She presented as part of the new Data + AI Leadership Forum, an exclusive space for business and technology leaders to explore strategy, governance, responsible AI, and value delivery.
She began the session by citing statistics from a recent McKinsey report on the state of AI. Despite the increasing use of AI, only 43% of organizations surveyed have an AI governance policy in place.
“If you don’t manage it, you’re more likely to run into security issues,” Bills said.
According to Bills, the top 10 categories of AI risk are:
- Data privacy and confidentiality
- Bias and discrimination
- Misinformation and hallucinations
- Intellectual property risks
- Security vulnerabilities
- Lack of transparency
- Overreliance on AI
- Operational and strategic risks
- Regulatory and legal risks
- Reputational risk
Mitigation Strategies
- For data privacy and confidentiality, she explained the need to minimize and anonymize data, conduct privacy impact assessments, and establish clear governance guidelines.
- For bias and discrimination, train models on diverse, representative datasets; implement demographic parity constraints during optimization; and embed bias audits and fairness tests.
- For misinformation and hallucinations, implement retrieval-augmented generation (RAG) to retrieve relevant external documentation before generation, use chain-of-thought prompts to break tasks into steps, and use source-grounded prompts that require the model to cite sources such as Database Trends and Applications.
- For intellectual property risks, create a proactive and comprehensive AI governance policy that outlines acceptable usage, output review processes, and employee training on intellectual property law. “Make sure to protect your trade secrets and understand how these GenAI tools are creating their output,” Bills said.
- For security vulnerabilities, sanitize training data, scan plugins for malware and potential vulnerabilities, and perform adversarial testing and red teaming.
- For lack of transparency, choose inherently explainable models where needed, provide model cards detailing data sources, and deploy explainable AI techniques such as LIME and SHAP.
- For overreliance on AI, surface uncertainty signals, educate users about AI limitations, require human involvement in high-stakes decisions, and monitor overreliance indicators.
- For operational and strategic risks, implement robust data and AI governance, including ROI modeling and ethics audits.
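To make the bias-audit idea concrete, here is a minimal sketch of one common fairness test: measuring the demographic parity gap, the difference in positive-prediction rates between groups. The group labels and predictions below are synthetic illustrations, not from the session.

```python
# Minimal fairness audit: compare positive-prediction rates across groups.
# Groups and predictions are synthetic examples for illustration only.

def demographic_parity_gap(groups, predictions):
    """Return the largest gap in positive-prediction rate between any two groups."""
    totals = {}
    for g, p in zip(groups, predictions):
        positives, count = totals.get(g, (0, 0))
        totals[g] = (positives + p, count + 1)
    rates = {g: pos / n for g, (pos, n) in totals.items()}
    return max(rates.values()) - min(rates.values())

groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
preds = [1, 1, 1, 0, 1, 0, 0, 0]  # model's binary decisions
print(f"demographic parity gap: {demographic_parity_gap(groups, preds):.2f}")
# group A approved at 0.75, group B at 0.25 → gap 0.50
```

A large gap flags the model for review; in practice this check would run as part of a recurring audit rather than a one-off script.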
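The RAG recommendation can be sketched in a few lines: retrieve the most relevant document, then ground the prompt in it. This toy version ranks a hypothetical two-document corpus by keyword overlap; production systems use vector embeddings, and the corpus, function names, and prompt wording here are assumptions for illustration.

```python
# Toy RAG retrieval step: pick the document with the most word overlap with
# the query, then build a prompt that tells the model to answer only from it.
# CORPUS is a stand-in for a real document store.

CORPUS = {
    "governance": "An AI governance policy defines acceptable use and review steps.",
    "privacy": "Privacy impact assessments evaluate how personal data is handled.",
}

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(CORPUS.values(), key=lambda doc: len(q & set(doc.lower().split())))

def grounded_prompt(query: str) -> str:
    """Constrain the model to the retrieved source to reduce hallucination."""
    return f"Answer using only this source:\n{retrieve(query)}\n\nQuestion: {query}"

print(grounded_prompt("What does an AI governance policy cover?"))
```

Grounding generation in retrieved text gives the model something to cite, which is the same goal as the source-grounded prompting Bills described.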
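Requiring human involvement in high-stakes decisions often takes the form of a confidence gate: predictions below a threshold are escalated to a reviewer instead of being applied automatically. The threshold value and decision labels below are illustrative assumptions.

```python
# Sketch of a human-in-the-loop confidence gate: low-confidence predictions
# are routed to a human reviewer rather than auto-applied.
# The 0.90 threshold is an illustrative assumption, not a recommended value.

REVIEW_THRESHOLD = 0.90

def route(prediction: str, confidence: float) -> str:
    """Return how a prediction should be handled given its confidence."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto: {prediction}"
    return f"human review: {prediction} (confidence {confidence:.2f})"

print(route("approve application", 0.97))  # confident enough to automate
print(route("approve application", 0.62))  # escalated to a person
```

Logging which predictions get escalated also yields the overreliance indicators mentioned above: if reviewers rubber-stamp every escalation, that itself is a signal worth auditing.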
“Managing the use of AI models is cited as the most common challenge in AI implementation,” Bills said.
Many of the Data Summit 2026 presentations can be reviewed here: https://www.dbta.com/datasummit/2026/presentations.aspx.
