
Just when you thought you could breathe easy again after recent updates from major AI companies, a new term has emerged: “Shadow AI,” a hidden danger lurking within companies.
The term itself conjures up images of lurking danger. Shadow AI refers to the use of AI tools, applications, and systems by employees or departments without the knowledge or approval of their organisation. While the intentions behind it may be positive – to improve efficiency or solve a problem – if left unchecked, the consequences can be severe.
Imagine an AI system that recommends traditional Kenyan remedies to residents of South B, and traditional Bolivian medicine to those in nearby South C. Or imagine driving a car fitted with a Jua Kali engine that you received as a bonus after buying a 2kg bucket of potatoes by the roadside on your way back from Kinangop, and hoping for the best. It may or may not work.
I want to talk about the specific impacts on the finance, healthcare, banking and higher education sectors in Kenya.
In the finance and banking industry, shadow AI poses serious compliance risks: unauthorized AI tools may not meet the strict financial regulations set by Kenyan regulators, potentially exposing companies to significant fines. Shadow AI systems also often lack the robust security protocols of in-house IT departments and vetted systems.
The result could be a data breach, exposing sensitive financial information such as customer account and loan details, with potentially devastating consequences for both businesses and their customers.
Algorithmic bias is also a concern: unvalidated AI models can inherit biases from the data they were trained on, leading to unfair lending practices and discriminatory treatment of customers.
Imagine an AI-powered loan approval system unintentionally discriminating against women-owned businesses because of historical bias in loan application data.
The healthcare sector is particularly vulnerable to the pitfalls of shadow AI: the use of unauthorized AI for critical tasks such as patient diagnosis and treatment planning raises serious concerns about both patient safety and privacy.
Shadow AI systems may lack necessary safeguards to protect sensitive medical data, putting patient privacy at risk. Additionally, using unreliable AI tools for medical diagnosis can lead to misdiagnosis and poor patient outcomes.
Because shadow AI operates outside official channels, there is less accountability for errors or malfunctions caused by these unauthorized systems.
Another consideration is that an AI system may face difficult ethical choices, such as deciding between saving a wealthy patient or a low-income patient. Leaving that decision to an unauthorized system may be as perilous as navigating Jogoo Road during rush hour.
The potential negative impacts of shadow AI also extend to Kenya’s tertiary education institutions, where it could be misused for plagiarism and cheating, undermining the integrity of academic programmes and devaluing hard-earned qualifications.
Unequal access to unauthorized AI tools could also exacerbate existing educational disparities: students from wealthy families may have the financial means to acquire these tools, gaining an unfair advantage over their less-fortunate classmates.
Moreover, an over-reliance on shadow AI for learning can stunt student development and hinder their growth into well-rounded graduates. Unvetted AI-powered tutoring systems and automated grading can also reduce students' engagement and critical thinking, leaving them less able to learn independently and solve problems creatively.
So how can Kenyan businesses harness the potential of generative AI while mitigating the risks associated with shadow AI? The key answer lies in promoting AI transparency.
Companies must establish clear guidelines and educate employees on the responsible use of AI tools. A centralized process for vetting and approving AI applications before deployment is also essential to ensure compliance and close security vulnerabilities.
Implementing strong data governance practices is also essential to protect the security and privacy of sensitive information used in AI systems.
By recognising the existence of shadow generative AI and taking proactive steps to address it, Kenyan businesses can harness the transformative power of generative AI while ensuring responsible and ethical implementation.
– The author is an AI expert at De Montfort University in the UK.
