10 Ways to Prevent the Shadow AI Disaster

Like all things technology-related, shadow IT is evolving.

Today's shadow IT is no longer just SaaS apps that serve the niche needs of a few employees or a few personal BlackBerrys sneaked in by sales to access work files on the go; it's increasingly likely to involve AI, as employees test out all kinds of AI tools without IT's knowledge or approval.

According to research from data protection software maker Cyberhaven, the amount of shadow AI is staggering. The company's Spring 2024 AI Adoption and Risk Report found that 74% of workplace ChatGPT usage, 94% of Google Gemini usage, and 96% of Bard usage goes through non-corporate accounts. As a result, unauthorized AI is consuming corporate data as employees feed legal documents, HR data, source code, and other sensitive corporate information into AI tools that IT has not approved for use.

Arun Chandrasekaran, distinguished vice president analyst at research firm Gartner, says shadow AI is virtually inevitable. Workers are intrigued by AI tools and see AI as a way to reduce the burden of menial tasks and increase productivity. Some workers also want to learn how to use AI, seeing it as a way to prevent technology from taking over their jobs. Other workers have become accustomed to AI for personal tasks and now want to use it at work.

So what's the problem?

As Chandrasekaran acknowledges, these reasons seem sensible, but they don't justify the risks that shadow AI poses to organizations.

“Most organizations want to avoid shadow AI because the risks are so great,” he says.

For example, sensitive data is likely to be exposed, and proprietary data could help refine AI models (especially open-source ones) and aid competitors using the same models, Chandrasekaran says.

At the same time, many workers lack the skills necessary to use AI effectively, further raising the risk level. They may not know how to feed the right data into AI models, prompt the models in a way that generates optimal output, or check the accuracy of what comes back. For example, workers can use generative AI to create computer code, but if they don't understand the code's syntax or logic, they can't effectively check that code for problems. “This can be very harmful,” Chandrasekaran says.

Meanwhile, shadow AI could be disruptive to the workforce, he says, because workers who use AI covertly could gain an unfair advantage over employees who haven't adopted such tools. “It's not a mainstream trend yet, but it's a concern in discussions [with organizational leaders],” says Chandrasekaran.

Shadow AI can also create legal issues. For example, unauthorized AI could illegally access others' intellectual property, making your organization liable for infringement. It could also produce biased results that run counter to anti-discrimination laws or company policies. Or it could generate erroneous output that is then passed on to customers or clients. Any of these scenarios could leave your organization on the hook for the resulting violations or damages.

Indeed, organizations are already facing consequences when their AI systems fail: in one example, a Canadian court ruled in February 2024 that Air Canada was liable for the misinformation its AI chatbot gave a consumer.

The chatbot in that case was sanctioned technology, and IT leaders say the ruling makes the point plainly: if officially approved technology carries that much risk, why let shadow AI run wild and add even more?

10 ways to avoid disasters

As with the shadow IT of old, there is no one-and-done solution that can prevent unauthorized use of AI technology or the consequences that can result from its use.

But CIOs can employ a variety of strategies to eliminate unauthorized AI use, prevent disaster, and limit the scope of impact if something does go wrong. Here are 10 ways IT leaders can do just that:

1. Set an acceptable use policy for AI

“The first big step is to work with other executives to create an acceptable use policy that outlines when, where, and how AI can be used and reiterates organization-wide prohibitions against using technology that has not been approved by IT,” says David Kuo, executive director of privacy compliance at Wells Fargo and a member of the emerging trends working group at the nonprofit governance association ISACA. It sounds obvious, but most organizations still don't have such a policy: a March 2024 ISACA survey of 3,270 digital trust professionals found that only 15% of organizations have an AI policy, even though 70% of respondents said their staff uses AI and 60% said employees use genAI.

2. Raise awareness of risks and consequences

Kuo acknowledges the limitations of step one: “You can set acceptable use policies, but users will break the rules.” So warn users about the consequences before they do.

“There needs to be more awareness across organizations about the risks of AI, and CIOs need to be more proactive in explaining the risks and spreading that awareness across the organization,” says Sreekanth Menon, global leader of AI/ML services at Genpact, a global professional services and solutions firm. That means outlining both the risks of AI in general and the heightened risks of using the technology without permission.

Kuo adds: “You can't just do a one-time training, and you can't just say, 'Don't do this.' You have to educate your employees. Let them know about the dangers of [shadow AI] and the consequences of their bad behavior.”

3. Manage expectations

Despite rapid adoption of AI, research shows that executives' confidence in harnessing the power of intelligent technology is declining, says Fawad Bajwa, global AI practice leader at leadership consulting firm Russell Reynolds Associates. Bajwa believes this decline in trust is due in part to a mismatch between expectations and what AI can actually achieve.

He advises CIOs to educate the organization on where, when, how, and to what extent AI can deliver value.

“Having alignment across the organization about what you want to achieve helps calibrate confidence,” he says, which in turn helps keep employees from chasing AI solutions on their own in hopes of finding a panacea for all their problems.

4. Review and strengthen access controls

Krishna Prasad, chief strategy officer and CIO at digital transformation solutions company UST, says one of the biggest risks surrounding AI is data leakage.

Certainly, planned AI adoption involves risks, but the CIO can work with business, data, and security colleagues to mitigate them. When employees adopt AI without the CIO's involvement, there is no such opportunity to review and mitigate the risks, increasing the likelihood of sensitive data being exposed.

To avoid such scenarios, Prasad advises technology, data, and security teams to reexamine data access policies and controls, overall data loss prevention programs, and data monitoring capabilities to ensure they are robust enough to prevent leaks due to unauthorized AI deployment.
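To make that concrete, here is a minimal sketch of the kind of outbound check a data loss prevention layer might apply before a prompt leaves for an external AI endpoint. The patterns, function name, and blocking logic are illustrative assumptions, not a description of any particular DLP product:

```python
import re

# Illustrative patterns only; a real DLP program would use far more
# sophisticated classifiers tuned to the organization's own data.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in outbound text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: flag a prompt before it reaches an unapproved AI endpoint.
prompt = "Customer SSN is 123-45-6789, please draft a refund letter."
hits = scan_outbound_text(prompt)
if hits:
    print(f"Blocked: prompt contains sensitive data ({', '.join(hits)})")
```

A real program would pair checks like this with network-level monitoring and data classification, since pattern matching alone misses most sensitive content.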

5. Block access to AI tools

Another helpful step, Kuo says, is to blocklist unapproved AI tools such as OpenAI's ChatGPT and set up firewall rules that prevent employees from accessing them from company systems.
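As a sketch of what that enforcement can look like one layer above raw firewall rules, here is the sort of domain check an egress proxy or secure web gateway might apply. The domain list and function name are illustrative assumptions:

```python
# A minimal sketch of blocklist enforcement, as might run inside an
# egress proxy or secure web gateway. The domains listed are examples;
# a real deployment would pull the list from a managed policy feed.
from urllib.parse import urlparse

BLOCKED_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def is_request_allowed(url: str) -> bool:
    """Allow a request unless its host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d)
                   for d in BLOCKED_AI_DOMAINS)

print(is_request_allowed("https://chatgpt.com/"))        # False: blocked
print(is_request_allowed("https://example.com/report"))  # True: allowed
```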

6. Recruit collaborators

CIOs can't be the only ones working to prevent shadow AI, Kuo says. They need to mobilize C-level colleagues who have a vested interest in protecting the organization from any negative impacts, and get them on board with educating staff about the risks of using AI tools outside formal IT procurement and AI usage policies.

“Better protection requires the cooperation of the whole village,” Kuo adds.

7. Create an IT AI roadmap that drives organizational priorities and strategy

Employees typically adopt technology because they believe it will help them do their jobs more efficiently, not because they're trying to harm their employers. CIOs can therefore reduce demand for unauthorized AI by providing employees with the AI capabilities that will best help them achieve the priorities set for their roles.

Bajwa says CIOs should see this as an opportunity to position their organizations for future success by devising an AI roadmap that not only aligns with business priorities but actually shapes their strategy. “This is a moment that will redefine business,” Bajwa says.

8. Don't be the “department of no”

Executive advisors say CIOs (and their C-suite colleagues) can't afford to slow AI adoption, because doing so hurts their companies' competitiveness and increases the likelihood of shadow AI. But that's happening to some degree in many places: a May 2024 report from Genpact and HFS Research found that 45% of companies have a “wait and see” attitude toward genAI, and 23% are “naysayers” who are skeptical of it.

“Today, limiting the use of AI is completely counterproductive,” Prasad says. Instead, CIOs need to enable AI capabilities offered within the platforms the enterprise already uses, train employees to use and optimize those capabilities, accelerate adoption of AI tools that are expected to deliver the best ROI, and reassure employees at all levels that IT is committed to an AI-enabled future, he says.

9. Empower employees to use AI the way they want

A March 2024 survey by ISACA found that 80% of respondents believe AI will transform many jobs. If that's the case, workers should be given the tools to use AI to make changes that will improve their jobs, says Beatriz Sanz Sáiz, global data and AI leader at EY Consulting.

She advises CIOs to provide tools and training to employees across the organization, not just IT, to build their own intelligent assistants or work with IT to create them. She also advises CIOs to build flexible technology stacks to quickly support and enable such efforts, switching on new large language models (LLMs) and other intelligent components as employees demand them. This will increase the likelihood that employees will turn to IT (rather than external sources) to build solutions.
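One hedged reading of that “flexible stack” advice is a thin abstraction layer between employee-built assistants and whatever approved models IT switches on. The sketch below is an assumption about what such a layer could look like; the registry design and the stub backend are purely illustrative:

```python
# A minimal sketch of a "flexible stack": one thin layer between
# employee-built assistants and whatever approved models IT enables.
# The backend here is a stub, not a real vendor API call.
from typing import Callable

LLMBackend = Callable[[str], str]  # takes a prompt, returns a completion

_registry: dict[str, LLMBackend] = {}

def register_backend(name: str, backend: LLMBackend) -> None:
    """IT switches on a newly approved model for everyone at once."""
    _registry[name] = backend

def complete(prompt: str, model: str = "default") -> str:
    """Single entry point that every internal assistant calls."""
    return _registry[model](prompt)

# IT enables an approved model; employee tools never touch vendor APIs directly.
register_backend("default", lambda p: f"[stub response to: {p}]")
print(complete("Summarize this quarter's ticket backlog."))
```

The point of the indirection is that when employees demand a new LLM, IT can register it once and every assistant built on the layer picks it up, rather than each team wiring up its own unsanctioned API keys.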

10. Be open to new and innovative uses

AI is not new, but as it rapidly gains adoption, its challenges and potential are becoming increasingly clear. CIOs who want to help their organizations harness AI's potential (without all the headaches) need to be open to new ways to use AI, so that employees don't feel like they have to tackle it alone.

Bajwa gives the example of AI hallucinations: while they've certainly gotten a near-universal bad rap, he points out that they could be useful in creative fields like marketing.

“Hallucinations can give us ideas we'd never thought of before,” he says.

CIOs who are open to these possibilities and put in place the right guardrails, including rules about how much human oversight is needed, are more likely to invite IT into AI innovation rather than exclude it. And isn't that the goal?


