3 ways organizations can protect their data from Shadow AI

AI For Business


Clint Bolton

The public cloud is often the first choice for organizations deploying new workloads, and its agile approach to building, testing, and scaling applications makes it a no-brainer for time-strapped staff.

But the public cloud can also become a headache for IT leaders when employees use it in conjunction with SaaS tools to deliver and consume applications themselves (a practice commonly known as shadow IT).

This may be more true than ever with generative AI, an increasingly popular workload. Shadow AI, or the unauthorized use of technologies such as GenAI, has emerged as a significant threat for organizations seeking to protect corporate IP and data.

Having the right guardrails and training in place, along with deploying GenAI in your data center, can help mitigate some of these risks. For organizations starting to deploy GenAI, it's important to understand why Shadow AI is dangerous and how to best address it.

Why Shadow AI is a threat

A Microsoft and LinkedIn report1 found that 78% of employees are "bringing their own AI to work" (BYOAI) – a softer term for Shadow AI – and the survey also acknowledges that BYOAI puts corporate data at risk.

Shadow IT and Shadow AI share low barriers to entry and common platform dynamics: just as employees can easily access public cloud or SaaS solutions, they can simply log into a public digital assistant and tell it to start creating content. The learning curve for basic instructions is not that different from querying Google or other search engines.

This is all well and good until employees enter sensitive information such as personal details, financial information, or important strategic documents.

At best, employees share sensitive data with third-party vendors; at worst, those vendors may use that information to continually train their models, which can then surface it in answers to other users' prompts. That exposure may be tolerable in the consumer realm, but it's another matter entirely in an enterprise context.

Therefore, the security risks associated with employees using public LLMs are very real, especially if IT departments don’t know what data employees are using for prompts.

As your organization embarks on a GenAI initiative, there are steps you can take to mitigate the risks associated with adopting new technology. The following tips can help:

On-premise is budget-friendly

Contrary to the assumption that the public cloud is always cheaper, in some cases an on-premise deployment may be more cost-effective.

A study by Enterprise Strategy Group (ESG) found that running open-source LLM inference with retrieval-augmented generation (RAG) on-premise is 2x–8x more cost-effective than using the public cloud or API-based services.2

ESG found that running Mistral 7B (7 billion parameters) with RAG on-premise was 38%–48% more cost-effective than on Amazon Web Services. Savings increased as models got larger: running Llama 2 (70 billion parameters) with RAG was 69%–75% more cost-effective than AWS.

When ESG compared the same Llama 2 model against OpenAI's GPT-4 Turbo API, the on-premise deployment was 81%–88% more cost-effective.
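To see how percentage savings relate to the "2x–8x more cost-effective" headline figure, it helps to convert each savings percentage into a cost multiple. The conversion below is a simple illustrative calculation (not part of the ESG study): if on-premise costs (100 − s)% of a cloud baseline, that baseline is 1 / (1 − s/100) times more expensive.

```python
# Illustrative arithmetic only: converts the savings percentages reported
# in the article into "x-times more cost-effective" multiples.
# The formula is an assumption for illustration, not ESG's methodology.

def cost_multiple(savings_pct: float) -> float:
    """If on-prem costs (100 - savings_pct)% of the cloud baseline,
    the baseline is 1 / (1 - savings) times more expensive."""
    return 1.0 / (1.0 - savings_pct / 100.0)

reported_savings = {
    "Mistral 7B vs AWS (low end)": 38,
    "Mistral 7B vs AWS (high end)": 48,
    "Llama 2 70B vs AWS (high end)": 75,
    "Llama 2 70B vs GPT-4 Turbo API (high end)": 88,
}

for scenario, pct in reported_savings.items():
    print(f"{scenario}: {pct}% savings ≈ "
          f"{cost_multiple(pct):.1f}x more cost-effective")
```

Under this reading, 38% savings corresponds to roughly 1.6x and 88% savings to roughly 8.3x, which is consistent with the study's overall 2x–8x range.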

Conclusion

Deploying GenAI services on-premise won't eliminate Shadow AI, but bringing AI to the data rather than outsourcing it to a third party can help organizations retain control over corporate IP.

Regardless of which model IT leaders choose or where they run it, deploying GenAI workloads will remain a challenge for organizations that lack the facilities, let alone the expertise, to do so.

This is where a trusted partner can help: Dell Technologies is a leader in a growing open ecosystem that helps organizations build, test, and deploy GenAI services. Dell's AI-enabled infrastructure, client devices, and professional services can help enterprises on their GenAI journey.

Learn more about Dell AI solutions.

This post was produced by Dell and Insider Studios.


1 AI has arrived at the workplace. Here comes the hard part. Microsoft and LinkedIn, May 2024

2 Understanding the Total Cost of Inference for Large Language Models, Enterprise Strategy Group, April 2024


