The biggest risk companies see in generative AI use isn't hallucinations

The benefits of generative artificial intelligence have a flip side: hallucinations, coding errors, piracy, persistent bias and, above all, the data leaks that organizations are most concerned about.

A recent Alteryx survey found that while most companies (77%) report successful artificial intelligence pilots, 80% cite data privacy and security issues as their biggest challenge when scaling AI. Meanwhile, according to AvePoint's 2024 AI and Information Management Report, 45% of organizations encountered unintended data breaches when implementing AI solutions. The Microsoft AI leak of 38 terabytes of data late last year is just one example of how big this problem can become.

“AI is definitely magnifying and accelerating some of the challenges we see around data management,” said Dana Simberkoff, chief risk, privacy and information security officer at AvePoint, which provides technology for organizations to manage, migrate and protect data in the cloud and on-premises.

Simberkoff explains that much of this leaked information is unstructured data sitting in collaboration spaces, unprotected and previously undiscovered because it is so difficult to find. “It's often what we call dark data,” Simberkoff said.

Arvind Jain, CEO and co-founder of Glean, an enterprise search platform whose gen AI-powered, company-wide search tool landed it on the 2024 CNBC Disruptor 50 list this week, says chief information officers and those in similar roles are under immense pressure to figure out how to implement AI, which leaves a lot of room for error in the race to modernize. “It was very difficult to find anything. No one knew where to look,” Jain said. “That's what AI fundamentally changes. We no longer have to go anywhere to find things out. We just have to ask.”

Jain said most corporate data carries some degree of privacy, and without properly enforced permissions, sensitive information can end up exposed. While his own search platform is designed to respect an organization's permissions, it's up to leaders to get their data in order before layering AI on top of it.

Shedding light on unprotected “dark data”

It's not just the leakage of customer and employee personal information beyond the walls of your organization that you need to worry about. From termination letters for former employees to confidential discussions about mergers and acquisitions, there are countless sensitive documents that can cause problems if accessed by the wrong party within an organization. Whether it's employee dissatisfaction, insider trading, or anything in between, the risks are clear.

Even before AI, that information was sitting there unprotected. “Not knowing is never a good thing,” Simberkoff said. “When you shine a light on that dark data, suddenly it exists and you can no longer ignore it.”

Simberkoff lives by the credo: “We protect what we value and improve what we measure.”

So how can leaders shore up data permissions and protection alongside, or ideally ahead of, AI adoption?

“It's not the enabling of AI; it's the steps up front to understand the data,” said Jason Hardy, chief technology officer of AI at data infrastructure company Hitachi Vantara. This includes logging data, using vendor-provided tools to run that data through structure and search protocols, and scrutinizing information consistently over time, he said.
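To make that up-front inventory step concrete, here is a minimal sketch in Python of what an initial scan could look like: it walks a shared drive, logs basic metadata for every file, and flags files whose contents match simple patterns for sensitive data. The mount point, patterns and CSV report are illustrative assumptions, not tools Hardy or Hitachi Vantara describe; a real deployment would use an approved data-discovery product.

import csv
import re
from pathlib import Path

# Hypothetical patterns for illustration only; a real scan would rely on
# a vetted DLP/classification tool rather than hand-rolled regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inventory(root: str, report_path: str = "data_inventory.csv") -> None:
    """Walk a shared drive, logging file metadata and flagging likely PII."""
    with open(report_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "size_bytes", "pii_flags"])
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                text = ""  # unreadable file: still log metadata, skip content scan
            flags = [name for name, rx in PII_PATTERNS.items() if rx.search(text)]
            writer.writerow([str(path), path.stat().st_size, ";".join(flags)])

inventory("/mnt/shared")  # assumed mount point for a collaboration space

The point of the exercise is the one Hardy makes: before any AI tool touches the data, there is a record of what exists, where it lives, and which files deserve scrutiny.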

Hardy added that both ends of the spectrum matter: policies that prevent leaks, and enforcement to contain information once it has leaked.

“A lot of this comes down to training,” he said. “End users need to be aware of the information they're responsible for, and while we approve the tools we use, we also put those safeguards in place when introducing them into our systems.”

Simberkoff says it's important to prioritize the highest-risk information within an organization's ecosystem and to practice data labeling, classification and tagging.
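As a rough illustration of that labeling, classification and tagging practice, the hypothetical sketch below assigns each document the highest-risk sensitivity tier whose keywords it contains and records the matching keywords as tags. The tiers and keyword rules are assumptions made for the example; a real policy would come from an organization's own risk assessment, not from this article.

from dataclasses import dataclass, field

# Assumed sensitivity tiers, ordered least to most restricted.
TIERS = ["public", "internal", "confidential", "restricted"]

# Illustrative keyword rules; the termination-letter and M&A examples
# echo the kinds of documents the article calls out as high risk.
RULES = {
    "restricted": ["termination letter", "merger", "acquisition"],
    "confidential": ["salary", "performance review", "contract"],
    "internal": ["meeting notes", "roadmap"],
}

@dataclass
class Document:
    path: str
    text: str
    label: str = "public"
    tags: set[str] = field(default_factory=set)

def classify(doc: Document) -> Document:
    """Assign the highest-risk tier whose keywords appear, plus matching tags."""
    lowered = doc.text.lower()
    for tier in reversed(TIERS):  # check the most restricted tier first
        hits = [kw for kw in RULES.get(tier, []) if kw in lowered]
        if hits:
            doc.label = tier
            doc.tags.update(hits)
            break
    return doc

doc = classify(Document("deals/notes.txt", "Draft merger discussion, do not share"))
print(doc.label, doc.tags)  # restricted {'merger'}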

A no-rush approach to AI implementation

One thing many leaders forget, Simberkoff says, is that it's OK to pause during the AI implementation journey. “Organizations may rush to adopt AI, but then they may have to take a pause, and that's OK,” she said. “One of the things we've seen that's very effective is to look at this in stages, so you can start with things like acceptable use policies and strategies, and it's always good to test things in practice with a pilot.”

Additionally, Simberkoff says, because regulations and laws keep changing, it makes sense to keep deepening your understanding of your data over time.

Here, Hardy believes an ounce of prevention is worth a pound of cure. “You won't end up on the front page of a major news outlet if you do it right the first time.”

Simberkoff reminds leaders that AI is an imperfect technology. “We know that these algorithms hallucinate, that they make mistakes, and that they are only as good as the data that is fed into them,” she said. “When using AI, it's really important to check it and make sure you're using it for its intended purpose.”

In other words, user education is essential. After all, she likens AI to a valuable intern. “You can give them challenges, but you always want to check to make sure they're doing the right thing and aren't deviating,” Simberkoff said.

Jain recommends that all companies, especially large ones, have a centralized AI strategy for vetting tools and deciding what content to connect to their datasets. That said, limited information provides limited value, so it makes the most sense to connect as much information as possible while maintaining appropriate permissions. And before implementing a new program company-wide, it's a good idea to test it the way you would any other software rollout.
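A minimal sketch of what permission-aware connection can look like, with the names and access-control model assumed for illustration (this is not Glean's actual API): each document carries the access list of its source system, and the corpus is filtered down to what the querying user may see before any AI layer touches it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Doc:
    doc_id: str
    text: str
    allowed_groups: frozenset[str]  # ACL mirrored from the source system

def visible_to(user_groups: set[str], corpus: list[Doc]) -> list[Doc]:
    """Return only the documents the querying user is allowed to see.

    Enforcing source-system permissions before retrieval means the AI
    layer can never quote a document back to someone who lacks access.
    """
    return [d for d in corpus if d.allowed_groups & user_groups]

corpus = [
    Doc("hr-1", "Termination letter draft", frozenset({"hr"})),
    Doc("eng-1", "Service runbook", frozenset({"eng", "hr"})),
]
print([d.doc_id for d in visible_to({"eng"}, corpus)])  # ['eng-1']

Filtering at this boundary, rather than trusting the model to withhold sensitive passages, is one way to connect as much information as possible while keeping permissions intact.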

Even if AI reveals poor data hygiene, the juice is worth the squeeze, Simberkoff says. “AI is our best friend,” she said. “This is going to really push organizations to take the steps they should have taken all along.”


