Big cyber risks when ChatGPT and AI are secretly used by employees




The surge in investment in artificial intelligence and chatbots by big tech companies, amid massive headcount cuts and slowing growth rates, has left many chief information security officers in turmoil.

With OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard and Elon Musk’s plans for his own chatbot making headlines, generative AI is permeating the workplace, and chief information security officers should approach the technology with caution and prepare the necessary security measures.

The technology behind GPT, or generative pre-trained transformers, is powered by large language models (LLMs) — algorithms that generate human-like conversation for chatbots. But not every company has its own GPT, so companies need to monitor how their employees use the technology.

Michael Chui, a partner at the McKinsey Global Institute, said people will use generative AI if they think it will help them do their jobs, comparing its adoption to the way employees came to use personal computers and mobile phones.

“Even if it is not authorized by IT, people are finding [chatbots] helpful,” Chui said.

“Throughout history, we’ve found technology so compelling that individuals are willing to pay for it,” he said. “People were buying mobile phones long before companies said, ‘I’ll give this to you.’ PCs were similar, so we’re seeing the equivalent now with generative AI.”

As a result, companies will have to “catch up” in terms of how they approach security measures, Chui added.

Whether it’s standard business practices like monitoring what information is shared on AI platforms or integrating a company-sanctioned GPT into the workplace, experts say there are certain areas where CISOs and companies can start.

Start with information security basics

Already battling burnout and stress, CISOs have plenty to deal with, from the potential for cybersecurity attacks to the growing need for automation. As AI and GPT move into the workplace, CISOs can start with the security basics.

Companies can license the use of existing AI platforms so they can monitor what employees say to chatbots and ensure that any information shared is protected, Chui said.

“If you’re a company, you don’t want your employees to enter sensitive information into publicly available chatbots,” Chui said. “So we can take the technical steps that allow us to license software and have enforceable legal agreements about where data goes and where it doesn’t.”

Licensing software brings additional checks and balances, Chui said. Protecting sensitive information, regulating where information is stored, and setting guidelines for how employees use the software are all standard procedure when companies license software, with or without AI.
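As an illustration of the kind of guardrail described above, the minimal sketch below redacts obviously sensitive strings from a prompt before it would be forwarded to an external chatbot. This is a hypothetical example, not a product mentioned in the article; the patterns, the `redact` function, and the placeholder labels are all assumptions, and a real deployment would rely on a dedicated data-loss-prevention tool with far broader coverage.

```python
import re

# Illustrative patterns only; real DLP tooling covers far more cases.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags before the
    prompt leaves the company network for an external chatbot API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# prints: Contact [EMAIL], SSN [SSN].
```

A company-sanctioned gateway could run a filter like this on every outbound prompt, logging what was redacted so security teams can audit usage, which is the kind of monitoring Chui describes.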

“With a contract, we can audit the software so we can see if the data is protected the way we want it to be protected,” Chui said.

According to Chui, most companies that store information in cloud-based software already do this, so providing employees with a company-sanctioned AI platform simply keeps businesses in line with existing industry practice.

Sameer Penakalapati, CEO of Ceipal, an AI-driven talent acquisition platform, said one security option is for a company to develop its own GPT, or to hire an AI company to create a custom version of the technology.

For certain functions such as HR, multiple platforms already exist, from Ceipal to Beamery’s TalentGPT, and businesses may also consider Microsoft’s plans to offer a customizable GPT. Despite the higher cost, some companies may still want to develop their own technology.

If a company creates its own GPT, the software will contain exactly the information it wants employees to access, and the company can protect the information employees enter. Even if a company hires an AI firm to build the platform, information can still be entered and stored securely, Penakalapati added.

Whatever path companies choose, CISOs need to remember that these machines perform according to how they have been trained, Penakalapati said. It’s important to be intentional about the data you feed the technology.

“We always tell people to make sure they have technology that gives them information based on unbiased and accurate data,” Penakalapati said. “Because this technology did not come about by chance.”
