Many new tools are called game changers, but ChatGPT truly deserves the title. Generative artificial intelligence (AI), which burst into mainstream awareness six months ago, has gone from inspiring hype to driving cross-disciplinary experimentation, posing multiple problems along the way.
The biggest business concern, of course, is the long-term threat to jobs. But the more specific and immediate concerns center on the implications of adopting the technology too fast (and too far), such as putting data privacy at significant risk.
Just weeks after technology and science leaders signed an open letter calling for a pause on research until safety protocols are in place, Samsung provided one of the first major examples of what AI misuse looks like in practice: by simply entering data into a ChatGPT prompt, Samsung employees unknowingly leaked sensitive company information.
“There is an increasing need for companies to create their own security rules to better protect their systems, people and data.”
Global authorities acted relatively quickly. The UK’s proposed AI framework, Canada’s privacy investigation and the European Union’s draft regulations are positive steps towards much-needed governance. However, as progress continues to accelerate, there is a growing need for businesses to create their own ChatGPT policies to better protect their systems, people, and data now.
A fine line between success and disaster
An excellent explanation of the current state of generative AI comes from McKinsey. The deep learning innovation behind its development has been going on for years, but applications like ChatGPT are the result of sudden leaps in the underlying foundation models, which can handle massive amounts of unstructured data and execute multiple requests at once.
With the ability to provide instant help across a variety of tasks, from creating marketing content to solving coding challenges, chatbots have become hugely popular and are expected to boost productivity, potentially adding up to $4.4 trillion in annual benefits to the global economy. It’s easy to see why.
Transforming chatbot technology with the GPT model – Tim Shepheard-Walwyn, technical director and associate partner at Sprint Reply, told Information Age how businesses can drive value from chatbot technology leveraging the GPT model.
In addition, Deloitte’s report on the generative AI boom highlighted how businesses can take its generative capabilities and leverage them for tasks that require a lot of hard work but are easy to validate. However, as with all new technology, this versatility is also a risk factor.
The scope for users to delegate work in the name of efficiency, without considering whether it should be delegated, continues to grow. Samsung’s recent problems are a good example: employees were so focused on the benefits of handing over time-consuming chip testing and presentation creation that they hadn’t considered that entering sensitive data into an open AI tool could make it accessible to other users.
Will ChatGPT make low-code obsolete? – Romy Hughes believes that ChatGPT can put software development in the hands of its users, something that low-code has been trying to achieve for years.
With sophisticated tools becoming more widely available by the day, robust safety measures are essential for their responsible use.
Prepare for (almost) anything
The value of preparation should not be underestimated. As McKinsey points out, getting big returns from generative AI requires managing equally big risks.
Companies that already rigorously evaluate new tools before deployment will be ahead here. Rigorous screening reduces the possibility of unforeseen hazards, especially when users, legal and security teams are all involved in the evaluation to cover all the bases, such as whether the tool adequately protects personally identifiable information (PII) and non-public data.
However, such an approach still only provides a relatively high-level overview of how a technology such as ChatGPT should be used. To ensure a consistent and safe implementation, companies must develop well-thought-out policies that allow employees to understand exactly what is and isn’t appropriate.
Raising awareness
The policy should not only define what the tool is and how its functionality works, but also outline broader risks, such as untrustworthy output and confidentiality breaches through ChatGPT.
Create your company’s own ChatGPT policy
To help employees quickly grasp and apply the important basics, one good starting point is to flag the relevant parts of existing policies where best practices can be checked.
Creating guidance for internal ChatGPT policies is a little more complicated. To develop a truly comprehensive ChatGPT policy, companies will likely need to run large cross-business workshops and conduct individual research to ensure that all use cases are identified and discussed. Ultimately, however, this groundwork will allow them to develop concrete directions that ensure better protection, and to give employees the comprehensive knowledge needed to make the most of advanced technology.
Defining limits
Explicitly highlighting threats and setting clear usage limits is equally important to leave no room for accidental misuse. This is especially important for companies that may employ generative AI to streamline tasks that involve some level of PII, such as drafting client contracts, writing emails, or suggesting code snippets to use in programming.
Setting the rules
Again, providing general advice such as FAQs can be a useful step, giving employees a first point of reference for questions about when a chatbot is a good choice and what kind of data they can enter. But minimizing risk means going a step further and listing exactly which “don’ts” to avoid. For example, broader rules might include a complete ban on uploading PII data to a chatbot for any purpose, covering employee, contractor, client, customer, vendor, and product data. Meanwhile, specific use case instructions might include requiring line manager approval of information before it is entered into ChatGPT, verifying sources and outputs, and asking tough questions of generated answers.
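To make the idea of a PII “don’ts” list concrete, here is a minimal sketch, in Python, of how a prompt pre-screening check might work before anything is sent to a chatbot. The function name, the blocking behavior, and the regex patterns are all illustrative assumptions, not part of any real policy or product; genuine PII detection would need a dedicated library or service rather than a handful of patterns.

```python
import re

# Illustrative patterns only -- real PII detection is far broader than this.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings).

    The prompt is blocked (allowed=False) if any PII-like pattern
    matches, and findings lists which pattern names were triggered.
    """
    findings = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(prompt)]
    return (len(findings) == 0, findings)

# A clean prompt passes; one containing an email address is blocked.
print(screen_prompt("Draft a polite meeting reminder"))
print(screen_prompt("Summarize feedback from jane.doe@example.com"))
```

A check like this would sit in front of whatever chatbot integration the company uses, turning the written policy into an enforced gate rather than relying on each employee to remember the rules.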
There is a difference between agile evolution and hasty adaptation. As the hype around generative AI continues to grow, companies must be careful not to promote improper use of AI with disastrous long-term consequences. With robust security processes already in place, the challenges associated with implementing and using technologies such as ChatGPT can be addressed in a systematic and efficient manner, reaping the wealth of benefits while limiting the risks.
Andreas Niederbacher is CISO at Adverity
More on generative AI
What is Generative AI and its Use Cases? – Generative AI is a marvel of technology destined to change the way we work, but what does it do and what are the use cases for CTOs?
