ChatGPT has made a name for itself as a productivity and efficiency tool in the workplace. But if company managers and leaders do not communicate how ChatGPT and similar tools should be used, security and data privacy problems can follow.
OpenAI offers a ChatGPT API through which data fed into the model is not used for training, but the API is not widely adopted. Instead, it is more common for individual employees to use the consumer version of ChatGPT without explicit permission.
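For context, this is roughly what API-based access looks like. The following is a minimal sketch, assuming the official openai Python package, an OPENAI_API_KEY environment variable, and a placeholder model name; per OpenAI’s stated API data-usage terms, requests sent this way are not used for model training by default.

```python
# Minimal sketch of calling the ChatGPT API instead of the consumer app.
# Assumes the official openai package (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; pick a model available to your account
    messages=[
        {"role": "system", "content": "You are a helpful workplace assistant."},
        {"role": "user", "content": "Summarize our public press release in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

This distinction, API traffic versus pasting text into the consumer chat interface, is one reason corporate policies often steer employees toward sanctioned, API-backed tools.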
OpenAI added more data privacy guardrails and appeased Italian regulators along the way, but protections within the tool do not replace corporate guidance.
With ChatGPT and similar tools already in use at many companies, it is important that organizations develop usage policies and communicate them to employees. Employees may unknowingly feed confidential company information into the model, or repurpose generated content and present it as their own.
Maya Mikhailov, co-founder of SAVVI AI, said of guarding against those risks: “That’s your number one priority.”
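One way companies can operationalize that priority, sketched here purely as an illustration (the patterns and the screen_prompt helper below are hypothetical, not a product and nothing like a complete data-loss-prevention system), is to screen prompts for obvious confidentiality markers before they leave the company:

```python
# Hypothetical pre-submission guard: flag obvious markers of confidential
# data before a prompt is sent to any external AI tool.
import re

# Illustrative placeholder patterns; a real deployment would use the
# organization's own classification labels and dedicated DLP tooling.
BLOCKED_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),      # classification labels
    re.compile(r"\binternal use only\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # SSN-shaped numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),              # email addresses
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the patterns a prompt matched; an empty list means it may be sent."""
    return [p.pattern for p in BLOCKED_PATTERNS if p.search(prompt)]

if __name__ == "__main__":
    prompt = "CONFIDENTIAL: draft Q3 forecast, contact jane.doe@example.com"
    matches = screen_prompt(prompt)
    if matches:
        print("Blocked before sending; matched:", matches)
    else:
        print("Prompt passed screening.")
```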
Here are three key steps to consider when creating a bulletproof policy.
Connect with key stakeholders
Before an organization develops policy, CIOs and technical leaders should connect with leaders from other business units to assess levels of concern, possible use cases, and risks.
Mikhailov said simply banning ChatGPT could serve as a temporary policy, but if a company is a customer of Microsoft or Google, these models are already built into the software it purchases, and they are designed to be convenient.
“The convenience provided by these tools is so great that [organizations] needed to be thinking about their information security policies yesterday,” says Mikhailov.
A survey of nearly 11,800 Fishbowl users found that more than two-thirds of employees who use AI tools at work do so without first informing their managers.
Gartner research suggests CIOs should meet with legal, compliance, IT, risk management, privacy, data analytics, security, and line-of-business teams to ensure policies reflect the organization’s needs and requirements.
“Privacy, data security, and algorithmic transparency within AI models should all be top priorities to mitigate the risks associated with ethical and legal compliance,” Suma Nallapati, CIO of Insight Enterprises, said in an email.
CIOs and technologists should set expectations by communicating to non-technical team members what the tool can and can’t do. According to Gartner’s research, the risks of using out-of-the-box ChatGPT include fabrications, factual errors, biased or unsubstantiated answers, potential copyright infringement, and exposure of sensitive data.
Gartner analyst Avivah Litan says anything generated by an AI model should be treated as a first draft.
“We need domain experts to check the quality and accuracy of the information before sending it to anyone, whether it’s a customer, a partner, or another employee,” says Litan.
Ask, evaluate, adapt
According to Bill Wong, principal research director at Info-Tech Research Group, companies can follow a framework based on their established goals and risk tolerance when evaluating whether a use case is acceptable.
Companies not constrained by tight budgets or resource allocations have more room to experiment, while companies with other priorities may choose to be more cautious when evaluating use cases. If budget and resources are factors, businesses should first pursue high-customer-impact, low-complexity use cases, Wong said.
Leaders should ask:
- Does this use case fit the business?
- Does this use case follow the organization’s responsible and ethical AI guidelines?
- Is this use case right for the organization?
According to Wong, executives need to assess whether the use case aligns with the value proposition communicated to customers, complies with regulatory and legal requirements, and carries risks the organization can bear.
“Companies should protect their brand identities, educate their employees on all risks, and use AI technology in an ethical and responsible manner,” Julia Groza, vice president of e-commerce technology at Levi Strauss, said during an SAP webinar in April.
According to Wong, when determining whether a use case is viable, leaders should assess the likelihood of success, implementation complexity, and time sensitivity.
Wong said companies should stop and reassess if their use cases are outside the framework.
Organizations differ in their comfort levels when it comes to employee use of generative AI in the workplace.
“It is also important to us that AI fits the luxury experience our customers expect,” Lea Sonderegger, chief digital officer at Swarovski, said at the April SAP webinar.
Humans should have the final say in decision-making, and Swarovski does not want AI to overwhelm customers.
Ultimately, says Sonderegger, it is about using AI where it is appropriate and avoiding it where it is not.
“It’s worth reiterating that technology shouldn’t be employed just for its own sake,” Nallapati said.
Policy language
Companies can create policies that simply restrict employees from duplicating AI-generated content, Wong said, or policies that outline specific use cases.
According to Wong’s research, guidelines restricting employee use can take the form of outright prohibitions, while organizations that permit the technology can define how ChatGPT is used by including clear guidance such as:
- ChatGPT should augment research, not replace it.
- If you use ChatGPT, evaluate its responses for accuracy, check for bias, and determine its relevance.
- Be transparent about how you use ChatGPT.
After policies are established and communicated, companies should educate employees about the consequences of misusing the technology.
“Historically, when you tell people they can’t use something at all, they find workarounds,” Wong says. “One way to manage it is through education, saying: ‘I know it makes you more productive here, but do you want your competitors to understand the algorithms in your supply chain?’”
