Harnessing Generative AI: 3 Steps to Developing Enterprise Policy

AI For Business


ChatGPT has made a name for itself as a productivity and efficiency tool in the workplace. However, if company managers and leaders do not communicate how ChatGPT and similar tools should be used, security and data privacy concerns can follow.

OpenAI provides a ChatGPT API through which data fed into the model is not used for training purposes, but the API is not widely adopted. Instead, it is more common for individual employees to use ChatGPT directly without explicit permission.

OpenAI added more data privacy guardrails and appeased Italian regulators along the way, but protections within the tool do not replace corporate guidance.

Since many companies are already using ChatGPT and similar tools, it is important that they develop usage policies and communicate them to their employees. Employees may unknowingly feed confidential company information into the model, or repurpose the generated content and present it as their own.

Maya Mikhailov, co-founder of SAVVI AI, said developing such a policy should come first: “That’s your number one priority.”

Here are three key steps to consider when creating a solid policy.

Connect with Key Stakeholders

Before an organization develops policy, CIOs and technical leaders should connect with leaders from other business units to assess levels of concern, possible use cases, and risks.

Mikhailov said simply banning ChatGPT could serve as a temporary policy for now, but if a company is a customer of Microsoft or Google, these models are already built into the software it purchases.

“The convenience provided by these tools is so great that [organizations] needed to be thinking about their information security policies yesterday,” says Mikhailov.

A Fishbowl survey of nearly 11,800 users found that more than two-thirds of employees say they use AI tools without informing their managers in advance.

Gartner research shows that CIOs should meet with legal, compliance, IT, risk management, privacy, data analytics, security, and line-of-business teams to ensure policies reflect their organization’s needs and requirements.

Suma Nallapati, CIO of Insight Enterprises, said in an email: “Privacy, data security, and algorithmic transparency within AI models should all be top priorities to mitigate the risks associated with ethical and legal compliance.”

CIOs and technologists should set expectations by communicating to non-technical team members what the tool can and can’t do. According to Gartner’s research, the risks of using out-of-the-box ChatGPT include fabrications, factual errors, biased or unsubstantiated answers, potential copyright infringement, and exposure of sensitive data.

Gartner analyst Avivah Litan says anything generated by an AI model should be treated as a first draft.

“We need domain experts to check the quality and accuracy of the information before sending it to anyone, whether it’s a customer, a partner, or another employee,” says Litan.

Ask, evaluate, adapt

According to Bill Wong, principal research director at Info-Tech Research Group, companies evaluating whether a use case is acceptable can follow a framework based on their established goals and risk tolerance.

Companies not constrained by tight budgets or resource allocations have more room to experiment, while companies with other priorities may choose to be more cautious when evaluating use cases. If budget and resources are factors, businesses should first pursue high-customer-impact, low-complexity use cases, Wong said.

Leaders should ask:

  • Does this use case fit the business?
  • Does this use case follow the organization’s responsible and ethical AI guidelines?
  • Is this use case right for the organization?

According to Wong, executives need to assess whether the use case aligns with the value proposition communicated to customers, adheres to regulatory and legal compliance, and whether the organization can bear the potential risks involved.

