Risks associated with generative AI and how to manage them

In the Q&A below, Gartner Vice President Analyst Avivah Litan explains what organizations need to know about AI trust, risk and security management.

Q: Given AI security and risk concerns, should organizations continue to consider using generative AI or pause?

A: The reality is that the development of generative AI will not stop. Organizations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM). There is an urgent need for a new class of AI TRiSM tools to manage data and process flows between users and the enterprises that host generative AI foundation models.

Currently, the market does not offer users systematic privacy assurances or effective content filtering when they engage with these models (for example, filtering out factual errors, hallucinations, copyrighted material and confidential information). There are no off-the-shelf tools that provide these assurances.

AI developers urgently need to work with policymakers, including potential emerging regulators, to establish policies and practices for oversight and risk management of generative AI.

Q: What are the most significant risks that generative AI poses to enterprises today?

A: Generative AI poses several new risks:

  • Fabricated answers, including “hallucinations” and factual errors, are among the most pervasive problems already emerging with generative AI chatbot solutions. Training data can lead to biased, off-base or wrong responses, which can be difficult to spot, particularly as these solutions become increasingly believable and relied upon.
  • Deepfakes, in which generative AI is used to create malicious content, are a significant generative AI risk. These fake images, videos and audio recordings have been used to attack celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts.
  • Data privacy: Employees can easily expose sensitive and proprietary corporate data when interacting with generative AI chatbot solutions. These applications may store information captured through user inputs indefinitely, or use it to train other models, further compromising confidentiality. In the event of a security breach, such information could also fall into the wrong hands; a minimal mitigation sketch follows this list.
  • Copyright issues: Generative AI chatbots are trained using large amounts of internet data that may contain copyrighted content. As a result, some outputs may violate copyright or intellectual property (IP) protection. In the absence of source references or transparency in how the output is generated, the only way to mitigate this risk is for users to scrutinize the output to ensure that it does not infringe copyright or intellectual property rights.
  • Cybersecurity concerns: Beyond more sophisticated social engineering and phishing threats, attackers can use these tools to generate malicious code more easily. Vendors that offer generative AI foundation models assure customers that they train their models to reject malicious cybersecurity requests; however, they do not give users the tools to effectively audit all the security controls in place. The vendors also put heavy emphasis on “red teaming” approaches. These claims require users to place full trust in the vendors’ ability to meet their security objectives.
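The data privacy point above lends itself to a simple illustration. Below is a minimal sketch of a redaction step that strips obvious identifiers from a prompt before it leaves the organization; the patterns, labels and example text are hypothetical, and a production setup would rely on a vetted data loss prevention product rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for data that should never leave the organization.
# A real deployment would use a vetted DLP tool, not ad hoc regexes.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace obviously sensitive substrings before a prompt is sent to a hosted model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the contract for jane.doe@example.com, account key sk-abcdef1234567890abcd."
    print(redact_prompt(raw))
    # Summarize the contract for [EMAIL REDACTED], account key [API_KEY REDACTED].
```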

Q: What actions can enterprise leaders take now to manage generative AI risks?

A: It’s important to note that there are two general approaches to leveraging ChatGPT and similar applications. Out-of-the-box model usage leverages these services as-is, with no direct customization. The prompt engineering approach uses tools to create, tune and evaluate prompt inputs and outputs.

For out-of-the-box usage, organizations must implement manual reviews of all model output to detect incorrect, misinformed or biased results. Establish a governance and compliance framework for enterprise use of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organizational or personal data.
Organizations should also monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations. For example, firewalls can block enterprise user access, security information and event management (SIEM) systems can monitor event logs for violations, and secure web gateways can monitor disallowed API calls.
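As one illustration of that monitoring step, the sketch below scans a web proxy or gateway log for calls to known generative AI endpoints made by users outside a sanctioned group. The log format, host list and file name are assumptions for the example, not any specific product’s output.

```python
import csv
from collections import Counter

# Hypothetical list of generative AI API hosts to watch; in practice this would
# come from the secure web gateway's own category feed.
WATCHED_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}

def find_policy_violations(log_path: str, sanctioned_users: set) -> Counter:
    """Scan a proxy log (assumed CSV: timestamp,user,host,path) for calls to
    generative AI endpoints made by users outside the sanctioned group."""
    violations = Counter()
    with open(log_path, newline="") as fh:
        reader = csv.DictReader(fh, fieldnames=["timestamp", "user", "host", "path"])
        for row in reader:
            if row["host"] in WATCHED_HOSTS and row["user"] not in sanctioned_users:
                violations[row["user"]] += 1
    return violations

if __name__ == "__main__":
    # "proxy.log" is a placeholder path for a gateway export.
    hits = find_policy_violations("proxy.log", sanctioned_users={"svc-chatbot"})
    for user, count in hits.most_common():
        print(f"{user}: {count} unsanctioned generative AI calls")
```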

All of these mitigations also apply to prompt engineering. In addition, steps should be taken to protect internal and other sensitive data used to engineer prompts on third-party infrastructure. Create and store engineered prompts as immutable assets.

These assets can represent vetted, engineered prompts that are safe to use. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared or sold.
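One way to treat engineered prompts as immutable assets is to store each prompt under a hash of its own content, together with review metadata, and to refuse overwrites. The sketch below assumes a simple file-based store; the field names and storage location are illustrative only.

```python
import hashlib
import json
from pathlib import Path

PROMPT_STORE = Path("prompt_assets")  # hypothetical location for vetted prompts

def save_prompt_asset(text: str, author: str, reviewed_by: str) -> str:
    """Store an engineered prompt under its content hash so it cannot be
    silently modified; returns the asset ID."""
    asset_id = hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]
    path = PROMPT_STORE / f"{asset_id}.json"
    if path.exists():
        return asset_id  # identical content already stored; never overwrite
    PROMPT_STORE.mkdir(exist_ok=True)
    path.write_text(json.dumps(
        {"id": asset_id, "prompt": text, "author": author, "reviewed_by": reviewed_by},
        indent=2,
    ))
    return asset_id

def load_prompt_asset(asset_id: str) -> dict:
    """Read a stored prompt and verify it still matches its content hash."""
    record = json.loads((PROMPT_STORE / f"{asset_id}.json").read_text())
    actual = hashlib.sha256(record["prompt"].encode("utf-8")).hexdigest()[:16]
    if actual != asset_id:
        raise ValueError(f"Prompt asset {asset_id} has been tampered with")
    return record
```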

Avivah Litan is a Distinguished Vice President Analyst at Gartner Research. Litan is currently a member of the ITL AI team, covering AI and blockchain.


