Manage generative AI risks before they manage you

As generative AI innovation advances at a breakneck pace, concerns about AI reliability, security, and risk are rapidly mounting. Hundreds of prominent technology and business leaders recently signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4 while their safety is assessed. Shortly after, Italy became the first Western country to temporarily ban ChatGPT over security and privacy concerns, and in the US, President Biden met with his Council of Advisors on Science and Technology to discuss the opportunities and risks of AI. Most recently, European Union lawmakers called for new rules and regulations for AI tools beyond what is already contained in the region's proposed AI Act.

Part of the purpose of the recent open letter is to give developers time to build in the controls needed to use large language models for generative AI safely. But the reality is that generative AI development is not stopping. OpenAI has indicated it could release GPT-4.5 in the second half of 2023, with GPT-5, which it expects to approach artificial general intelligence (AGI), to follow. Once AGI arrives, it will likely be too late to put in place safety controls that effectively protect human use of these systems.

Before deploying applications that use hosted large language models (LLMs), organizations must act now to develop an enterprise-wide strategy for trust, risk, and security management of generative AI. Traditional security controls are inadequate for these new generative AI capabilities. Enterprises should continue to experiment, but in the absence of verifiable controls for AI data protection, privacy, and LLM content filtering, they should delay deploying applications that send data to hosted LLMs.

Current state of the AI TRiSM tool market

AI Trust, Risk, and Security Management (AI TRiSM) is a framework that ensures AI model governance, trustworthiness, fairness, reliability, robustness, validity, and data protection.

Broadly speaking, the AI TRiSM tool market consists of solutions that fall under four main pillars:

  • ModelOps, to manage end-to-end model lifecycle governance
  • Adversarial resistance, to train models that withstand malicious attacks
  • Data/content anomaly detection, to filter unwanted content
  • Data privacy assurance, to protect end-user privacy and comply with data privacy regulations

Together, these four categories of solutions help organizations manage the trust, risk, and security of their AI models. Currently, no single platform or vendor covers all segments and aspects of the AI TRiSM market.

When applying this framework to applications that depend on hosted LLMs, such as ChatGPT, the full capabilities of ModelOps and adversarial resistance can only be implemented by the company hosting the AI model. The other two pillars of AI TRiSM, content filtering and data privacy, must be managed by the users of hosted models and applications.

Currently, there are no off-the-shelf tools that give users systematic privacy assurances or effective content filtering for their engagements with generative AI models. For now, users must rely on their LLM application license agreement with the hosting vendor to govern the terms of any breach of confidential application data. Legacy enterprise security controls are insufficient here because they do not apply directly to user interactions with the LLM. There is an urgent need for a new class of AI TRiSM tools to manage the data and process flows between users and the enterprises hosting generative AI foundation models.

LLMs need a new class of AI trust, risk and security management tools

A recent Gartner poll found that 70% of executives say their organizations are in investigation and exploration mode with generative AI, and 19% are already in pilot or production mode. Before moving to operate a hosted generative AI application, a company should understand the LLM risks involved and the controls needed to manage them.

First, businesses need the ability to automatically filter LLM output for misinformation, hallucinations, factual errors, bias, copyright violations, and other illegal or undesirable information. Vendors hosting these models perform some content filtering for users, but users must implement their own policies and filters to eliminate unwanted output. This can be done by building content-filtering functionality in-house or by working with a third party that provides it.
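As a rough illustration of the in-house option, the sketch below shows a minimal output-filtering layer that checks generated text against simple deny-list and regex policies before it reaches end users. The policy rules, the `call_hosted_llm` stub, and the logging hook are hypothetical placeholders for this example, not any vendor's API; a production filter would cover far more categories (bias, copyright, factual consistency) and would likely use trained classifiers rather than keyword rules.

```python
import re

# Hypothetical policy rules: phrases and patterns an organization does not
# want to pass through to end users. Real policies would be far richer.
DENYLIST_PHRASES = ["guaranteed returns", "confidential - internal only"]
DENYLIST_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. US SSN-like strings


def call_hosted_llm(prompt: str) -> str:
    # Placeholder for the vendor-specific hosted LLM API call.
    return "Sample model output mentioning 123-45-6789 for illustration."


def log_policy_violation(prompt: str, violations: list) -> None:
    # Placeholder audit hook; a real system would write to a governance log.
    print(f"Policy violations for prompt {prompt!r}: {violations}")


def filter_llm_output(text: str):
    """Return (filtered text, list of policy violations detected)."""
    violations = []
    for phrase in DENYLIST_PHRASES:
        if phrase.lower() in text.lower():
            violations.append(f"denylisted phrase: {phrase}")
    for pattern in DENYLIST_PATTERNS:
        if pattern.search(text):
            violations.append(f"pattern match: {pattern.pattern}")
            text = pattern.sub("[REDACTED]", text)
    return text, violations


def respond(prompt: str) -> str:
    raw = call_hosted_llm(prompt)
    safe_text, violations = filter_llm_output(raw)
    if violations:
        log_policy_violation(prompt, violations)
    return safe_text


if __name__ == "__main__":
    print(respond("Summarize this quarter's outlook."))
```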

Organizations also need verifiable data governance and assurance that confidential company information sent to LLMs will not be compromised or retained in the LLM environment. Although the LLM itself is stateless, sensitive information is retained in its prompt history and possibly in other logging systems within the model's environment. This creates a vulnerability that can be exploited by a malicious actor or through a simple misconfiguration by an LLM system administrator. For now, users must rely on vendor license agreements that stipulate the terms of data privacy breaches.
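One control enterprises can apply on their own side of the boundary is prompt redaction: masking sensitive fields before a prompt is ever sent to the hosted model. The minimal sketch below illustrates the idea; the regex rules and the "Project <Name>" convention are assumptions chosen for illustration, and a real redaction layer would need far broader detection (named-entity recognition, data classification, credential scanning).

```python
import re

# Illustrative patterns for data that should never leave the enterprise in a
# prompt. These are assumptions for the example; a real redaction layer would
# cover many more categories (customer IDs, credentials, contract terms, ...).
REDACTION_RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "INTERNAL_PROJECT": re.compile(r"\bProject\s+[A-Z][a-z]+\b"),  # hypothetical naming scheme
}


def redact_prompt(prompt: str) -> str:
    """Mask sensitive substrings before the prompt is sent to a hosted model."""
    for label, pattern in REDACTION_RULES.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Summarize the Project Falcon contract for jane.doe@example.com"
    print(redact_prompt(raw))
    # -> Summarize the [INTERNAL_PROJECT] contract for [EMAIL]
```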

Finally, users need LLM transparency to conduct the impact assessments necessary to comply with regulations such as the EU's GDPR and the forthcoming AI Act. Organizations subject to such regulations have fundamental concerns about LLMs that must be addressed before they adopt platforms built on these models.

These concerns include:

  • Privacy impact assessments (PIAs): Because the LLM is a black box, organizations cannot conduct the required privacy impact assessments without greater transparency from the hosting vendor, and proceeding without one exposes them to significant risk.
  • Data residency and data sovereignty: Organizations need to understand where the LLM processes the data it collects in order to comply with data residency and sovereignty requirements and preferences.
  • Legal: If personal data was used to train the underlying LLM, the LLM vendor must be able to defend its legal position against claims that such data should be removed from model training. Additionally, under upcoming rules that are part of the EU AI Act, LLM vendors will be required to disclose the copyrighted material used to build their systems. These concerns are currently specific to the EU, but new laws and regulations are emerging in many other parts of the world that could complicate and stall the adoption of LLM applications for similar reasons.

Organizations need to act now to develop an enterprise-wide strategy for AI TRiSM, especially as it relates to hosted generative AI applications. The reality is that generative AI development will not stop, making AI TRiSM an ever more urgent imperative. It's time to manage the risks of AI before they manage us.

About the author:

Avivah Litan is a Distinguished VP Analyst at Gartner, Inc., covering all aspects of blockchain innovation and AI trust, risk and security management. Gartner analysts will provide additional insight into AI trust, risk and security at the Gartner Security & Risk Management Summit, taking place June 5-7 at National Harbor, Maryland.


