As the breakneck pace of generative artificial intelligence (AI) innovation continues, security and risk concerns are becoming more prominent. Some lawmakers have called for new rules and regulations governing AI tools, while some technology and business leaders have suggested pausing the training of AI systems to assess their safety.
We spoke with Gartner Vice President Analyst Avivah Litan to discuss what data and analytics leaders responsible for AI development need to know about AI trust, risk, and security management.
Journalists interested in speaking with Avivah on this topic should contact Meghan.Rimol@Gartner.com. Members of the media may refer to this material in an article with proper attribution to Gartner.
Q: Given concerns about AI security and risks, should organizations continue to explore the use of generative AI, or should they pause?
A: The reality is that generative AI development is not stopping. Organizations need to act now to formulate an enterprise-wide strategy for AI trust, risk, and security management (AI TRiSM). There is an urgent need for a new class of AI TRiSM tools to manage data and process flows between users and the enterprises that host generative AI foundation models.
There are currently no off-the-shelf tools on the market that give users systematic privacy assurances or effective content filtering of their engagements with these models.
AI developers urgently need to work with policymakers, including new regulators that may emerge, to establish policies and practices for oversight and risk management of the AI they produce.
Q: What are some of the most significant risks that generative AI poses to businesses?
A: Generative AI poses a number of new risks:
- “Hallucinations” and fabrications, including factual errors, are among the most pervasive problems already emerging in generative AI chatbot solutions. Training data can lead to biased, off-base, or wrong responses, which can be difficult to spot, particularly as solutions become increasingly convincing and relied upon.
- Deepfakes, in which generative AI is used to create malicious content, are a significant risk. These fake images, videos, and voice recordings have been used to attack celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts.
A recent example is an AI-generated image of Pope Francis wearing a fashionable white puffer jacket that went viral on social media. While this example may seem harmless, it offered a glimpse into a future where deepfakes pose significant reputational, counterfeiting, fraud, and political risks to individuals, organizations, and governments.
- Data privacy: Employees can easily expose confidential or proprietary company data when interacting with generative AI chatbot solutions. These applications may store information captured through user inputs indefinitely, and may even use that information to train other models, further compromising confidentiality. Such information could also fall into the wrong hands in the event of a security breach.
- Copyright issues: Generative AI chatbots are trained on large amounts of internet data that may include copyrighted material. As a result, some outputs may violate copyright or intellectual property (IP) protections. Without source references or transparency into how outputs are generated, the only way to mitigate this risk is for users to scrutinize outputs to ensure they do not infringe on copyright or IP rights.
- Cybersecurity concerns: In addition to more sophisticated social engineering and phishing threats, attackers can use these tools for easier malicious code generation. Vendors that offer generative AI foundation models assure customers that they train their models to reject malicious cybersecurity requests; however, they do not provide users with the tools to effectively audit all the security controls in place. The vendors also put a lot of emphasis on “red teaming” approaches. These claims require that users place their full trust in the vendors’ ability to execute on security objectives.
Q: What actions should business leaders take now to manage the risks of generative AI?
A: It’s important to note that there are two general approaches to leveraging ChatGPT and similar applications. Out-of-the-box model usage leverages these services as-is, with no direct customization. A prompt engineering approach uses tools to create, tune, and evaluate prompt inputs and outputs.
For out-of-the-box usage, organizations should implement manual reviews of all model outputs to detect incorrect, misleading, or biased results. They should also establish a governance and compliance framework for enterprise use of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organizational or personal data.
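As a purely illustrative sketch (not a Gartner recommendation), such a policy could be partially automated with a pre-submission check that flags prompts matching sensitive-data patterns. The pattern names and the screen_prompt helper below are hypothetical; real rules would come from an organization’s own data-classification policy.

```python
import re

# Illustrative patterns only; a real deployment would load these
# from the organization's data-classification policy.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal codename": re.compile(r"\bproject[- ]phoenix\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Summarize the Project Phoenix roadmap")
if violations:
    print("Blocked before submission:", ", ".join(violations))
```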
Organizations should use existing security controls and dashboards to monitor unsanctioned use of ChatGPT and similar solutions and detect policy violations. For example, firewalls can block enterprise user access, a security information and event management (SIEM) system can monitor event logs for violations, and a secure web gateway can monitor disallowed API calls.
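As a minimal sketch of the log-monitoring idea, the snippet below scans a web-proxy log for calls to generative AI API hosts. The log format, file name, and host watchlist are all assumptions made for illustration.

```python
import re

# Hypothetical watchlist of generative AI API hosts; in practice this
# would come from the organization's acceptable-use policy.
WATCHED_HOSTS = {"api.openai.com", "api.anthropic.com"}

HOST_PATTERN = re.compile(r"https?://([^/\s]+)")

def flag_unsanctioned_calls(log_path: str) -> list[str]:
    """Scan a plain-text proxy log and collect lines that reach watched hosts."""
    hits = []
    with open(log_path) as log:
        for line in log:
            match = HOST_PATTERN.search(line)
            if match and match.group(1) in WATCHED_HOSTS:
                hits.append(line.strip())
    return hits

if __name__ == "__main__":
    for entry in flag_unsanctioned_calls("proxy.log"):  # hypothetical log file
        print("possible policy violation:", entry)
```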
All of these risk mitigations also apply to prompt engineering usage. In addition, organizations should take steps to protect the internal and other sensitive data used to engineer prompts on third-party infrastructure, and should create and store engineered prompts as immutable assets.
These assets can represent vetted, engineered prompts that are safe to use. They can also represent a corpus of fine-tuned, highly developed prompts that can be more easily reused, shared, or sold.
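One illustrative way to make stored prompts tamper-evident (a sketch under assumed requirements, not a prescribed design) is to file each vetted prompt under its own content hash, so any later edit shows up as a mismatch. The directory layout and helper names here are hypothetical.

```python
import hashlib
import json
from pathlib import Path

PROMPT_STORE = Path("prompt_assets")  # hypothetical asset directory

def store_prompt(prompt_text: str, metadata: dict) -> str:
    """Save a vetted prompt under its SHA-256 digest; any later change
    to the text would no longer match the digest in the file name."""
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    record = {"sha256": digest, "prompt": prompt_text, "metadata": metadata}
    PROMPT_STORE.mkdir(exist_ok=True)
    (PROMPT_STORE / f"{digest}.json").write_text(json.dumps(record, indent=2))
    return digest

def verify_prompt(digest: str) -> bool:
    """Recompute the stored prompt's hash and compare it to its record."""
    record = json.loads((PROMPT_STORE / f"{digest}.json").read_text())
    return hashlib.sha256(record["prompt"].encode("utf-8")).hexdigest() == digest

# Example: register a vetted prompt, then verify it is unchanged.
asset_id = store_prompt(
    "Summarize the attached policy document in plain language.",
    {"owner": "security-team", "reviewed": True},
)
assert verify_prompt(asset_id)
```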
Gartner analysts will discuss AI TRiSM at the Gartner Security & Risk Management Summits, taking place June 5-7 in National Harbor, Maryland; July 26-28 in Tokyo; and September 26-28 in London. Follow news and updates from the conferences on Twitter using #GartnerSEC.
