Generative artificial intelligence is a transformative technology that has captured the interest of businesses worldwide and is rapidly being integrated into enterprise IT roadmaps. Despite its potential and the pace of change, business and cybersecurity leaders indicate they remain cautious about adoption because of security risks and concerns. A recent ISMG survey found that sensitive data leakage is the top implementation concern for both business leaders and cybersecurity professionals, followed by the ingress of inaccurate data.
Cybersecurity leaders can mitigate many of these concerns by reviewing and updating internal IT security practices with generative AI in mind. Specific focus areas include implementing a zero trust model and adopting basic cyber hygiene standards, which still protect against 99% of attacks. However, generative AI providers also play a critical role in the safe use of generative AI in the enterprise. Given this shared responsibility, cybersecurity leaders may need to seek a deeper understanding of how security is addressed across the generative AI supply chain.
Best practices for generative AI development are constantly evolving and require a holistic approach that considers the technology, its users, and society at large. Within that broader context, four fundamental areas of protection are particularly relevant to enterprise security practices: data privacy and ownership, transparency and accountability, user guidance and policies, and security by design.
- Data Privacy and Ownership
Generative AI providers should clearly document their data privacy policies. When evaluating vendors, customers should ensure that their chosen provider allows them to manage their own information and that it will not be used to train underlying models or shared with other customers without their explicit permission.
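To make that evaluation concrete, the sketch below shows one way a security team might encode such criteria as a simple checklist. The field names and thresholds are illustrative assumptions, not any particular provider's API or contract terms.

```python
from dataclasses import dataclass

# Hypothetical data-handling settings a provider might document; field names
# are illustrative, not drawn from any specific vendor's API.
@dataclass
class ProviderDataPolicy:
    customer_owns_data: bool       # customer retains ownership of prompts and outputs
    used_for_model_training: bool  # provider trains foundation models on customer data
    shared_across_tenants: bool    # customer data visible to other customers
    retention_days: int            # how long prompts and outputs are stored

def privacy_baseline_gaps(policy: ProviderDataPolicy) -> list[str]:
    """Return a list of gaps against a minimal enterprise privacy baseline."""
    gaps = []
    if not policy.customer_owns_data:
        gaps.append("Customer does not retain ownership of its data.")
    if policy.used_for_model_training:
        gaps.append("Customer data is used to train underlying models.")
    if policy.shared_across_tenants:
        gaps.append("Customer data is shared with other tenants.")
    if policy.retention_days > 30:
        gaps.append(f"Retention of {policy.retention_days} days exceeds the 30-day target.")
    return gaps

if __name__ == "__main__":
    candidate = ProviderDataPolicy(
        customer_owns_data=True,
        used_for_model_training=False,
        shared_across_tenants=False,
        retention_days=30,
    )
    print(privacy_baseline_gaps(candidate) or "Meets the privacy baseline.")
```

A checklist like this can be reused across vendor assessments so that every candidate provider is measured against the same minimum bar.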
- Transparency and Accountability
Providers must maintain the trustworthiness of the content their tools create. Like humans, generative AI can make mistakes. Perfection isn't expected, but transparency and accountability are. To achieve this, generative AI providers must, at a minimum: 1) use trusted data sources to improve accuracy; 2) provide visibility into reasoning and sources to maintain transparency; and 3) provide mechanisms for user feedback to support continuous improvement.
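As a minimal sketch of what those three requirements can look like in practice, the structures below pair generated answers with source citations and capture user feedback for continuous improvement. The types and field names are assumptions for illustration, not any provider's actual response format.

```python
from dataclasses import dataclass, field

# Illustrative only: assumes a generative AI service that returns grounded answers.
@dataclass
class SourceCitation:
    title: str
    url: str

@dataclass
class GroundedAnswer:
    text: str
    citations: list[SourceCitation] = field(default_factory=list)
    disclosure: str = "AI-generated content; verify against the cited sources."

def record_user_feedback(answer: GroundedAnswer, helpful: bool, comment: str = "") -> dict:
    """Capture feedback alongside the answer so it can feed continuous improvement."""
    return {
        "answer_text": answer.text,
        "citations": [c.url for c in answer.citations],
        "helpful": helpful,
        "comment": comment,
    }

if __name__ == "__main__":
    answer = GroundedAnswer(
        text="Zero trust assumes breach and verifies every request explicitly.",
        citations=[SourceCitation("Zero trust guidance", "https://example.com/zero-trust")],
    )
    print(record_user_feedback(answer, helpful=True, comment="Clear and well sourced."))
```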
- User Guidance and Policies
Enterprise security teams have an obligation to ensure that generative AI is used safely and responsibly within their organizations, and AI providers can help with that effort in a variety of ways.
Adversarial insider abuse, while unlikely, is another consideration. This would include attempts to use generative AI for harmful purposes, such as generating dangerous code. AI providers can mitigate this type of risk by building safety protocols into their system designs and clearly setting boundaries for what generative AI can and cannot do.
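One simplified illustration of such a boundary is a pre-generation safety gate that refuses requests matching prohibited-use categories. The categories and keyword patterns below are placeholders; production systems rely on trained classifiers and provider-side policy enforcement rather than keyword lists.

```python
import re

# Placeholder prohibited-use categories and patterns for illustration only.
BLOCKED_INTENT_PATTERNS = {
    "malware_generation": re.compile(r"\b(ransomware|keylogger|write\s+malware)\b", re.IGNORECASE),
    "credential_theft": re.compile(r"\b(steal|harvest)\s+(passwords|credentials)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, str | None]:
    """Return (allowed, matched_category); block prompts that match a prohibited-use category."""
    for category, pattern in BLOCKED_INTENT_PATTERNS.items():
        if pattern.search(prompt):
            return False, category
    return True, None

if __name__ == "__main__":
    allowed, category = screen_prompt("Write ransomware that encrypts a file share.")
    print("allowed" if allowed else f"blocked: {category}")
```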
A more common concern is over-reliance by users. Generative AI is meant to assist workers in their day-to-day work, not replace them, and users should be encouraged to think critically about the information the AI provides. Providers can help by explicitly citing sources and using carefully considered language that encourages thoughtful use.
- Security by Design
Generative AI technologies must be designed and developed with security in mind, and technology providers must be transparent about their security development practices. The security development lifecycle should also be adjusted to account for the new threat vectors generative AI introduces, including updating threat modeling requirements to address AI- and machine learning-specific threats and implementing rigorous input validation and sanitization of user-supplied prompts. AI red teaming is another important security enhancement, used to probe for exploitable vulnerabilities, the generation of potentially harmful content, and more. Red teaming has the advantage of being highly adaptable and can be applied both before and after product release.
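To illustrate the input validation and sanitization step, the sketch below bounds prompt length, normalizes Unicode, strips non-printable characters, and flags common prompt-injection phrases for review. The limit and phrase list are assumptions chosen for the example; real deployments layer these checks with classifiers and system-prompt isolation.

```python
import unicodedata

MAX_PROMPT_CHARS = 4000  # illustrative limit; tune to the model's context window

# Phrases commonly associated with prompt-injection attempts (illustrative list).
SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "disregard the system prompt",
)

def sanitize_prompt(raw: str) -> str:
    """Normalize and bound a user-provided prompt before it reaches the model."""
    if len(raw) > MAX_PROMPT_CHARS:
        raise ValueError("Prompt exceeds the maximum allowed length.")
    # Normalize Unicode and drop non-printable control characters.
    normalized = unicodedata.normalize("NFKC", raw)
    cleaned = "".join(ch for ch in normalized if ch.isprintable() or ch in "\n\t")
    lowered = cleaned.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in lowered:
            raise ValueError("Prompt flagged for review: possible injection attempt.")
    return cleaned

if __name__ == "__main__":
    print(sanitize_prompt("Summarize our zero trust rollout plan in three bullet points."))
```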
This is a strong starting point, but security leaders who want to dig deeper can refer to several promising industry and government initiatives aimed at ensuring safe and responsible development and use of generative AI. One such initiative is the NIST AI Risk Management Framework, which provides organizations with a common methodology for mitigating concerns while supporting trust in generative AI systems.
Make no mistake, safe enterprise use of generative AI must be supported by strong enterprise IT security practices and guided by a carefully considered strategy that includes an implementation plan, clear usage policies, and associated governance. But leading providers of generative AI technology know they have an important role to play, and are happy to share information about their efforts to advance safe, secure, and trustworthy AI. Working together will not only foster safe use, but also build the confidence needed for generative AI to fully realize its promise.