Your best bet for the future: Securing generative AI

IBM and AWS study: Less than 25% of generative AI projects are currently secured

The corporate world has long operated on the idea that trust is the currency of good business. But as AI transforms and redefines how businesses operate and engage with customers, we need to build trust in the technology.

Advances in AI free up human capital to focus on high-value deliverables. While this evolution is sure to have a transformative impact on business growth, user and customer experiences depend on an organization's commitment to building technology solutions that are safe, responsible, and reliable.

Businesses need to determine whether the generative AI that interacts with their users can be trusted, and security is a fundamental element of trust. That is why ensuring the security of AI deployments is one of the biggest bets companies are making.

Innovate now, secure later: a disconnect

Today, the IBM® Institute for Business Value released the Securing Generative AI: What Matters Now study, co-authored by IBM and AWS, introducing new data, practices, and recommendations for securing generative AI deployments. According to the study, 82% of C-suite respondents say safe and reliable AI is critical to business success. While this sounds promising, 69% of leaders surveyed also indicated that innovation takes precedence over security when it comes to generative AI.

Prioritizing innovation or security may seem like a choice, but it's actually a test, and there's a palpable tension here. Organizations are aware that the stakes of generative AI are higher than ever, but they are not applying the lessons learned from previous technology disruptions. As with the moves to hybrid cloud, agile software development, and zero trust, generative AI security risks taking a backseat. More than 50% of respondents are concerned about unpredictable risks impacting generative AI initiatives and fear an increased potential for business disruption. Yet they report that only 24% of their current generative AI projects are secured. Why does this disconnect exist?

Security indecision can be both an indicator and a consequence of broader generative AI knowledge gaps. Almost half of respondents (47%) said they don't know where and how much to invest when it comes to generative AI. Even as teams pilot new capabilities, leaders are still considering which use cases for generative AI make the most sense and how to scale them for production.

Securing generative AI starts with governance

Not knowing where to start can also hinder security efforts. That's why IBM and AWS collaborated to develop an action guide with practical recommendations for organizations looking to protect their AI.

To ensure the trust and security of generative AI, organizations need to start with the basics, with governance as a baseline. In fact, 81% of respondents said generative AI requires a fundamentally new security governance model. By starting with governance, risk, and compliance (GRC), leaders can build the foundation of a cybersecurity strategy to protect AI architectures that align with business goals and brand values.

To protect a process, you must first understand how it is supposed to operate, so that deviations from expected behavior can be identified. When AI deviates from its operational objectives, it can introduce new risks with unanticipated business impacts. Identifying and understanding these potential risks helps organizations establish their own risk thresholds based on their compliance and regulatory requirements.

With governance guardrails in place, organizations can more effectively establish strategies to protect their AI pipelines: the data, the models, their usage, and the underlying infrastructure on which AI innovations are built and embedded. At the same time, the shared responsibility model for security may shift depending on how an organization uses generative AI. As organizations develop their own AI operations, many tools, controls, and processes are available to reduce the risk of business impact.

And while hallucinations, ethics, and bias are often the first things that come to mind when thinking about trustworthy AI, organizations need to recognize that the AI pipeline itself faces threats. Traditional threats take on new meaning, new threats use offensive AI capabilities as a new attack vector, and emerging threats seek to compromise the AI assets and services we increasingly rely on.

The trust and security equation

Security helps bring trust and confidence to generative AI use cases. To achieve this synergy, organizations must take a "village" approach: conversations need to extend beyond information security and IT stakeholders to strategy, product development, risk, supply chain, and customer engagement.

These technologies are both transformative and disruptive, so managing an organization's AI and generative AI assets requires collaboration across security, technology, and business domains.

Technology partners can play an important role. Leveraging the breadth and depth of expertise of technology partners across the entire threat lifecycle and security ecosystem can be an invaluable asset. In fact, an IBM study found that more than 90% of surveyed organizations leverage generative AI security solutions through third-party products or technology partners. When it comes to choosing a technology partner to address your generative AI security needs, surveyed organizations reported:

  • 76% want a partner who can help them build a compelling cost case with a solid ROI.
  • 58% want guidance on their overall strategy and roadmap.
  • 76% are looking for a partner who can facilitate training, knowledge sharing, and knowledge transfer.
  • 75% choose a partner who can guide them through the evolving legal and regulatory compliance landscape.

The survey reveals that organizations recognize the importance of security in AI innovation but are still working out how best to approach the AI revolution. Building relationships that can guide, advise, and technically support these efforts is an important next step toward protected and trusted generative AI. In addition to sharing key insights into executives' perceptions and priorities, IBM and AWS have included an action guide with practical recommendations for taking your generative AI security strategy to the next level.

Learn more about the IBM-AWS joint research and how organizations can protect their AI pipelines.
