Lakera gets $20 million to stop its business-oriented Gen AI app from going out of control and leaking sensitive data.

AI For Business


This is a potential nightmare for Fortune 500 leaders working on chatbots and other generative AI applications: hackers will figure out how to trick the AI ​​into leaking sensitive company and customer data.

Zurich, Switzerland-based startup Lakera today announced it has raised $20 million to help leaders sleep better. European venture capital firm Atomico led the funding round, with participation from existing investors including Citi Ventures, Dropbox Ventures and Redalpine, bringing Lakera's total funding to $30 million. The company did not disclose its valuation in its latest funding.

Used by Dropbox, Citi, and numerous Fortune 100 technology and financial companies, Lakera's platform allows companies to set their own guardrails and boundaries for how generative AI applications respond to prompts such as text, images, videos, etc. The technology should protect against the most widely used technique for hacking generative AI models, known as “prompt injection attacks,” in which hackers manipulate generative AI to access a company's systems, steal sensitive data, take unauthorized actions, or generate harmful content.

Most Fortune 500 companies want to adopt generative AI within the next two years, says Lakera CEO David Haber. They typically use pre-made models, such as those used in OpenAI's ChatGPT. They then build applications on top of the models, such as customer service chatbots or research assistants, connecting them to sensitive company data and integrating them into business-critical functions. So safety and security must be a top priority.

“Existing security teams are faced with a whole new challenge of securing these Gen AI applications,” Haber said. “We're processing everything that goes in and everything that comes out, and ultimately making sure these high-performance Gen AI applications don't do things they weren't intended to do.” He added that Lakera's platform isn't an off-the-shelf option, but is built on the company's own internal AI models. “You can't use ChatGPT to secure ChatGPT. That's a terrible idea.”

But most importantly, Haber emphasized, customers can specify the context of what their Gen AI applications can and can't do, and assess possible security issues in real time. Customers can also implement specific policies about what the chatbot can say, he said. For example, a company might not want to talk about competitors or expose financial data.

Haber said Lakera has a unique strength in tracking AI threats: its online AI security game, Gandalf, which has millions of users around the world, including at Microsoft (which uses it for security training). As users test their on-the-fly injection skills in Gandalf's AI “jailbreak” game, the tool generates a real-time database of AI threats. The company said that this database grows with “tens of thousands of unique new attacks every day,” helping Lakera's software stay up to date.

Lakera is playing in the crowded Gen AI security space alongside other startups like HackerOne and BugCrowd. But Matt Carbonara of Citi Ventures said the Lakera team “has the background to build and evolve this product that the market needs,” adding that he appreciated the company's focus on rapid injection attacks.

“As new attack surfaces emerge, new countermeasures are needed,” he said. “Rapid injection attack approaches are the first area people are looking at.”

Recommended Newsletters:

CEO Daily provides essential context for the news business leaders need to know. More than 125,000 readers trust CEO Daily every weekday morning for insight on and from the C-level. Subscribe now.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *