Why one AI security CEO calls the current state of AI security “bleak”

AI For Business


Welcome to Eye on AI with AI reporter Sharon Goldman. In this edition…Mark Zuckerberg and Priscilla Chan have restructured their philanthropy to focus on AI and science…Apple has reportedly finalized a deal to pay Google about $1 billion a year to use a 1.2 trillion-parameter AI model to power a major overhaul of Siri…OpenAI CFO Sarah Friar clarified her comments, saying the company is not seeking a government backstop.

As the wife of a cybersecurity professional, I can’t help but notice how AI is changing the landscape for those on the digital front lines, making their jobs harder and smarter at the same time. I often joke with my husband, “We need you on that wall” (a nod to Jack Nicholson’s famous A Few Good Men monologue). That’s why I’m always looking at how AI is transforming both security defense and attack.

That’s why I jumped at the chance to speak over Zoom with Yotam Segev, co-founder and CEO of AI security startup Cyera, and Zohar Wittenberg, general manager of Cyera’s AI security business. Unsurprisingly, Cyera’s business is booming in the AI era. Its ARR has grown to over $100 million in less than two years, and the company is now valued at over $6 billion, thanks to a surge in demand from companies rushing to adopt AI tools without exposing sensitive data or running into new security risks. The company is on Fortune’s latest Cyber 60 list of startups, and its client roster includes AT&T, PwC, and Amgen.

“I think of it like the Levi’s of the gold rush,” Segev said. He explained that just as every gold miner needed a sturdy pair of jeans, every company adopting AI needs the tools to deploy it safely.

The company also recently launched a new research lab to help businesses stay ahead of the rapidly growing security risks created by AI. The team studies how data and AI systems actually interact inside large organizations, tracking where sensitive information resides, who has access to it, and how new AI tools expose it.

I have to say I was surprised when Segev described the current state of AI security as “bleak,” with CISOs (chief information security officers) caught in the middle. One of the biggest problems, he and Wittenberg told me, is that employees are using public AI tools like ChatGPT, Gemini, Copilot, and Claude without company approval, or in ways that violate policy, such as feeding sensitive or regulated data into external systems. Meanwhile, CISOs face a difficult choice: block AI and slow innovation, or allow it and risk massive data breaches.

“They know they can’t say no,” Segev said. Regulated organizations in industries such as healthcare, financial services, and telecommunications are actually in a better position to slow things down, he theorized. “I was meeting with the CISO of a global telecommunications company this week, and she said to me, ‘I’m resisting. I’m keeping them at bay. I’m not ready.’ But she has that privilege because she’s in a regulated industry and has that standing within her company. Move down the list to less regulated companies, and they’re just getting trampled.”

Wittenberg said businesses aren’t in as much trouble right now because most AI tools aren’t yet fully autonomous. “At the moment it’s just a knowledge system and it can still be contained,” he explained. “But once you get to the point where agents are taking actions on behalf of humans and talking to each other, you’re going to run into big problems if you don’t do anything,” he said, adding that within a few years, these types of AI agents will be deployed throughout the enterprise.

“I hope the world moves at such a pace that we can build security in time,” he said. “We’re trying to make sure that organizations are prepared to protect themselves before it becomes a disaster.”

To borrow from A Few Good Men again, I wonder if companies can really handle the truth. When it comes to AI security, we need all the help we can get on that wall.

And now for a bit of self-promotion: yesterday, I published a new in-depth Fortune profile of OpenAI’s Greg Brockman, an engineer turned power broker behind the company’s $1 trillion AI infrastructure mission. Please check it out! It’s one of my favorite stories I’ve worked on this year.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@SharonGoldman

Fortune on AI

Meet the power brokers of the AI era: OpenAI’s ‘chief executive’ helps realize Sam Altman’s $1 trillion data center dream – Written by Sharon Goldman

Microsoft frees itself from dependence on OpenAI and joins the race for ‘superintelligence’ — AI chief Mustafa Suleyman wants to ensure it serves humanity – Written by Sharon Goldman

Less obvious factors that helped Democrats win Virginia, New Jersey and Georgia – Written by Sharon Goldman

Exclusive: Voice AI startup Giga raises $61M to automate customer service – Written by Beatrice Nolan

OpenAI’s new safety tools are designed to make it harder to jailbreak AI models, but they may instead give users a false sense of security – Written by Beatrice Nolan

AI in the news

Mark Zuckerberg and Priscilla Chan have restructured their philanthropy to focus on AI and science. The New York Times reported today that the Chan Zuckerberg Initiative, Mark Zuckerberg and Priscilla Chan’s philanthropy, is going all in on AI. CZI, once known for broader ambitions to reform education and address social inequality, announced a major reorganization to focus squarely on AI-driven scientific research through a new organization called the Chan Zuckerberg Biohub Network. The group also acquired the team behind AI startup EvolutionaryScale and appointed its lead scientist, Alex Rives, as scientific director. It’s a boomerang move for Rives: when I interviewed him about EvolutionaryScale last year, he explained that he had led the research group known as Meta’s “AI protein team,” which was disbanded in August 2023 as part of Mark Zuckerberg’s “Year of Efficiency,” a cost-cutting drive that saw more than 20,000 layoffs at Meta. Rives soon formed EvolutionaryScale with a core group of former Meta colleagues to keep building large language models that, instead of generating text, images, or video, generate recipes for entirely new proteins.

Apple has reportedly finalized a deal to pay Google about $1 billion a year to use its 1.2 trillion-parameter AI model to power a major overhaul of Siri. According to Bloomberg, Apple selected Google’s technology to help rebuild the systems underlying Siri after testing models from Google, OpenAI, and Anthropic. The partnership will give Apple access to Google’s extensive AI infrastructure, enabling a more conversational version of Siri and new features scheduled for release next spring. Both companies declined to comment publicly. While Apple reportedly hopes to use the technology only as an interim solution until its own models are powerful enough, my colleague Jeremy Kahn and I both think the deal may signal that Apple has finally given up on competing in the AI model game with homegrown technology for Siri.

OpenAI CFO Sarah Friar clarified her comments, saying the company is not seeking government support. As CNBC reported, Friar said late Wednesday that the company is not seeking a government “backstop” for its massive infrastructure buildout, walking back remarks she made earlier at the Wall Street Journal’s Tech Live event. Friar said her comments about a possible federal guarantee had “obscured the point,” explaining that she meant both the U.S. government and the private sector must invest in AI as a national strategic asset. Her remarks come as OpenAI faces intense scrutiny over how it will finance more than $1.4 trillion in data center and chip deals while reporting revenue of only about $13 billion this year. CEO Sam Altman dismissed the concerns, saying AI infrastructure is the foundation of America’s technological strength.

AI calendar

November 10th-13th: Web Summit, Lisbon.

November 19th: Nvidia’s third quarter earnings

November 26th-27th: World AI Conference, London.

December 2nd – 7th: NeurIPS, San Diego

December 8th-9th: Fortune Brainstorm AI, San Francisco. Apply to attend here.

Eye on AI numbers

82%

That’s the share of CISOs who say they face pressure from their boards and executives to use AI-driven automation to improve efficiency, according to the 2025 CISO Pressure Index, a new study of 100 chief information security officers by Nagomi Security.

Other key findings include:

  • 59% of CISOs say they fear AI attacks more than any other event over the next 12 months.

  • 47% expect agentic AI to be their biggest concern in the next 2-3 years.

  • 80% of CISOs say they are currently under high or extreme pressure, and 87% report increased pressure over the past year.

Fortune Brainstorm AI will return to San Francisco on December 8-9 to convene the smartest people we know – technologists, entrepreneurs, Fortune Global 500 executives, investors, policymakers, and the brightest minds in between – to explore the most pressing questions about AI at a new critical moment. Register here.


