Who decides what AI says? Campbell Brown, Meta's former head of news, has thoughts.

Campbell Brown has spent her career chasing accurate information, first as a prominent television journalist and then as Facebook's first, and only, full-time head of news. Now, as AI changes the way people consume information, she sees history threatening to repeat itself. This time, she isn't waiting for someone else to fix it.

Her company, Forum AI, which she recently discussed with TechCrunch's Tim Fernholz at a StrictlyVC night in San Francisco, evaluates how foundation models perform on what she calls "high-stakes topics": subjects such as geopolitics, mental health, finance, and recruiting that are "vague, nuanced, and complex, with no clear yes or no answers."

The idea is to recruit leading experts to build benchmarks, then train AI judges to evaluate models against them at scale. For Forum AI's geopolitics work, Brown has hired Niall Ferguson, Fareed Zakaria, former Secretary of State Antony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is for the AI judges to reach roughly 90% agreement with the human experts, a threshold Forum AI has hit, she said.
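Brown didn't describe the mechanics, but the core metric is straightforward to sketch. The following is a minimal, hypothetical Python illustration, not Forum AI's actual method: score an AI judge by the fraction of benchmark items where its verdict matches the majority label from a panel of human experts. All names and data here are invented for illustration.

```python
from collections import Counter

def majority_label(panel: list[str]) -> str:
    """Most common verdict among a panel of expert ratings."""
    return Counter(panel).most_common(1)[0][0]

def judge_agreement(expert_panels: list[list[str]], judge_verdicts: list[str]) -> float:
    """Fraction of items where the AI judge matches the expert majority."""
    matches = sum(
        majority_label(panel) == verdict
        for panel, verdict in zip(expert_panels, judge_verdicts, strict=True)
    )
    return matches / len(judge_verdicts)

# Hypothetical benchmark: three model answers, each rated "pass" or "fail"
# by three human experts, alongside the AI judge's verdict on the same answers.
experts = [
    ["pass", "pass", "fail"],
    ["fail", "fail", "fail"],
    ["pass", "pass", "pass"],
]
judge = ["pass", "fail", "pass"]

score = judge_agreement(experts, judge)
print(f"Judge-expert agreement: {score:.0%}")  # 100% on this toy data
print("Meets the ~90% bar" if score >= 0.90 else "Below the ~90% bar")
```

In practice, a threshold like this would presumably be measured on held-out items and per topic, since an aggregate number can hide systematic misses in any one domain.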

Brown traces the origins of Forum AI, founded 17 months ago in New York, to a specific moment. "I was at Meta when ChatGPT first launched, and I remember realizing right after that this was going to be the funnel that all information was going to flow through, and it wasn't very good." Thinking about the impact on her own children made it personal. "My kids are going to be really stupid if we don't find a way to solve this," she recalled thinking.

What frustrated her most was that accuracy didn't seem to be anyone's priority. Foundation-model companies, she said, are "very focused on coding and math," while news and information is harder to get right. But harder, she argued, doesn't mean optional.

In fact, when Forum AI began evaluating the leading models, the results were not always encouraging. Nearly every model shows a left-leaning political bias, she noted, and Gemini cites Chinese Communist Party websites as sources for "articles that have nothing to do with China." There are subtler failures too, she said: missing context, missing perspectives, and straw-man arguments presented without acknowledgment. "We have a long way to go," she said. "But I also think there are some very simple fixes that will significantly improve the results."

Brown spent years at Facebook watching what happens when a platform optimizes for the wrong things. "We failed at a lot of things we tried," she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media companies choose to ignore it, is that optimizing for engagement is bad for society and leaves many people under-informed.

Her hope is that AI can break that cycle. "At this point, it could go either way," she said. Companies could give users whatever they want to hear, or they could "give people what's authentic, honest, and true." She acknowledged that the idealistic version, an AI that optimizes for truth, might sound naive. But she thinks corporations may prove to be unlikely allies here: companies using AI for credit decisions, lending, insurance, and hiring are exposed to liability and "will want to optimize for getting it right."

That corporate demand is what Forum AI is betting its business on, though turning compliance concerns into steady revenue remains a challenge, especially while much of the market is still satisfied with the checkbox audits and standardized benchmarks that Brown considers inadequate.

The current compliance regime is "a joke," she says. When New York City passed the first law mandating bias audits of AI hiring tools, she said, an audit found that more than half of the systems had violations that had gone undetected. Real evaluation requires domain expertise that covers not just the known scenarios but also the edge cases "that can get people into trouble that people don't even think about." And that work takes time. It is not, she says, a job for a smart generalist.

Brown, whose company raised $3 million last fall in a round led by Lerer Hippeau, is uniquely positioned to explain the disconnect between the AI industry's self-image and the reality for most users. "You'll hear from leaders of big technology companies, 'This technology is going to change the world,' 'You're going to lose your job,' 'It's going to cure cancer,'" she says. "But for the average person just using a chatbot to ask basic questions, you're still going to get a lot of sloppy and wrong answers."

She believes trust in AI is at a low point, and that the skepticism is often justified. "There's a conversation happening in Silicon Valley around one thing, but there's a completely different conversation happening among consumers."
