Anthropic co-founder: "Dumb questions" unlock AI breakthroughs

An Anthropic co-founder said that advancing AI doesn't take rocket science.

“I really want very naive and stupid questions,” said Jared Kaplan at last month's Y Combinator event.

Anthropic's chief science officer said in a video published by Y Combinator on Tuesday that AI is "an incredibly new field" and that "many of the most basic questions haven't been answered."

For example, Kaplan recalled that in the 2010s, everyone in tech said "big data" was the future. He asked: How big is the data? How much does it actually help?

That line of questioning eventually led him and his team to study whether they could predict AI performance from the size of the model and the amount of compute used.

"We got really lucky. We found that there's something very, very precise and surprising underlying AI training," he said. "And that came from just asking the stupidest questions possible."
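The precise relationship Kaplan alludes to is usually described as a scaling law: loss falls as a power law in model size. A minimal sketch of fitting such a law, using made-up illustrative numbers rather than any real training data:

```python
import numpy as np

# Hypothetical (model size, loss) observations -- illustrative only.
sizes = np.array([1e6, 1e7, 1e8, 1e9])
losses = np.array([5.0, 3.9, 3.05, 2.4])

# A power law L(N) = a * N**(-alpha) is a straight line in log-log
# space, so a least-squares fit on the logs recovers its parameters.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
alpha, a = -slope, np.exp(intercept)

# The payoff of a precise trend: extrapolate to a 10x larger model.
predicted = a * (1e10) ** (-alpha)
print(alpha, predicted)
```

Once the exponent is pinned down, the curve says roughly how much a bigger model should help before anyone trains it, which is what makes a "dumb" question like "how much does size actually help?" so valuable.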

Kaplan added that as a physicist, that's exactly what he was trained to do: "You look at the big picture and ask things that are really stupid."

Pinning down a simple question "as precisely as possible" can reveal big trends, which "can give you a lot of tools," Kaplan said.

"It allows you to ask: what does it really mean to move the needle?" he added.

Kaplan and Anthropic did not respond to requests for comment from Business Insider.

Anthropic's AI breakthrough

Anthropic emerged as a major force in AI-assisted coding, particularly after the release of its Claude 3.5 Sonnet model in June 2024.

"Anthropic has changed everything," Sourcegraph's Quinn Slack said in a Business Insider report released last week.

"We immediately said, 'This model is better than anything else at writing code over long stretches.' It was high-quality code that a human would be proud to have written," he added.

“And as a startup, if you're not moving at that speed, you'll die.”

Anthropic co-founder Ben Mann said in a recent episode of the "No Priors" podcast that figuring out how to make AI better at coding is driven heavily by trial and error and measurable feedback.

"Sometimes you just don't know, and you have to try things out — and with coding that's easy, because you can do that in a loop," Mann said.

Elad Gil, a top AI investor and a host of "No Priors," agreed that the clear signal from running code and checking whether it works is what makes this process fruitful.

“With coding, there's a direct output that you can actually measure. You can run the code and test the code,” he said. “There's a baked-in utility feature that you can optimize.”
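The loop Mann and Gil describe — generate a candidate, run it, keep what passes — can be sketched in a few lines. Here the list of candidate strings stands in for model output (hypothetical; any candidate generator would work):

```python
def run_candidate(code: str, checks) -> bool:
    """Execute candidate code and report whether every check passes."""
    namespace = {}
    try:
        exec(code, namespace)
        return all(check(namespace) for check in checks)
    except Exception:
        return False

def improve_in_loop(candidates, checks):
    """The measurable feedback loop: try candidates until one passes."""
    for code in candidates:
        if run_candidate(code, checks):
            return code
    return None

# Toy example: two candidate implementations of add(); only the
# second one satisfies the check, so it is the one returned.
candidates = ["def add(a, b): return a - b",
              "def add(a, b): return a + b"]
checks = [lambda ns: ns["add"](2, 3) == 5]
print(improve_in_loop(candidates, checks))
```

The pass/fail result of the checks is the "baked-in utility function" Gil mentions: unlike prose, every candidate gets an unambiguous score that can drive the next iteration.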

In an exclusive report last week, BI's Alistair Barr wrote about how the startup achieved its AI coding breakthroughs, crediting approaches such as reinforcement learning from human feedback, or RLHF, and constitutional AI.

Anthropic could soon be worth $100 billion, Barr wrote, as the startup draws billions of dollars from companies paying for access to its models.
