Gary Marcus Said AI Was Stupid, Now He Says It’s Dangerous



At the time, just a few months ago, Marcus’s argument was technical. But now that large language models have become a global phenomenon, his focus has shifted. The heart of Marcus’s new message is that chatbots from OpenAI, Google, and others are dangerous: their power enables a tsunami of misinformation, security bugs, and defamatory “hallucinations” that automate slander. This seems contradictory. For years, Marcus accused AI builders of exaggerating their claims. Why is AI now so formidable that society should rein it in?

Marcus, ever talkative, has an answer: “I still believe LLMs are actually pretty stupid. But there is a difference between power and intelligence. And we’re suddenly giving them a lot of power.” In February, he found the situation alarming enough that he decided to devote most of his energy to the problem. Ultimately, he hopes to lead a nonprofit dedicated to getting the best out of AI while avoiding the worst.

Marcus argues that policymakers, governments, and regulators need to put the brakes on AI development to head off potential harm and disruption. Along with Elon Musk and dozens of other scientists, policy wonks, and simply worried observers, he signed the now-famous petition demanding a six-month pause on training new LLMs. However, he admits he doesn’t really think such a pause would make a difference, and says he signed primarily to align himself with the community of AI critics. Instead of a pause in training, he would prefer a pause in deploying new models or iterations of current ones. Given the fierce, almost existential competition between Microsoft and Google, which Apple, Meta, Amazon, and countless other startups are eager to join, this would probably have to be forced on the companies.

Marcus has an idea of who should do the enforcing. He has recently argued that the world urgently needs a “global, neutral, nonprofit international organization for AI.”

As he outlined in an op-ed for The Economist, such an agency could function like the International Atomic Energy Agency, which conducts audits and inspections to detect nascent nuclear programs. Perhaps this agency would monitor algorithms to make sure they don’t embed bias, promote misinformation, or hijack the power grid when we’re not looking. It seems far-fetched to imagine the United States, Europe, and China all working together on this, but perhaps the threat of an alien intelligence overthrowing our species would put them all on Team Human. Hey, it worked for that other global threat, climate change! Hmmm…

In any case, the debate about controlling AI will only grow livelier as the technology becomes more embedded in our lives. Expect to see more of Marcus and the many other voices in the conversation. The debate about what to do with AI is a healthy and necessary one, even if the fast-moving technology may well advance regardless of whatever measures we painstakingly and belatedly adopt. ChatGPT’s rapid rise as an all-purpose business tool, entertainment device, and best friend shows that we want this stuff, scary or not. Like every other giant technological advance, superintelligence seems destined to bring us compelling benefits, even as it transforms the workplace, our cultural consumption, and, inevitably, us.
