Artificial intelligence's Mayday call

On May 1st, The New York Times reported that Geoffrey Hinton, the so-called "Godfather of AI," had resigned from Google. He said he made the move so that he could speak freely about the risks of artificial intelligence (AI).

His decision is both surprising and unsurprising. Surprising because he has devoted his life to advancing AI technology; unsurprising given the growing concerns he has expressed in recent interviews.



The date of the announcement carries symbolism. May 1st is May Day, known for celebrating workers and the blossoming of spring. Ironically, AI, especially generative AI based on deep learning neural networks, has the potential to displace a large portion of the workforce. We are already starting to see this impact, at IBM for example.

Is AI replacing jobs and getting closer to superintelligence?

The World Economic Forum estimates that 25% of jobs will be disrupted over the next five years, and AI could play a role in that; other companies are sure to follow IBM's lead. As for the spring bloom, generative AI could spark a new beginning of symbiotic intelligence, with humans and machines working together in ways that lead to a renaissance of possibility and abundance.


Alternatively, this could be when advances in AI begin to approach superintelligence, potentially posing an existential threat.

It is these sorts of worries that Hinton wants to talk about, and he could not do so freely while working at Google or any other company pursuing commercial AI development. In a Twitter post, Hinton said:

Geoffrey Hinton's tweet, May 1, 2023

Mayday

Perhaps it is just a play on words, but the announcement date evokes another association. "Mayday" is a distress signal used when there is imminent and grave danger, a priority call reserved for true emergencies. Is the timing of this news just a coincidence, or is it meant to symbolically heighten its importance?

According to the Times article, Hinton's immediate concern is AI's ability to produce human-quality content in text, video, and images, and how that capability can be used by bad actors to spread misinformation and disinformation such that the average person "will not be able to know what is true anymore."

He also believes the time is approaching when machines will be smarter than the smartest humans. There has been much debate on this point, with most AI experts viewing it as far in the future, perhaps 40 or more years away.

That list included Hinton. In contrast, Ray Kurzweil, a former director of engineering at Google, has long argued that this moment will arrive in 2029, when AI will easily pass the Turing test. Kurzweil's view of this timeline was once an outlier, but no longer.

In the interview accompanying his Mayday announcement, Hinton said: "The idea that this stuff [AI] could actually get smarter than people; a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Those 30 to 50 years could have been used to prepare companies, governments, and societies through governance practices and regulations. Now the wolf is at the door.

Artificial general intelligence

A related topic is the discussion of artificial general intelligence (AGI), the stated mission of OpenAI and DeepMind. The AI systems in use today excel at specific narrow tasks, such as reading radiology images or playing games, and no single algorithm excels at more than one kind of task. In contrast, AGI would possess human-like cognitive abilities such as reasoning, problem-solving, and creativity, and could perform a wide range of tasks at human level or better across many domains, whether as a single algorithm or a network of algorithms.

Much like the debate about when AI will be smarter than humans (at least at specific tasks), predictions for when AGI will be achieved vary widely, from a few years to decades or centuries, or possibly never. These timeline predictions are also being pulled forward by new generative AI applications such as ChatGPT, which is based on transformer neural networks.

Beyond the intended purposes of these generative AI systems, such as creating compelling images from text prompts or providing human-like text answers to queries, these models have shown a surprising capacity for emergent behaviors. This means the AI can exhibit novel, intricate, and unexpected behaviors.

For example, the code-generation ability of GPT-3 and GPT-4, the models underlying ChatGPT, is considered an emergent behavior, since this capability was not part of the design specification; it appeared as a by-product of training the models. The developers of these models cannot fully explain exactly how or why such behaviors arise; they can only infer that they emerge from the models' pattern-recognition capabilities.

The timeline speeds up, creating a sense of urgency

It is these advances that are rearranging the timeline for advanced AI. In a recent CBS News interview, Hinton said he believes AGI could be achieved in as little as 20 years. He added that we "might be" close to computers being able to come up with ideas for improving themselves: "We have to think hard about how to control that."

Early evidence of this capability can be seen in the nascent AutoGPT, an open-source recursive AI agent. Beyond being available for anyone to use, it can autonomously use the results it generates to create new prompts, chaining these operations together to complete complex tasks.
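The recursive loop described above can be sketched in a few lines of Python. This is a minimal illustration of the prompt-chaining idea, not AutoGPT's actual implementation: the `fake_llm` function is a hypothetical stand-in for a call to a hosted language model, and all names here are invented for the example.

```python
def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.

    It 'decomposes' a planning prompt into one sub-step, then
    signals completion. A real agent would query a live model here.
    """
    if prompt.startswith("plan:"):
        return "step: gather sources"
    return "DONE"


def run_agent(task: str, max_iterations: int = 5) -> list[str]:
    """Feed each model output back in as the next prompt.

    This is the core recursion: output becomes input, chained
    until the model signals DONE or the iteration budget runs out.
    """
    history: list[str] = []
    prompt = f"plan: {task}"
    for _ in range(max_iterations):
        result = fake_llm(prompt)
        history.append(result)
        if result == "DONE":
            break
        prompt = result  # the chaining step
    return history


print(run_agent("write a market research report"))
```

The `max_iterations` cap matters in practice: because the agent writes its own next prompt, an unbounded loop could run (and bill) indefinitely.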

In this way, AutoGPT could be used to identify areas where the underlying AI models can be improved and to generate ideas for how to improve them. Moreover, as New York Times columnist Thomas Friedman notes, open-source code can be used by anyone. He asks: "What would ISIS do with the code?"

It is not a foregone conclusion that generative AI specifically, or the overall effort to develop AI, will lead to bad outcomes. But the acceleration of the timeline for more advanced AI brought about by generative AI has created a strong sense of urgency for Hinton and others, apparently leading to his Mayday signal.

Gary Grossman is SVP of Edelman’s Technology Practice and Global Leader of the Edelman AI Center of Excellence.

