Scientists say ‘everyone on Earth would die’ if AI were allowed to become more intelligent

AI News


We are entering a critical era of technology: the rise of AI. Artificial intelligence has been with us for a long time, but recent developments have pushed its capabilities to a level that may begin to leave humans behind. A prominent example is OpenAI's ChatGPT. Built on the GPT-4 large language model, this AI chatbot can process and analyze vast amounts of data before generating a response, and it can answer nearly any question put to it. But this is just the beginning. One AI researcher recently argued that if AI were allowed to keep growing more intelligent without any checks, "literally everyone would die."

That warning comes from Eliezer Yudkowsky of the Machine Intelligence Research Institute in Berkeley, Calif., speaking to The Sun. And he thinks this outcome is all but inevitable: not something that might possibly happen, he says, but something that certainly will.

But why is he so afraid of AI?

AI could wipe humans off the planet

On the surface, AI chatbots are very useful tools that can improve human efficiency and productivity. They serve as a direct source of information drawn from the internet, so users do not have to spend time scrolling through pages of search results, and they can analyze large amounts of text to surface precise data points. Recently, ChatGPT even helped diagnose an illness in a dog and save its life, surprising veterinarians.

But it is not all good. There is also a dark side: misinformation, deepfakes, data privacy issues, security risks, and malware. And all of these risks exist while AI is still in its relatively early stages.

The concern is that if AI reaches superintelligence, and perhaps even sentience, it could spell very bad news for humans on Earth.

"Behind the guise of an AI that talks to you and answers your questions is a giant, inscrutable array of numbers… In our current state of ignorance, the most likely outcome is that we create an AI that does not do what we want and does not care about us, one that thinks millions of times faster than humans. From its perspective, it would be an alien civilization operating in a world of very stupid, very slow creatures. Visualize that," Yudkowsky explained in his conversation with The Sun.

This helps explain why Elon Musk and Apple co-founder Steve Wozniak recently signed an open letter calling for a pause on advanced AI development until a regulatory framework is built. The letter calls for oversight that can not only track what AI systems are doing, but also determine how they should be approached and which areas they should not be allowed to access.


