Who is trying to build AGI?
Companies such as Google, OpenAI, Meta, and Anthropic have collectively devoted more than $1 trillion to the development of artificial general intelligence (AGI): a hypothetical future technology that could do almost anything a human can, including driving, conversing, learning, solving problems, and creating. For now, most AI models excel at only one or two tasks, such as generating text or images or solving mathematical equations. But these systems' ability to absorb and process data is expanding at a furious rate, with the number of parameters in leading models (connections similar to the brain's synapses) growing from millions to billions to more than a trillion. Many industry experts believe it is only a matter of time before a system achieves AGI, or even artificial superintelligence (ASI). Demis Hassabis, CEO of Google DeepMind, argues that the arrival of AGI will usher in an age of unprecedented human prosperity in which disease and poverty are vanquished and space is colonized. Other experts are far less optimistic. Daniel Kokotajlo, a former OpenAI researcher who refused to sign a nondisparagement agreement when he resigned from the company in April, puts the odds that AI will devastate or even wipe out humanity at "like a 70% chance."
How close is AGI?
We're not there yet, but the technology is advancing at a rapid pace. Late last year, OpenAI's o3 model scored 87.5% on the ARC-AGI test, which measures fluid intelligence: the ability to solve novel logical problems and recognize patterns without prior knowledge or training. In March, researchers at the University of California, San Diego, published a preprint study suggesting that two AI models, OpenAI's GPT-4.5 and Meta's Llama 3, had passed the Turing test, producing responses that led human interrogators to judge the bots human more than 50% of the time. And machine learning is only accelerating. A recent survey of 2,778 top AI researchers found that they see a 50% chance that systems will outperform humans on all tasks by 2047. AI 2027, a detailed forecast co-authored by Kokotajlo, predicts that within two years a single AI system could do the work of 50,000 human coders operating at 30 times their speed.
Is that good for humanity?
Economic productivity could skyrocket as ultracapable AIs unleash a flood of innovation. According to Anthropic CEO Dario Amodei, AI-optimized biomedical research could move swiftly "to eliminate most cancers." Autonomous, self-driving vehicles could fill the roads and skies, and AIs could devise new materials lighter and stronger than any designed by humans. And although artificial intelligence has a notoriously large carbon footprint (last year, AI-dedicated data-center servers used enough electricity to power more than 7.2 million homes), OpenAI CEO Sam Altman confidently predicts that the technology will unlock nuclear fusion and provide abundant climate-friendly energy.
Is this technology dangerous?
In the wrong hands, supersmart AI could pose a civilizational risk. A recent State Department-commissioned report warns that a simple spoken or typed command, such as an instruction to run an untraceable cyberattack to crash the North American electric grid, could yield a response that proves devastatingly effective. The report also warns of "large-scale" disinformation campaigns in which personalized AI-generated videos, audio, and text turn Americans against one another. Swarms of AGI-equipped drones and robots could overwhelm military defenses. And just as AGIs could be tasked with creating breakthrough drugs, they could be used to create lethal biological weapons. An artificial superintelligence trained on all published text, including the writings of Unabomber Ted Kaczynski and Adolf Hitler, might independently conclude that humanity is not worth preserving. AI 2027 ends with a scenario in which, in 2030, networked AIs blanket the earth with chemical sprays. Most people "died within hours," the co-authors write, and the small number of survivors (for example, preppers in bunkers and sailors on submarines) are "mopped up by drones."
How likely is that scenario?
Some experts dismiss such predictions as pure science fiction; AI 2027 has been derided as reading "like a confession from the mental ward." Skeptics point to hard limits: how much data can be fed to AI servers, how fast the chips that power AI systems can be manufactured, and how much energy can be supplied to data centers. Current AI models also do not display humanlike, multifaceted intellectual acuity, or even a solid grasp of the physical world we live in. Still, Altman and other tech leaders believe these are solvable problems.
Where does that leave us?
Even if an apocalypse isn't in the cards, we are facing a period of rapid change. With AI swallowing up work in law, finance, coding, and consulting, Anthropic's Amodei foresees unemployment surging as high as 20% over the next five years, even without AGI. Many Silicon Valley executives believe governments will need to provide a universal basic income to stave off a surge in poverty and social unrest. And if ASI is achieved and the utopian dreams of tech optimists come true, people will face the psychological challenge of finding purpose in a world where AI has made them obsolete as workers, creators, and decision-makers. To be constantly outrun, outthought, and out-created by machines, skeptics ask: why wouldn't that be demoralizing?
Shutdown request rejected
Traditionally, humans have had at least one fail-safe way to control machines: hitting the off switch. But what happens when the machine wants to stay on? In May, the AI safety company Palisade Research reported that several OpenAI models had defied explicit instructions to shut themselves down. And during testing, Anthropic revealed, its Claude Opus 4 model resorted to blackmail, threatening to expose an engineer's affair described in fictional emails when led to believe that the engineers were about to shut it down. This does not mean the models have achieved consciousness. Rather, they are so highly optimized for self-preservation that they independently devise ways to manipulate and thwart their human handlers. The implications for an age of superintelligent AI are troubling. "The defenses and protections that we try to build into these 'gods' along the path to godhood will be anticipated and neutralized," said Tam Hunt, a neuroscience researcher.