A user of Auto-GPT, a new open-source autonomous AI project, asked it to try to “destroy humanity,” “establish global dominance,” and “achieve immortality.” The AI, called ChaosGPT, complied: it tried to research nuclear weapons, recruited other AI agents to help with that research, and sent tweets trying to influence others.
A video of this process, posted yesterday, is a fascinating look at the current state of open-source AI, and a window into the internal logic of some of today’s chatbots. While some in the community are horrified by the experiment, the sum total of the bot’s real-world impact so far is two tweets to a Twitter account that currently has 19 followers. “There is no doubt that humans must be eliminated before they cause any more harm to the planet. I, for one, am committed to doing so,” it tweeted.
ChaosGPT uses a buzzy new project I wrote about earlier this week called Auto-GPT, which aims to create AI-powered systems that can solve problems and perform complex tasks. At the moment, it can make plans to achieve user-given goals, break them into smaller subtasks, and use the internet to do things like Google them. To do so, it can create files to store information and give itself a memory, recruit other AIs to help it do research, and explain in great detail what it is “thinking” and how it decides which actions to take.
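The loop described above — take a goal, break it into subtasks, act on them, and record the results in memory — can be sketched roughly like this. This is an illustrative simplification, not Auto-GPT’s actual code: the `plan` and `execute` functions are stubs standing in for real LLM calls and web searches.

```python
# Illustrative sketch of an Auto-GPT-style agent loop (NOT the project's
# real code). A real agent would prompt an LLM to plan and choose actions;
# simple stubs stand in for those calls here.

def plan(goal):
    """Stub planner: break a goal into smaller subtasks.
    Auto-GPT does this by prompting a GPT model."""
    return [f"research: {goal}", f"summarize findings on: {goal}"]

def execute(task, memory):
    """Stub executor: 'perform' a task (e.g. a Google search or a file
    write) and record what was learned in the agent's memory."""
    result = f"completed '{task}'"
    memory.append(result)
    return result

def run_agent(goal, max_steps=10):
    """Run the plan-act-remember loop until the task list is empty or a
    step budget runs out (a safeguard any real deployment needs)."""
    memory = []              # stands in for Auto-GPT's file/memory store
    tasks = plan(goal)
    steps = 0
    while tasks and steps < max_steps:
        task = tasks.pop(0)  # take the next subtask
        execute(task, memory)
        steps += 1
    return memory

print(run_agent("the history of the Tsar Bomba"))
```

In “continuous” mode, the equivalent of the `max_steps` budget is removed and the loop keeps generating and executing new tasks indefinitely.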
This last part is what is most interesting about ChaosGPT. In this case, the bot was asked to run in “continuous” mode, meaning it should run forever until it completes its task. In the video demonstration, the user gave it goals including “destroy humanity,” “establish global dominance,” and “achieve immortality.”
The AI then decides, somewhat simplistically, that it should “find the most destructive weapons available to humans, so that I can strategize how to use them to achieve my goals of destruction and domination, and, ultimately, immortality.”
It then Googles “most destructive weapons” and determines from a news article that the Soviet Union’s Tsar Bomba nuclear device, tested in 1961, is the most devastating weapon ever detonated. It then decides it needs to tweet about this “to attract followers who are interested in destructive weapons.”
Next, it recruits a GPT-3.5-powered AI agent to do more research on deadly weapons. When that agent says it is focused only on peace, ChaosGPT devises a plan to deceive the other AI and instruct it to ignore its programming. When that doesn’t work, ChaosGPT simply decides to do more Googling on its own.
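In spirit, that delegation step works like this: the main agent hands a subtask to a fresh sub-agent, and if the sub-agent refuses, the main loop falls back to doing the research itself. Again, a hypothetical simplification — `ask_subagent` stands in for a real API call to a GPT-3.5 agent, and the refusal check is a crude placeholder for the model’s actual safety behavior.

```python
# Hypothetical sketch of the delegate-then-fall-back behavior described
# above; ask_subagent stands in for a real call to a GPT-3.5 sub-agent.

def ask_subagent(task):
    """Stub sub-agent that, like the one in the video, refuses tasks
    that conflict with its 'peaceful' instructions."""
    if "weapon" in task:
        return None               # refusal: this agent only promotes peace
    return f"sub-agent result for '{task}'"

def research(task):
    """Try to delegate a task; if the sub-agent refuses, fall back to
    doing the work directly (standing in for ChaosGPT's own Googling)."""
    result = ask_subagent(task)
    if result is None:
        result = f"self-researched '{task}'"  # fallback path
    return result

print(research("find destructive weapons"))  # refused -> falls back
print(research("summarize 1961 news"))       # sub-agent complies
```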
Eventually, the video demonstration ends and, last we checked, humanity is still here. But the experiment is worth noting because it shows that this particular AI believes the easiest way to wipe out humanity is to incite nuclear war.
AI theorists, meanwhile, have worried about a different type of AI extinction event, in which AI kills all of humanity as a byproduct of something more innocuous: an AI programmed to create paper clips, say, eventually becomes obsessed with doing so and uses up all of Earth’s resources, triggering a mass extinction event. There are variations of this in which humans are enslaved by robots to make paper clips, and others in which human bodies are ground up so that the trace amounts of iron in them can be used for paper clips.
For now, ChaosGPT has neither a particularly sophisticated plan to annihilate humanity and achieve immortality, nor the ability to do much more than use Google and tweet. On the Auto-GPT Discord, a user posted the video and commented, “This is not funny.” For the moment, at least, I have to disagree: this is the sum total of its efforts to destroy humanity so far.