The Future of Life Institute has issued a letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, citing their inherent risks. However, many of the people who signed it, including Elon Musk, very likely did so to catch up in a competition they fear losing to OpenAI. Instead of calling for a pause in the inevitable and unstoppable development of AI, a far more fruitful approach would be to apply the cornerstones of good leadership: transparency and inclusivity.
More than 26,000 people have signed a letter calling on all artificial intelligence companies to suspend the training of powerful AI systems for at least six months, citing threats such as the spread of disinformation and the replacement of human workers by algorithms. Signatories include Elon Musk of SpaceX, Tesla, and Twitter; Apple co-founder Steve Wozniak; and Stability AI CEO Emad Mostaque.
The signatories warn of the risks of continuing to train AI systems more powerful than GPT-4, the large language model released in March by OpenAI, the San Francisco startup behind the wildly popular ChatGPT. These risks include the possibility that the technology could eventually replace human minds with smarter non-human minds, and that we could lose control of our civilization. The letter warns that such decisions “must not be delegated to unelected technology leaders.”
However, some of those seeking a six-month moratorium have invested heavily in AI themselves or profit from other people’s investments in it. They have watched OpenAI capture AI market share and public mindshare with ChatGPT. In other words, some of those who signed the letter may be calling for a moratorium not so much to manage risk as to catch up in the AI arms race.
Who signed the AI pause letter?
My research team analyzed the first 500 signatures on the “pause” letter. We found that 20% of those names belong to people from private companies who are likely to be invested in AI, including CEOs or founders of Tesla, Stability AI, Getty Images, DeepMind, DeepAI, Scale AI, Big Mother AI, and NEXT.robotics. These companies stand to benefit from a pause in AI, since they have been losing ground to the rapid success of ChatGPT.
One of the most interesting signatories is Elon Musk. Musk has repeatedly spoken about the dangers of artificial intelligence, yet he continues to invest in and build AI technology. Not only was he a co-founder of OpenAI (though he has since parted ways with the company), he is also the appointed director of a new company called X.ai Corp, whose founding papers were filed in March, positioning him to claim part of the AI market. Moreover, weeks after Musk signed the pause letter, Twitter reportedly purchased 10,000 graphics processing units (GPUs) to develop its own large language model similar to ChatGPT.
And, seemingly inconsistent with the concerns about AI risk raised in the moratorium letter, Musk described Tesla’s cars in 2021 as “semi-sentient robots on wheels.” Another of Musk’s companies, Neuralink, aims to develop computer chips that can be implanted in the brain so that humans can achieve what Musk calls “a symbiosis with artificial intelligence.”
A better way to manage AI risk
The pause letter, issued by the Future of Life Institute, focuses primarily on the hypothetical future risks of advanced AI development. The institute believes that technology will be at the heart of humanity’s future, including space travel, geoengineering to tackle climate change, and sentient machine intelligence. The letter warns that AI companies are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Another organization, the Distributed AI Research (DAIR) Institute, issued its own letter in response to the pause letter. Like its signatories, DAIR accepts that AI development is inevitable, but it argues that the real risks come from existing AI technologies and must be managed now.
DAIR was founded and is led by Dr. Timnit Gebru, who once headed the Ethical AI team at Google and was fired after refusing to retract a research paper she co-authored on the dangers of large language models like GPT-4. The DAIR team believes that AI can be a force for good, provided its development and deployment involve a diverse and deliberate process.
Rather than focusing on visions of a distant dystopian future dominated by out-of-control AI systems, DAIR focuses on the harm AI causes today and argues that the founders, investors, and executives of these technology companies should be held responsible for the systems they build. Many of them are the very “unelected technology leaders” who signed the Future of Life Institute letter.
DAIR provides clear guidance for managing the risks of GPT-4-type models and calls for “enhancing transparency regulations.” Such regulations would, for example, require companies to make it clear when people are encountering “synthetic media” (AI-generated images, voice, video, or text) and to publish the data used to train powerful large language models. Only by documenting and disclosing that training data can biases be revealed and managed. DAIR also argues that those most affected by AI, including immigrants, women, artists, and gig workers, should have a say in how such technology is developed.
It’s silly to believe AI can be paused, so apply good leadership practices to managing it
GPT-4 raises important questions about potential risks and fires our collective imagination about what is possible. But pausing AI progress is like standing in front of a train and raising your hands to stop it. With so many companies and people investing time and money in AI, no government and no letter can put it on hold, especially when those calling for the pause are among the most financially and emotionally invested in the technology.
Instead, we should recognize the risks of AI and apply the same practices to managing it that mark good leadership in any organization: transparency and inclusiveness. By making the datasets used to train these large language models transparent and by incorporating a wide variety of perspectives, we can help AI serve humanity rather than derail it.
