Ever since the ancient Greeks dreamed up the myth of Prometheus, mankind has debated the duality of technology. The same is true of the widespread deployment of artificial intelligence systems today. Proponents of AI have long argued that this versatile technology will deliver unprecedented leaps in productivity and creativity. Its critics fear it poses alarming risks now and could even threaten the survival of humanity in the future.
Last year’s release of powerful generative AI models such as ChatGPT and DALL-E 2, developed by OpenAI, has rekindled a smouldering debate. More than 100 million users have already experienced the strange and amazing things these generative models can do, from co-writing computer code to creating a viral fake photo of the pope in a white puffer jacket.
In a recent post, Microsoft co-founder turned philanthropist Bill Gates said he was “in awe” when OpenAI’s models passed an advanced biology test last September. Gates predicted that the technology could bring enormous benefits to the fields of medicine and education. A Goldman Sachs research report released this week predicts that the proliferation of AI will boost labour productivity significantly, lifting global annual GDP by 7 per cent.
However, the rapid development and increasingly pervasive use of generative AI systems has alarmed many. Some of Google’s own researchers, including Timnit Gebru and Margaret Mitchell, were among the first to warn of the danger that the company’s generative AI models would embed existing social biases, and both later left the company in acrimonious circumstances. More than 1,100 signatories, including several prominent AI researchers, amplified that warning in an open letter published by the Future of Life Institute this week. They called for a six-month pause in the development of cutting-edge models until better governance structures are put in place. Left unchecked, the letter warned, these machines could flood the internet with untruths, automate away meaningful jobs and even threaten civilisation itself.
At least three distinct threads should be disentangled in this dispute. The first, and easiest to dismiss, is the moral panic that accompanies almost any new technology: steam locomotives, electricity, automobiles, computers. Even Benjamin Franklin’s seemingly harmless invention of the lightning rod was initially opposed by church elders, who feared it would interfere with the “artillery of heaven”. As a rule, it is better to debate how to use a commercially valuable technology properly than to curse its arrival.
The second is the way commercial interests tend to intertwine with moral stances. OpenAI was founded as a non-profit research lab that promised to collaborate openly with other institutions. But in 2019 it switched to a “capped-profit” model, allowing it to raise venture capital funding and issue stock options to attract top AI researchers. Since then, it has received significant investment from Microsoft and has become a more closed, commercially driven entity. It is worth noting, then, that at least some of the criticism comes from rivals with an interest in slowing OpenAI’s development.
But the third and most important thread is that many serious AI professionals who are familiar with the latest breakthroughs are genuinely concerned about the speed and direction of travel. Their concerns are amplified by the trend among some big tech companies, such as Microsoft, Meta, Google and Amazon, of downsizing their AI ethics teams.
As Gates wrote in his post, market forces alone cannot tackle social inequality. Civil society organisations are mobilising rapidly, and some governments are looking to set clearer regulations. This week the UK released a “pro-innovation” white paper on AI regulation, while the EU is drafting tougher rules to control the use of the technology in high-risk areas. But for now, these efforts amount to little more than waving small red flags at an accelerating train.
Unless the companies leading the AI revolution can credibly demonstrate that their models are designed to align with humanity’s best interests, they can expect a far fiercer public backlash. A dedicated, independent body with the power to audit AI companies’ algorithms and restrict their use should be next on the agenda. – Copyright The Financial Times Limited 2023