A group called the Future of Life Institute has circulated a petition, signed by nearly 3,000 people inside and outside the tech industry, calling for a six-month moratorium on large-scale experimentation with artificial intelligence (AI). The petition has caused a great deal of controversy.
The petition’s signatories argue that the developers of GPT-4 and other large language model AIs promise their technology will change the course of civilization, yet are doing far too little to protect civilization from harm, and that so far no meaningful action has been taken. They frame the future of AI in apocalyptic terms. Those who oppose the petition fall into two broad camps: those who are content with the status quo of rapidly evolving AI models, and those who believe the petition’s sponsors are so focused on the future that they are ignoring the far-reaching harms of existing AI applications. The latter critique is particularly interesting because that group includes leading technologists and academics in the AI field, including Timnit Gebru, Emily Bender, and Margaret Mitchell.
We really do need a different approach to AI. The first step is to recognize that AI is just the latest manifestation of the Silicon Valley hype machine: code whose benefits fall well short of its promoters’ promises and whose harms exceed them. We have seen this movie on repeat (with Facebook, TikTok, and others) over the last decade, and it always has an unhappy ending. It is past time we did something about the massive technological changes being rapidly imposed on society.
The question we should be asking about artificial intelligence, and about every other new technology, is whether private companies should be allowed to conduct uncontrolled experiments on the entire population without guardrails or safety nets. Should it be legal for a company to release a product to the masses before it has been proven safe?
For more than a decade, the tech industry has been conducting uncontrolled experiments across a wide range of product categories, often with devastating results. In 2012, for example, Facebook ran an experiment that made 155,000 people sadder without their knowledge. Instagram, Snapchat, and TikTok are designed above all to inspire envy in teenagers, with no regard for the psychological harm. Even relatively limited AI applications are already enabling civil rights violations in mortgage lending, resume screening, and policing.
Large language model AI changes the scale of these experiments, expanding them by more than two orders of magnitude over earlier AI. ChatGPT, a large language model that reached 1 billion total users and 100 million active users within just two months of its introduction, has been called a “bullshit generator.” When Microsoft incorporated ChatGPT into its Bing search engine, a stream of factual errors sparked a tsunami of criticism. Despite the flaws, the ChatGPT integration pushed Bing’s daily user count past 100 million for the first time. Thanks to implicit and explicit endorsements from the media and policymakers, millions of people are riding the hype and embracing yet another dangerous technology product.
Even OpenAI CEO Sam Altman has expressed concern about the risks posed by the technology he is creating. But instead of taking action to protect consumers, Altman is racing to build an even bigger model as quickly as possible.
Proponents of the current approach to AI argue that we cannot slow down because we are competing with China. Really? How does flooding the information ecosystem with false answers, disinformation, and civil rights violations help us compete with China? The United States is most successful when it leads with its core values, such as entrepreneurship. We won with jets, sodas like Coke, and entertainment. China wins if it can leverage its scale and its authoritarian government. AI built on high-quality content and operating in ways consistent with American values would improve our competitiveness, but that is not the approach Silicon Valley is taking. It wants to compete with China on Chinese terms. That is crazy.
The harms of underdeveloped AI have been debated in public policy circles since at least 2017, but Congress and two presidents have done nothing. I know, because I was one of the people raising the alarm.
The AI problem cannot be solved with a six-month moratorium. What is needed is a different approach to the development and deployment of new technologies, one that prioritizes consumer safety, democracy, and other core values over the interests of shareholders. If we could wave a magic wand and change the culture of technology, we would not need a moratorium. And a moratorium without a clear path to better development practices accomplishes nothing. The industry has long treated self-regulation as a license to do whatever it wants.
AI has great potential and the technology is advancing rapidly, but no system can be better than the content used to train it. Engineers have the option of training AI on expert-generated content, but few choose to do so because of the cost. Instead, they train their systems on data scraped freely from the web, sometimes in violation of copyright law. AI developers scrape content from quality sites like Wikipedia, but even more from sites that make no distinction between information and disinformation. Training an AI on poor-quality content produces poor-quality results. Given the scale of products like ChatGPT and GPT-4, the danger of flooding the Internet with misinformation is very high. A Google engineer resigned after claiming the company had trained Bard, its large language model AI, on output from ChatGPT.
As long as AI is built on poor content, its results will be poor. Sometimes the AI is right, but without independent verification we cannot tell whether an answer is right or wrong, which defeats the purpose.
For most of the last 40 years, governments and citizens have given technology companies near-total latitude in product development. Consumers have more or less blindly adopted new technologies despite mounting harms over the past 14 years. In today’s regulatory vacuum, the industry’s incentive is to maximize shareholder value, even when doing so undermines core values such as public safety and democracy. Laissez-faire policies have created enormous wealth for a relatively small number of entrepreneurs and investors, but at a huge cost to society as a whole.
The window is closing to protect democracy and ordinary citizens from increasingly harmful technology products and the cultures that produce them. We cannot afford to wait any longer.