Every time you post a photo, reply on social media, make a website, or possibly even send an email, your data is scraped, stored, and used to train generative AI technology that can create text, audio, video, and images from a few words. This has real consequences: OpenAI researchers studying the labor market impact of its language models estimated that roughly 80 percent of the US workforce could have at least 10 percent of their work tasks affected by the introduction of large language models (LLMs) like ChatGPT, while around 19 percent of workers may see at least half of their tasks impacted. We are witnessing an immediate labor market shift with image generation, too. In other words, the data you created may be putting you out of work.
When a company builds its technology on a public resource, the internet, it is sensible to say that the technology should be available and open to all. But critics have noted that GPT-4 lacked any clear information or specification that would enable anyone outside the organization to replicate, test, or verify any aspect of the model. Some of these companies have also received vast sums of funding from other major corporations to build commercial products. For some in the AI community, this is a dangerous sign that these companies are going to seek profits above the public good.
Code transparency alone is unlikely to ensure that these generative AI models serve the public good. There is little conceivable immediate benefit to a journalist, policy analyst, or accountant (all "high exposure" professions, according to the OpenAI research) if the data underpinning an LLM were made available. We increasingly have laws, like the Digital Services Act, that require some of these companies to open their code and data to review by expert auditors. And open source code can sometimes enable malicious actors, allowing hackers to subvert the safety precautions companies are building in. Transparency is a laudable objective, but on its own it will not ensure that generative AI is used to better society.
To truly create public benefit, we need mechanisms of accountability. The world needs a generative AI global governance body to resolve these social, economic, and political disruptions, one that goes beyond what any individual government is capable of, what any academic or civil society group can implement, and what any corporation is willing to do. There is already precedent for global cooperation to hold companies and countries accountable for their technological outcomes, and we have examples of independent, well-funded groups of experts that can make decisions on behalf of the public good. With these ideas in mind, let's tackle some of the fundamental problems that generative AI is already surfacing.
In the era of nuclear proliferation after World War II, for example, there was a credible and significant fear of nuclear technology being misused. The widespread belief that society had to act collectively to avoid global disaster echoes many of today's discussions about generative AI models. In response, countries around the world, led by the United States and under the guidance of the United Nations, convened to form the International Atomic Energy Agency (IAEA), a body charged with addressing the far-reaching ramifications and seemingly infinite capabilities of nuclear technology. It operates in three main areas: nuclear energy, nuclear safety and security, and safeguards. After the Fukushima disaster in 2011, for instance, it provided critical resources, education, testing, and impact reports, and helped ensure ongoing nuclear safety. However, the agency has its limits: it relies on member states to voluntarily comply with its standards and guidelines, and on their cooperation and assistance to carry out its mission.
In tech, Facebook's Oversight Board is one working attempt at balancing transparency with accountability. The board's members are an interdisciplinary global group, and their judgments are binding, including overturning Facebook's decision to remove a post depicting sexual harassment in India. This model isn't perfect either. There are accusations of corporate capture: the board is funded solely by Meta, it can only hear cases that Facebook itself refers, and it is limited to content takedowns rather than addressing more systemic issues such as algorithms or moderation policies.
Flawed as they are, both of these examples provide a starting point for what an AI global governance body might look like. Such an organization should be a consolidated, ongoing effort with expert advisement and collaboration, like the IAEA, rather than a secondary project for people with other full-time jobs. And like the Facebook Oversight Board, it should receive advisory input and guidance from industry, but it must be capable of making independent, binding decisions that companies have to comply with.
This generative AI global governance body should be funded through unrestricted funds (in other words, no strings attached) from all the companies engaged in the large-scale production and use of generative AI in any form. It should cover all aspects of generative AI models, including their development, deployment, and use, as they relate to the public good. It should build on tangible recommendations from civil society and academic organizations, and it must have the authority to enforce its decisions, including the power to require changes in the design or use of generative AI models, or to halt their use altogether if necessary. Finally, this group should address reparations for the sweeping changes that may come: job loss, a rise in misinformation, and the potential to undermine free and fair elections. This is not a group purely for research; it is a group for action.
Today, we have to rely on companies to do the right thing, but aligning the greater good with stakeholder incentives has proven insufficient. An oversight group could take action the way corporations can, but in the public interest. First, through secure data sharing, it could carry out the research that these companies are currently conducting themselves. OpenAI's paper on economic harms is admirable, but such analysis should be the purview of an impartial third party, not a corporation. Second, this group's task would be not just to identify problems but to experiment with novel ways to fix them. Using the "tax" companies pay to join, it could establish an education or livelihood fund for displaced workers that people could apply to in order to supplement unemployment benefits, or a universal basic income pegged to income level regardless of employment status, or proportional payouts based on the data that may be attributable to you as a contributing member of the digital society. Finally, in collaboration with civil society, governments, and the companies themselves, it would be empowered to take action, such as requiring companies to slow down deployment in particularly high-impact industries and to support job transition programs.
The problems surfaced by the development of generative AI are difficult to meaningfully address, and as a society we currently lack the means to tackle them at the speed and scale the new technology is thrusting upon us. Generative AI companies need an independent body that speaks on behalf of the world to make the critical decisions about governance and impact.