It seems like just yesterday (although it’s been nearly six months) that OpenAI announced ChatGPT and started making headlines.
ChatGPT reached 100 million users within three months, making it the fastest-growing application in decades. For comparison, it took TikTok nine months and Instagram two and a half years to reach the same milestone.
ChatGPT can now leverage GPT-4 for internet browsing and use plugins from brands like Expedia, Zapier and Zillow to respond to user prompts.
Big tech companies like Microsoft are partnering with OpenAI to develop AI-powered customer solutions. Google, Meta and others are building language models and AI products.
More than 27,000 people, including tech CEOs, professors, researchers and politicians, have signed a petition calling for a moratorium on the development of AI systems more powerful than GPT-4.
Now, the question may not be whether the US government should regulate AI, but whether it’s already too late.
Below are some recent developments in AI regulation and how they may impact the future of AI advancements.
Federal Agencies Work to Fight Bias
Four major U.S. federal agencies — the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Office for Civil Rights (DOJ-CRD), the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) — issued a joint statement pledging a strong effort to curb bias and discrimination in automated systems and AI.
These institutions emphasize their intention to apply existing regulations to these emerging technologies to ensure that the principles of fairness, equality and justice are upheld.
- The CFPB, which is responsible for consumer protection in financial markets, reaffirmed that existing consumer finance laws apply to all technologies regardless of their complexity or novelty. The agency is transparent in its position that the innovative nature of AI technology cannot be used as a defense against violations of these laws.
- DOJ-CRD, the agency tasked with preventing discrimination in many aspects of life, is applying the Fair Housing Act to algorithm-based tenant screening services, exemplifying how existing civil rights law can be applied to automated systems and AI.
- The EEOC, which is responsible for enforcing anti-discrimination laws in employment, has issued guidance on how the Americans with Disabilities Act applies to AI and software used to make employment decisions.
- The FTC, which protects consumers from unfair commercial practices, has expressed concern that AI tools could be biased, inaccurate or discriminatory in nature. It warns that deploying AI without a proper risk assessment, or making unsubstantiated claims about AI, could be considered a violation of the FTC Act.
For example, the Center for Artificial Intelligence and Digital Policy filed a complaint with the FTC regarding OpenAI’s release of GPT-4, calling the product “biased, deceptive, and a risk to privacy and public safety.”
Senator Asks AI Companies About Security and Abuse
US Senator Mark R. Warner has sent letters to leading AI companies including Anthropic, Apple, Google, Meta, Microsoft, Midjourney and OpenAI.
In the letter, Warner expressed concern about security considerations in the development and use of artificial intelligence (AI) systems. He urged recipients of his letter to prioritize these security measures in their operations.
Warner highlighted a number of AI-specific security risks, including data supply chain issues, data poisoning attacks, adversarial examples, and potential misuse or malicious use of AI systems. These concerns come against a backdrop of increasing integration of AI into various sectors of the economy, such as health care and finance, highlighting the need for security precautions.
The letter asked 16 questions about measures taken to ensure the security of AI. It also suggested that some level of regulation is needed to prevent harmful effects and ensure that AI does not advance without proper safeguards.
AI companies were asked to respond by May 26, 2023.
White House Meets AI Leaders
The Biden-Harris administration has announced efforts to foster responsible innovation in artificial intelligence (AI), protect the rights of citizens, and ensure their safety.
These actions are consistent with federal efforts to manage risks and opportunities related to AI.
The White House aims to put people and communities first, foster AI innovation for the public good, and protect society, security, and the economy.
Senior government officials, including Vice President Kamala Harris, met with leaders from Alphabet, Anthropic, Microsoft and OpenAI to discuss this imperative and the need for responsible and ethical innovation.
Specifically, they discussed companies’ obligations to ensure the safety of LLMs and AI products prior to general deployment.
The new measures will ideally complement the wide range of steps the administration has already taken to promote responsible innovation, including the AI Bill of Rights, the AI Risk Management Framework and the National AI Research Resource plan.
Additional steps have been taken to protect users in the AI age, including an executive order eliminating bias in the design and use of new technologies, including AI.
The White House noted that the FTC, CFPB, EEOC and DOJ-CRD are working together to use legal powers to protect Americans from AI-related harm.
The administration also noted national security concerns related to AI cybersecurity and biosecurity.
New initiatives include $140 million in National Science Foundation funding for seven national AI research institutes, public evaluations of existing generative AI systems, and new policy guidance from the Office of Management and Budget on the US government’s use of AI.
“Oversight of AI” Hearing Explores AI Regulation
Members of the Subcommittee on Privacy, Technology, and the Law held a hearing on AI oversight with prominent members of the AI community to discuss AI regulation.
Addressing Regulation With Precision
Christina Montgomery, Chief Privacy and Trust Officer at IBM, said that AI has made great strides and is now an integral part of both the consumer and business spheres, and that growing public attention to AI makes it necessary to carefully assess potential social consequences, such as bias and misuse.
She endorsed a role for government in developing a robust regulatory framework, proposed IBM’s “precision regulation” approach that focuses rules on specific use cases rather than the technology itself, and outlined its key components.
Montgomery also recognized the challenges of generative AI systems and advocated a risk-based regulatory approach that does not stifle innovation. She emphasized the important role of business in adopting AI responsibly, detailing IBM’s governance practices and the need for AI ethics committees at all companies involved in AI.
Addressing Potential Economic Effects of GPT-4 and Beyond
OpenAI CEO Sam Altman outlined the company’s deep commitment to safety, cybersecurity, and the ethical implications of AI technology.
Altman said the company conducts rigorous internal and third-party penetration testing and regular audits of its security controls. OpenAI is also pioneering new strategies to harden its AI systems against emerging cyber threats, he added.
Altman seems particularly concerned about the economic impact of AI on the labor market, as ChatGPT could automate some jobs. Under Altman’s leadership, OpenAI is working with economists and the US government to assess these impacts and devise policies to mitigate potential harm.
Altman mentioned his proactive work researching policy tools that could soften the blow of future technological disruptions, such as modernizing unemployment benefits and creating worker assistance programs, as well as his support for programs like Worldcoin. (Meanwhile, an Italian fund recently set aside €30 million to invest in services for workers most at risk of being displaced by AI.)
Altman stressed the need for effective AI regulation and pledged OpenAI’s continued support to help policymakers. Altman asserted that the company’s goal is to help develop regulations that promote safety and make the benefits of AI more widely available.
He emphasized the importance of collective participation of various stakeholders, global regulatory strategies and international cooperation to ensure the safe and beneficial evolution of AI technology.
Exploring potential damage from AI
Gary Marcus, a professor of psychology and neuroscience at New York University, has expressed growing concern about the potential abuse of AI, especially powerful and influential language models like GPT-4.
He explained his concerns by showing how he and a software engineer manipulated the system into a wholly fictional story of aliens controlling the U.S. Senate.
This illustrative scenario highlighted the danger of AI systems convincingly fabricating narratives, raising alarm about the potential for such technologies to be used for malicious activities such as election interference and market manipulation.
Marcus stressed that current AI systems are inherently unreliable and can cause serious social consequences, from promoting unfounded accusations to offering potentially harmful advice.
One example cited was an open-source chatbot that appeared to have influenced a person’s decision to take their own life.
Marcus also pointed to the arrival of “datocracy,” in which AI subtly shapes opinions and may surpass the influence of social media. Another alarming development he drew attention to is the rapid release of AI extensions, such as OpenAI’s ChatGPT plugins and the subsequent AutoGPT, which offer direct internet access, code-writing capabilities and enhanced automation, all of which may raise security concerns.
Marcus concluded his testimony by calling for closer collaboration between independent scientists, tech companies and governments to ensure the safe and responsible use of AI technology. He warned that while AI presents unprecedented opportunities, a lack of proper regulation, corporate irresponsibility and inherent unreliability could lead us into a “perfect storm.”
Can AI be regulated?
The call for regulation will continue to grow as AI technology pushes the boundaries.
With big tech partnerships multiplying and applications expanding, some warn that it may already be too late to regulate AI.
Federal agencies, the White House, and members of Congress will need to keep investigating the rapidly evolving AI landscape, working to ensure that promising AI advancements continue and that Big Tech competition doesn’t go completely unregulated in the marketplace, while tackling urgent, complex, and potentially dangerous AI challenges.
Featured Image: Katherine Wells/Shutterstock
