Tackling AI: How do governments plan to deal with innovative tools like ChatGPT and Bard?



Ahead of Europe’s AI legislation, which could set a benchmark for how national governments regulate artificial intelligence tools, digital ministers of the Group of Seven (G7) advanced economies agreed that “risk-based” regulation could be a first step toward governing new tools such as OpenAI’s ChatGPT and Google’s Bard.

In a joint statement issued at the end of the two-day meeting in Japan on Sunday, G7 ministers said such regulation should be grounded in democratic values and must “maintain an open environment” for the development of AI technologies.

Risk-based approach

The G7’s “risk-based” approach implies graduated regulation: the compliance burden on developers or users of AI tools deployed in areas such as word processing or music generation would be relatively light compared with the regulatory oversight of facial-recognition tools that verify identity or of tools that assist doctors in making medical diagnoses.

A ministerial statement issued in Tokyo recognized that “policy instruments to achieve the shared vision and goal of trustworthy AI may differ among G7 members.”

Policy response

The success of tools like ChatGPT has caught the attention of policymakers across jurisdictions who are increasing regulatory scrutiny of generative AI tools.

The EU’s proposed AI law predictably takes a tough stance, classifying artificial intelligence by use-case scenario broadly according to degree of invasiveness and risk. Italy became the first major Western country to ban ChatGPT over privacy concerns. The 27-member bloc began moving to regulate AI in 2018, and the AI law, due next year, is a long-awaited document.

The UK sits at the other end of the spectrum, with a decidedly “light-touch” approach aimed at promoting rather than stifling innovation in this nascent field. Japan likewise takes a relaxed approach to AI developers.

China has been developing its own regulatory regime. The country’s central internet regulator earlier this month released a 20-point draft of measures to regulate generative AI services, including obligations to ensure accuracy and privacy, prevent discrimination, and protect intellectual property rights.

The draft, which is likely to come into force later this year, would require AI providers to clearly label AI-generated content, establish a mechanism for handling user complaints, and undergo a security assessment before launch. According to the draft as cited by Forbes, AI-generated content must “reflect the core values of socialism” and must not contain anything that could lead to the subversion of the socialist system.

India says it is not considering legislation to regulate the artificial intelligence sector. IT Minister Ashwini Vaishnaw said that while AI raises “ethical concerns and associated risks,” it has proven to be an enabler of the digital and innovation ecosystems.

US outlook

On April 11, the U.S. Department of Commerce asked the public for input on how rules and laws could be crafted to ensure that AI systems perform as advertised.

The agency floated the possibility of audits to assess whether AI systems contain harmful biases or distort communications in ways that spread misinformation or disinformation.

New assessments and protocols may be needed to ensure that AI systems work without negative consequences, according to Alan Davidson, an Assistant Secretary at the U.S. Department of Commerce.

White House blueprint

Last month’s policy action in the United States builds on the 76-page Blueprint for an AI Bill of Rights, published by the White House Office of Science and Technology Policy (OSTP) in October 2022, which proposes a non-binding roadmap for the responsible use of AI.

The blueprint articulates five core principles to govern the effective development of AI systems, with particular attention to unintended consequences that could violate civil and human rights. The principles are:

* Users are protected from unsafe or ineffective systems.

* Users are protected from algorithmic discrimination.

* Users are protected from abusive data practices via built-in safeguards and have agency over how their data is used.

* Users know when an automated system is being used and understand how and why it contributes to outcomes that affect them; and

* Users can opt out and have access to a person who can quickly review and resolve the problems they encounter.

The blueprint is intended to “assist in the design, use, and deployment of automated systems to protect the American public.” Its principles are neither regulatory nor binding; it is not an enforceable “bill of rights” carrying legal protections.

The document includes several examples of AI use cases that the White House OSTP deems “problematic,” and it generally excludes many industrial and/or operational applications of AI that do not affect American citizens’ rights, opportunities, or access to critical resources or services.

According to a World Economic Forum brief on the document, the blueprint covers AI use cases in lending, human resources, surveillance, and other areas, echoing the “high-risk” use-case framework of the proposed EU AI law.

Some gaps remain

Nicol Turner Lee and Jack Malamud of Brookings argue that while the need to identify and mitigate both intended and unintended consequential risks of AI is widely acknowledged, how the blueprint will facilitate redress for such grievances remains undetermined.

It is also unclear “whether the non-binding document will prompt Congressional action necessary to govern this unregulated space,” they said in a December paper examining the blueprint’s opportunities and blind spots.

Call to action

Tech leader Elon Musk, Apple co-founder Steve Wozniak, and more than 15,000 others have called for a six-month pause in AI development, saying labs are locked in an “uncontrollable race” to develop systems that no one can fully control. They also say AI labs and independent experts must work together to implement a set of shared safety protocols.

In the United States, there have been repeated efforts to pass laws limiting the power of Big Tech, but little progress has been made given the political divisions in Congress.


