Italy banned ChatGPT late last month over concerns about personal data collection and the lack of guardrails preventing minors from accessing AI chatbots. Italy’s data protection authority said it has imposed temporary and immediate restrictions on OpenAI, the company behind ChatGPT, stopping it from processing Italian users’ data. The privacy watchdog will also launch an investigation into whether the chatbot violates the EU’s General Data Protection Regulation (GDPR).
But Italy isn’t the only country trying to regulate generative AI software like ChatGPT. Calls for regulation of AI are growing louder, with governments and tech companies at odds. Today we take a look at how countries around the world have responded to the rapid advances in AI.
European Union
The EU is often at the forefront of tech regulation, and it’s probably the biggest reason Apple might choose USB-C for the iPhone 15. Its proposed AI legislation is known as the European AI Act.
The European AI Act aims to introduce a common regulatory and legal framework for artificial intelligence in the EU, covering all sectors except the military and all types of artificial intelligence. It categorizes AI tools according to their perceived level of risk, from minimal to unacceptable, and imposes various obligations and transparency requirements on those who provide or use them. The AI Act is also designed to work in tandem with other laws such as the General Data Protection Regulation (GDPR).
However, when the bill was first drafted, officials failed to anticipate the rapid advances in generative AI that, since 2022, have produced text and art rivaling human work.
A Reuters report indicated that draft EU rules could place ChatGPT in the “General Purpose AI Systems” (GPAIS) category, which covers tools that can be adapted to perform many functions. It is not yet known whether GPAIS will be considered high risk.
In any case, Italy’s outright ban on ChatGPT has prompted other European countries to investigate whether strict measures are needed to control such chatbots, and whether any such action should be coordinated.
United Kingdom
Last week, the UK unveiled its plans to regulate AI, outlining a comprehensive approach to a technology that has reached new levels of hype. But instead of enacting new legislation as the EU is doing, the UK government is asking regulators in various sectors to apply existing rules to AI.
In a white paper released last week, the Department for Science, Innovation and Technology (DSIT) outlines five principles for businesses to follow: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
At this stage, the UK is not proposing any restrictions on ChatGPT or any other type of AI. Instead, the country has settled on a fairly light-touch approach, though it has not specified an exact timeline for implementation.
“Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors,” the government said.
America
The United States does not yet have comprehensive federal legislation on AI. Instead, there is a patchwork of current and proposed AI regulatory frameworks focused on specific use cases, such as AI in recruitment and employment. The US also participates in a Trade and Technology Council (TTC) with the EU, which aims to align the two sides on common principles and goals for AI governance.
Meanwhile, the country’s National Institute of Standards and Technology (NIST) promotes an AI Risk Management Framework, which provides guidance to companies on how to “improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
However, since the framework is voluntary, companies that do not implement it face no penalties. The US also does not appear to have taken any steps so far to restrict ChatGPT in the country.
Last month, an AI think tank filed a complaint with the FTC in an attempt to block OpenAI’s commercial deployment of GPT-4. The Center for Artificial Intelligence and Digital Policy (CAIDP) alleges that OpenAI violated a section of the FTC Act that prohibits deceptive and unfair practices. The complaint could lead to an investigation into OpenAI and a halt to the commercial deployment of its large language models.
India
India’s public policy think tank, NITI Aayog, has published several guiding documents on AI, including the National Strategy for Artificial Intelligence and the Responsible AI for All report. These documents outline a vision, goals, and principles for developing and deploying AI in India, with a strong focus on social and economic inclusion, innovation, and trust.
However, these documents are not legally binding and do not address several key issues and challenges related to AI, such as accountability, responsibility, transparency, explainability, and human oversight.
China
China has not officially blocked ChatGPT, but OpenAI does not allow users in the country to sign up for the chatbot. OpenAI also blocks users in other countries with strict internet censorship, such as Russia, North Korea, Egypt, Iran, and Ukraine.
However, despite ChatGPT’s lack of availability, China remains a world leader in AI, investing heavily in research. Several Chinese tech companies, including search engine giant Baidu, have developed their own large language models (LLMs) and have already announced rivals to ChatGPT.