The French data protection watchdog, the National Commission on Informatics and Liberty (CNIL), released an action plan on Tuesday (16 May) to address privacy issues related to artificial intelligence, particularly generative applications like ChatGPT.
ChatGPT, the world’s most famous chatbot, reached 100 million users within two months of its release. As its popularity has grown, so have concerns about how it collects and processes personal data.
“Following recent news about artificial intelligence, especially so-called generative AI such as ChatGPT, the CNIL has announced an action plan towards the introduction of AI systems that respect individual privacy,” reads the announcement.
At the end of March, the Italian data protection watchdog, the Garante, sanctioned ChatGPT provider OpenAI for data protection breaches. The service resumed in Italy in April after OpenAI took several corrective actions.
But the Garante’s decision has opened the door to a fragmented approach across the EU. The European Data Protection Board, which brings together all EU data regulators, has set up a task force to ensure consistent enforcement.
Role of CNIL
At the European level, the French authority is already regarded as one of the most influential within the EU, meaning the plan could shape how European regulators approach ChatGPT and similar technologies.
Domestically, a person familiar with the matter told EURACTIV on condition of anonymity that the French data protection authority is positioning itself to lead national enforcement of the AI Act, the landmark EU legislation to regulate artificial intelligence based on its capacity to cause harm.
This ambition is articulated in the Action Plan, which states that “this work will also enable preparation for the start of application of the draft European AI Regulation”.
Four-step approach
The four steps of the action plan consist of understanding the technology, guiding its development, creating an AI ecosystem, and controlling the AI system.
The first step focuses on answering data protection questions, such as the transparency of training datasets, the protection of publicly available data from scraping, protection against bias, and the handling of user-provided input.
As these aspects are priorities in both the EU and France, the CNIL has dedicated internal resources to these questions and has already published a document setting out its views on the related data protection issues.
In the stream on guiding AI development, the CNIL intends to publish guidance documents and share best practices and rules to steer generative AI companies toward developing technologies that respect personal data.
The third, “ecosystem” stream has three tiers. First, it aims to extend the regulator’s existing regulatory sandbox to innovative AI-based projects.
The CNIL has also launched a competition to help companies comply with European data protection regulations. In addition, the French authority has launched a project with providers of “enhanced video surveillance” tools, in connection with the French government’s experimentation with such systems as set out in the law on the 2024 Paris Olympic and Paralympic Games.
The final strand of the CNIL’s action plan touches on its core competency of auditing and controlling digital systems. The privacy watchdog will focus on compliance with the rules on enhanced video surveillance, the use of AI to fight fraud, and the investigation of complaints about generative AI.
On this last point, the French regulator said it had already received several complaints against OpenAI and would coordinate with the European task force.
[Edited by Luca Bertuzzi/ Alice Taylor]