Below is an edited version of what is shown in the video.
Generative AI models draw on a vast pool of information from a variety of sources, including information that users add to prompts. These inputs may contain personal or sensitive data that can be used without the individual's consent or incorporated into the tool's output.
As a result, companies that develop generative AI technologies, deploy them in their products and services, or use generative AI tools face many challenges.
Businesses may need to conduct data protection impact assessments to identify privacy risks and develop ways to mitigate them. Because data is often collected from around the world, the laws of multiple jurisdictions may need to be considered. Transparency, a key principle in many jurisdictions, requires companies to provide detailed information about their use of generative AI and to establish comprehensive privacy policies that do so effectively.
Companies subject to the GDPR should carefully consider the appropriate legal basis for processing data used by generative AI tools. To meet the GDPR's legitimate interest standard, companies must balance the interests of data controllers against the rights and freedoms of data subjects. Certain types of data, such as special categories of data, may require additional safeguards. The GDPR also requires that users be able to understand the logic behind an AI tool's automated decisions and have the right to challenge them, and that data subjects be able to access, correct, or delete their personal data held in the tool's training sets or algorithms.
Businesses must also prepare for new laws affecting generative AI. The EU is close to adopting an AI law that will affect both EU and non-EU companies. The law provides a two-year grace period for businesses to meet its requirements, and businesses should start preparing well in advance.
First, privacy professionals and attorneys should use the existing tools in the privacy toolbox, asking the following questions about the information used by a generative AI tool:
- Is it personal data? Does the model use personal data for training? If so, is that data identifiable? Can users opt out of having their data used for training?
- Is it accurate? Can the accuracy of data inputs and outputs be verified? Can inaccuracies be corrected?
- Is it biased or discriminatory? Are there biases in the data used to train the tool? Do its outputs discriminate against particular groups?
New approaches are needed to address more complex issues, such as how generative AI tools use data about minors, and whether tools collect and use biometric or facial recognition data. Data of this kind is often subject to stricter regulation.
The content of this article is intended to provide a general guide to the subject. You should seek professional advice about your particular situation.