How to Mitigate Legal Risks When Using Generative AI in Europe



The use of AI can carry risks under various legal frameworks, but companies can take steps to mitigate those risks

Generative artificial intelligence (AI) is transforming the way people learn and create. Done right, this technology has the potential to create content, products and experiences that were once unimaginable. However, its rapid progress has raised legal concerns, including issues of copyright infringement, data privacy and liability. These challenges affect businesses of all types, as generative AI tools can be used in many settings, and if the use of AI violates legal requirements, these new tools can become costly. What steps can companies take to harness the power of generative AI while mitigating the associated legal risks?

Intellectual Property and Copyright Infringement

Generative AI’s ability to generate original content such as music, images and text has brought new challenges to intellectual property (IP) law. Companies must ensure that their use of AI-generated content does not infringe the rights of copyright owners, and it is currently unclear to what extent the output of such models is itself protected by copyright.

To mitigate these risks, companies should carefully evaluate generative AI use cases and consider using purpose-built AI models that are properly licensed and trained on lawfully obtained data. Lawsuits have already been filed alleging that images generated by AI models infringe the copyright of images contained in their training data.

Companies that use content generated by AI tools should consider establishing guidelines for the use of AI-generated content, bearing in mind that such output may not itself be protected by copyright. This can be a problem, especially when the output is critical to the company’s products, as it makes it harder to take legal action against counterfeiters and imitators. Legislation in this area is still evolving and outcomes may vary by jurisdiction. In the EU, copyright protection generally requires a work to be the intellectual creation of its (human) author, a requirement that AI-generated output fails. The US Copyright Office has issued guidance stating that the output of generative AI tools is generally unprotected, while UK copyright law may protect computer-generated works made without human involvement; however, this area is under review.

Data Privacy and Security

Data privacy is an important issue when training, developing and using AI tools. Generative AI tools carry a high degree of risk due to the sheer amount of data used for training. There is a risk that the personal data used to train these models was not used lawfully, or that it can be reverse engineered out of the model by asking the AI the right questions, creating both privacy and security risks.

Businesses that develop or use generative AI must ensure they comply with applicable laws such as the EU General Data Protection Regulation (GDPR) and the UK GDPR. The first step is to identify whether personal data (broadly defined as information relating to an identified or identifiable natural person) is actually being used.

Where personal data are used for development, this must be for a specific purpose and on a specific legal basis. Personal data should be processed in accordance with data protection principles, and special consideration should be given to how individuals can exercise their data rights: for example, can an individual be given access to the information held about them?

When using AI to create output, that output should be monitored for personal data whose disclosure could lead to a data breach. Lawful use also depends on context: if an individual publishes information about themselves on social media, it may not always be lawful to use that information for other purposes, such as generating reports on potential customers to be targeted by advertising campaigns.

Contract and Confidentiality

Before implementing or authorizing the use of generative AI tools, companies should also review the terms under which the tools are provided. These terms may restrict how the output may be used, or give the tool provider broad rights over anything used as prompts or other input. This is especially important when using tools to translate, summarize, or modify long internal documents. Quite apart from any personal data, these documents may contain information that the company wishes to keep proprietary or confidential, and uploading such information to a third-party service may violate confidentiality agreements and pose serious liability risks.

AI and Sector-Specific Regulations

International companies should be aware that, in addition to existing laws that already apply to AI, the EU is introducing legislation specifically covering the use of AI. The current draft imposes obligations on businesses based on the risks an AI system creates: providers and users of systems deployed in high-risk scenarios must do more to meet compliance requirements, and some applications are considered to pose an unacceptable risk. In contrast, the UK recently released a white paper proposing no AI-specific legislation, leaving oversight of AI to sector-specific regulators.

How generative AI fits into these frameworks depends on the context in which it is used. Businesses planning to use generative AI in products and services offered internationally should consider their position under EU and UK law early in development, to reduce the risk of fines and of having to redevelop products and services later.

Comments from Osborne Clarke

Generative AI offers tremendous potential for companies to innovate, streamline, and become more efficient. However, businesses should be diligent about addressing the legal risks associated with the technology. By implementing, monitoring, and enforcing policies based on the above guidelines, businesses can leverage the power of generative AI while mitigating potential legal pitfalls.


