Controls may include a requirement to obtain consent for the use of new or updated AI solutions, a right to object to the use of AI or to how it is used, or other appropriate usage restrictions. There are broadly three approaches customers can take when imposing controls on the use of AI:
- Imposing controls on the use of AI as a general concept;
- Imposing controls specific to the supplier's current AI service offerings; or
- Imposing controls that reflect current or incoming regulatory requirements.
Customers need to consider the broader commercial context when deciding which approach to take.
The first approach is of limited use where the customer knows it is contracting for an AI solution. Obligations aimed at restricting the use of AI altogether will inevitably meet resistance from suppliers whose solutions clearly use machine learning as part of their standard functionality, which is increasingly common. Suppliers tend to prefer the second approach, which can be more nuanced and use-case specific.
Regarding the third approach, we are already seeing examples where the EU AI Act, currently in the final stages of adoption by legislators, forms the core of a contractual position.
The EU AI Act takes a risk-based approach to regulating AI, classifying AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. The most significant obligations under the Act attach to high-risk AI systems, but its impact is starting to extend beyond those, and we are seeing examples of contract drafting aligned with its requirements.
For example, where a customer contracts for an AI solution that falls within the "limited risk" classification under the EU AI Act, it may be appropriate to impose a set of controls built around the supplier's current service offering. In that case, the customer should also seek to include in the contract a definition of "prohibited AI" linked to the Act's "high risk" and "unacceptable risk" categories, so that AI falling within those definitions is excluded altogether rather than merely subject to those controls.
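To make the structure of that drafting approach concrete, the following is a minimal, hypothetical sketch of how the Act's four risk tiers might map to contractual postures. The tier names follow the EU AI Act; the control labels, function names, and mapping are illustrative assumptions, not drafting language or a statement of what the Act requires.

```python
# Hypothetical sketch: mapping EU AI Act risk tiers to an assumed contractual
# posture. Tier names follow the Act's four categories; everything else is an
# illustrative assumption.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


# Assumed contractual posture per tier (illustrative only).
CONTRACT_POSTURE = {
    RiskTier.UNACCEPTABLE: "prohibited AI - excluded from permitted use",
    RiskTier.HIGH: "prohibited AI unless separately approved in writing",
    RiskTier.LIMITED: "permitted subject to agreed usage controls",
    RiskTier.MINIMAL: "permitted under standard service terms",
}


def posture_for(tier: RiskTier) -> str:
    """Return the assumed contractual posture for a given risk tier."""
    return CONTRACT_POSTURE[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {posture_for(tier)}")
```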
The EU AI Act is the only legislation to have informed this drafting approach to date, but other jurisdictions are also developing their regulatory stance, and as the global regulatory landscape matures, the contractual approach is expected to evolve in line with AI legislation.
Beyond the headline question "Is AI use allowed?", there are other important questions that AI customers should address in their contracts with suppliers. Where these are not addressed in the contract, compensating controls, such as enhanced supplier management, will likely be needed to cover some, if not all, of the risks.
Testing and monitoring
Once the customer is satisfied with the supplier's use of AI, the next key question is how to ensure that the AI system works as intended. In traditional software procurement, customers expect to complete full acceptance testing before deploying software across their organization. Testing AI systems is harder, however, particularly when the aim is to test as many scenarios as possible for error, bias, and compliance.
For complex AI systems, it can be nearly impossible to complete "full" testing before implementation. Instead, customers can consider mitigating this risk by trialling new AI systems in a pilot phase. For example, the solution can be used in a single business unit or against a discrete data set to evaluate performance before a go/no-go decision is made on full-scale deployment.
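A pilot-phase go/no-go decision is easiest to operationalise when the parties agree measurable acceptance criteria up front. The following is a minimal sketch of such a check, assuming illustrative metrics and thresholds agreed between customer and supplier; the metric names and threshold values are assumptions for illustration only.

```python
# Minimal sketch of a pilot-phase go/no-go check. Metric names and thresholds
# are illustrative assumptions, not recommended values.

from dataclasses import dataclass


@dataclass
class PilotResult:
    error_rate: float         # share of pilot outputs judged incorrect
    bias_disparity: float     # gap in outcomes between monitored groups
    compliance_failures: int  # count of outputs breaching agreed policies


# Assumed acceptance thresholds agreed for the pilot.
MAX_ERROR_RATE = 0.05
MAX_BIAS_DISPARITY = 0.10
MAX_COMPLIANCE_FAILURES = 0


def go_no_go(result: PilotResult) -> bool:
    """Return True (go) only if every pilot metric is within its threshold."""
    return (
        result.error_rate <= MAX_ERROR_RATE
        and result.bias_disparity <= MAX_BIAS_DISPARITY
        and result.compliance_failures <= MAX_COMPLIANCE_FAILURES
    )


if __name__ == "__main__":
    pilot = PilotResult(error_rate=0.03, bias_disparity=0.04, compliance_failures=0)
    print("Proceed to full deployment" if go_no_go(pilot) else "Hold at pilot stage")
```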
While contracts serve as a control measure, they are not a substitute for effective and continuous testing and monitoring throughout the lifecycle of an AI system. Industry standards are evolving rapidly in this area, and customer and supplier share responsibility for ensuring that AI models work as intended.
Data and assets
Using AI effectively often requires a strong data strategy that protects the customer's critical data assets. It is important for customers to understand the types of business data and personal data, whether owned by them or licensed from third parties, that suppliers will need to access and on what terms. From a contractual perspective, any restrictions, including those arising from third-party licenses, should be built into the data usage provisions agreed with the supplier.
Data ownership and control is also a concern for both customers and suppliers, with suppliers increasingly pushing back against restrictions on how output can be used. Suppliers often seek broad rights to use customer data, signals, derived data, and feedback to improve their systems and create new data assets, not only for the benefit of that customer but also to improve the AI systems they sell to other clients. There is often mutual interest in enabling suppliers to create insights and improved learning, as long as the data is appropriately anonymized or aggregated.
From a customer perspective, granting this right to suppliers has implications for intellectual property ownership and, where personal data forms part of a data set, for data protection provisions that require careful consideration. Customer data is typically collected from data subjects for purposes related to the customer's own business, and its use by third parties for ancillary purposes such as training AI systems may not have been anticipated; this should be reflected in the customer's privacy notices. From a supplier perspective, the provenance of training datasets is the concern: suppliers want assurance that such uses are permitted and that they can lawfully use these datasets without exposing themselves to liability.
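One common way to support the anonymized-or-aggregated condition mentioned above is to aggregate data before it leaves the customer's control, suppressing small groups that could identify individuals. The following is a minimal sketch under that assumption; the field names, threshold, and suppression rule are illustrative only and do not by themselves guarantee compliance with data protection law.

```python
# Minimal sketch of aggregating customer feedback before sharing it with a
# supplier for model improvement, assuming a simple small-count suppression
# rule. Field names and the threshold are illustrative assumptions.

from collections import Counter
from typing import Iterable

MIN_GROUP_SIZE = 10  # assumed suppression threshold for small groups


def aggregate_feedback(records: Iterable[dict]) -> dict:
    """Count feedback by category and drop categories below the threshold."""
    counts = Counter(record["category"] for record in records)
    return {category: n for category, n in counts.items() if n >= MIN_GROUP_SIZE}


if __name__ == "__main__":
    raw = [{"category": "helpful"}] * 42 + [{"category": "incorrect"}] * 3
    # Only categories with at least MIN_GROUP_SIZE records survive aggregation.
    print(aggregate_feedback(raw))  # {'helpful': 42}
```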
Liability
When contracting for AI systems, liability for the output produced is typically a concern for both customers and suppliers, but liability clauses alone do not proactively manage operational risk. The issue with the biggest impact on liability is the scale at which AI solutions can go wrong.
Clearly allocating liability, agreeing meaningful limitations of liability, and including warranties and indemnities in the contract all provide important protection, but ultimately both customers and suppliers should ensure that other contractual controls are in place to drive operational risk management. Circuit breakers that allow a customer to stop using an AI system showing signs of error or bias, or the ability to revert to a previous version of the AI solution that showed no such signs, are useful tools here.
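The circuit-breaker and rollback idea can be expressed operationally as a simple rule: suspend the current model when monitored metrics breach agreed limits, and revert to a previously approved version if that version is still within limits. The sketch below illustrates this; all class names, metrics, and thresholds are assumptions for illustration, not a description of any particular supplier's controls.

```python
# Minimal sketch of a "circuit breaker" with rollback to a previous model
# version. Names, metrics, and limits are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelVersion:
    name: str
    error_rate: float
    bias_disparity: float


# Assumed operating limits agreed between customer and supplier.
ERROR_LIMIT = 0.05
BIAS_LIMIT = 0.10


def breaches_limits(model: ModelVersion) -> bool:
    """Return True if either monitored metric exceeds its agreed limit."""
    return model.error_rate > ERROR_LIMIT or model.bias_disparity > BIAS_LIMIT


def select_active_model(current: ModelVersion, previous: ModelVersion) -> Optional[ModelVersion]:
    """Trip the breaker on the current model and revert to the previous version
    if it is still within limits; otherwise suspend use entirely."""
    if not breaches_limits(current):
        return current
    if not breaches_limits(previous):
        return previous
    return None  # suspend the AI system until the issue is resolved


if __name__ == "__main__":
    current = ModelVersion("v2.1", error_rate=0.09, bias_disparity=0.02)
    previous = ModelVersion("v2.0", error_rate=0.03, bias_disparity=0.02)
    active = select_active_model(current, previous)
    print(f"Active model: {active.name if active else 'none (suspended)'}")
```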
The author is Anita Bashi of Pinsent Masons. Pinsent Masons will host a webinar on how to manage risk and run an effective technology transformation program on Wednesday, May 15. Attendance is free and registration is open.
