Understanding Colorado's Landmark AI Law and How It Impacts Your Business



Reprinted with permission from The Legal Intelligencer, published June 10, 2024. © 2024 ALM Global Properties, LLC. All Rights Reserved. Reproduction without permission is prohibited. For reprint requests, contact 877-256-2472 or asset-and-logo-licensing@alm.com.

The artificial intelligence (AI) regulatory landscape in the United States is rapidly evolving, and Colorado has emerged as a pioneer in consumer protection measures with the Colorado Act on Consumer Protection in Interacting with Artificial Intelligence Systems (the “Colorado AI Act”). The first law of its kind in the United States, the Act aims to reshape the adoption and development of AI systems and set a precedent for other jurisdictions. Scheduled to take effect on February 1, 2026, the Colorado AI Act will introduce a comprehensive framework for addressing potential risks associated with AI systems, particularly those that make consequential decisions affecting consumers.

Scope and Applicability

The Colorado AI Act is broad in scope and covers both developers and deployers of AI systems in the state. A developer is an organization that does business in Colorado and develops or substantially modifies an AI system. A deployer is an organization that does business in Colorado and deploys a high-risk AI system. The Act’s scope extends to interactions with AI systems that have legal or similarly significant effects on various aspects of consumers’ lives, including education, employment, financial services, government services, health care, housing, insurance, and legal services. Unlike some consumer privacy laws, the Colorado AI Act does not set a minimum threshold for the number of consumers affected, meaning it covers organizations of any size that engage in covered activities. The term “consumer” refers to any resident of Colorado.

Central to the Colorado AI Act is the classification of “high-risk AI systems”: AI systems that make, or are a substantial factor in making, consequential decisions in domains such as education, employment, financial services, health care, housing, insurance, and legal services. These decisions are characterized by their significant impact on individual rights, opportunities, and access to important services. By targeting high-risk systems, the Act seeks to mitigate potential harms, such as algorithmic discrimination, that may result from automated decision-making.

Developer and Deployer Obligations

The Colorado AI Act imposes several obligations on developers of high-risk AI systems aimed at promoting transparency, accountability, and the prevention of algorithmic discrimination. For example, developers must provide deployers with the documentation they need to meet their own obligations, including a summary of the data used to train the system, information about the system’s uses and the risks of algorithmic discrimination, guidance on how those risks should be assessed and mitigated, and the information necessary to complete an impact assessment. Developers must also publish statements outlining the types of high-risk AI systems they have developed or substantially modified and how they manage known or reasonably foreseeable risks of algorithmic discrimination associated with those systems. These statements must be updated periodically to reflect changes and developments. Finally, developers must notify the Colorado Attorney General and known deployers within 90 days of discovering, or receiving a credible report from a deployer indicating, that a high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination.

Deployers, meanwhile, have several obligations aimed at ensuring the responsible use of AI systems and protecting against algorithmic discrimination. For example, deployers must implement comprehensive risk management policies and programs governing their use of high-risk AI systems, including conducting impact assessments to evaluate the potential risks of algorithmic discrimination associated with deploying those systems. Deployers must also notify consumers when a high-risk AI system makes a consequential decision about them; the notice must describe the purpose of the AI system, the decision made, and the consumer’s rights to correct errors in the personal data the system used and to appeal adverse decisions. In addition, deployers must publish a statement summarizing the types of high-risk systems they deploy, how they manage the associated risks of algorithmic discrimination, and the nature, source, and scope of the information they collect and use. Finally, deployers must disclose any discovered instances of algorithmic discrimination to the Colorado Attorney General within 90 days of discovery, ensuring that discriminatory outcomes resulting from the deployment of high-risk AI systems are promptly reported and addressed.

Exemptions and Enforcement

The Colorado AI Act imposes strict requirements on developers and deployers, but it also provides several exemptions for specific organizations and scenarios.

The Act exempts HIPAA-covered entities that provide certain health care recommendations generated by AI, provided the recommendations are not high-risk and require a health care provider to take action to implement them. This exemption recognizes the existing regulatory framework governing the privacy of health data and aligns with HIPAA requirements.

Insurers subject to Colorado Revised Statutes section 10-3-1104.9 and related regulations are also exempt from certain provisions of the Colorado AI Act. This exemption recognizes the unique regulatory environment governing the insurance industry and the need to avoid duplicative or conflicting obligations.

Additionally, AI systems acquired by the federal government or federal agencies are exempt from the Act’s requirements. This exemption recognizes the federal government’s authority to regulate AI systems within its jurisdiction and ensures consistency with federal regulations.

Certain banks and credit unions that are subject to substantially similar or more stringent guidance or regulation governing the use of high-risk AI systems are likewise exempt from certain provisions. This exemption recognizes existing regulatory oversight in the financial sector and avoids regulatory duplication.

Enforcement of the Colorado AI Act rests exclusively with the Colorado Attorney General’s Office. A violation of the Act constitutes an unfair trade practice subject to civil penalties of up to $20,000 per violation, with violations assessed per consumer or per transaction. Because each affected consumer or transaction can count as a separate violation, exposure can accumulate quickly; for example, a single non-compliant system that made consequential decisions about 1,000 consumers could, in principle, draw penalties of up to $20 million. The Act does not provide a private right of action, so enforcement actions may be initiated only by the Attorney General. The Act also gives the Attorney General’s Office the authority to promulgate rules across a variety of areas, including documentation, notices, disclosures, impact assessments, and risk management policies and programs.

The Colorado AI Act and the EU AI Act

While the Colorado AI Act and the EU AI Act share the common goal of regulating AI to protect consumer interests, they diverge in territorial scope, risk classification, obligations, and enforcement.

The Colorado AI Act is focused primarily on interactions within Colorado and applies to developers and deployers operating within the state. In contrast, the EU AI Act has a broader territorial scope, extending to providers and deployers outside the EU if the AI system is placed on the EU market or its output is used within the EU. This difference reflects the EU’s global regulatory ambitions, while the Colorado AI Act remains more local in scope.

Both laws recognize the risks associated with high-risk AI systems, but their classification criteria differ. The Colorado AI Act defines high-risk AI systems by their potential to affect consequential decisions in sectors such as education, employment, and health care, while the EU AI Act adds further high-risk categories, including biometric identification, emotion recognition, law enforcement, and democratic processes. This broader classification under the EU AI Act reflects a more comprehensive approach to identifying and regulating AI risks.

Both laws impose obligations on developers and deployers, with some differences. The Colorado AI Act requires developers to use reasonable care to avoid algorithmic discrimination and imposes strict documentation and disclosure requirements, while deployers must implement risk management policies, conduct impact assessments, and safeguard consumer rights, including the right to appeal adverse decisions. The EU AI Act, by contrast, places more of the risk management burden on providers than on deployers. Furthermore, while the Colorado AI Act focuses on transparency and consumer rights, the EU AI Act places greater emphasis on explaining decisions made by high-risk AI systems and mandates human oversight, especially in sensitive areas.

The two laws also have different enforcement mechanisms. The Colorado AI Act gives exclusive enforcement authority to the Colorado Attorney General and provides civil penalties for violations as unfair trade practices, while the EU AI Act empowers national supervisory authorities to enforce its provisions and provides for fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher. This difference reflects the distinct regulatory frameworks and enforcement priorities of each jurisdiction.

Compliance Readiness

This landmark law takes effect on February 1, 2026, so companies that develop or deploy AI systems in Colorado will need to proactively assess their systems against the new requirements, increase transparency, and implement robust governance frameworks. Focusing on the potential risks associated with AI, especially in high-impact areas, can help mitigate harms such as algorithmic discrimination. Staying informed and preparing early will help companies meet the standards set forth in this pioneering regulation.


