Brazil is proposing a new framework for regulating the ethical and responsible use of artificial intelligence (AI) systems. The bill is the result of a comprehensive effort to draft a single text replacing three bills that had been pending over the past four years (5.051/2019, 21/2020, and 872/2021). The initiative began with the creation of a committee in March 2022 and spanned nearly 240 days of conferences, seminars, and public hearings. The result was a new text of more than 40 articles, accompanied by over 900 pages of reports, that sets out the principles, rules, and guidelines for regulating AI in the country.
Robust human rights principles and a strict liability system for AI providers: an overview of Brazil's proposed AI law
The proposed law represents a strong commitment to protecting human rights. Its main purpose is to grant individuals important rights and to impose obligations on businesses that develop or use AI technologies (AI suppliers or operators). To achieve this, the bill adopts a risk-based approach, establishing new regulatory bodies to implement the law and classifying AI systems into different categories. It also introduces a civil liability regime for AI system suppliers and operators, together with an obligation to report serious security incidents.
Establishing national principles for the ethical and responsible use of artificial intelligence systems
Articles 2 and 3 set out the foundations and guidelines for the development and use of AI, including respect for human rights, democratic values, equality, non-discrimination, plurality, and respect for labor rights. They also provide guiding principles, such as the importance of accountability (Article 3, IX) and of measures to prevent, mitigate, and address systemic risks that may arise from the intentional or unintentional use and effects of AI-based systems (Article 3, XI).
Protecting individual rights
Chapter II of the bill aims to protect the rights of individuals affected by AI decisions. It guarantees a variety of rights, including the right to an explanation of decisions, the ability to challenge them, and human participation in the decision-making process. The bill also emphasizes the right to non-discrimination and to the correction of identified biases. Individuals may enforce these rights, individually or collectively, before administrative agencies or the courts.
Section II emphasizes the importance of transparency and the understandability of AI decisions. It grants individuals the right to request explanations and information about the criteria and procedures used by the system. It also includes measures to protect vulnerable groups such as children, adolescents, the elderly, and people with disabilities.
A risk-based approach to AI regulation
Chapter III introduces a risk-based regulatory model for AI systems. Article 13 requires suppliers to conduct a preliminary assessment classifying the degree of risk as “excessive” or “high.” Systems classified as posing “excessive” risk will not be permitted, including those that exploit the vulnerabilities of particular groups or use subliminal techniques. The bill also prohibits public authorities from using AI systems to assess, classify, or rank people based on social behavior or personality attributes in order to determine access to goods, services, and public policies in an unlawful or disproportionate manner.
Article 17 defines high-risk sectors and applications, including AI systems used in the security of critical infrastructure, education, recruitment, human resources management, and healthcare.
Governance and the impact assessment of algorithms in AI systems
Chapter IV establishes governance rules and processes for AI agents to ensure system security and protect individual rights. These measures apply throughout the lifecycle of AI systems, particularly high-risk ones, and require documentation, testing, and safeguards against bias. AI agents must ensure the explainability of AI results and provide the information needed to interpret system outputs.
As required by Article 22, the algorithmic impact assessment must be carried out by an independent expert with technical, scientific, and legal knowledge. Article 24 provides that impact assessments must take into account several factors related to the AI system, including foreseeable and known risks, associated benefits, the likelihood and severity of negative outcomes, the tests and assessments carried out, mitigation measures, training and awareness measures, and transparency. Assessments should also involve regular quality-control tests and a rationale for the system's residual risk.
These assessments must be updated continuously throughout the system's lifecycle (Article 25). If there is an unexpected risk that threatens an individual's rights, AI agents must immediately notify the authorities and the affected individuals (Article 24).
Civil liability for damage caused by AI systems
Chapter V outlines the civil liability of AI system suppliers and operators for damages they cause. Article 27 specifies that if the system is deemed high risk, the supplier or operator is strictly liable for the resulting damages. If the system is not classified as high risk, the AI agent's fault is presumed and the burden of proof shifts in favour of the victim.
Regulation and oversight of the Artificial Intelligence Act
Chapter VI allows AI agents to develop codes of best practice and governance. These codes serve as evidence of good faith, and the competent authority will take them into account when applying administrative sanctions.
Chapter VII requires AI agents to report serious security incidents to the competent authority, which then determines whether measures are needed to mitigate their effects. Article 31 outlines the types of incidents that must be reported.
Chapter VIII outlines the regulatory framework for implementing and monitoring the law. The executive branch is responsible for overseeing implementation, conducting research, and designating a competent authority to promote best practices in the development and use of AI systems. The competent authority issues regulations, monitors compliance, enforces sanctions for breaches of the law, prepares an annual report, and performs the other tasks assigned under Article 32 and related provisions.
Next steps and what companies need to know
The bill currently under consideration in Brazil's parliament is seeking to address the potential risks and negative impacts of AI while promoting its benefits.
Companies developing or using AI should pay attention to the requirements outlined in the bill, including compliance with security measures, the creation of mechanisms for users to challenge decisions made by AI-equipped systems, and the role of human oversight in decision-making.
The final version of the bill has not yet been approved and could undergo further changes during the legislative process. It is therefore important for businesses that develop or use AI to stay informed of its progress and to understand its potential impact on their operations. As the proposed text moves through Congress, businesses should engage with relevant stakeholders, including government representatives and civil society organisations, to provide input and feedback on the proposed law. Businesses should also prepare for potential changes to their AI systems, implementing measures such as risk assessment, transparency, and accountability to ensure compliance with the proposed law's requirements.
Access Partnership is monitoring the progress of this bill and can provide additional information on how it will affect your business. If you need further assistance, please contact Paularabakov at [email protected].


