The Brazilian Federal Council of Medicine (“CFM”) issued Resolution No. 2,454/2026 (“Resolution”) on February 27, 2026, addressing the use of artificial intelligence (“AI”) in healthcare. The Resolution sets out parameters for the use of AI models, systems, and applications by physicians and healthcare organizations, which must be implemented in accordance with auditing, monitoring, governance, training, and transparency standards.
Among its key developments, the resolution explicitly authorizes the use of AI as a support tool for medical practice, clinical decision-making, healthcare management, scientific research, and continuing medical education, while preserving professional autonomy and patients’ rights to information. The resolution also imposes mandatory human supervision and patient rights of refusal.
The new rules will go into effect on August 10, 2026, 180 days after promulgation. Physicians and medical institutions, such as hospitals, clinics, and medical centers, must consider these provisions in order to comply with the applicable regulatory framework and avoid regulatory sanctions.
I. Regulatory framework for medical institutions
This resolution introduces stricter regulatory requirements and governance obligations for healthcare organizations. One of its main measures is the prohibition on setting goals or policies that subordinate physicians’ professional conduct. Another relevant aspect is transparency, which will be measured through scientific metrics and accessible reports written in clear, plain language, allowing patients, physicians, and administrators to interact responsibly with AI.
Healthcare facilities must comply with a number of obligations, including:
- Implement ongoing audit and monitoring mechanisms.
- Establish an AI and Telemedicine Committee to ensure the ethical use of AI systems.
- Prioritize the collaborative development of AI models, systems, and applications, without compromising confidentiality obligations, and foster interoperability with other healthcare sectors as well as the dissemination of technologies, code, databases, and best practices.
- Conduct a preliminary risk assessment that considers, among other things, the potential impact on patients, the level of human intervention, and the significance of the context of use.
The resolution requires that patients be informed of the risk level of the AI systems used, categorized as low, medium, or high.
| Risk level | Description / examples |
|---|---|
| Low-risk solutions | Automated scheduling systems, information chatbots, supply logistics. |
| Medium-risk solutions | Systems that support important clinical or operational decisions but do not autonomously execute them. |
| High-risk solutions | Systems that directly influence important medical decisions or perform automated actions with significant clinical consequences, especially when involving vulnerable patients or life-or-death situations. |
Although the resolution refers to a category of “unacceptable risk,” it does not provide a detailed or explicit definition of what characterizes that classification.
II. Doctor-patient relationship
This resolution emphasizes the protection of physician autonomy in relation to AI technology. According to this regulation, doctors have the following rights:
- Right to use AI: Doctors may use AI tools as a means of professional support.
- Right of veto: Physicians may refuse to use AI systems that lack regulatory certification or scientific validation, or that violate medical principles.
- Right to information: Physicians must have access to clear, transparent, and understandable information about the AI systems used.
- Autonomy: Doctors are not bound by AI-generated recommendations.
At the same time, the doctor must:
- Make critical judgments about the information and recommendations generated by AI systems.
- Use only systems that guarantee minimum information security standards compatible with the protection of sensitive personal data in Brazil.
- Maintain up-to-date information about AI systems applied to healthcare, including their capabilities, purpose, limitations, risks, and level of scientific evidence.
- Notify patients whenever AI is used to support diagnosis, care, or treatment, and record this information in the patient’s medical record.
- Respect the patient’s informed refusal and uphold the integrity of the physician-patient relationship, clinical listening, empathy, confidentiality, and respect for human dignity.
Regarding medical liability, the resolution makes clear that physicians remain fully responsible for professional actions performed with the assistance of AI. Liability may be waived only if the failure is caused solely by the AI system and the physician can demonstrate that the tool was used diligently, critically, and ethically. Delegating communication with patients regarding diagnosis, prognosis, or treatment decisions to AI is expressly prohibited.
III. Protection of personal data
Patient personal data used in the development, training, validation, and implementation of AI systems must strictly comply with the Brazilian General Data Protection Law (“LGPD”) and health information security standards. Institutions must implement security measures that can protect data from risks such as destruction, loss, alteration, leakage, and unauthorized access.
The resolution adopts the principle of “privacy by design,” which requires privacy policies to be embedded throughout the lifecycle of AI systems, from development to updates and retraining, while adhering to ethical and scientific principles. In this context, technical and administrative security measures should be implemented according to the state-of-the-art and criticality of the data and systems involved.
Additional obligations reinforce physicians’ confidentiality duties, and breaches may result in regulatory sanctions in the following cases:
- Failure to protect the confidentiality, integrity, and security of health data used in AI systems;
- Failure to ensure the appropriate processing of patient data, especially sensitive data, in line with the processing purposes communicated to the data subject;
- Failure to notify the competent authorities of suspected failures, significant risks, or inappropriate uses of AI that could harm patients or healthcare; or
- Use of AI technology that does not ensure appropriate information security standards.
Conclusion
The resolution issued by the CFM represents an important regulatory milestone at the intersection of data privacy, artificial intelligence, and bioethics, recognizing technological innovation while upholding best practices and human dignity in healthcare. Professionals and institutions implementing AI technologies are therefore strongly advised to have a regulatory compliance plan in place to ensure adherence to the new rules and reduce the risk of enforcement by authorities such as the Brazilian National Data Protection Authority (ANPD) or the CFM.
See the full resolution.
*This content was created with the participation of Ana Loiola, Legal Clerk.
