As previously explained, earlier this year the National Institute of Standards and Technology (NIST) launched the Trusted and Responsible AI Resource Center. The AI Resource Center includes NIST’s AI Risk Management Framework (RMF) and a handbook to help companies and individuals implement the framework. The RMF is designed to help AI users and developers analyze and address risks in AI systems, providing practical guidelines and best practices for identifying and minimizing such risks. It is also intended to be practical and adaptable as AI technology continues to mature.
The first half of the RMF describes these risks, and the second half describes how to address them. When the AI RMF is properly implemented, organizations and users should experience enhanced processes, increased awareness and knowledge, and increased engagement when using AI systems. The RMF defines an AI system as “an engineered or machine-based system capable of producing outputs such as predictions, recommendations, and decisions that affect a real or virtual environment for a specific set of purposes. AI systems are designed to operate with varying levels of autonomy.”
Understanding and addressing the risks, impacts and harms of AI systems
The use of AI systems provides individuals and organizations (collectively referred to as “actors” in the RMF) with myriad benefits, including increased productivity and creativity. However, the RMF recognizes that misused AI systems can cause harm to individuals, organizations, and the general public. For example, the RMF outlines that AI systems can amplify discrimination, pose security risks to businesses, and exacerbate climate change issues. The RMF allows actors to address both the positive and negative impacts of their AI systems in a coordinated manner.
As many cybersecurity professionals understand, risk is a function of the likelihood that an event will occur and the damage that could result if it does. Negative consequences can include harm to people, organizations, and ecosystems. In practice, risk is difficult to quantify accurately: there is significant uncertainty in the likelihood of an event occurring, and the impact of the resulting harm is often hard to assess. The RMF describes some of these challenges, including:
- Risks associated with third-party software, hardware, and data: Third-party data and systems can help accelerate the development of AI systems, but they are unknowns that complicate measuring risk. Furthermore, users of AI systems may not use them in the way the developer or provider intended. Developers and providers may also find that using an AI system in production is very different from using it in a controlled development environment.
- Availability of reliable indicators: Calculating potential impact or damage when using AI systems is complex and can involve many factors.
- Risks at various stages of the AI lifecycle: Actors using off-the-shelf systems face different risks than those who build and train their own systems.
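The familiar framing above, risk as a function of likelihood and impact, is often simplified in practice to a likelihood-times-impact score. The sketch below illustrates that simplification only; the 1-5 scales, the example events, and the scoring formula are hypothetical assumptions for illustration, not anything prescribed by the NIST AI RMF.

```python
# Hedged sketch: one common simplification treats risk as likelihood x impact.
# The 1-5 scales and the example events below are illustrative assumptions,
# not part of the NIST AI RMF, which does not prescribe a scoring formula.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine a 1-5 likelihood rating and a 1-5 impact rating into one score."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

# Hypothetical AI-system events for a prioritization exercise.
events = {
    "biased hiring recommendation": risk_score(likelihood=4, impact=5),
    "model outage during low-traffic hours": risk_score(likelihood=2, impact=2),
    "third-party training data leak": risk_score(likelihood=3, impact=4),
}

# Rank by score so the highest risks receive resources first.
for name, score in sorted(events.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

Even this toy version surfaces the difficulty the RMF describes: the likelihood and impact ratings fed into the formula are themselves uncertain estimates.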
The RMF recognizes that businesses should determine their own tolerance for risk and that some organizations may bear more risk than others, depending on legal or regulatory circumstances. However, the RMF also recognizes that addressing and minimizing all risks is neither efficient nor cost-effective, and that companies must prioritize which risks to address. Similar to how companies address cybersecurity and data privacy risks, the RMF proposes integrating risk management into organizational practices, as different risks emerge at different stages of an organization’s activities.
The RMF also recognizes that trustworthiness is a key feature of AI systems. Trustworthiness depends on the behavior of actors, the datasets used by AI systems, the conduct of users and developers of AI systems, and how actors oversee these systems. The RMF suggests that the following characteristics influence the trustworthiness of AI systems:
- Validity and reliability: Actors need to be able to verify that an AI system meets certain requirements and can operate without failure under certain conditions.
- Safety: AI systems should not endanger human life, health, property, or the environment.
- Security and resilience: AI systems must be able to respond to and recover from both unforeseen adverse events and changes.
- Accountability and transparency: Actors need access to information about the AI system and its output.
- Explainability and interpretability: An AI system should be able to provide actors with enough information to understand how it operates and how to interpret its output.
- Privacy enhancement: Where appropriate, AI system design choices should incorporate values such as anonymity, confidentiality, and control.
- Fairness, with harmful bias managed: AI systems risk perpetuating and exacerbating existing discrimination. Actors should be prepared to prevent and mitigate such bias.
AI RMF Risk Management Core and Profile
At the heart of the AI RMF (the RMF Core) are foundational functions designed to provide a framework to help enterprises develop trustworthy AI systems. These functions are Govern, Map, Measure, and Manage, with the Govern function designed to inform each of the others.

Figure 1: Risk Management Core (NIST AI 100-1, page 20).
Each of these functions is further divided into categories and subcategories designed to achieve its high-level goals. Given the vast number of subcategories and recommended actions, the RMF Core is not intended to serve as a checklist that companies use to simply “check the boxes.” Instead, the AI RMF recommends that risk management be performed continuously and in a timely manner throughout the AI system’s lifecycle.
The AI RMF also recognizes that there is no “one size fits all” approach to risk management. Actors must build a profile specific to the AI system’s use case and choose appropriate actions to accomplish the four functions. While the AI RMF describes the process, the AI RMF playbook provides detailed explanations and helpful information on how to implement the AI RMF for common situations (known as profiles). RMF profiles vary according to specific sectors, technologies, or applications. For example, employment-related profiles address different risks than profiles for detecting credit risk or fraud.
The RMF core consists of the following functions:
- Govern: Strong governance is critical to developing the internal practices and norms needed to maintain organizational risk management. The Govern function describes categories that help implement policies and practices across the other functions: creating accountability structures, building workplace diversity and accessibility processes so that AI risks are assessed by diverse teams, and developing an organizational culture that prioritizes safety-first AI practices.
- Map: The Map function helps actors understand the risk landscape when using AI systems. By taking the actions provided under Map, organizations can better anticipate, assess, and address potential sources of negative risk. Categories under this function include establishing and understanding the context of AI systems, classifying AI systems, understanding the risks and benefits of all components of AI systems, and identifying potentially affected individuals and groups.
- Measure: The Measure function uses quantitative and qualitative tools to analyze and monitor AI risks and assess actors’ use of AI systems. Measurements should track a variety of goals, such as trustworthiness characteristics, social impact, and the quality of human-AI interaction. Categories under this function include identifying and applying appropriate methods and metrics, evaluating systems for trustworthiness characteristics, implementing mechanisms to track identified risks over time, and gathering feedback on the effectiveness of measurements.
- Manage: After determining relevant risks and risk tolerance, the Manage function helps the enterprise prioritize risks, allocate resources to address the highest risks, and regularly monitor and improve AI systems. Categories under this function include prioritizing risks based on assessments from Map and Measure, developing strategies to maximize AI benefits and minimize AI harms, and managing AI risks arising from third parties.
For each of these, the playbook provides concrete, actionable suggestions on how to accomplish the four functions.
Business impact
The AI RMF helps companies develop robust governance programs and address risks in AI systems. Although use of the AI RMF is not currently required by proposed legislation (including the EU Artificial Intelligence Act), the AI RMF, like other NIST standards and guidance, can help companies comply with the risk-analysis requirements of such laws in a structured and reproducible manner. Companies considering providing or using AI systems should therefore consider using the AI RMF to analyze and minimize risks. Firms may be asked to present to regulators high-level documentation produced as part of their use of the AI RMF, and may also consider providing such documentation to customers to reduce concerns and increase confidence.
The authors would like to acknowledge the contributions of Mathew Cha, a UC Berkeley Law School student and Summer 2023 Associate at Foley & Lardner LLP.
