The Case of Appointing an “AI Custodian” for AI Governance



As artificial intelligence governance and legislation gain importance in the EU and elsewhere, many practical issues will arise around implementing and managing AI systems day to day from a compliance and trustworthiness perspective.

Current frameworks, such as the National Institute of Standards and Technology’s AI Risk Management Framework, and guidelines, such as those of the European Commission’s High-Level Expert Group on AI, articulate the characteristics of trustworthy AI systems.

Although these frameworks and guidelines draw on similar concepts, their terminology differs. They emphasize that AI systems should respect human rights; be ethical, robust, reliable, safe, secure, resilient, accountable, and transparent; and avoid bias and ensure non-discrimination. These characteristics need to be properly maintained.

These documents also offer some suggestions for implementing AI systems, but they remain high level, as is typical for documents of this kind. For example, the NIST AI RMF stipulates that processes and procedures should be in place to determine the required level of risk management activity based on the organization’s risk tolerance.

Nonetheless, there is a general consensus that AI governance and management must involve a wide range of internal and external stakeholders and interests, with varying degrees of importance, influence, and involvement at the various stages of AI creation and deployment, from a compliance and trust perspective. Stakeholders include designers, developers, scientists, procurement, staff who use or operate AI systems, legal and compliance, management, third parties, end users, civil society organizations, and others.

Recently, the IAPP and others have pushed strongly for privacy teams to lead AI governance and compliance management.

There are many good reasons to follow this recommendation, but it is important to remember that a privacy program involves more stakeholders than just the privacy team. More specifically, staff members are appointed as process owners or leaders responsible for specific activities or processes involving personal data. This includes liaising with the privacy team, following its guidance, managing their activities from a privacy perspective, and collaborating with other parties.

Therefore, a similar, but not identical, approach is required to ensure effective governance of AI from a compliance and trust perspective. Here it is important to emphasize that AI systems are inherently socio-technical, as the NIST framework and various other documents describe. In other words, they cannot simply be assigned to an app owner in the way other applications and tools are. Because the scope and potential impact of AI systems is broader, a broader perspective is required.

Given the many specific compliance and trustworthiness requirements for AI systems, it makes sense to appoint someone, such as an AI custodian, to that role.

The same person can manage multiple systems, provided they have sufficient support and expertise to manage several systems simultaneously. The AI custodian does not necessarily have to be an AI technical expert, but should understand not only the basics of the technology, also the details relevant to the particular use case. At the same time, this person should understand how the organization’s AI compliance program works and be familiar with the latest developments in AI laws and frameworks.

So what are the primary responsibilities of an AI custodian?

The AI custodian would handle broad day-to-day governance and compliance requirements and engage more specialized functions as needed. How all of this will look in practice remains to be seen, but some specific responsibilities are easy to predict.

First, the AI custodian is responsible for ensuring that AI systems are registered in the internal AI system inventory, with accompanying information on their source, usage, and basic technical details.
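As a minimal illustration, an inventory entry could capture the source, usage, and basic technical details mentioned above. The schema and field names below are assumptions for the sketch, not a prescribed standard; real inventories will vary by organization.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative schema)."""
    system_id: str      # internal identifier
    name: str
    source: str         # e.g. "internal", "vendor", "open source"
    intended_use: str   # the system's stated purpose
    model_type: str     # basic technical detail, e.g. "classifier", "LLM"
    owner: str          # accountable custodian or team
    risk_level: str = "unassessed"  # updated after assessment

# Registering a hypothetical system in the inventory
inventory: dict[str, AISystemRecord] = {}
record = AISystemRecord(
    system_id="sys-001",
    name="Resume screening assistant",
    source="vendor",
    intended_use="Rank job applications for recruiter review",
    model_type="classifier",
    owner="AI custodian, HR systems",
)
inventory[record.system_id] = record
```

Keeping `risk_level` as a field that starts "unassessed" makes it easy to query the inventory for systems still awaiting assessment.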

Second, the custodian leads internal assessments and supports the implementation of AI system compliance, joining the process at the earliest point reasonably possible. Some systems may be developed internally, but most will be acquired externally, from both vendors and open source.

Assessments should cover impacts on human rights, human oversight, safety, privacy, transparency, non-discrimination, and social and environmental welfare. Clearly, many stakeholders familiar with the system and its functions, such as security and compliance, will contribute to the assessment. But the presence of so many stakeholders is an even stronger reason why a specific person is needed to ensure the process is completed and managed in a timely and efficient manner.

Since assessment and implementation are only a starting point and many activities must be repeated, day-to-day AI governance requires many specific steps and actions, detailed in other publications. Maintaining compliance and managing risk must therefore be coordinated, to a certain extent, by one person.

From a broader perspective, this person would be supported by the AI compliance team and the ethics committee on the most important decisions. The AI custodian must be accountable for compliance from system creation to decommissioning, though some activities (such as rights of redress) may need to be managed over longer periods. Mapped to the NIST AI RMF, the AI custodian should be actively involved in all four functions (govern, map, measure, and manage), with the most active role in manage. This includes determining whether the system achieves its intended and stated objectives, handling documented risks, requesting additional resources, monitoring risks and benefits, measuring continual improvement, and communicating incidents and errors to relevant stakeholders.

It makes sense for the AI custodian to assess risk and make risk-based decisions on a daily basis, but depending on the organization, this should be limited to low- and medium-risk decisions; high-risk decisions should be escalated to higher levels within the organization, with the active participation of the AI compliance team. The most consequential topics should be discussed with the relevant ethics committee.
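That escalation rule can be sketched as a simple routing function. The risk tiers and routing targets below are assumptions for illustration; each organization would define its own thresholds and bodies.

```python
def route_decision(risk_level: str) -> str:
    """Route a risk-based decision to the right decision-maker.

    Illustrative only: low and medium risk stay with the AI custodian,
    high risk is escalated with the AI compliance team, and the most
    consequential topics go to the ethics committee.
    """
    routing = {
        "low": "AI custodian",
        "medium": "AI custodian",
        "high": "senior management + AI compliance team",
        "critical": "ethics committee",
    }
    if risk_level not in routing:
        raise ValueError(f"unknown risk level: {risk_level!r}")
    return routing[risk_level]

# Example: a high-risk decision is escalated rather than handled day to day
print(route_decision("high"))
```

Making the routing table explicit, rather than burying thresholds in conditionals, keeps the escalation policy auditable and easy to update.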

Appointing an AI custodian, whether as a full-time or part-time role, is currently just one of many possible ways to tackle the challenges of AI governance. However, we suspect it will be seriously considered as a way to strengthen and support the teams responsible for AI compliance (which in many organizations may be the existing privacy teams).


