Back to Basics: Revisiting the Responsible AI Framework

AI Basics


Series on Responsible AI

Sifting through dozens of existing frameworks to create a robust mental model for the responsible use and deployment of algorithmic decision-making systems

Maya Murad
Towards Data Science

In the last few months, we have seen promising developments in establishing safeguards for AI. These include a landmark EU regulation proposal on AI that prohibits unacceptable AI uses and imposes mandatory disclosures and evaluations for high-risk systems, an algorithmic transparency standard launched by the UK government, mandatory audits for AI hiring tech in New York City, and a draft AI Risk Management Framework developed by NIST at the request of the US Congress, to name a few.

That being said, we are still in the early days of AI regulation. There is a long road ahead to minimize harms that algorithmic systems can cause.

In this article series, I explore different topics related to the responsible use of AI and its societal implications. I specifically focus on two important ideas:

  1. Meaningful transparency of algorithmic systems is an important pre-requisite for effective accountability mechanisms — yet challenging to implement in practice;
  2. Public-interest groups, such as advocacy organizations, activists, and journalists, as well as individual citizens, currently play a critical role in uncovering AI harms — yet they are not given a meaningful role in any currently proposed regulation.

Before I dive into these topics, it is important to start with a robust mental model that adequately reflects the complexity of promoting the responsible use of AI at a societal level.

Frameworks, frameworks, and more frameworks!

I first started by reviewing existing frameworks, guidelines, and charters on AI that were developed by various corporations, governments, and research institutions. As shown in the database below, the most widely cited AI principle is “Fairness”, followed closely by “Transparency”, “Explainability”, “Security”, “Robustness” and “Safety”.

Database of existing frameworks on AI Ethics — created by author. To contribute a source to this database, please fill out this form.

Most of these existing frameworks on AI are limited in scope. They either provide general, non-binding guidance or are only applicable to a specific organization (or set of organizations). There is a need for a comprehensive responsible AI framework that accounts for conflicting stakeholder priorities and safeguards impacted groups against harms.

A comprehensive framework on ethical AI should include the following:

  • a clear definition of the object to assess (→ the AI system);
  • broad buy-in on the ideal to achieve;
  • well-defined principles to help achieve our stated ideal;
  • a set of stakeholder groups with defined responsibilities to help promote the achievement of our ideal.

1. Defining the right “unit” or system

What is the right “unit” to evaluate an AI deployment? Most frameworks refer vaguely to AI systems; others to individual algorithms. Evaluating an individual algorithm is inconsequential without an understanding of how that algorithm affects the overall outcome of the broader system. Moreover, a system that makes decisions may comprise several steps that are either automated or performed by humans. Each automated step can be tied to one or more algorithms that may use varying levels of sophistication in artificial intelligence. It is therefore important to consider a decision-making system in its entirety, as failings can occur in any of several stand-alone algorithms or in their interoperability, without forgetting potential human error.

Components of an Algorithmic Decision-Making System (ADS). Source: image by author

Throughout the rest of this article (and upcoming ones), I will refer to the Algorithmic Decision-Making System (ADS) as the single unit of evaluation. ADS refers to the introduction of algorithmic systems that help proceduralize and automate part or all of the decision-making process.

An ADS is generally composed of the following (a minimal code sketch follows the list):

  • Algorithms and computational processing techniques (with various levels of sophistication, from rule-based to neural networks);
  • Supporting datasets (which may include sensitive, personally identifiable data or other confidential data);
  • A human in or on the loop (the former actively reviewing each decision, the latter supervising overall system performance).
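
To make this decomposition concrete, here is a minimal sketch of how one might represent an ADS as a single unit of evaluation in code. All names and fields below are hypothetical illustrations, not a reference to any specific library or standard.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List


class OversightMode(Enum):
    # Hypothetical labels for the two oversight modes described above
    HUMAN_IN_THE_LOOP = "in_the_loop"   # a person actively reviews each decision
    HUMAN_ON_THE_LOOP = "on_the_loop"   # a person supervises overall system performance


@dataclass
class AlgorithmicStep:
    name: str
    technique: str                      # e.g. "rule-based", "gradient boosting", "neural network"
    decide: Callable[[Dict], Dict]      # maps case features to a (partial) decision


@dataclass
class AlgorithmicDecisionSystem:
    """The whole decision pipeline is the unit of evaluation, not any single model."""
    steps: List[AlgorithmicStep]        # automated steps, each backed by one or more algorithms
    datasets: List[str]                 # supporting data sources, possibly containing personal data
    oversight: OversightMode
    high_risk: bool = False             # e.g. governs benefits, punishments, or access to opportunities
```

Framing the system this way makes it natural to attach assessments (fairness, robustness, and so on) to the system as a whole rather than to any single algorithm within it.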

Not all ADS should be prioritized for assessment. In general, we speak of “high-risk” systems as those that govern the services, benefits, or punishments individuals receive, as well as their access to opportunities (e.g. hiring), since failings in such systems can result in significant harm to impacted individuals.

2. Defining the right “ideal”

Existing frameworks drive toward one of the following three ideals: ethics, trust, or responsibility.

Ethics has philosophical connotations of moral obligation. We cannot correctly speak of ethical AI, as “the capabilities that underpin AI solutions […] are not ethical or unethical, trustworthy, or untrustworthy.” Rather, it is the choices that we make regarding algorithmic systems that can have ethical or unethical attributes. Promoting ethical actions regarding ADS can be a great ideal to strive for. However, it can also be an ambiguous ideal that leaves the door open to different interpretations of what moral obligation requires.

As for trust, corporations tend to focus on this ideal as it underpins the adoption of their products and services. To trust a technology means to hold a belief that it will perform up to your expectations. This concept of “trust” blurs accountability structures. As stated before, AI is not inherently trustworthy or untrustworthy. It is the choices made surrounding its development and use in a particular context that confer these attributes.

We currently observe numerous occasions where corporations avoid responsibility for ADS failures by organizing themselves in ways that deflect responsibility away from the centers of decision-making.

An algorithmic decision-making system causes harm, not because it is untrustworthy, but because the system owner failed to design, develop, test, deploy, or maintain it in a responsible manner.

Therefore, I argue that the “ideal” we should be driving is responsibility and not trust. From a societal wellbeing perspective, we should focus on promoting and safeguarding the responsible use of technologies, such as AI, and holding organizations accountable for how they use and implement them.

Responsible ADS use challenges the notion of a “responsibility gap” — the perception that it may be more difficult to assign responsibility for resulting harm given the nature of AI.

In the ideal state, there should be collective responsibility, where:

  • System owners are held accountable for developing, deploying and monitoring their algorithmic systems to avoid harms;
  • System users are adequately trained on the limitations of ADS and have agency to reason about the system outcome;
  • Impacted groups are empowered to provide feedback, flag harms and seek recourse;
  • Government actors are accountable for setting adequate regulatory safeguards and establishing avenues for recourse.

3. Identifying the core principles supporting the achievement of our “ideal”

To achieve the responsible use of ADS, we need to define two complementary sets of principles:

  • The “What”: principles that reflect responsible use ideals (referred to as first-order principles). These principles can be used to evaluate the design and deployment of an ADS.
  • The “How”: principles we use to ensure conformity with first-order principles and rectify failings (referred to as second-order principles). These principles are necessary to ensure we achieve the responsible use ideal.

First- and Second-Order Principles Governing the Responsible Use of ADS. Source: image by author

The composition of first-order principles may evolve over time and depending on the specific context of a region or industry. Based on my assessment of existing frameworks, a responsible ADS deployment is:

  • Fair and non-discriminatory: actively assesses, monitors, and mitigates bias; aims to produce properly calibrated, fairer outcomes and decisions (a minimal illustrative check follows this list)
  • Explainable: able to produce interpretable justification for the decisions produced
  • Secure: enacts effective controls to protect system from threats; actively flags and mitigates vulnerabilities
  • Robust: consistently meets accuracy and performance requirements and is robust to perturbations
  • Upholds data privacy rights: protects data privacy rights and conforms to existing data laws for both direct and indirect users
  • Safe: avoids harm for impacted users and aims to promote human wellbeing
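
As a concrete, if simplified, illustration of the fairness principle above (actively assessing bias), the sketch below computes a demographic parity gap between two groups. The function name, data, and the 0.2 threshold are assumptions chosen for illustration; the appropriate fairness metric and threshold depend heavily on the context of the ADS.

```python
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-decision rates between two groups (0 means parity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


# Hypothetical hiring decisions (1 = advance to interview) for two groups
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(decisions, groups)
if gap > 0.2:  # illustrative threshold, not a legal or regulatory standard
    print(f"Potential disparity detected: gap = {gap:.2f}")
```

A single metric like this is only one input to an assessment; monitoring and mitigation would repeat such checks throughout the system’s lifecycle.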

Second-order principles are necessary to ensure compliance with the requirements outlined above. These include:

  • Ensuring transparency of the ADS: at a basic level, transparency translates to system visibility, and at a more sophisticated level, it reflects the system’s performance on first-order principles (a minimal illustration follows this list). More nuance on transparency is provided in subsequent articles.
  • Ensuring accountability of the ADS: this refers to the system owner’s ability to explain their actions (and failings) and take responsibility for them. ADS transparency is a prerequisite for accountability. Accountability can be enforced via assessments, audits, and feedback loops.
  • Preserving human agency and possibility for recourse: when an ADS fails and adversely impacts an individual, the concerned individual should have a clear recourse process to follow in order to rectify the error. Strong transparency and accountability mechanisms should be in place to enable recourse.
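
To illustrate what basic versus more sophisticated transparency could look like in practice, here is a hypothetical, minimal disclosure record for an ADS. The field names and values are assumptions for illustration only; real disclosure schemes, such as the UK algorithmic transparency standard mentioned earlier, define their own required fields.

```python
# Hypothetical transparency record; all fields and values are illustrative assumptions.
transparency_record = {
    # Basic visibility: the system exists, is named, and has an identified owner and purpose
    "system_name": "benefit-eligibility-screener",
    "owner": "Example Agency",
    "purpose": "Prioritize applications for manual review",
    "oversight": "human on the loop",
    "high_risk": True,
    # More sophisticated disclosure: measured performance against first-order principles
    "first_order_assessment": {
        "fairness": {"demographic_parity_gap": 0.04},
        "robustness": {"accuracy": 0.91, "accuracy_under_perturbation": 0.88},
        "privacy": {"uses_personal_data": True, "legal_basis_documented": True},
    },
    # Hook for the third second-order principle: a documented recourse path
    "recourse": "Decisions can be contested within 30 days via a human review process",
}
```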

4. Identifying the relevant stakeholders and their responsibilities

Finally, we need to identify the relevant stakeholders engaged in an ADS and clarify their role in ensuring responsible use and deployment.

Three broad stakeholder groups: System Owners, Public Entities and Civil Society. Source: image by author

We can identify three broad stakeholder groups:

  • System Owners: these include one or more entities responsible for commissioning, designing, developing, and maintaining an ADS. In the ideal state, the commissioning entity should be responsible for establishing purchasing requirements and obligations from vendors based on responsible use principles.
  • Public Entities: these include the entities responsible for regulating the use of ADS, assessing them, and holding system owners accountable. As has been demonstrated (here and here), self-regulation by large tech companies is not conducive to societal wellbeing. In the ideal state, public entities should provide external accountability to ensure compliance with ADS regulation.
  • Civil Society: this includes the groups of individuals impacted by ADS as well as public-interest groups, which consist of research, academic, and advocacy organizations that work to protect and advocate for the rights of civil society and, specifically, marginalized groups. Given the lack of comprehensive ADS regulation in most countries, public-interest groups have historically been the sole drivers of external transparency and accountability. Think here of the ProPublica investigative journalism piece that brought the bias in the COMPAS recidivism algorithm to light, or the MIT researchers who revealed the bias in Amazon’s facial recognition software. In my articles, I argue that public-interest groups should still play an active role in representing civil society interests and providing external accountability, even if adequate regulation exists. These groups should be formally included in the process of designing and deploying ADS, especially high-risk ones.

Putting it all together

Next, we should map how the various stakeholders perform actions that ensure the responsible use of algorithmic decision-making systems.

A Comprehensive Framework for the “Responsible Use of Algorithmic Decision-Making Systems”. Source: image by author

The framework above represents an ideal state where public entities are the main safekeepers of the system. It also entails clear ownership of and accountability for ADS, and relies on the support of public-interest groups to complement external oversight.

We are currently far from the ideal state, as:

  • Most countries do not have national-level ADS principles;
  • Most governments are not equipped or empowered yet to regulate and oversee the use of ADS;
  • Ownership of an ADS is often obfuscated by vendors and third parties in the process, with no established requirements or responsibilities assigned to each party;
  • Public-interest groups currently carry an outsized share of the burden of providing ADS accountability, yet have limited access and resources to perform this role.


