Algorithmic bias and military applications of AI



Last week, High Contracting Parties met for the first meeting of this year's Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS). The roundtable on “Risk Reduction and Confidence Building,” the most substantive discussion in the GGE to date, focused on bias and drew on a working paper on bias submitted by Canada, Costa Rica, Germany, Ireland, Mexico, and Panama.

In this post, Dr. Ingvild Bode, associate professor at the Center for War Studies at the University of Southern Denmark, argues that bias is a social as well as a technical problem, and that addressing it requires going beyond technical solutions. She believes that as the work of the GGE turns to questions of operationalization, more focused attention needs to be paid to the risks of algorithmic bias. These arguments build on the author's presentation at the GGE side event “Addressing the gender issue in military AI: mitigating unintentional bias and addressing risks,” hosted by UNIDIR on March 6, 2024.

Algorithmic bias, which can be defined as “the application of algorithms that exacerbate existing inequalities in socio-economic status, race, ethnic background, religion, gender, disability, or sexual orientation,” has long figured prominently in academic research and policy debates on the societal impacts of artificial intelligence (AI). Surprisingly, this has not extended to research and discussions on autonomous weapons systems (AWS) and AI in military contexts. With notable exceptions, chiefly UNIDIR’s 2021 report “Does Military AI Have Gender?” and policy briefs published by the Observer Research Foundation and the Campaign to Stop Killer Robots, the issue of bias has not been addressed in any detail.

Still, we can leverage insights from the established civilian literature to think about bias in AWS and other military applications of AI, for two reasons. First, much of the transformative potential of AI technology comes from civilian technology companies, which increasingly collaborate with military stakeholders. Second, and more fundamentally, civilian and military applications of AI rely on the same types of technologies, such as machine learning, which raises similar concerns about bias.

Bias in AI technology and its consequences

We can think about algorithmic bias in three main ways: 1) bias in data, 2) bias in design and development, and 3) bias in usage. That is, bias can occur throughout the lifecycle of an algorithmic model, from data collection through training, evaluation, and use to archiving/disposal.

  1. Bias in the data used for machine learning models. Any set of training data is a limited snapshot of the social world. This snapshot may contain direct biases, such as stereotypical language and images, but also indirect biases in the form of frequency of occurrence: an image set may, for example, contain more pictures of male physicists than of female physicists. Data bias therefore arises when unrepresentative data leads to unrepresentative output. In other words, “bias arises when certain types of data are missing or over-represented over others, and often stems from how the data was acquired and sampled.” Both over-representation and under-representation matter. The importance of data quality is captured in the common notion of “garbage in, garbage out,” whereby the quality of the input determines the quality of the output: whatever biases the training data contains, implicit or explicit, it “has ripple effects throughout the rest of the model development, since the training data itself is the only information a supervised model can learn from.” Considering data bias is a good starting point, but the problem of algorithmic bias extends beyond this stage (a first illustrative sketch after this list shows how under-representation in training data can translate into unequal error rates).
  2. Bias in design and development. Biases in the data can be amplified at various stages of processing, for example as part of a machine learning model. Training an AI technology is a value-laden process in which human task workers, programmers, and engineers make numerous choices: annotating/labeling/classifying data samples, selecting features, modeling, evaluating the model, post-processing after training, and so on. Algorithmic bias can therefore also result from the often unconscious biases that the humans involved bring to these different tasks across the machine learning lifecycle. Bias can also creep in through “black-boxed” processes related to the functioning of the algorithm itself. At this point, the AI technology reflects both the biases inherent in its training data and the biases of its developers.
  3. Bias in usage. Finally, through repeated and increasingly widespread use, AI technologies acquire new meanings, functions, and potentially biases. This happens in two ways. First, the mere adoption of systems powered by AI technologies amplifies the biases they contain. Second, people act on the output that AI systems produce and may thereby create “more data based on the decisions of an already biased system.” Users of AI technologies can thus find themselves in a “negative feedback loop,” whose output then becomes the basis for future decisions (the second sketch after this list simulates this dynamic). In this way, biased output produced by AI technologies may also be used as further justification to continue existing (biased) practices. At the point of use, biases in the way humans interact with AI technologies must also be considered, most notably automation bias: the tendency of humans to rely too heavily on automated systems and to follow the output such technologies produce. There is considerable evidence of automation bias from studies outside the military domain, and it is unfortunately easy to imagine situations in which human users would place excessive trust in AI systems in a military context.
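To make the first point more concrete, the following minimal sketch (in Python, using NumPy and scikit-learn) trains a simple classifier on synthetic data in which one group is heavily under-represented and then measures error rates separately for each group. Everything here is invented for illustration: the groups, feature distributions, and numbers are hypothetical, and real systems and datasets are far more complex.

```python
# Hypothetical illustration of data bias (point 1): a model trained on data in
# which one group is under-represented tends to perform worse on that group.
# All data is synthetic; groups, features, and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n synthetic samples for one group. The label rule depends on a
    group-specific shift, so one global linear model cannot fit both groups
    equally well."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Group A dominates the training data (95% of samples); group B is a small minority.
X_a, y_a = make_group(1900, shift=0.0)
X_b, y_b = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluating on balanced held-out data typically shows a clearly higher error
# rate for the under-represented group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: error rate ~ {1 - model.score(X_test, y_test):.1%}")
```

The point is not the specific numbers but the mechanism: the model's parameters are dominated by the over-represented group, so its mistakes are not distributed evenly across groups.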
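The “negative feedback loop” described under the third point can also be sketched with a toy simulation. In the hypothetical scenario below, a system directs checks toward whichever group its own records flag as higher-risk, and new records can only arise where checks take place. Both groups have identical underlying rates; the scenario, groups, and numbers are invented purely for illustration.

```python
# Hypothetical simulation of usage bias (point 3): a system concentrates checks
# on the group its own records flag as higher-risk, and only checked groups can
# generate new records. Although both groups behave identically, the initial
# skew in the records deepens round after round. All values are invented.
import numpy as np

rng = np.random.default_rng(1)

TRUE_RATE = 0.10                          # identical underlying behaviour in both groups
records = {"group A": 12, "group B": 10}  # slightly skewed historical records

for round_no in range(1, 6):
    flagged = max(records, key=records.get)   # group the system flags as "higher risk"
    for group in records:
        # 80% of checks go to the flagged group, 20% to the other group.
        checks = 800 if group == flagged else 200
        # New records can only arise where checks are carried out, so the
        # system's own output shapes the data it will see next.
        records[group] += rng.binomial(checks, TRUE_RATE)
    share = records[flagged] / sum(records.values())
    print(f"round {round_no}: {flagged} now accounts for {share:.0%} of all records")
```

Even though nothing about the two groups differs, the system's output steadily entrenches the initial skew in the records, which is exactly the dynamic by which biased output can become “further justification to continue existing (biased) practices.”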

Knowing that AWS and other military applications of AI are likely to contain algorithmic bias has serious consequences. Bias can lead to legal and moral harms, as people of a certain age group, gender, or skin color may be mistakenly identified as combatants. These harms are well summarized, for example, in UNIDIR's 2021 report, which mentions a variety of problematic consequences of such misidentification. Bias also affects the functioning and predictability of a system. This is linked to a lack of transparency and explainability: it is often unclear which features of the data a machine learning algorithm relied on in producing its output. “This means that we cannot explain why a particular decision was made.” Moreover, bias in datasets used for military applications of AI may be exacerbated, because the data available for training military applications may be more limited in scope than the data used to train civilian applications. For example, available data may only represent a specific conflict or type of operation that is not applicable to broader applications. In other words, both the quantity and the quality of the data on which military applications of AI can be trained may be limited.

Bias as a socio-technical problem

Research on algorithmic bias, especially gender bias, can be divided into studies that document how bias is perpetuated and studies that focus on how it can be mitigated. A prime example of the former is the Gender Shades project conducted by Joy Buolamwini and Timnit Gebru. The authors investigated three facial recognition systems and found that all three recognized male faces much more accurately than female faces and were generally better at recognizing lighter-skinned faces. The poorest-performing model misclassified darker-skinned female faces roughly one third of the time. Other studies have looked at how and in what ways bias can be mitigated. This research is primarily technical in nature and focuses on specific techniques that can be applied to machine learning models and facial recognition systems, such as rebalancing or normalizing data, more thorough risk and harm analysis, and designing “fair” algorithms through more rigorous testing and auditing (one such rebalancing-plus-audit approach is sketched below). Such technical mitigation strategies are not easy: for example, systems that studies of gender bias perpetuation later identified as problematic were functionally operational during testing. The problem of algorithmic bias is not easily solved.
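As an illustration of what such technical mitigation can look like, and of why it is not straightforward, the hypothetical sketch below applies one commonly discussed rebalancing technique: weighting training samples inversely to their group's frequency so that the under-represented group is not drowned out during fitting, followed by a per-group audit of both the original and the reweighted model. The data, groups, and weighting scheme are invented for illustration; depending on the data, reweighting may narrow the gap between groups or simply move errors from one group to another, which is why the audit step, and the broader testing and auditing the literature calls for, matters.

```python
# Hypothetical sketch of one rebalancing technique: inverse-frequency sample
# weights plus a per-group audit. Synthetic data; groups and numbers invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic samples whose label rule depends on a group-specific shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Imbalanced training set: group A 95%, group B 5%.
X_a, y_a = make_group(1900, shift=0.0)
X_b, y_b = make_group(100, shift=2.0)
X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

# Each sample is weighted inversely to its group's share, so both groups
# contribute equal total weight to the fit.
weights = np.concatenate([
    np.full(len(y_a), len(y) / (2 * len(y_a))),
    np.full(len(y_b), len(y) / (2 * len(y_b))),
])

unweighted = LogisticRegression().fit(X, y)
reweighted = LogisticRegression().fit(X, y, sample_weight=weights)

# Per-group audit: compare error rates of both models on balanced held-out data.
for model_name, model in [("unweighted", unweighted), ("reweighted", reweighted)]:
    for group_name, shift in [("group A", 0.0), ("group B", 2.0)]:
        X_test, y_test = make_group(2000, shift)
        print(f"{model_name} model, {group_name}: error ~ {1 - model.score(X_test, y_test):.1%}")
```

Even in this toy setting, it is the audit, not the reweighting itself, that reveals whether rebalancing actually reduces disparities or merely redistributes them; with the more limited data available for military applications, that question only becomes harder to answer.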

Thinking about bias underlines once again that technology is not neutral. It is a “product of its time” and a reflection of our society. Instead of treating technology as an object, and its development as a process that follows a trajectory separate from our society, we need to recognize technology and its development as social in nature. In other words, “bias is inherent in society and it is also inherent in AI.” For this reason, technical solutions alone are not enough to address bias.

Addressing the problem of algorithmic bias requires a fundamental change in discriminatory attitudes. For example, mitigation strategies must be built into the way AI programmers think about modeling parameters from the (early) design stage onwards. It is therefore important to take a closer look at the technology companies that dominate investment in and development of AI technologies, and at their specific interests, as these interests are likely to directly influence choices made at the design stage. Changing this requires addressing the “biases ingrained in the workplace culture” of the professions that are particularly important for the design of AI technologies, namely the STEM professions. Currently, STEM professions are dominated by a limited group of people who are not representative of broader society: “Tech companies hardly ever hire women, minorities, or people over 40 years old.” Addressing this problem of representation and diversity requires building capacity among under-represented groups. But it also requires a fundamental change in professional cultures, for example in engineering or IT, which rest on a long-standing and often implicit association of technical knowledge and expertise with masculinity and particular ethnic backgrounds.

In conclusion, algorithmic bias is now firmly recognized as a key risk factor associated with AI technologies in the military domain; bias and harm reduction feature, for example, in many of the emerging lists of responsible AI principles for the military domain. At the same time, much of the reasoning that appears to encourage states to integrate AI technologies into their weapons systems and broader military contexts rests on the argument that using such technologies will make the conduct of war more rational and predictable. However, the idea that AI technologies can be “superior to human judgment” ignores the fact that AI technologies that may be used in weapons systems are shaped by, and in turn shape, (human) decision-making. The problem of algorithmic bias indicates that we should think of AI technologies not as separate from human judgment, but as deeply intertwined with forms of human judgment, for better or worse, throughout their lifecycle.

Author's note: The research for this post has been funded by the European Union's Horizon 2020 research and innovation programme (grant agreement no. 852123, AutoNorms project).



