“The image of AI as a tool is too simplistic.”

Applications of AI


How are the advantages and disadvantages distributed in this context?

What is very clear is that the people most likely to suffer are the “global majorities” – people of African, Asian, Latin American or mixed backgrounds who make up the majority of the world’s population. Poorer countries with large populations of data workers but without large AI industries of their own tend to be at a disadvantage. Conversely, Western countries, particularly the United States, benefit from access to affordable labor in these countries, from the raw materials extracted there, and from being able to export e-waste to them.

We should also be clear that the AI industry is supported by very powerful people who promote anti-democratic ideologies, especially in the United States. Elon Musk and others have declared the goal of replacing state institutions with automation. Since the beginning of Donald Trump’s second term, we have seen that this is not fiction, but rather a scenario that is being actively pursued.

All of this doesn’t necessarily mean you shouldn’t use AI. But we need to be aware of these issues, especially when talking about using AI for the public good. We then have to consider how these conditions can be changed politically, or how to deal with them in our own use of the technology.

Should AI be seen as a tool that can be used for a variety of purposes, both beneficial and harmful?

The image of AI as a tool is too simplistic. Behind it lies a huge complexity. AI is an umbrella term for many different forms of technology. These are interconnected through a vast network of material structures such as data centers and the industrial infrastructure in which data workers operate. These technologies are also being used in very difficult social situations, where we face the dilemmas described above when we want to leverage AI to promote sustainability and the public good.

So we return to the audit method you developed to evaluate AI projects. How does it work?

First, together with our partners Greenpeace and Gemeinwohl-Ökonomie Deutschland (Economy for the Common Good Germany), we identify potential projects and hold preliminary discussions about whether an audit can be carried out. It is important that the AI systems are already in use, not still in the research phase. After that, we request materials and conduct interviews. For example, I’m interested in what goals a project is pursuing and how those goals are evaluated. But we also look at the technical infrastructure and consider the ecological footprint.

We have over 200 questions in total. Among them are: Is there a way for users to contact the project directly? Can they criticize the project if they are affected by the application and notice an issue? What decisions were made about the design of the system, and how were they made? This goes much deeper than a traditional impact assessment. We wanted to understand the project context more precisely, and to that end we integrated various existing techniques into our model.

How do you evaluate your data?

We qualitatively analyze the content and evaluate metrics and figures. We then rate each aspect of the project on a five-point scale. The standards are inspired by those the United Nations defined for its own AI projects: the Principles for the Ethical Use of Artificial Intelligence in the United Nations System. One example is whether the system is necessary and appropriate; in other words, whether there is a simpler way to solve the problem at hand. Other aspects also play a role, such as security, discrimination, human oversight, transparency, and data protection.
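The evaluation step described above could be sketched in code. The following is a minimal illustration, not the auditors’ actual tooling: the question texts, criterion names, and scores are hypothetical, and only the general idea – tagging each of the many questions with a criterion, rating it on a five-point scale, and averaging the ratings per criterion – comes from the interview.

```python
from statistics import mean

# Hypothetical audit answers: each question is tagged with the criterion
# it probes and, after qualitative analysis, rated from 1 (poor) to 5 (good).
answers = [
    {"criterion": "necessity", "score": 4},
    {"criterion": "necessity", "score": 3},
    {"criterion": "transparency", "score": 5},
    {"criterion": "transparency", "score": 2},
    {"criterion": "data_protection", "score": 4},
]

def aggregate(answers):
    """Average the per-question ratings into one 1-5 score per criterion."""
    by_criterion = {}
    for a in answers:
        by_criterion.setdefault(a["criterion"], []).append(a["score"])
    return {c: round(mean(scores), 1) for c, scores in by_criterion.items()}

print(aggregate(answers))
# → {'necessity': 3.5, 'transparency': 3.5, 'data_protection': 4.0}
```

In a real audit the qualitative judgment behind each score is the hard part; the aggregation itself is deliberately simple so the per-criterion results stay traceable to individual questions.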

Who is the audit intended for?

So far, we have audited two projects by established NGOs in the field of AI and democracy. One is about fighting disinformation, the other about forming democratic public opinion. Next comes the field of government digitalization. To date, we have limited ourselves to projects that aim to improve sustainability and promote the public interest. However, other projects could also be evaluated using the same criteria.


