Should you focus more on risk or goals?
The most striking feature of ethical guidelines is that they focus primarily on the threats and risks associated with artificial intelligence. While the stated goals of the guidelines, such as “ethically sustainable AI”, are positive in themselves, their lists of principles discuss the technology mainly from the perspective of minimizing risks and negative impacts, in line with the principle of non-maleficence.
In other words, guidelines prohibit, restrict, and prevent, but they give little consideration to how technology could be used to promote positive goals in line with the so-called principle of beneficence. For example, guidelines may state that artificial intelligence must not be developed or used in ways that result in discrimination, yet no consideration is given to how artificial intelligence could be used to build a society free of discrimination. The difference is significant: the set of tools for preventing discrimination caused by AI solutions is quite different from the set of tools for using AI to prevent discrimination.
Technical solutions are created, developed, and deployed within complex structures, which makes even questions of accountability difficult to untangle. Who is in charge of the technology, and whose activities should be subject to regulation? Who is responsible for discrimination by artificial intelligence: programmers, product developers, or users? Nor is it at all clear what the aim of such regulation, such as prohibiting algorithmic discrimination, should be. As facial recognition algorithms demonstrate, the same algorithm can be used in both acceptable and unacceptable ways. Regulation should therefore target unacceptable uses of algorithms, not the algorithms themselves. Yet for technologies with a wide range of uses and applications, regulating use is far from easy.
Just as important, merely preventing risks and harms does not create well-being or advance other positive social goals. Focusing on risks and harms often narrows our thinking and blinds us to opportunities. By concentrating on prohibitions, prevention, and restrictions, we overlook the potential of technological solutions to further valuable goals.
Various algorithms
Algorithms can be used to improve equity in education, support learning for people with learning difficulties, and develop better technological solutions for education. They can help promote minority rights, develop methods of citizen participation, and protect democracy. They can be used to improve data protection and cybersecurity. With the help of algorithms, we can also prevent the accumulation of diseases and social problems. Algorithms can even help us find solutions to vast societal challenges such as climate change, the energy crisis, water scarcity, poverty, and pandemics.
Businesses related to artificial intelligence are worth hundreds of billions of euros, and their internal dynamics can be influenced to a surprising extent by ethical policies. Ethics, at least as rhetoric, is already an element of competitiveness, and depending on one's perspective, it can either promote or hinder competitiveness. The ethics of artificial intelligence is also connected to many global policy issues, such as the distribution of global prosperity, the polarization of technological development, the development of human rights, and the rules of algorithmic warfare.
In other words, AI ethics is no longer just about assessing ethical acceptability; it is also about politics, money, and power. The more it intertwines with the goals of AI development, the more we need to discuss those goals. The lack of analytical exploration of the positive goals of AI and algorithm development is perhaps the biggest flaw in the current debate on ethics. Addressing it requires, above all, a well-thought-out and carefully argued view of what the fundamental goals of algorithmization are.
This text is an abridged and edited version of the originally published article “Algoritmien aakkoset”.
