10 Reasons for Not Using AI in Development and 10 Routes for More Responsible Use

British Prime Minister Keir Starmer says AI should be put to work across all parts of government and the economy. The Foreign Secretary says he "plans to bring AI to the heart of our work." Yet there is a great deal of evidence about the harm that AI can cause.


Here we outline 10 reasons not to use generative AI such as ChatGPT in international development and humanitarian work, show how responsible AI initiatives are seeking to address these failings, and explore remedies. I urge policymakers and practitioners to postpone the use of these potentially harmful technologies until this work is complete.

Mark Zuckerberg coined the phrase "move fast and break things," which remained Facebook's motto until 2014. After the backlash against the resulting disinformation and surveillance harms, Zuckerberg declared in 2019 that the era of moving fast and breaking things was over. Yet today, Keir Starmer is positioning himself as a tech evangelist, suggesting that AI is a drug we cannot get enough of. This is despite growing evidence that the uncritical use of AI harms the environment, violates human rights, and fails to deliver economic returns on investment.

International development and humanitarian work is ethically committed to the precautionary principle of "do no harm" and should, at a minimum, take a more considered and reflexive approach, such as responsible AI for development.

What is artificial intelligence?

The term artificial intelligence (AI) is over 70 years old and covers a wide range of technologies. The technology is neither "artificial" nor "intelligent," making the phrase an empty marketing term. AI typically relies on statistical modeling: crunching big datasets to identify patterns that are then used to make predictions, about anything from whether people should receive social protection to whether they should be granted parole. This technology creates a risk of harm through bias and error.

There are many different types of AI, but the current revival of interest is due to a specific form of machine-learning AI that produces natural-language output, exemplified by the chatbot ChatGPT. Billions of dollars have been invested in ventures including OpenAI, headquartered in Silicon Valley, California, as well as Google Gemini, Amazon Lex, Anthropic and Cohere.

Large language models were built by scraping data from the internet without consent or copyright permission. The mechanics of AI systems pose several serious challenges to humanitarian principles.

Given the known harms of AI, it may prove difficult to reconcile the use of AI for international development (AI4D) with the precautionary principle of "do no harm."

10 Reasons for Not Using AI in International Development

  1. Stolen data. Much of the big data that AI is built on was stolen: either scraped from the web without permission or sourced without the consent of its producers.
  2. Biased data. Much of that data is biased in ways that reflect and reproduce historical patterns of racial and gender bias.
  3. Labour exploitation. Once collected, data is often labelled using the exploited labour of workers in Kenya and other locations with low incomes and weak labour protections.
  4. No transparency. Once labelled, the data is processed using opaque, proprietary algorithms, so it is not clear how decisions are being made.
  5. Biased algorithms. It is not only the data that can be biased; the algorithms used to process big data can also be biased and problematic.
  6. Error-prone. Generative AI (such as ChatGPT) is known to be not only biased but also error-prone. It regularly produces falsehoods or fabrications (sometimes called AI hallucinations), ranging from small inaccuracies to glaring errors.
  7. No accountability. Because the mechanisms inside AI algorithms are opaque "black boxes," it is virtually impossible for citizens to appeal or obtain redress for errors made about them.
  8. Dehumanization. The core purpose or outcome of many AI applications is disintermediation and/or automation, i.e. removing human elements from processes and replacing them with machine calculations. This is difficult to reconcile with humanitarian participatory principles or a commitment to human-centred processes.
  9. Colonialism. Many AI applications extract data from Africa and Asia to enrich big technology and finance companies, in what scholars call the algorithmic colonization of Africa, a new imperialism of the global South, or simply data colonialism.
  10. Climate impact. AI's ballooning carbon emissions are difficult to square with sustainable development commitments. Evidence shows that AI is extractive not only of African bodies in sweatshop labour but also of the world's water and mineral resources. Training a single AI model can create emissions equivalent to 300 people flying 100 times around the world.

10 Routes to Responsible Use of AI for International Development

Work is already underway to recognize the harms and discrimination caused by AI and to address and avoid these injustices. The Canadian development agency IDRC is leading the way by investigating and developing forms of "responsible AI" that aim to use AI without violating human rights or perpetuating gender or racial injustice.

  1. Data integrity. Instead of using datasets "stolen" by big tech companies, responsible AI initiatives are examining the use of internal big datasets held by government departments and agencies.
  2. Debiasing data. Responsible AI practitioners have experimented with various methods of removing historical patterns of gender and racial bias from data, although some scholars argue for the need to go beyond debiasing.
  3. Decent work. Responsible AI practitioners should remove worker exploitation from the AI supply chain and ensure fair working practices are in place.
  4. Transparency. Responsible AI practitioners can "open up" their processes, being transparent and enabling forms of oversight and accountability.
  5. Algorithm audits. Responsible AI practitioners can commit to algorithmic accountability and submit themselves to human rights audits of their AI systems.
  6. Do no harm. Many responsible AI initiatives apply the precautionary principle, going slowly by design and adopting and implementing "do no harm" guidelines for the use of AI.
  7. Human in the loop. Removing humans from automated processes can result in dehumanized development experiences, bias and discrimination. One form of responsible AI is to retain human judgement and human interfaces in development processes at all times.
  8. Participation and inclusion. Many responsible AI initiatives focus on developing AI solutions with affected populations, aiming to find ways to ensure meaningful participation of affected groups at each stage of the project cycle.
  9. Decolonizing AI. This area is relatively undeveloped. Efforts are being made to build AI hubs in the majority world and to expand AI training and education in underserved communities. However, the critique of AI's coloniality goes beyond numbers, and there is much still to do in this area.
  10. Climate impact. While some initiatives present AI as a solution for building climate resilience, AI can also accelerate global warming. Much more thought is needed on reducing the net carbon emissions and environmental footprint of the AI industry.

Use technologies known to do no harm

The harms of AI use are already being experienced and are well documented, while the work to mitigate and overcome AI's harms and injustices is still at an early stage. To avoid causing harm or reputational damage, development funders and humanitarian agencies must apply the precautionary principle. They should not experiment with AI on vulnerable groups or marginalized communities.

Until the 10 issues identified above are addressed, funders and agencies should focus their efforts on advancing the routes to responsible AI, or use only technologies known to "do no harm."

If you are involved in developing digital development practices, policies and strategies and are interested in exploring the role of AI, Tony Roberts co-convenes a short course on inclusive digital transformation in international development.
