Can AI be imbued with ethics?



To secure a humane future, we need to address the key question: Can AI be trained to behave ethically?

There are no easy answers. Machine ethics is elusive because AI is global in scope, its rules are lax, and little accountability is required.

The risks of unethical behavior can be devastating. If powerful AI models come to outperform human reasoning, our species faces an uncertain future.

The debate over what constitutes responsible behavior has come to the fore in recent clashes between the Department of Defense and big technology companies. The leading startup Anthropic rejected a bid to remove safety guardrails so that AI could be deployed in certain classified military operations without sufficient human participation. In response, the Pentagon argued that private companies should not dictate when the government uses AI.

While AI’s capabilities in biology have the potential to save countless lives, the release of toxins could be devastating. And without input from human doctors, life-altering diagnoses can be scientifically incorrect and morally irresponsible.

Anthropic has already been embroiled in litigation over its model, Claude, which had to be settled in court. Modern AI systems are adept at finding flaws in software, and such flaws can wreak havoc on critical global infrastructure such as the internet and power grids. But how do we punish AI for criminal behavior?

Technology utopians have great faith in AI; technological dystopians, by contrast, despair over its unintended consequences.

Proprietary knowledge held by big tech companies reduces transparency, and the government withholds technical information to keep it from bad actors. The Trump administration is considering overseeing the distribution of AI tools but remains opposed to significant regulatory action.

Another complication is that the top echelons of society benefit the most from the AI industry while the majority bears the costs, especially in the Global South. The widening AI divide leaves countries with weak digital infrastructure more vulnerable to powerful companies like Google and OpenAI, whose data centers ingest minerals, consume energy and water, and spit out pollution.

That said, three ethical difficulties stand out: increased surveillance, technological unemployment, and autonomous warfare.

First, the Internet allows companies to extract vast amounts of personal data and direct human activity for profit. Shoshana Zuboff, professor emeritus at Harvard Business School, showed how human experience can be used to create and sell predictive products that shape behavior. The power of AI could thereby erode privacy, threaten democracy, and undermine freedom.

The second stems from technology’s role in liberating humans from labor. Job losses due to automation are widely experienced in fields such as accounting, art, film production, healthcare, programming, engineering, and architecture, and technological unemployment is only partially offset by new niches in the information economy. Because work can be a source of meaning in life, we must ask whether it is ethically acceptable to allow this destruction of well-being.

Third, lethal autonomous weapons systems make decisions with little or no human involvement. The United States already employs pilotless drones in military operations. Killer robots go further, removing humans from the kill chain and even programming other machines.

Israeli historian Yuval Noah Harari reminds us of the possibility of technological totalitarianism. Superintelligent machines may go to war with each other, fall into the hands of rogue cyber-attackers, or even destroy the forms of human intelligence they embody. Writers of novels and films are unleashing their imaginations to warn the public about the terrifying possibilities of digital dictatorship.

A few proposals have been introduced to check the power of algorithms and AI, such as regulatory reforms to protect people’s rights, audits by third-party inspectors, and review boards. Meanwhile, pushback from digital activists against ethically questionable practices, known as “techlash,” is growing. Workplace strikes, petitions, and protest rallies have become common.

There is a fleeting opportunity to infuse AI with the rigorous ethical standards of democratic governance before it is too late. There are signs that people across countless occupations and walks of life are demanding that advanced technologies be instilled with a code of ethics. To make this scenario a reality, youth and Gen Z movements can provide the impetus, just as they have spurred political change in countries from Bangladesh, Madagascar, and Nepal to Peru.

I’m still hopeful. Forging a consensus on whose AI ethics prevail is a fight that can be won.

Boulder resident and Camera columnist Jim Mittelman is an educator, activist, and author. His books include The Globalization Syndrome: Transformation and Resistance; Implausible Dream: The World-Class University and Repurposing Higher Education; and Runaway Capitalism.



