“AI is not intelligent at all” – experts warn of the global threat to human dignity

A robot representing artificial intelligence technology. Credit: Shutterstock

Opaque AI systems risk undermining human rights and dignity; global cooperation is required to ensure they are protected.

A new study from Charles Darwin University (CDU) has found that the rise of artificial intelligence (AI), while changing how people interact, poses global risks to human dignity.

Dr. Maria Randazzo, the study's lead author from the CDU law discipline, explained that AI is rapidly reshaping Western legal and ethical systems, and that this change is eroding democratic principles and reinforcing existing social inequalities.

She said current regulatory frameworks often overlook basic human rights and freedoms, including privacy, protection from discrimination, individual autonomy and intellectual property. This shortfall is largely due to the opaque nature of many algorithmic models, whose operations are difficult to trace.

Black box problem

Dr. Randazzo described this lack of transparency as the “black box problem,” noting that decisions produced by deep learning and machine learning systems cannot be traced by humans. This opacity makes it difficult for individuals to know whether and how an AI model has violated their rights and dignity, and prevents them from effectively pursuing justice when such violations occur.

Dr. Maria Randazzo
Dr. Maria Randazzo found that AI is reshaping Western legal and ethical landscapes at unprecedented speed. Credit: Charles Darwin University

“This is a very important issue and it's only getting worse without proper regulations,” Dr. Randazzo said.

“AI is not intelligent in any human sense at all. It is a feat of engineering, not cognitive behaviour.

“It has no understanding of what it does or why; there is no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

A global approach to AI governance

Today, the world's three dominant digital powers (the US, China and the European Union) each take a markedly different approach to AI, favouring market-centric, state-centric and human-centric models, respectively.

Dr. Randazzo said that the EU's human-centric approach is the most promising path for protecting human dignity, but without a global commitment to that goal, even this approach falls short.

“Globally, if we don't anchor the development of AI in what makes us human, we risk creating systems that reduce and flatten humanity into mere data points rather than improving the human condition,” she said.

“Humanity must not be treated as a means to an end.”

Reference: “Human dignity in the age of artificial intelligence: an overview of legal issues and regulatory regimes” by Maria Salvatrice Randazzo and Guzyal Hall, 23 April 2025, Australian Journal of Human Rights.
DOI: 10.1080/1323238x.2025.2483822

This paper is the first in a trilogy Dr. Randazzo is producing on the topic.




