
Like millions of people around the world, I have spent much of the past year living with the existential threat of AI.

And I’m not new to technology. I have worked in the digital media field since the 1980s. My company created one of the first multimedia scientific journals and won a U.S. Presidential Design Award. I thought I understood how innovative technologies arrive, accelerate, and ultimately become entrenched in our daily lives.

Since then, AI has permeated every aspect of our lives.

At first it felt miraculous. Suddenly, an energetic assistant joined me to interpret medical research, hone my writing, manage a complex project in rural India where I live part of the year, and even generate new ideas for my own artwork. It was like having a great intern who never sleeps. Over time, though, my feelings toward this “miracle” changed.

As I cared for my wife during a serious medical journey, I began to notice how algorithmic systems were silently shaping which information surfaced first, which treatment options appeared, and which questions were never asked or never answered. In many cases, no one explained these decisions. And it was never clear who was responsible for them.

That’s when AI stopped feeling like just a tool.

We are entering a world where systems make decisions for us. The benefits are huge. But the downside is already there. It’s about the information we see, who has access to opportunities, and how quickly these systems evolve, often faster than we can understand or manage them.

How do you analyze something that changes every day? Especially when the media is filled with breathtaking headlines about the latest AI breakthroughs. Most of us have little time or attention to keep up, let alone decide how to live with this technology. Meanwhile, regulations and public understanding are lagging further behind.

That urgency forced me to take a step back and ask a simpler question.

As AI becomes the invisible infrastructure of our lives, what basic rights will people need?


Timing is critical. As we approach the 250th anniversary of American democracy, we once again face a concentration of power that can either strengthen human freedom or quietly erode it.

This is not a robot apocalypse fantasy story. It’s about everyday fairness.

Today, AI systems are trained on the work and data of millions of people who do not receive any compensation. Workers are being forced out of their jobs without meaningful transition support. Algorithms influence decisions about jobs, loans, healthcare, and housing, often without transparency or appeal.

To me, a full-fledged framework for AI needs to address four things.

First, truth: Systems that shape what we see and hear must not distort reality without accountability.

Second, fairness: Creators should be compensated when their work is used to train AI. When automation changes jobs, workers need to be protected. The benefits of AI should not accrue only to those already in power.

Third, transparency: People have the right to know when AI is involved in making decisions about their lives and to challenge those decisions if they are wrong.

Fourth, human safeguards: A human alternative must always exist for high-stakes decisions, especially in medicine, justice, finance, and education. Children and vulnerable communities need special protection.

Compare this to what exists today. Governments have announced principles. International organizations have issued guidelines. Regulators are trying to catch up. All of this is important. However, most frameworks still avoid the hardest questions: accountability, economic justice, and who is ultimately responsible when AI causes harm.


This moment feels uncomfortably familiar. New power structures are forming faster than democratic oversight can keep up. Enterprise AI profits are exploding. Public trust is waning.

But here’s what gives me hope.

We are not powerless.

The future of AI is not predetermined by code. It is shaped by the rights we claim, the rules we insist on, and the values we refuse to abandon.

My new book, Before AI Makes a Decision, explores these questions in more detail. It offers a practical way to remain human within systems that increasingly decide on our behalf. But the core issue is simple. The question is not whether AI will change the world; it already has. The question is whether we will lead that change, or wake up one day to find it was decided for us.

Now is the time to draw a clear line.

Before it’s too late.


Payson R. Stevens is a science communicator, author, and artist whose work spans technology, public communications, and human-centered design for more than 50 years. Recipient of the U.S. Presidential Design Award for pioneering digital science media.
