Biden’s top advisor on AI and authoritarianism

The big battle over whether and how to regulate AI is finally here. President Donald Trump vowed earlier this week to block states from trying to regulate the technology, and according to a draft executive order leaked Wednesday, the administration would punish states that pass their own AI laws. State legislators and members of Congress, including Georgia Republican Rep. Marjorie Taylor Greene, are now pushing back.

This fight has been a long time coming. Lawmakers have put forward countless proposals to regulate artificial intelligence, but no significant legislative package has passed. The Biden administration issued major executive orders on the technology, but the Trump administration attacked and ultimately rescinded many of those actions.

“Despite very broad bipartisan support, the federal government has failed to take even minimal action, for example to manage risks and harms to children, which is one thing we can all agree on,” Arati Prabhakar, who directed the Defense Advanced Research Projects Agency (DARPA) during the Obama administration and the Office of Science and Technology Policy during the Biden administration, told Fast Company. “It just makes no sense to oppose any action at the federal level while also saying the states shouldn’t do anything because the federal government should be the one to do it.”

Fast Company senior writer Rebecca Heilweil spoke with Prabhakar, who also submitted an extensive brief defending Congress’s ability to support scientific research amid a federal funding crunch, about where AI regulation stands today and what continued advances in the technology mean for the future of American democracy, governance, and well-being. This interview has been edited for clarity and length.

The administration has made clear that it does not believe state-level AI regulation is appropriate, and it continues to argue that any regulation should happen at the federal level. That clearly benefits some AI companies. What do you make of that?

States have been very proactive. Many states have considered multiple bills. But overall, most of what has actually been enacted are transparency measures. It’s a start, but a pretty small one.

I think we are a long way from having grappled with this technology and steered it in the right direction. It’s ridiculous to pretend that the federal government is going to accomplish that without the states.

The Trump administration has rescinded Biden’s major executive order on AI. What has the impact been? (Editor’s note: Biden’s executive order on AI, signed in October 2023, gave federal agencies a variety of new responsibilities related to the technology and guidance on how to use it.)

The actions this administration has taken on many fronts are deeply worrying. They have plunged the country into a national crisis. The AI front is less dramatic. While the rescission is being positioned as a major, dramatic change, much of the executive order’s implementation under President Biden was already in place. I’ve even seen them take credit for improving the performance of federal departments and agencies through the smart use of AI.

The bigger problem is that this administration is failing to do two things we need as a country to get AI fully on the right track. While the market does all the experimenting to figure out where the business productivity applications are, there are two public roles that are not really being addressed. One is managing risk and harm; the other is actively pursuing AI for public purposes.

That’s where we fall short. At a time when the most powerful technologies of our time are proliferating, this government is not stepping up.

How concerned are you about people forming deep psychological relationships, even romantic or sexual relationships, with chatbots?

To me, it’s part of the distortion of reality that began in the social media era. And by the way, it was AI in the social media era, too, right? It was AI behind the scenes deciding what was served to you. Now things are even worse, with AI right in front of us in chatbots and image generators. I think this is something to be very concerned about.

The harms range from polarization driven by misinformation and disinformation to these parasocial relationships. There have been some truly tragic incidents, even suicides, resulting from interactions that left people in genuinely dangerous and vulnerable situations, with dire consequences.

AI has sparked conversations about cognitive offloading. We often talk about calculators: sure, we’re not that good at doing calculations in our heads anymore, but in general, automating computation has been good for our overall intelligence. Still, many people worry about the prospect of outsourcing thought to these platforms.

I think a lot about the calculator example. There’s a difference between relying on a calculator to do calculations, which we all do, and not understanding what fractions mean. To deal with the world, you need to understand what fractions mean. I think that’s the kind of sorting out we need to do with large language models.

I saw a Gallup survey asking students about their attitudes toward AI. I was really surprised to learn how anxious high school students are about it. Part of their anxiety is that it’s not clear when they can and cannot use it at school. But part of it is also concern about their own critical thinking skills. I love that they had enough critical thinking skills to worry about that.

Is there a risk that focusing too much on AI competition with China will prevent us from developing better regulation of this technology within the United States?

That claim is being used to evade regulation. But I think we need to be really clear that what’s happening now is that countries around the world are competing to use AI as a tool to build a future that reflects their values.

I do not want to live in a future defined by the values of this authoritarian Chinese government. When you look at their human rights abuses, you see that they are using AI to build a deep surveillance state. . . . If you look at their military aggression and the potential to use AI offensively in a military context. . . . I don’t think that’s the world most people want to live in.

It certainly does not reflect long-held American values. Of course, it’s very disturbing to see the Department of Homeland Security employing some of these tactics here. This is a huge red flag about what is going on with the authoritarian push in our government.

But again, the core question is: How do we make AI serve people and build a future that reflects our values, centered on people, their creativity, and their ability to chart their own course, rather than one driven by kings and dictators? That’s how I want to see AI used.

My impression is that both the Biden and Trump administrations have said they are genuinely interested in the government’s use of artificial intelligence. But you’ve also said there are concerns it could be used to move the federal government closer to an authoritarian approach.

It all depends on how you use it. Under the Biden administration, the Department of Homeland Security went all-in on initiatives such as using facial recognition for TSA PreCheck and Global Entry. These are very narrowly defined applications that involve comparing a fresh camera image to a database, for good reason. If you’ve ever gone through TSA PreCheck or Global Entry, you know how proper, respectful use of the technology can speed up those processes and make them much better.

This is in stark contrast to the horror stories of police across the country using off-the-shelf facial recognition technology and purporting to make matches from grainy video, of a convenience store robbery, for example. Truly poor and completely inappropriate use of flawed facial recognition technology has led to the wrongful arrests of Black men. In one case, the crime was committed in a state the man had never set foot in. That is completely unacceptable.

So the difference between using these technologies wisely and appropriately, while respecting our core values, and simply using them recklessly without thinking through what it means for the society we want to live in, is all the difference in the world.

What do you make of the rise of companies like Anduril and Palantir who are seriously interested in selling AI and automation platforms for use on the battlefield and for defense purposes? What should we think about that?

I would like to broaden the scope of the question and say it’s not just about the battlefield. These are technologies that are being deployed against Americans here at home. So this is a very important question. And the central question is whether we can democratically control how we use technology. In the wrong hands, these technologies also have the potential to violate Americans’ privacy in dangerous and frightening ways.

We’re seeing it in some of the things happening right now. And that is never acceptable. Companies tend to take the position of “we’re just providing the technology.” But the deployments they enable are contributing to these truly dangerous abuses. This is an example of the loss of democratic control over very powerful new capabilities.

We hear a lot about the AI race. Think about the space race: there was a race to get someone into space, then a race to get someone into orbit, then a race to get someone to the moon, and then humans walked on the moon. When does the AI race end? When people talk about needing to be first in the AI race, I wonder: first to what?

That’s the whole ball game – from beginning to end?

What I keep thinking about, and what I really think we need to focus on, is what AI can do to fundamentally change people’s lives. In 2024, when I was still in the White House, I held a conference called “AI Aspirations” that highlighted seven big ambitions for AI. These ranged from closing educational gaps for children, to getting better medicines faster, to better weather forecasts, to new materials for advanced generations of semiconductor technology, to changing transportation infrastructure and making it safer.

At the moment, the only talk about AI is really about LLMs and maybe image generators. But what we’re really talking about is the more general ability to train AI models on very different types of data. We live in such a data-rich world, so it’s not just language: it’s sensor data, scientific data, administrative data, and financial data. It’s all the data generated as you click and move around the web.

Another point that’s important to me is that this doesn’t happen just because a company commercializes a product. It takes sustained research. Getting the datasets we need to build the weather and traffic models we need: those are public responsibilities. And ultimately we need regulatory advances, too, not just to invent things faster, but so the regulatory process can sort out what is safe and effective, in medicines for example.

We are now at a stage where this powerful technology is just beginning to take off. There has never been a more important time for the federal government to step up. And instead, we are retreating from many of the other things that determine who will truly succeed in AI.
