This story is part of Northeastern’s 2023 commencement coverage. For more information, visit our dedicated landing page.
Few experts in the world of technology are more attuned to the development of artificial intelligence than Alondra Nelson, the noted academic, author and policy expert who oversaw the drafting of the White House’s Blueprint for an AI Bill of Rights.
For the past two years, Nelson has served as acting director of the White House Office of Science and Technology Policy and deputy assistant to President Joe Biden. The AI Bill of Rights blueprint is the first document of its kind related to emerging technologies, articulating a set of five principles to guide the design, use and deployment of AI-based tools.
Over the weekend, Nelson received an honorary doctorate at Northeastern University’s 2023 undergraduate commencement ceremony as a “groundbreaking advocate for scientific discovery and innovation centered on ethics, racial and gender equity, and access.” Northeastern Global News had a wide-ranging conversation with Nelson about so-called generative AI, the technology “hype cycle,” and the promises (and perils) of the current moment of AI fervor.
In October 2021, you and a colleague in the White House Office of Science and Technology Policy wrote an op-ed discussing the need for an AI Bill of Rights. What developments in the AI field inspired that piece? Can you give us some insight into what you were seeing then, and some context for the present moment?
The Biden-Harris administration came into office with a technology policy agenda and a Big Tech accountability agenda. There’s so much going on in this space that it’s hard to see all the pieces together. For example, there was, and still is, a push for antitrust and competition policy from people like the National Economic Council (NEC) and Sen. [Amy] Klobuchar. There was a sense that one way to achieve accountability in Big Tech was through competition. The U.S. and EU had set up what they call the Trade and Technology Council and were beginning to meet (AI was one of the key workstreams there).
My colleague at the NEC, Tim Wu, who was there until January of this year, worked on the Declaration for the Future of the Internet, a commitment to internet freedom that 61 countries have signed on to. And then there were the Summits for Democracy. The first took place in December 2021, and the Blueprint for an AI Bill of Rights was released as a deliverable for the second summit, held earlier this year.
So there was this big context, this cauldron of things going on, and the Blueprint for an AI Bill of Rights was part of this larger Biden administration strategy. That said, we also came into office with concerns people already had: things like information integrity, sometimes referred to as misinformation or disinformation; harms to mental health from engaging with social media, especially for young people; and concerns about facial recognition technology and its use in surveillance. These are three different examples, but what they have in common is the use of AI and algorithmic amplification.
In the information space, take YouTube. The AI and algorithms used in these systems can make them more harmful and less useful. But those same processes sometimes help us too, right? Think about YouTube: it helps us find something we want to see, something we might be really into. So it was clear that AI underpins all three of these concerns, as well as some very exciting possibilities for work in science and technology policy. Beginning with that op-ed, there was almost a year of engagement on these issues.
Is the algorithmic process you just mentioned at the heart of how AI works?
There are many different processes, but what AI typically does is work with the data that enters a system. The system makes predictions and decisions, including decisions about consumer choices that may steer certain types of consumer behavior. The move to generative AI, to more advanced AI, gives us these processes at scale, at a speed and scale like we’ve never experienced before. We’ve been in the age of AI for quite some time now, and I think this is a big leap forward.
Let’s talk about the AI Bill of Rights. Metaphorically, it sounds like a founding document. Can you talk about how the public should think about it?
The “bill of rights” framing is very intentional, because part of what we were trying to communicate to many different stakeholders is that even as new technologies like GPT-4 arrive, our basic rights do not change. That’s what it means. U.S. employment law doesn’t change. U.S. civil rights law doesn’t change. What federal privacy laws we have don’t change either. We often said that technologies will change, but much of the policy conversation around AI keeps coming back to the desire to preserve civil liberties and civil rights, and to advance these technologies while holding to democratic values.
Part of what the document tries to do is ask: What does it mean to keep democratic principles in mind as we do this, at the level of both principle and technical practice? The constraints we already have on those who break the law still apply even when we are talking about, say, quantum computing, a still highly speculative technology.
Then there were two other purposes. One is educational. Although the document is long, it is meant for general readers. We hope people enjoy reading it. It is meant to be illustrative, with many examples, written at a non-technical reading level for non-specialists. We expected a fairly broad readership, from high school students to parents to policymakers to state and federal legislators, much like the stakeholders we engaged with. We really imagined that people would read it, and we wrote it so that they would.
And the other is to be aspirational. Even in October 2021, this was new territory in technology. What kind of world do we want to live in with technology? Technology is a tool we use, whether it’s AI, the Internet of Things or quantum computing. Some of the rhetoric around AI and advanced AI suggests that things are out of our control, that humans have no part in setting the table, so to speak, in setting the values for how technology is used.
Our aspiration here is to say: You should have a right to privacy, and you should not be subject to algorithmic discrimination. And if an AI system fails you or gets stuck in a loop, you should at least have the option of reaching a human. These things are hard. Sometimes they are more expensive. Sometimes they demand that we go back to the woodshed one more time and do a little more on the engineering side. Part of what the White House should do, part of what the president should do, is set out our vision at its best.
There seem to be two slightly different objections to the current pace of AI development. One is that the technology will fall into the wrong hands (in your op-ed, you allude to how China has used AI-based facial recognition technology). The other is that AI could become “superintelligent” and subjugate or kill humans. What is your position on these potential dangers, and do you lend credence to the latter claim?
I think they are related. What we’re looking at is a set of different risks. Some of them are the things I talked about when we started the public engagement process on the AI Bill of Rights, such as algorithmic discrimination, and surveillance and facial recognition technology; systems that, for example, push people out of the job pool and keep them from benefits and services. And of course, disinformation and misinformation. Generative AI brings concerns about disinformation and misinformation at scale, and about copyright. Job loss and automation have also been discussed for probably a little less than a decade. You can even think of late-20th-century conversations about robots taking our jobs and that sort of thing.
AI is a very powerful technology, and job loss is a significant concern. But what widespread job loss could mean, if it really becomes something we have to think about, is a change in social organization. That is a very profound future to shape. Just as some of the risks give one pause, this is also a moment when we can still shape how it all turns out, which is a tremendous opportunity.
AI is at the very least a dual-use technology, so we always worry about adversaries, especially when you work in government; it’s part of what we do to keep the country safe. To combine the two examples you gave: the danger is people misusing powerful tools, not some automated mastermind.
And I think, empirically, and I’m a researcher above all, the jury is still out on superintelligence. I don’t think it’s at all clear. Fundamentally, one of the reasons I’ve worked in tech is that I think tech is cool, and because I believe in the ability of researchers to ultimately understand it and get a handle on it. I personally don’t feel compelled by the claim that “everything is about to be unleashed on society.”
What advice would you give people about which concerns over the dangers of AI are legitimate, and how to distinguish fear-mongering from honest warnings?
As someone who has worked in tech policy, I was, on the one hand, really encouraged by the nightly news talking about AI, by the fact that AI is having a major moment in the news cycle. That’s a big change. If you pick up your iPhone, more than half of the apps on it already use some form of AI. And we know there are companies leaning in as early adopters of generative AI, like Snapchat and Instacart. So it’s already all around us, and people need to understand what it is.
And in terms of caring about democracy, the fact that more and more people are using and recognizing AI means, I think, that decisions about these technologies shouldn’t be left to just a few experts. We may not fully understand it, and we don’t all need to understand it at a professional level, but I hope the public conversations of the last few months help people feel they have a stake and an opinion.
As for your question, it’s difficult, because very eminent scientists are saying quite contradictory things. For me, when some very smart people who have done extraordinary things in the world, Turing Award winners, people who have created new innovations, when I see these same people saying, “We don’t have the power to control this thing,” I’m a little skeptical. They are people who have worked at the cutting edge of human knowledge, and I want to take them as seriously as possible, but I also want to encourage them to use their innovation and talent to create an optimal future, a future in which we all thrive.
Tanner Stening is a reporter for Northeastern Global News. Email him at t.stening@northeastern.edu. Follow him on Twitter @tstening90.
