What are the AI risks we should really be concerned about?



There have been some very public and difficult discussions about artificial intelligence over the past few weeks, many of them coming from people who have made AI their life's work. Geoffrey Hinton, known as the "Godfather of AI," recently quit his job at Google to embark on a sort of media tour warning about the dangers of the technology. And it's not just him. Elon Musk and others signed an open letter calling for a moratorium on AI development, and theorist Eliezer Yudkowsky wrote an essay in Time magazine arguing that generative AI could harm or even end humanity.

On Friday’s episode of What Next: TBD, I spoke with Meredith Whittaker, president of the Signal Foundation and co-founder of New York University’s AI Now Institute, about what the doomerism discourse misses about the real threats of AI. Our conversation has been edited and condensed for clarity.

What are your thoughts on the concerns raised by Geoffrey Hinton and others regarding the safety of AI?

Meredith Whittaker: The risks I see associated with AI are that only a handful of companies have the resources to build these large-scale AI systems, and that those companies are driven by profit and growth, not necessarily the public interest. The concerns raised by Geoff and others often focus on hypothetical future scenarios in which these statistical systems somehow become superintelligent, and I see no evidence to support those claims. It’s not that I doubt people sincerely hold these beliefs. What concerns me is that looking out at a broad, speculative future distracts from what we need to worry about now: the power these systems put in the hands of corporations.

What we call machine learning or artificial intelligence is basically a statistical system that makes predictions based on large amounts of data. So when we talk about these companies, we’re talking about data collected through surveillance, or through variants of the surveillance business model, that is used to train these systems. Those systems are then claimed to be intelligent, or capable of making important decisions that shape our lives and opportunities, even though the evidence behind those claims is often very flimsy.

The data fed into these systems is collected by crawling millions of websites, everything from news sites to hate speech. Why does that matter?

That data is packaged into these machine learning models and used in highly sensitive ways with little accountability and little testing, while the companies looking to profit from it prop everything up with wildly exaggerated marketing claims.

Your work with the AI Now Institute argues that nothing is inevitable when it comes to artificial intelligence. What do you mean by that?

Part of the narrative of inevitability has been built over years by a clever trick that conflated the products these companies were creating (email, blogging, search) with scientific progress. The message, implicit or explicit, was: don’t put your finger on the scale of progress. Let the technologists handle the technology. For a long time, that circumvented regulation. It intimidated people without computer science degrees, because nobody wants to look stupid. That, in large part, has brought us to where we are today.

We are in a world where private corporations hold immeasurably complex and detailed dossiers on billions of people, and increasingly provide the infrastructure for our social and economic institutions. Whether it’s offering so-called AI models that outsource decision-making, or cloud services that ultimately host highly sensitive information, these functions are centralized, again with little transparency and little accountability, in the hands of a few companies. That’s not an inevitable situation. We know who the actors are and where they live. We have some understanding of what interventions would be sound if we want to move in a direction that better supports the public good.

What are you most concerned about as we embrace AI?

There are many concerns we have to hold at once; this is not a zero-sum game. And of course data bias, and the fact that these systems take the shape of their source data, is a big problem. Nitasha Tiku of the Washington Post did a great job explaining what actually goes into creating ChatGPT: where the billions of sentences it learned to predict the next word from actually came from, and the dangerous material that showed up in them. So data is a big concern. Who gets to author the data? Who decides what it means and how it forms the implicit worldview parroted back through AI systems? There are also major concerns about who will end up using these systems, who will benefit, and who will suffer.

These systems require so much expensive computing power and so much data that they can really only exist in the hands of very wealthy companies or very wealthy individuals. So what is the strategy behind making generative AI generally available?

It costs billions of dollars to create and maintain these systems from start to finish, and there is simply no business model that makes ChatGPT equally accessible to everyone. ChatGPT is an advertisement for Microsoft. It’s an ad aimed at studio heads, the military, and others who might actually want to license this technology via Microsoft’s cloud services. We already know who will actually be able to use this in the end, and who the business model targets. It’s not a democratically distributed technology. It will follow the matrix of inequality that currently shapes our world.

There is a kind of dichotomy in the criticism of generative AI. On the one hand, Geoffrey Hinton, Eliezer Yudkowsky, and others argue that there is an existential threat here. On the other hand, people like Timnit Gebru, Deb Raji, Joy Buolamwini, and perhaps you, say the problem lies as much in how these things are built and trained as in anything else. I saw Hinton say in an interview on CNN that those concerns weren’t as serious as the existential ones. What are your thoughts on these two different ways of thinking about how these models get out into the world and what harm they can do?

What concerns me about some of these so-called existential arguments, the most existential arguments, is that they implicitly claim we have to wait until the people who are currently most privileged, who are currently not threatened, are actually threatened before we consider the risk big enough to care about. Too many people are at risk today: low-wage workers, historically marginalized people, Black people, women, disabled people, people in countries threatened by climate change. Their existence is already being threatened or harmed in some way by the deployment of these systems. Look at these systems as used by law enforcement. A few months ago there was an article in the New York Times about a man who was jailed based on a false facial recognition match. That cuts deeply into a person’s life. The man was Black. People like Deb, Joy, and Timnit have documented time and time again that these systems are more likely to misidentify Black people. In a world where Black people are disproportionately criminalized and policing is unequal, that will be harmful. So my concern is that if we wait for an existential threat that touches even the most privileged people in the world, we are implicitly saying, maybe not out loud, but this is the structure of the argument, that threats to minorities don’t count, and that the harm happening now doesn’t matter until it matters to the most privileged people in the world. It’s just another way of standing by while these harms spread. That’s my core concern with focusing on the long term rather than the short term.

So what’s the next step? Cleaning all of this up?

In my view, the next step would be something like a victory for the Writers Guild of America, which would show that clear guardrails can be put in place around the use of these systems, and that those guardrails don’t need to come from appealing to those already in power. They actually come from building power in the workplace and in the community. There are also some interesting proposals for more grounded regulation. I took note of Lina Khan’s recent New York Times op-ed calling for structural separation of these companies. I’d also point to the really grounded proposals presented by Amba Kak and Sarah Myers West of the AI Now Institute in their 2023 landscape report, in particular the proposal to use privacy law as a lever to thwart some of this data-centric AI development. Because, of course, we have to come back to this core reality: AI is built on surveillance. It is a product of the surveillance business model.

Where is the hope for curbing this in the policy arena? Is it the statement from the FTC? Because Congress is certainly doing nothing.

It’s complicated. We can’t think of laws, regulation, and policymaking as isolated from everything else. We know these companies can spend hundreds of millions of dollars lobbying and supporting astroturf organizations to make their voices heard. We know that in a post-Citizens United U.S., it’s very difficult to win without a huge amount of money, and that money can end up being donated secretly. So we’re in an ecosystem where policy doesn’t spring fully formed from the forehead of Zeus. Policymaking is subject to a lot of influence. Some people take this seriously, but that doesn’t mean there isn’t a lot of pushback. We still need people on the ground saying, “No, we don’t want facial recognition in our community.” We need people lobbying for privacy. California’s privacy laws need to prove themselves and set a standard. We have to recognize that there is a lot of competition to apply that pressure.

How would you like people to think about things like the sea of AI headlines?

You’re not the only one feeling overwhelmed. It really is confusing. The headlines are full of claims about what these things can and can’t do. If there’s one thing I can say, it’s to always ask who benefits and who might be harmed. Whenever you see a headline about OpenAI, you should realize it’s talking about Microsoft. When you see headlines about AI, remember that only a handful of companies in the world, those based in China or the U.S., have the resources to develop it.

And remember, AI is not magic. It’s built on concentrated computing power, concentrated data resources generated through surveillance, and the concentrated capabilities of these companies. Again, we know where they live, we know where their data centers are, and it is entirely possible to thwart these technologies if we have the will. So it is not out of our control. You don’t have to be a computer scientist to hold an informed opinion about how these systems are used, and for whom and on whom they are used.

Future Tense is a partnership of Slate, New America, and Arizona State University that investigates emerging technologies, public policy, and society.




