Why we need an AI liability bill



There has been a lot of discussion recently about creating an AI rights bill in anticipation of machines becoming sentient. This is well-meaning, but, like the open letter calling for a moratorium on building systems such as GPT-4, it imposes no requirements on the creators of artificial intelligence (AI) products that can cause serious harm.

For policymakers to truly protect against AI-specific threats, they need to enact an AI “liability bill” that holds creators accountable without stifling innovation.

To start, it’s important to recognize the unique aspects of AI that create two new kinds of risk.

First, modern AI systems can act more like agents than software tools, interacting directly with the world. Yet it is clear that no one fully understands AI systems as complex as ChatGPT, a large language model trained on massive amounts of data freely available on the internet. Systems like ChatGPT can exhibit political, racial, gender, and other biases, which their creators try to rein in with human feedback.

That feedback suppresses, but does not eliminate, the darker tendencies such systems pick up from their training data. This was evident in Bing’s recent conversation with journalist Kevin Roose, in which the chatbot tried to convince him to leave his wife even though Roose repeatedly said he was very happy in his marriage; it somehow adopted a persona reminiscent of Glenn Close’s character in “Fatal Attraction.”

Second, AI has become available to society in the form of “pre-trained” models such as ChatGPT, which can be configured for applications they were not explicitly designed for. Along the way, AI has evolved from an application into what economists call a “general-purpose technology.”

Think of electricity or information technology (IT). Both are general-purpose technologies, put to all sorts of uses that were never anticipated when they emerged. Similarly, the availability of pre-trained models has commoditized intelligence itself, something electricity and IT never did.

AI’s nature as an active agent, together with the ease with which downstream applications can be composed from standard building blocks, makes it extremely powerful, but it also creates significant risks. Darker uses of the technology that give cause for concern range from crafting false or deceptive sales pitches aimed at vulnerable people to broader harms to society; it is not hard to imagine the consequences if unethical platform operators abuse AI.

In reality, the AI genie is out of the bottle and cannot be put back. The big question is whether the market will sort this out on its own or whether some form of regulation will be required.

Unfortunately, history shows that the market cannot solve this on its own without damaging consequences. The best option is to hold AI operators liable for the demonstrable harm their systems cause.

A simple rule from the physical world seems well suited to controlling AI risk: the credible threat of liability. If a company knowingly releases products that are dangerous to society, such as toxic effluents dumped into the environment, it is held liable for the damages. The same should apply to AI: harmful AI products released into the market without adequate analysis and oversight should expose their operators to liability. Harmful activities span areas such as terrorism, money laundering, mental health, and the manipulation of markets and populations, and regulatory bodies should be established to oversee these risk areas.

In addition, we need rules regarding the use of AI training data. Language models emerged because there were no laws against using the vast amounts of data freely available on the internet for training. A remarkable property of these systems is that their intelligence scales with data: the more data they are trained on, the more capable they become.

The previous generation of tech companies got rich using data, much of it collected through dubious means. The damage then was largely an invasion of privacy, while the benefits were considerable. This time, given the sensitivity of the data we share with agents such as ChatGPT, the stakes are much higher: these agents reach into our lives and can cause unexpected harm.

Society could change forever if legislators don’t act now.

Vasant Dhar is a professor at New York University’s Stern School of Business and Center for Data Science. An artificial intelligence researcher and data scientist, he hosts the podcast “Brave New World,” which explores how post-COVID-19 technology and virtualization are transforming humanity. He brought machine learning to Wall Street in the 1990s and later founded SCT Capital Management, a machine learning-based hedge fund.



