Experts explain how AI and ChatGPT are full of possibilities and dangers

Applications of AI


By now, you’ve probably tried ChatGPT. Even Joe Biden has tried ChatGPT. This week, his administration put on a big show of inviting AI leaders, including Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman, to the White House to discuss how to build “responsible AI.”



But maybe you’re still fuzzy about some very basic things about AI, and you hate to admit it.

No worries. We spent much of the spring talking to people working in AI, investing in AI, and trying to build businesses in AI, as well as to people who think the current AI boom is overhyped or dangerously misguided, and to people who believe real progress is being made. We made a podcast series about the whole thing, for Recode Media.

Below is a sampling of the insightful, and often contradictory, answers we got to some of those very basic questions. They’re questions the rest of the world will need to grapple with soon.

Read on, and don’t worry: we won’t tell anyone you’re confused. We’re all confused.

Just how big is the current AI boom?

Microsoft Chief Technology Officer Kevin Scott: I was 12 when the PC revolution happened. I was in graduate school when the internet revolution happened. I was running a mobile startup in the very early days of the mobile revolution, which coincided with the massive shift to cloud computing. This feels a lot like those three things to me.

Innovation Endeavors co-founder Dror Berman: Mobile was an interesting time. It gave us a new form factor that allowed us to take our computers with us. I think we’re now in an entirely different era: we’ve been introduced to basic intelligence building blocks that are broadly available, and that can draw on all the knowledge that is publicly available.

Gary Marcus, entrepreneur and emeritus professor of psychology and neuroscience at NYU: I mean, it’s certainly fascinating. I wouldn’t argue against that. I think it’s a dress rehearsal for artificial general intelligence, which will arrive someday.

But right now, there are trade-offs. These systems have some pluses: you can use them to write things for you. And there are some minuses: the technology can be used to spread misinformation, for example, and to do so at an unprecedented scale. That’s dangerous, and it can undermine democracy.

And I would say these systems aren’t very controllable. They’re powerful but reckless; they don’t always do what we want them to do. Ultimately, the question is: “Okay, we can build a demo here. Can we build a product that actually works? And what is that product?”

I think in some places people will adopt these systems and be perfectly happy with the output.

How can we create AI responsibly? Is it possible?

James Manyika, senior vice president of technology and society at Google: One thing we try to do is make sure the output isn’t toxic. In our case, we do a lot of adversarial testing of these systems. In fact, when you use Bard, the output you get from a prompt isn’t necessarily the first thing Bard came up with.

We run 15 or 16 variants of the same prompt and evaluate their outputs in advance for safety issues such as toxicity. We don’t catch everything all the time, but we already catch a lot.

By the way, one of the big problems we have to face is that this isn’t about the technology; it’s about us as a society. How do we decide what counts as toxic? That’s why we try to involve and engage communities in understanding these questions, and why we bring in ethicists and social scientists to study them. But they are real issues for us as a society.

Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating, because what they mean is putting this technology in the hands of many people. That’s not the same as giving everyone a say in how AI is developed.

Basically, I think the best path is a combination: appropriate regulation from the outside, so that companies are held accountable, plus internal tech-ethics officers who help companies actually meet both the letter and the spirit of those regulations.

And to make that happen, we need broad literacy, so that people can tell their elected representatives what they need. And I hope our elected representatives are familiar with all of this, too.

Scott: From 2017 to today, we have been rigorously building responsible AI practices. You can’t release AI to the public without a rigorous set of rules defining sensitive uses, and you need to be transparent with the public about your approach to responsible AI.

How worried should you be about the dangers of AI? Should you worry about worst-case scenarios?

Marcus: Airships were very popular in the 1920s and 1930s, until the Hindenburg. Everyone thought the people working on heavier-than-air flight were wasting their time. They said, “Look at our airships. They scale much faster. We built a bigger one, and it’s all going fine.”

So sometimes you scale the wrong thing. In my view, we are scaling the wrong thing right now: we are scaling a technology that is inherently unstable.

It is unreliable and untruthful. We’ve made it faster and given it more coverage, but it’s still unreliable and untruthful. For many applications that’s a problem. For some, it isn’t.

ChatGPT’s specialty has always been surrealist prose. It’s better at writing surrealist prose than it used to be. If that’s your use case, fine. But if your use case is one where errors are costly and the output needs to be honest and trustworthy, that’s a problem.

Scott: Thinking about these scenarios is absolutely helpful. But it’s more useful to think about them grounded in where the technology actually is: what the next step is, and the step after that.

I think we are still many steps away from what people are worried about. There are those who dispute that claim; they believe there will be uncontrollable, emergent behavior.

And we have research teams looking into the possibility of these emergent scenarios, and we watch them closely. What would true autonomy be? A system with a feedback loop that lets it participate in its own development and achieve superhumanly fast improvement. That’s not how current systems work, and it’s not what we are building.

Can AI be used in potentially high-risk environments such as medicine and healthcare?

Bender: We already have WebMD. We already have databases that go from symptoms to possible diagnoses, so you know what to look up.

Many people need medical advice and treatment but cannot afford it. That’s a societal failure. Similarly, many people need legal advice and legal services but cannot afford to pay for them. These are real problems, but throwing synthesized text into those situations doesn’t solve them.

If anything, it would exacerbate the inequalities we already see in our society. For those who can’t pay: well, good luck. You’re shaking a magic 8-ball, and it will tell you something that merely seems relevant to you.

Manyika: Yes, there is a place for it. If I’m exploring something as a research question, trying to understand a disease, that’s one thing. But if I needed medical help myself, I wouldn’t go to these generative systems. I’d go to a doctor, or to someone I know has reliable, factual information.

Scott: I think it depends on the actual delivery mechanism. I definitely don’t want a world where people get substandard software instead of access to real doctors. But I have a concierge doctor, for example, and I communicate with them primarily via email. It’s really a great user experience. It’s phenomenal. It saves me enormous amounts of time and gives me access to things my busy schedule would otherwise rule out.

So for years I’ve been thinking: wouldn’t it be great if everyone had the same thing? I think it’s good to have something that helps you deal with that complexity.

Marcus: If it’s medical misinformation, it might actually kill someone. In fact, misinformation coming from search engines is one of the domains that worries me most.

People constantly search for medical information now, and these systems do not understand drug interactions. They probably don’t understand particular people’s circumstances, and I suspect some pretty bad advice is actually being given out.

I understand, from a technical point of view, why these systems hallucinate. And we can see that they hallucinate in the medical domain. The next questions are: How bad is it? What is the cost of an error? How prevalent is it? How are users reacting? We don’t know all the answers yet.

Will AI put us out of work?

Berman: I think society will need to adapt. Many of these systems are very powerful and let you do things you never thought possible. By the way, we still don’t fully understand what is possible, and we don’t fully understand how some of these systems work.

I think some people will lose their jobs, and some will adapt and find new ones. There’s a company called Canvas that is developing a new kind of robot for the construction industry, and it is actually working with unions to train workers to use these robots.

And many of the jobs that technology will replace aren’t necessarily the jobs most people want to do. So I think a lot of new capabilities will emerge that let people be trained for more exciting work.

Manyika: Looking at most of the research on AI’s impact on work, I would summarize it this way: jobs gained, jobs lost, and jobs changed.

All three will happen, because in some occupations, many of the associated tasks will probably decline, while a set of jobs will be gained and created as a result of this incredible wave of innovation. But frankly, the bigger impact, the one I think most people will feel, is that jobs will change.


