There are no easy answers to the rise of AI

Rapid and continuous advances in artificial intelligence pose difficult problems with no easy answers.

This was the prevailing sentiment among the panelists at the Rochester Beacon online discussion “AI’s Dilemma: Risks, Rewards and Regulations.”

The May 23 event featured Pengcheng Shi, associate dean of RIT’s Golisano College of Computing and Information Sciences; Chris Kanan, associate professor of computer science at the University of Rochester; and Tim Madigan, professor of philosophy at St. John Fisher University.

The event was sponsored by Bond, Schoeneck & King LLP, Armbruster Capital Management and Next Corps Luminate.

That sentiment was shared by ChatGPT, the AI-powered program that Rochester Beacon publisher Alex Zapesochny, CEO of Clerio Vision and the discussion’s moderator, asked to write an introduction to the event.

“Although AI has immense potential benefits, it is not without significant risks,” ChatGPT noted. Asked whether and how AI should be regulated, it answered in the affirmative.

“The role of effective and wise regulation will be paramount,” ChatGPT replied.

Addressing the issues raised by AI’s growing role in weapons systems, medicine and education, its use by fraudsters and pranksters to create deepfakes, and its displacement of human workers “will require international cooperation and multi-stakeholder dialogue,” it said.

The human panelists agreed with almost all points made by ChatGPT.

“(AI) has been everywhere for the past few years, but it hasn’t been so obvious. Unlocking your phone with your face, that’s AI,” Kanan said.

On the plus side, AI is already embedded in areas ranging from medicine to law to computer programming, facilitating improvements.

In coding, for example, AI’s ability to program in multiple languages and translate between them has dramatically increased productivity. In education, it has improved tutoring for students. In medicine, it has successfully taken on some diagnostic tasks, such as interpreting electrocardiograms.

Shi said his daughter, a college freshman, has already seen the benefits of AI tutoring: she told him that an AI tutor gave her a better experience than a human teaching assistant.

In health care, AI, like telemedicine, can play a leveling role, bringing access to care to those who otherwise could not afford the additional services available to wealthier patients, Madigan said. But also like telemedicine, AI has a drawback: it remains out of reach for those without broadband, computers or smartphones.

But what about the other drawbacks ChatGPT mentioned, such as deepfakes?

Kanan said he is certainly concerned about the prevalence of fake content such as deepfakes and AI-generated text. Still, he said, he isn’t as worried as he used to be; the next generation seems to be absorbing the lesson that seeing doesn’t always mean believing.

“One of the good things about AI being so prevalent right now, especially among young people, is that they are aware of this,” Kanan said. “I hope they’re rapidly learning that they can’t always trust their eyes and ears with what they see and read.”

Kanan said the spread of misinformation is nothing new. AI simply provides a new channel for disseminating fake content. Still, he conceded, AI has made the distribution of such materials “much faster.”

Bringing up a point ChatGPT didn’t raise, Zapesochny wondered whether reliance on AI could harm children’s learning. Will children raised on ChatGPT or even more capable future versions learn to write and analyze on their own?

“That question is very interesting,” said Kanan, adding that there is no answer.

Nonetheless, he said, “It is up to us as educators and parents to try to solve this problem in terms of how it changes the way we learn. It’s a responsibility.”

And, he concluded, “I don’t think you can take these tools away from children. People will use them.”

How far might AI go? Could it evolve into an independent form of intelligence hostile to humans, like Skynet, the AI war machine obsessed with eliminating humanity imagined in Arnold Schwarzenegger’s Terminator series?

After all, no less an authority than the famed physicist Stephen Hawking warned that AI could completely replace humans. “If people design computer viruses, someone will design AI that replicates itself,” he said. “This will be a new form of life that surpasses humans.”

Such dangers may seem far-fetched, but ChatGPT itself concluded that “without a doubt, the U.S. government must step in to regulate AI, an innovation of enormous scale that carries both promise and danger.”

Like ChatGPT, however, Shi argued that to be effective in practice, regulation would need to transcend borders and span the globe, ensuring that actors beyond the reach of any one country’s or bloc’s rules cannot pose a danger. To date, the United States, the United Kingdom, the European Union and other countries and regions have not agreed on a single uniform regulatory standard.

According to Shi, a friend in China recently asked China’s version of ChatGPT a question about President Xi Jinping. The Chinese program remained silent instead of answering, and the friend’s account was then closed, Shi said. In China, he explained, AI must follow the party line.

In the end, AI is a human creation. These programs may work faster and more eloquently than humans do, but ultimately, at least for now, they are only returning information gleaned from human sources.

Musician Frank Zappa once asked, and answered, a question with a lyrical irony that might be the last word on AI.

Sang Zappa: “Do you like it? Do you hate it? There it is, the way you made it.”

The prophetic Hawking also said: “The genie is out of the bottle. We need to move forward on artificial intelligence development.”

Will Astor is a senior writer for the Rochester Beacon. The Beacon welcomes comments and letters from readers who adhere to our comment policy, including use of their full names. Submissions to the Letters page should be sent to [email protected].


