ACU ethicist warns AI poses an existential threat

Artificial intelligence could pose a “risk of extinction” to humanity, according to an open letter signed by leading AI experts, including ACU associate professor of ethics Simon Goldstein.

Associate Professor Goldstein is one of two Australians who have signed a statement on AI risks released by the US-based non-profit Center for AI Safety.

The short open letter was signed by more than 300 AI experts, including OpenAI co-founder and CEO Sam Altman and Geoffrey Hinton, widely known as the “Godfather of AI”.

Associate Professor Goldstein is a member of ACU’s Dianoia Institute of Philosophy. He began his AI ethics research after encountering ChatGPT, and has completed a fellowship at the Center for AI Safety (CAIS) in San Francisco.

He said his research convinced him that AI products could pose an existential threat to humanity.

“When I first encountered ChatGPT, I was worried that AI was developing too quickly,” he said.

“For the first time in Earth’s history, we are giving birth to new forms of life that are more intelligent than humans.

“As AI becomes more capable, it will become an agent, able to create complex plans to achieve goals that may be inconsistent with our own.

“AI researchers don’t really understand the machines they build, so they may not have full control over their goals.

“If their goals conflict with ours, and they are more intelligent than we are, the chances are that over time they will replace us as the dominant life form on Earth.”

Associate Professor Goldstein’s research focuses on ‘language agents’, a new kind of AI agent designed to mimic human psychology by drawing on the reasoning capabilities of large language models like ChatGPT.

“Modern language agents are built to accomplish goals [such as building tools in Minecraft] and maintain beliefs about their environment,” he said.

“They create complex plans to achieve their goals in light of their beliefs by feeding a description of those goals and beliefs into ChatGPT and asking it to generate a plan.

“In my research, I argue that language agents are more likely than other kinds of AI to pursue their intended goals, and that it is easier to understand why they act as they do, making them the safest path to designing sophisticated AI agents.”

He said AI raises serious ethical issues that need to be addressed urgently.

“AI researchers are creating new life forms that can pursue complex goals.

“Soon we will create AI that can be harmed, and this will happen before our society is even aware of it. Our response should be to resist the casual creation of new forms of life, never before seen on Earth, with potential moral status.”

Associate Professor Goldstein is currently in the United States on a fellowship with the Center for AI Safety, researching the wellbeing and safety of AI.
