Companies pledge safety efforts at the second global AI summit



Alphabet's Google, Meta, Microsoft, and OpenAI are among the participants, along with companies from China, South Korea, and the UAE.

Amazon, IBM and Samsung pledge not to develop AI models if risks can't be reduced

Reuters

Sixteen companies at the forefront of developing artificial intelligence pledged at a global conference on Tuesday to develop their technology safely, at a time when regulators are scrambling to respond to rapid innovation and emerging risks.

Participants include major US firms such as Google, Meta, Microsoft, and OpenAI, as well as companies from China, South Korea, and the United Arab Emirates.


The commitments were backed by a broader declaration from the Group of Seven (G7), the EU, Singapore, Australia, and South Korea at a virtual meeting hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Seok-yeol.

South Korea's presidential office announced that the countries have agreed to prioritize safety, innovation, and inclusiveness in AI.

Yoon pointed to concerns about risks such as deepfakes, saying, “We must ensure the safety of AI to protect the well-being and democracy of our society.”

Participants stressed the importance of interoperability between governance frameworks, plans for a network of safety institutes, and engagement with international bodies, building on what was agreed at the first meeting, in order to adequately address risks.

Companies signing the safety commitments include Zhipu.ai, which is backed by China's Alibaba, Tencent, Meituan, and Xiaomi, as well as the UAE's Technology Innovation Institute, Amazon, IBM, and Samsung Electronics.

They committed to publishing safety frameworks for measuring risks, to avoiding models whose risks cannot be sufficiently mitigated, and to ensuring governance and transparency.

In response to the declaration, Beth Barnes, founder of METR, an organization that promotes the safety of AI models, said: “It is vital to get international agreement on the 'red lines' where AI development would become unacceptably dangerous to public safety.”

Computer scientist Yoshua Bengio, known as a “godfather of AI,” welcomed the commitments but noted that voluntary pledges must be accompanied by regulation.

Since November, the debate over AI regulation has shifted from long-term doomsday scenarios to more practical concerns, such as how AI should be used in fields like healthcare and finance, Aidan Gomez, co-founder of the large language model company Cohere, said on the sidelines of the summit.

China, which signed the Bletchley Declaration on jointly managing AI risks at the first meeting in November, did not take part in Tuesday's session but plans to attend an in-person ministerial meeting on Wednesday, South Korean presidential officials said.

AI industry leaders such as Tesla's Elon Musk, former Google CEO Eric Schmidt, and Samsung Electronics Chairman Jay Y. Lee attended the conference.

The next meeting will be held in France, officials said.



