Do AI bots like ChatGPT threaten humanity?












This photo, taken in Toulouse, southwestern France, on January 23, 2023, shows a screen displaying the OpenAI and ChatGPT logos. ChatGPT is a conversational artificial intelligence software application developed by OpenAI. Lionel Bonaventure, AFP


TAKASAKI, Japan – Like nuclear weapons and biotechnology before it, artificial intelligence is threatening the world with an existential crisis, and some experts say that without proper checks on a global scale, humanity’s future could be in jeopardy.


Against this backdrop, AI models like ChatGPT were high on the agenda of Japan’s two-day G-7 digital and technology ministers’ meeting, which ended on Sunday, with policymakers agreeing there is an urgent need for continued discussion on how to govern the rapidly advancing technology.


In a joint declaration, the G-7 agreed to promote the “responsible” use of AI and called for broad stakeholder participation in developing international standards of governance.




ChatGPT, released as a prototype in November 2022, stands for Chat Generative Pre-trained Transformer. Trained on large amounts of data, it can process and simulate human-like conversations with users.


The chatbot, which ran on the GPT-3.5 model when first released, took the world by storm with a staggering 355 billion tunable variables, or parameters, used to generate text. Earlier models typically had only a few million parameters.


On March 14, US-based developer OpenAI released the next iteration of the model, known as GPT-4. It is more powerful than its predecessor and has multimodal capabilities, meaning it can accept both text and images as prompts.


The seemingly limitless possibilities of generative AI have raised concerns that technological development could spiral out of control, but experts disagree on whether it spells humanity’s doom.


In March, the Future of Life Institute, a think tank focused on the responsible development and use of technology, released an open letter calling for a moratorium of at least six months on the training of AI systems more powerful than GPT-4.




The letter, citing the dangers of perpetuating bias, spreading misinformation, destabilizing the labor market and concentrating power in the hands of a few firms, had collected over 27,000 signatures as of April 30, including those of Elon Musk, a co-founder of OpenAI, and Steve Wozniak, co-founder of Apple Inc.


Advanced AI systems can “pursue human or self-assigned goals in ways that place little value on human rights, human safety, or human existence in the most dire scenarios,” the think tank wrote.


In an interview with Fox News earlier this month, Musk, one of the lab’s early supporters, sounded the alarm over hyperintelligent AI, saying it “could destroy civilization.”


“[If] we only have regulations after something terrible has happened, it may be too late to actually introduce regulations. AI may be able to take control at that point,” he said.


Katja Grace, co-founder and lead researcher of AI Impacts, a project focused on the long-term impacts of advanced AI, estimates there is a 19 percent chance that humanity will go extinct because of a failure to control AI.


“The biggest risk, I think, is that current advances will quickly lead to AI systems that are as good at making decisions and planning strategy about everything as they are at Go and chess, and that they could be used for purposes contrary to human welfare, leading to the extinction of mankind,” she said.


However, such systems are still far from perfect. Inaccurate information presented as fact, a phenomenon known as hallucination, remains a challenge for large language model technology, making it unreliable for critical applications.


Satoshi Kurihara, chairman of the ethics committee of the Japanese Society for Artificial Intelligence, said that AI currently exists only as a tool for humans, and that “it is humans who will destroy humanity.”


“I believe we can avoid extinction if we can learn how to coexist with highly autonomous and versatile AI, which will become a reality in the future,” he said in a recent written interview.


Kurihara emphasized that guidelines upholding peace, cultural diversity and integrity must be adhered to during the development of such advanced systems, and that the scope and transparency of AI use must be controlled.


In the joint declaration issued this weekend, the G-7 recognized the “need to consider the near-term opportunities and challenges” of generative AI, given its global prominence and rapid development, and pledged to continue promoting safety and trust in the technology.


Less apocalyptic but more pressing concerns surrounding generative AI models center on the unauthorized collection of user data, their ability to manipulate public opinion, and their potential use for malicious purposes such as deepfakes and revenge porn.


A repository maintained by AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC), an initiative that tracks the unethical use of AI, has documented several incidents of fake news and disinformation during Russia’s invasion of Ukraine.


However, AIAAIC founder Charlie Pownall said that while it is important to curb such abuses, regulation should be “proportional” and should not overly infringe on “users’ privacy, confidentiality, and other rights” — a balance that may prove difficult to strike.


International regulation of AI is further complicated by different attitudes towards technology around the world.


For example, Japan’s emphasis on the potential utility of generative AI means the government has so far been more cautious about regulation than the EU, which has proposed its first-ever legal framework on AI.


“Given political, economic, and legal differences, as well as significant differences in public perceptions and expectations of AI across different countries and cultures, it seems unlikely that China, the United States, the EU, the United Kingdom, and other major markets are on the same page about many important aspects of AI legislation,” Pownall said.


== Kyodo













