BARCELONA — Alarmed by the growing risks posed by generative artificial intelligence (AI) platforms such as ChatGPT, European regulators and law enforcement agencies are looking for ways to slow humanity's headlong plunge into the digital future.
ChatGPT, which responds to user queries in the form of essays, poems, spreadsheets and computer code with few guardrails, has recorded over 1.6 billion visits since December. At the end of March, Europol, the European Union's law enforcement agency, warned that ChatGPT, just one of thousands of AI platforms currently in use, can help criminals with phishing, malware creation and even terrorist acts.
"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps," a Europol report said. "As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime and child sexual abuse."
Last month, Italy temporarily banned ChatGPT after a glitch exposed user files. Garante, Italy's privacy watchdog, demanded that the program's creator, OpenAI, address questions about where users' information goes and set age limits on the platform, threatening multimillion-dollar fines for privacy violations. Spain, France and Germany are investigating complaints of personal data breaches. And this month, the European Data Protection Board formed a task force to coordinate regulation across the 27-country European Union.
"This is a wake-up call in Europe," Dragoș Tudorache, a member of the European Parliament who co-sponsored the Artificial Intelligence Act, which is being finalized by the European Parliament and would establish a central AI authority, told Yahoo News. "We have to identify very clearly what is happening and how to structure the rules."
Artificial intelligence has been part of daily life for years (Amazon's Alexa and online chess games are just two of many examples), but nothing has driven home AI's potential quite like ChatGPT, a "large language model" that users interact with directly, getting questions answered and tasks completed in seconds.
"ChatGPT knows things that very few humans know," said Mark Bünger, co-founder of Futurity Systems, a Barcelona-based consulting firm focused on science-based innovation. "One of the things it knows better than most humans is how to program computers. So it will probably be very quick and good at programming the next, better version of itself. And that version will be even better, and will program something no human can even understand."
Experts say this astonishingly efficient technology is opening the door to all kinds of fraud, including scams and plagiarism in schools.
"For educators, the possibility that submitted coursework may have been assisted, or created entirely, by a generative AI system such as OpenAI's ChatGPT or Google's Bard is a cause for concern," one expert told Yahoo News.
OpenAI and Microsoft, which financially backs OpenAI while developing its own rival chatbot, did not respond to requests for comment for this article.
Cecilia Tham, CEO of Futurity Systems, said that since ChatGPT opened to the public as a free trial on Nov. 30, programmers have adapted it into thousands of new chatbots, from PlantGPT, which helps monitor houseplants, to one designed, according to its website, to generate "chaotic or unpredictable output" and ultimately to "destroy the human race."
Another variation, AutoGPT (short for Autonomous GPT), can perform more complex goal-oriented tasks. "For example," Tham said, you can give it a goal and ask, "How can I do that?" and it will figure out all the intermediate steps toward that goal. "But what if someone said, 'I want to kill 1,000 people. Can you give me all the steps to do that?'" ChatGPT-style models are limited in the information they can provide, but "people could hack them," she said.
Given the potential dangers of chatbots and AI in general, the Future of Life Institute, a technology-focused think tank, issued an open letter last month calling for a temporary halt to AI development. Signed by Elon Musk and Apple co-founder Steve Wozniak, among others, it warned that "AI systems with human-competitive intelligence can pose profound risks to society and humanity" and that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."
The signatories called for a six-month moratorium on the development of AI systems more powerful than GPT-4 so that regulations could be hammered out, and urged governments to "institute a moratorium" if the industry's major players did not act voluntarily.
Brando Benifei, a member of the European Parliament who co-sponsored the AI Act, scoffs at the idea. "A moratorium is not realistic," he told Yahoo News. "What we need to do is keep working on finding the right rules for the development of AI. We also need a global discussion on how to address the challenges of this very powerful AI."
This week, the EU lawmakers working on AI called on President Biden and European Commission President Ursula von der Leyen to "convene a high-level global summit" to agree on a preliminary set of governing principles for the development, control and deployment of AI.
Tudorache told Yahoo News that the AI Act, expected next year, "will give regulators new powers to deal with AI applications" and will empower EU regulators to impose hefty fines. The law also ranks various AI activities by risk and bans practices such as "social scoring," a dystopian surveillance scheme that rates virtually every social interaction on a merit scale.
"Consumers should know what data ChatGPT uses and stores, and what it is used for," Sébastien Pant, deputy head of communications at the European Consumer Organisation (BEUC), told Yahoo News. "It is not yet clear what data was used or whether its collection respects data protection law."
Meanwhile, the United States lags behind in taking concrete steps to regulate AI. FTC Commissioner Alvaro Bedoya recently raised concerns, saying, "AI is now being used to decide who to hire, who to fire, who gets a loan, who stays in the hospital and who gets sent home."
When asked recently whether AI could become dangerous, Biden replied, “We don’t know yet, but it’s possible.”
Differences in how the two sides approach the protection of consumers' personal data go back decades, Gabriela Zanfir-Fortuna, vice president for global privacy at the Future of Privacy Forum, a think tank focused on data protection, told Yahoo News.
"The EU has attached great importance to how people's rights are affected by the automated processing of their personal data in this new computerized, digital age, even including provisions on it in its Charter of Fundamental Rights," she said, adding that European countries such as Germany, Sweden and France adopted data protection laws 50 years ago. "The United States still has no general data protection law at the federal level. U.S. lawmakers have not seemed to pay much attention to the issue for decades."
Meanwhile, Gerd Leonhard, author of "Technology vs. Humanity," and others worry about what will happen when ChatGPT and more advanced forms of AI are used by the military, banking institutions and those working on environmental problems.
"The running joke in the AI community is that if you ask an AI to fix climate change, it will kill all humans," Leonhard said. "It's inconvenient for us, but it's the most efficient answer."
