“AI Will Lead to Cultural Upheaval”

AI News

An illustration shows the introductory page of ChatGPT, a conversational AI chatbot developed by OpenAI, on a website in Beijing, China, March 9, 2023. Wu Hao, EPA-EFE/File


A new novel written in the style of Ernest Hemingway? Fake photos of an arrested Donald Trump? Or a new Gerhard Richter?


AI systems like ChatGPT, Stable Diffusion, and Aiva make it possible for anyone to write texts, generate the images they want, compose music, and more. All within minutes, and with astonishing polish. The world is amazed. And worried.


“AI allows us to mass-produce creative services that were previously only available from highly qualified specialists,” says Robert Exner, founder of the Hannover content creation agency Fundwort. “In doing so, AI systems undermine the value of human creative thinking and work.”


“AI but fair”: Under this slogan, 15 organizations from Germany’s creative industries have published position papers on artificial intelligence (AI).


Among them are organizations from the fields of copywriting, editing, journalism, graphic design, illustration, photography, and art, all calling for their work to be protected from unauthorized use. Copyright law urgently needs to be strengthened, says the paper Exner co-authored, so that creatives can continue to reap the rewards of their work.


AI needs training materials


In fact, algorithm-based AI systems cannot generate text, images, or music without suitable training material. “Developers are using our work unsolicited, without consent or compensation, to obtain the data their learning systems need,” Exner told DW. “This self-service mentality at our expense is unacceptable!”


The philosopher Vincent Müller agrees in principle. At the University of Erlangen-Nuremberg, Müller conducts research in the still-young field of the philosophy and ethics of artificial intelligence.


“Of course, this is copyrighted data,” says Müller. Admittedly, AI systems do not simply reproduce that data. Rather, they learn something from existing material and use it to create something new. But who then holds the copyright? “It becomes a social problem if you make something new with economic value out of something you got for free,” says Müller.





Lack of AI regulation


The biggest problem is probably the lack of rules. The German Cultural Council (Deutscher Kulturrat), the umbrella organization of German cultural associations, recently called for new regulations. The creative industry is now also demanding intellectual property protection in the digital realm, effective copyright laws, and data protection.


Hannover-based Robert Exner hopes that politicians will stand up for the approximately 1.8 million people working in Germany’s cultural and creative industries.


It is not yet obvious to everyone, but artificial intelligence entered our daily lives long ago. Publishers use AI to screen manuscripts for potential bestsellers. News desks use AI writing programs. AI translates languages and converts speech to text. Insurance companies use AI to calculate damage risks, and websites target visitors with relevant ads thanks to AI.


“What matters is who benefits from AI, and whether the overall effect on society is positive or negative,” explains AI ethicist Vincent Müller. In other words, whether the rights of everyone involved are protected.


Müller expects that the use of artificial intelligence will lead to cultural upheaval. “The cultural upheaval is that more and more decisions are being made by automated systems.”


Automatically issued parking tickets, he says, may not be a problem. “But there will be more and more decisions like that, and we have to think about which decisions we want to leave to machines entirely and where we want them only to assist.”


How automated decision-making can cause problems was demonstrated in 2022 by the Dutch childcare benefits scandal. To build risk profiles of people applying for childcare benefits, the Dutch tax and customs authority used an algorithm that treated “foreign name” and “dual nationality” as indicators of potential fraud. As a result, thousands of low- and middle-income families were placed under scrutiny, falsely accused of fraud, and ordered to repay benefits they had received legally. The algorithm, which amounted to racial profiling, plunged thousands of people into financial trouble.
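The reporting does not describe the authority’s actual model, but the mechanism at issue, a risk score that treats group attributes as fraud indicators, can be sketched in a few lines of Python. All field names and weights below are hypothetical, chosen only to show why such a score discriminates: two applicants with identical finances receive different treatment purely because of attributes that correlate with ethnicity, not with fraud.

```python
# Hypothetical sketch of a rule-based risk score of the kind described above.
# Attributes that merely mark group membership (dual nationality, a
# "foreign-sounding" name) raise the score, so members of that group are
# flagged for fraud review far more often, regardless of their behavior.

def risk_score(applicant: dict) -> int:
    score = 0
    if applicant.get("dual_nationality"):    # proxy for ethnicity, not fraud
        score += 2
    if applicant.get("foreign_name"):        # likewise a group attribute
        score += 2
    if applicant.get("income", 0) < 25_000:  # low income is also penalized
        score += 1
    return score

def flag_for_review(applicant: dict, threshold: int = 3) -> bool:
    return risk_score(applicant) >= threshold

# Two applicants with identical finances and no history of fraud:
a = {"dual_nationality": False, "foreign_name": False, "income": 20_000}
b = {"dual_nationality": True,  "foreign_name": True,  "income": 20_000}
# a is not flagged; b is flagged, based solely on group attributes.
```

The point of the sketch is that no individual behavior enters the decision: the score is driven by who the applicant is, which is exactly the pattern the Dutch parliamentary inquiry criticized.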


For Müller, the case shows what can go wrong when consequential decisions are left entirely to automated systems.




EU defines first legal framework


Hardly anything works without rules; most people agree on that, and so does Vincent Müller. He points to the EU Commission’s initiative to regulate automated decision-making systems. The Brussels proposal includes a list of “high-risk” applications that would require approval.


For example, the real-time use of biometric systems to identify people in public spaces would be limited to a few exceptions, such as counter-terrorism. A social credit system like the one already being tested in China to enforce good behavior would be banned outright.


But will that alone increase trust in AI? “AI is changing the psychological relationship between humans and machines,” says the philosopher and AI researcher Müller. “We usually think of machines as objects with limited autonomy, ultimately controlled by humans.” That changes when machines are given autonomy, he says. “Because the possibilities for intervention change as well.”


Who really understands their car?


Vincent Müller observes that the fear of losing control is exacerbated by another factor: many see AI-controlled machines as mysterious black boxes. Of course, this also applies to many other technologies; few people understand the inner workings of their car, for example. “But if a computer decides you can’t get a loan, that is completely baffling from the start.”


Concerns about the risks of artificial intelligence also trouble developers and investors. In a dramatic appeal, prominent figures in the AI and tech industry, including Tesla CEO Elon Musk, recently called for a six-month moratorium on AI development. According to the open letter from the nonprofit Future of Life Institute, this time should be used to create a set of rules for the still-young technology.


“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable,” the letter states. Besides Musk, more than 1,000 people signed the manifesto, including Emad Mostaque, head of the AI firm Stability AI, Apple co-founder Steve Wozniak, and several developers at DeepMind, Google’s AI subsidiary.




Programs become black boxes


But these technologies are now so advanced, the appeal warns, that even their developers can no longer fully understand or effectively control their programs. As a result, information channels could be flooded with propaganda and untruths, and even fulfilling jobs could be automated away. The signatories therefore demand that all developers working on next-generation artificial intelligence pause their work in a publicly verifiable way, and that governments impose a moratorium if this does not happen soon.


The call for rules unites AI experts and Germany’s creative professionals. The latter demand protection and remuneration because they fear digital exploitation. Whether that can be prevented entirely remains to be seen. Until then, ChatGPT and its peers will keep giving us plenty of reasons for astonishment.




This article was originally written in German.
