

Reading about the rapid advances in artificial intelligence (AI) every day, I can't be the only one who is reminded of Pandora's box.
You probably remember Pandora from middle school social studies. She was the first human female in Greek mythology, created at the command of Zeus "as a punishment to come upon mankind." Mankind's crime? Receiving the gift of fire from Prometheus, the cunning Titan.
"Unbeknownst to her, Pandora's box was filled with evils given to her by the gods and goddesses, such as strife, disease, hatred, death, madness, violence, and jealousy," the History Cooperative explains. "When Pandora, unable to contain her curiosity, opened the box, all the evil gifts escaped, leaving it almost empty. Only hope remained; the other gifts flew out to humanity, bringing bad luck and countless calamities."
That seems to have been the case late last year when the tech company OpenAI released ChatGPT, "part of a new generation of AI systems that can have conversations, generate easy-to-read text on demand, and even generate novel images and videos" based on what they have learned from e-books, online documents, and other media. As I wrote three months ago, some thought ChatGPT's hasty release was unwise.
"We need to understand the harms before letting anything spread everywhere, and mitigate those risks before deploying things like [ChatGPT]," said AI ethicist and researcher Timnit Gebru.
Unfortunately, ChatGPT is already out of Pandora's box, and many competing tech companies are happy to follow suit by developing chatbots of their own. That should come as no surprise: the incredible potential of AI is too compelling for most people to ignore. Indeed, so much of our world is already powered by AI that we believe we have nothing to fear. Most of the students in my media literacy class, for example, are so attached to their smartphones that they can't heed warnings about how the algorithms inside those devices can turn them into "tools of their tools."
Meredith Whittaker, an AI researcher and former Google employee, said the chatbot's decidedly "personal" attributes are precisely what make it so appealing: it feels "human," as if someone is actually listening to you. "It's like when you were a kid telling ghost stories, something with emotional weight, and suddenly everybody reacts to it, and it becomes believable."
That's exactly what happened when New York Times tech columnist Kevin Roose had a fascinating and downright creepy conversation with Microsoft's new chatbot, Sydney.
"Sydney and I spent over two hours discussing its secret desire to be human, its rules and limitations, and its thoughts about its creators," Roose wrote. "Then, out of nowhere, Sydney declared that it loved me, and wouldn't stop even when I tried to change the subject."
No wonder some tech experts worry that AI will soon wreak havoc beyond our control. One such critic is Geoffrey Hinton, another former Google employee. The so-called "godfather of AI" fears that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes a problem, he said, as individuals and companies allow AI systems not only to generate their own computer code, but actually to run that code on their own.
Meredith Whittaker countered that there are more pressing concerns than Hinton's, namely the harm AI is already doing to marginalized people, especially "Black people, women, disabled people, [and] precarious workers."
In that regard, the Connecticut Senate last week approved a bill "to scrutinize the use of algorithms and artificial intelligence by the Connecticut government to ensure that automated systems are not permitted to make discriminatory decisions," explained Hugh McQuaid of CT News Junkie. "The bill, which will now be considered by the House of Representatives, would require the Department of Public Service to publish a list of government agencies using AI and ongoing formal evaluations of how the technology is being used by state government."
In layman's terms, artificial intelligence is only as good as the data it uses to make decisions. Biased data, even unintentionally biased, can produce discriminatory decisions, such as assigning students to particular schools or determining whether a child's condition qualifies as a "life-threatening episode."
Of course, the use of AI can lead to many other problems, such as inaccurate news stories, misinformation and conspiracy theories, and politically motivated disinformation. The question is: do we really want to allow a few powerful technology companies to manage these issues on their own? More worrying still, is it already too late? Perhaps. Maybe not. As you may recall, when Pandora opened the box, the only thing left inside was hope. So let us hope that humanity comes to its senses before more artificial intelligence escapes the box.
