Italy has banned ChatGPT over privacy concerns. An Australian whistleblower is threatening to sue for defamation after a chatbot falsely described him as the perpetrator of a scandal he uncovered. A Belgian man who confided in an AI chatbot has died by suicide.
Dr. Nick Schuster, AI Ethicist in Humanising Machine Intelligence at ANU, said:
“ChatGPT seems to be the latest headline maker.”
AI technology raises pressing ethical and social questions, Schuster said, and his concerns extend well beyond ChatGPT.
“There are others on the horizon. Self-driving cars are a very vivid example of how artificial intelligence can be truly disruptive.”
As new case studies of unethical AI keep popping up, we asked leading AI experts where and why problems occur, and what, if anything, can be done about them.
Design problems
Professor James MacLaurin says many problems stem from the “constantly stochastic” nature of AI systems and the unpredictability of their output.
Co-Director of the Centre for AI and Public Policy at the University of Otago in New Zealand, MacLaurin says new generative AI models make predictions about the future from historical data: their output is based on whatever text, audio, video or other training input they have been given.
“There are obvious problems with this,” he says. “The past is not a happy place.”
A notable example, he says, is Amazon’s hiring algorithm, which the company built to evaluate and rank the resumes of job seekers. The system was biased against women because it was trained on the resumes of the company’s current and former employees, who were overwhelmingly male.
But even when the biases were identified, they proved difficult to remove. The system inferred gender from word choices, college names and sporting interests, and the prejudice was so entrenched that Amazon eventually abandoned the tool.
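This kind of proxy bias is easy to reproduce with synthetic data. The sketch below (a hypothetical Python illustration, not Amazon’s actual system) trains a model on biased historical hiring decisions with the gender column removed; because innocuous-looking features act as proxies for gender, the model still scores women lower.

```python
# Illustrative sketch only: synthetic data, not Amazon's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden attribute the model is never shown directly.
gender = rng.integers(0, 2, n)  # 0 = male, 1 = female

# Innocuous-looking features that correlate with gender (word choices, clubs, sports).
proxy_keyword = (rng.random(n) < np.where(gender == 1, 0.8, 0.1)).astype(int)
proxy_sport = (rng.random(n) < np.where(gender == 1, 0.7, 0.2)).astype(int)
skill = rng.normal(size=n)  # genuinely job-relevant signal

# Historical hiring decisions are biased: equally skilled women were hired less often.
logit = 1.5 * skill - 1.2 * gender
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train WITHOUT the gender column -- only skill and the proxies.
X = np.column_stack([skill, proxy_keyword, proxy_sport])
model = LogisticRegression().fit(X, hired)

# The model still recommends women at a lower rate, because the proxies leak gender.
scores = model.predict_proba(X)[:, 1]
print("mean score, men:  ", round(float(scores[gender == 0].mean()), 3))
print("mean score, women:", round(float(scores[gender == 1].mean()), 3))
```

Dropping the sensitive column is not enough: the bias baked into the historical labels finds its way back in through whatever features correlate with it.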
Humans also rely on the past to predict the future, MacLaurin says. “It’s standard inductive reasoning.” But humans can also think critically, which AI systems cannot.
A further problem with large language models is that the systems weren’t really designed to provide true or accurate output, he says.
“Something like ChatGPT is designed to be a conversationalist. It aims at plausibility rather than truth.”
Uncontrolled output from AI systems can lead to unintended consequences.
This was the case with the AI advertising system used by US retail chain Target, says MacLaurin. The system was designed to infer things about people in order to tailor and deliver advertising and marketing content.
“It detected that a young woman was pregnant and started serving her ads that seemed appropriate for someone who was pregnant or about to become a young mother. Her parents saw the ads.”
“Something like ChatGPT is designed to be a conversationalist. It aims at plausibility rather than truth.”
Professor James MacLaurin
Where it goes wrong: Data, Algorithms, Applications
Schuster said ethical issues arise with three key elements of an AI system: the datasets it trains on, the algorithms themselves, and their applications.
The big problem, he says, is the lack of diversity in the training data.
“If you are training an AI system to recognize faces, but most of the faces you are training it on are white male faces, it will be difficult to identify other groups of people with high accuracy.”
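A minimal sketch of that effect, using made-up numbers rather than real face data: a classifier trained on a heavily imbalanced synthetic dataset ends up noticeably less accurate on the under-represented group, because it learns the patterns that matter for the majority group and largely ignores the ones that matter for the minority group.

```python
# Illustrative sketch only: synthetic features standing in for face data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(group, n):
    """Two identities per group; each group is separable along a different feature axis."""
    identity = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2))
    axis = 0 if group == "A" else 1  # group A differs on axis 0, group B on axis 1
    X[:, axis] += np.where(identity == 1, 2.0, -2.0)
    return X, identity

# Heavily imbalanced training set: 95% group A, 5% group B.
Xa, ya = sample("A", 1900)
Xb, yb = sample("B", 100)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced evaluation exposes the accuracy gap.
for group in ["A", "B"]:
    Xt, yt = sample(group, 1000)
    print(f"group {group} accuracy: {model.score(Xt, yt):.3f}")
```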
Then there’s the way algorithms make predictions or inferences from that data. Here, Schuster says, the use of AI in predictive policing to target specific neighborhoods or groups of people based on crime statistics highlights the problem.
Even with unbiased data (which, he adds, it generally isn’t), “predicting someone’s likelihood of committing a crime based on factors such as race or ZIP code is still wrong.”
“Historically, if you are overpolicing in an area, your forecasting system will most likely tell you to continue overpolicing in that area,” he says.
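That feedback loop can be seen in a toy simulation (purely illustrative, not modelled on any real predictive-policing system): two districts have identical underlying crime rates, but the one that starts out over-policed records more crime, so the forecast keeps sending patrols back there.

```python
# Illustrative sketch only: a toy model of the feedback loop, not a real system.
import numpy as np

rng = np.random.default_rng(2)

true_rate = np.array([0.05, 0.05])  # two districts with identical underlying crime rates
patrols = np.array([0.7, 0.3])      # but district 0 starts out over-policed

for year in range(5):
    # Recorded crime reflects how closely each district is watched, not just actual crime.
    recorded = rng.poisson(true_rate * patrols * 1000)
    # Next year's patrols are allocated in proportion to recorded crime, so the initial
    # skew is reproduced year after year and never corrects itself.
    patrols = recorded / recorded.sum()
    print(f"year {year}: patrol share = {np.round(patrols, 2)}")
```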
And then there are the applications. The way we use these systems, such as AI-based recommendation systems in social media designed to maximize engagement, can negatively impact people’s mental health.
Four Big Risks: Privacy, Fairness, Accountability and Transparency
Professor Tim Miller, co-director of the Centre for AI and Digital Ethics at the University of Melbourne, cites privacy, fairness, accountability and transparency as the main AI risks.
When it comes to privacy, “probably most people don’t realize how much data organizations hold about you and how much they’re trying to infer from it.”
Many countries, including Australia, have privacy laws that govern how people’s data is collected and used. Italy’s move to ban ChatGPT stems from concerns that the system is not compliant with European data protection laws, he said.
The US non-profit research organization Center for AI and Digital Policy recently filed a complaint with the Federal Trade Commission citing similar concerns.
Fairness is related to bias: it is about keeping algorithms from making discriminatory decisions. According to Stanford University’s AI Index, as the size of a large language model increases, so does its power, but it often also becomes more biased.
While issues such as privacy and fairness have received the most attention so far, transparency and accountability are also important, says Miller.
Transparency means that when machine learning algorithms are used, people can understand what data sits behind them, how that data was collected and how the AI’s decisions were reached. Accountability ensures that someone is answerable when things go wrong.
A post by US designer Jackson Greathouse Fall recently went viral when he gave GPT-4 a $100 budget and asked it to make as much money as possible.
But what happens, Miller asks, when an AI model like ChatGPT gives really bad financial advice?
In Australia you need a qualification or license to give financial advice, and in some cases a financial advisor can be held accountable. But for algorithms, where the blame lies is unclear, he said.
How do you protect vulnerable people?
For Erin Turner, CEO of the Consumer Policy Research Centre (CPRC), transparency is a top priority.
Without transparency, she said, “it is very difficult to know when this technology is being used by companies, what it is being used for, how it is configured, and how it is being tested to see if it is giving fair results.”
“There are two layers to the problem,” she says. “First, it’s very hard to know what’s going on. And then: is it okay?”
Businesses already collect a huge amount of data about Australian consumers, Turner said, and consumers have no real control over what personal information is collected or how it is used, stored and shared.
“Our data is collected, we are asked for all sorts of information, and it is used to sell us more and to make us pay higher prices,” she says.
According to CPRC research, 79% of Australians want businesses to collect only the basic data they need to deliver a product or service, and no more, Turner said. The same percentage don’t want their data shared or sold under any circumstances, she says.
People who are already discriminated against are the most likely to be adversely affected by AI systems, Schuster said.
When using AI to evaluate applications for jobs, credit, or social services, “it’s really important that these automated systems do things fairly,” he says.
“Our data is collected, we are asked for all sorts of information, and it is used to sell us more and to make us pay higher prices.”
Erin Turner, CEO, Consumer Policy Research Centre
“As an English-speaking, middle-class, white American, I don’t worry too much about myself,” Schuster says. “Most of these technologies were built by people like me, for people like me.
“I am more concerned about those who are not part of that culturally dominant group, and the ways they can be left behind, washed out and marginalized.”
Any solution?
Much work is underway to try to close the gaps in law and policy, Miller said, flagging the European Union’s proposed AI Act.
Turner says a good place to start in Australia is reform of the Privacy Act, which limits the data that companies can collect and use in the first place.
“As we speak, the review of the Privacy Act is in progress. And for me this is one of the most important fundamental reforms we should see […] So getting it right is the first step. And that’s where we’re going to get into big discussions about AI strategy and protection,” she says.
MacLaurin said AI companies are adding filters to prevent their systems from serving biased, rude or dangerous content. But these trust and safety layers aren’t impregnable, he says.
“There is a kind of arms race going on between hackers and trolls and all sorts of people looking for ways to turn off the layers of trust and safety.”
MacLaurin, who is generally positive about the future of AI, admits that he, along with Apple co-founder Steve Wozniak and Elon Musk, among others, signed an open letter calling for a six-month moratorium on the development of powerful AI systems.
Policy moves slower than computer science, he reasons, and will need time to catch up.
“People shouldn’t be too depressed. But they need to focus on thinking about ethics, thinking about policy, thinking about how to do this fairly.”