Billionaire entrepreneur Elon Musk announced plans this week to create an AI-driven conversational tool called “TruthGPT” after he criticized the popular AI text bot ChatGPT for being “politically correct.”
In an interview with Fox News host Tucker Carlson, Musk, the CEO of Tesla and owner of Twitter, warned, "There's definitely a way to an AI dystopia: train them to deceive."
AI chatbots pose significant risks, especially political bias, because the models can generate huge amounts of speech, shape public opinion and enable the spread of misinformation, experts told ABC News.
But Musk's comments highlight the difficult challenges posed by the issue, in part because content moderation has become a polarizing topic in itself, and Musk's remarks place his own approach within that heated political context, experts added.
"It might actually be a mistake to associate the question with political correctness," Gary Marcus, an emeritus professor of psychology and neuroscience at New York University who specializes in AI, told ABC News. "It would be a mistake to try to tie the two together."
Created by the artificial intelligence company OpenAI, ChatGPT is a chatbot, a computer program that converses with human users.
Neither Musk nor OpenAI responded to ABC News’ request for comment.
ChatGPT uses an algorithm trained on billions of texts from the internet and selects words based on the patterns it has learned. The tool has become popular through viral posts showing it composing Shakespearean poems and identifying bugs in computer code.
But the technology has also sparked controversy over some troubling outputs. ChatGPT's designers programmed safeguards meant to keep it from adopting controversial opinions or expressing hate speech.
AI content moderation poses a legitimate challenge for designers who must decide which messages are offensive or objectionable enough to warrant intervention, experts told ABC News.
"The question is how do you make it fair or neutral? It's just part of the designer's judgment," Ruslan Salakhutdinov, a professor of computer science at Carnegie Mellon University, told ABC News.
Furthermore, responses from AI conversational tools depend heavily on the text used to train the model, said Kathleen Carley, a professor of computer science at Carnegie Mellon University.
"There's this view that most of the information it's been trained on is more left-leaning, and that it's embedded with certain political biases and certain political agendas," Carley said.
Musk, who co-founded OpenAI but left the organization in 2018, tweeted in December accusing OpenAI of "training AI to be woke."
AI chatbots deserve scrutiny for political bias, but Musk is a poor messenger for such criticism, some experts say.
Musk has taken many conservative stances in recent months, including endorsing Republican candidates in last year’s midterm elections and repeatedly criticizing “woke” politics.
"I think his 'truth' means 'agree with me,'" Oren Etzioni, CEO of the Allen Institute for AI and a professor of computer science at the University of Washington, told ABC News.
Still, the polarized political environment poses challenges for AI chatbot developers trying to coordinate responses, experts say.
Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, told ABC News: "To draw the line in a reasonable place for AI, it has to know where to draw it."
