AI helps spread fake news and misinformation, but it could also be used to combat them.
Fake news is not a new phenomenon, but its proliferation today is unprecedented. Disinformation is also more than just fabricated news articles: it often includes real material used out of context and manipulated content.
The spread of artificial intelligence (AI) has complicated the task of verifying the authenticity of news, leading to a decline in trust in journalists.
Edelman's 2024 Trust Barometer, a global survey (over 32,000 respondents in 28 countries) measuring public trust in the media, government, NGOs and corporations, found that journalists were the least trusted profession, with 64% of respondents saying that journalists deliberately try to mislead people by writing stories that they know are false or by making gross exaggerations.
As if distrust wasn't enough, there's also a growing perception that language model-based chatbots like ChatGPT and Gemini could make journalists obsolete.
Misinformation erodes trust in institutions, deepens social divisions, and undermines informed decision-making.
But AI itself may also be an antidote to its own creations.
AI systems can detect and classify fake news while providing comprehensive, user-friendly explanations of their decisions.
AI is crucial in detecting disinformation because it methodically analyzes language nuances and contextual details that human moderators may miss.
In March 2024, a research group at the Norwegian University of Science and Technology (NTNU) integrated a variety of machine learning and deep learning techniques to develop an advanced AI algorithm that can identify fake news across three different datasets.
It outperformed other models, achieving over 97% accuracy in classifying news articles.
The system's success lies in its combination of advanced analytical techniques that can detect subtle patterns and linguistic cues that indicate disinformation.
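The NTNU model itself is not reproduced here, but the general shape of such a classifier is easy to sketch. The following is a minimal illustration, not the researchers' system: TF-IDF features feeding a logistic regression, with a hypothetical articles.csv file and its text/label columns standing in for a real labeled dataset.

```python
# Minimal sketch of a fake-news text classifier (NOT the NTNU system).
# Assumes a hypothetical articles.csv with columns: text, label (0=real, 1=fake).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("articles.csv")  # hypothetical labeled dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# Word unigrams and bigrams capture both vocabulary and phrasing style.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=50_000),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Real systems layer deep networks and richer features on top of this, but the pipeline structure is the same: text in, features extracted, probability of "fake" out.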
Explainable AI not only makes the detection process transparent, but also provides valuable interpretive insights, giving users a better understanding of the reasoning behind content warnings and flags by showing how the system reached its conclusion.
This transparency helps build trust and provides an opportunity to further refine detection methods.
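What such an explanation might look like depends on the model. For a simple linear classifier like the sketch above, one illustrative approach is to list the terms that pushed a given article toward the "fake" class; this snippet reuses the hypothetical `model` pipeline from the earlier example.

```python
# Sketch: explain one prediction by listing the TF-IDF terms whose
# learned weights pushed the text toward the "fake" class.
import numpy as np

vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]

def explain(text: str, top_k: int = 5):
    vec = vectorizer.transform([text]).toarray()[0]
    contributions = vec * classifier.coef_[0]  # per-term contribution
    terms = vectorizer.get_feature_names_out()
    top = np.argsort(contributions)[::-1][:top_k]
    return [(terms[i], round(float(contributions[i]), 3))
            for i in top if contributions[i] > 0]

print(explain("SHOCKING: doctors BANNED this miracle cure!"))
```

Dedicated explainability toolkits (LIME and SHAP are common choices) generalize this idea to models whose inner weights are not directly readable.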
In the fight against misinformation, AI offers three key advantages.
First, AI algorithms can be trained to identify language patterns associated with disinformation, including techniques such as sentiment analysis to detect emotive words often used in manipulative content, and identifying stylistic markers that deviate from established journalistic norms.
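As a toy illustration of that first advantage, even a crude lexicon score can surface emotionally charged headlines. The word list and threshold below are invented for the example; real systems use trained sentiment models or established lexicons rather than a hand-picked set.

```python
# Toy emotive-language score: fraction of words found in a small,
# purely illustrative lexicon of charged terms.
EMOTIVE = {"shocking", "outrageous", "miracle", "destroyed",
           "banned", "exposed", "terrifying", "unbelievable"}

def emotive_score(headline: str) -> float:
    words = [w.strip(".,!?:;").lower() for w in headline.split()]
    return sum(w in EMOTIVE for w in words) / max(len(words), 1)

for h in ["Parliament passes budget after lengthy debate",
          "SHOCKING: banned miracle cure EXPOSED!"]:
    print(f"{emotive_score(h):.2f}  {h}")
```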
Second, AI can be used to automate the fact-checking process by cross-referencing information with reliable sources. By analyzing factual claims against established knowledge bases and databases, AI can flag statements that may be false or misleading.
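A heavily simplified sketch of the second advantage: compare an incoming claim against a knowledge base and report whether the closest match supports or contradicts it. The three-entry "knowledge base" and the fuzzy string matching below are stand-ins; production fact-checkers retrieve from large curated databases and use entailment models.

```python
# Sketch: fuzzy-match a claim against a tiny hard-coded knowledge base.
from difflib import SequenceMatcher

KNOWLEDGE_BASE = {
    "The Eiffel Tower is located in Paris.": True,
    "Water boils at 100 degrees Celsius at sea level.": True,
    "The Great Wall of China is visible from the Moon.": False,
}

def check_claim(claim: str, threshold: float = 0.6) -> str:
    best_fact, best_verdict, best_score = None, None, 0.0
    for fact, verdict in KNOWLEDGE_BASE.items():
        score = SequenceMatcher(None, claim.lower(), fact.lower()).ratio()
        if score > best_score:
            best_fact, best_verdict, best_score = fact, verdict, score
    if best_score < threshold:
        return "unverifiable: no close match in knowledge base"
    status = "supported" if best_verdict else "contradicted"
    return f"closest match ({best_score:.2f}): {best_fact!r} -> {status}"

print(check_claim("The Great Wall of China can be seen from the moon"))
```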
Third, AI can track the spread of information on social media platforms and identify suspicious patterns and user behaviors associated with disinformation campaigns, allowing for early detection of emerging trends and developing targeted intervention strategies.
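One rough sketch of the third advantage: flag cases where many distinct accounts post identical text within a short window, a crude proxy for coordinated amplification. The posts below are invented sample data; real platform systems draw on far richer behavioral signals.

```python
# Sketch: detect bursts of identical text from multiple accounts.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [  # (account, text, timestamp): hypothetical sample data
    ("user_a", "The vote is rigged, share before it's deleted!", datetime(2024, 5, 1, 9, 0)),
    ("user_b", "The vote is rigged, share before it's deleted!", datetime(2024, 5, 1, 9, 2)),
    ("user_c", "The vote is rigged, share before it's deleted!", datetime(2024, 5, 1, 9, 3)),
    ("user_d", "Lovely weather in Oslo today.", datetime(2024, 5, 1, 9, 4)),
]

WINDOW = timedelta(minutes=10)
MIN_ACCOUNTS = 3

by_text = defaultdict(list)
for account, text, ts in posts:
    by_text[text].append((account, ts))

for text, hits in by_text.items():
    accounts = {a for a, _ in hits}
    times = sorted(ts for _, ts in hits)
    if len(accounts) >= MIN_ACCOUNTS and times[-1] - times[0] <= WINDOW:
        print(f"possible coordinated spread ({len(accounts)} accounts): {text!r}")
```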
AI not only helps identify fake news, but also helps foster a healthy news ecosystem by providing journalists with the tools to teach readers how to spot real news.
For example, AI can assist content moderators by flagging potentially harmful content for human review, resulting in a more scalable and efficient approach to moderation. AI tools can also be used to create educational resources that teach people how to critically evaluate online information.
Chatbots and interactive modules can provide real-time guidance on identifying bias, verifying sources, and spotting manipulative techniques.
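At their simplest, such helpers can be rule-based. The sketch below returns verification prompts for a submitted link; the rules and the domain watchlist are hypothetical, not a real reputation service.

```python
# Toy media-literacy helper: checklist prompts for a submitted URL.
from urllib.parse import urlparse

WATCHLIST = {"breaking.news.example"}  # hypothetical flagged domains

def literacy_tips(url: str) -> list[str]:
    host = urlparse(url).netloc.lower()
    tips = [
        "Who published this, and do they cite primary sources?",
        "Do independent outlets report the same story?",
    ]
    if not url.startswith("https://"):
        tips.append("The page is not served over HTTPS; be extra careful.")
    if host in WATCHLIST:
        tips.append("This domain is on the (illustrative) watchlist; verify elsewhere.")
    return tips

for tip in literacy_tips("http://breaking.news.example/shock-story"):
    print("-", tip)
```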
Although promising, AI is not a perfect solution for fighting disinformation: AI models trained on incorrect or biased datasets risk perpetuating existing biases and misinformation.
It is important to be aware of bias and to ensure ethical and responsible implementation; in particular, training data must be diverse and representative to avoid reinforcing social and political biases.
Another challenge is that disinformation tactics are constantly evolving to evade detection, so AI models need to be continually updated and refined to keep up with these new threats.
As a result, new skills will be essential to exercise informed citizenship – especially the ability to discern sources, processes and intermediaries in the face of silent and invisible algorithms.
These insights inspired the final report of a study carried out by the Rai (Italian Radio-Television) Laboratory and the Catholic University of the Sacred Heart to map media education efforts on the topic of online misinformation.
Through comprehensive analysis, the study highlights the important efforts being made to equip individuals with the skills they need to navigate the ever-evolving digital information environment.
Striking a balance between protecting free speech and curbing the spread of harmful content is crucial, and human oversight is required to determine the appropriate response to flagged content. Only a combined effort of AI tools and human expertise will achieve the best results.
This article was originally published by 360info™.
Editor's note: Opinions expressed by authors herein are their own and not those of Impakter.com. — Cover Photo Credit: Sarah Hall.
