AI and News: How AI can help, how it can fail, and why it matters

AI is reshaping the news ecosystem in search, fact-checking, and personalized feeds. Done well, it can support journalism and strengthen democracy.

Artificial intelligence (AI) is transforming the way news is delivered and consumed.

From algorithmic news distribution and AI-powered news aggregators to the increasing habit of asking AI chatbots (such as ChatGPT and Gemini) to summarize articles, this technology is increasingly integrated into the daily flow of information.

Given the scale of this change, it is more urgent than ever to understand how AI is being used in news production and journalism. Much of that use happens behind the scenes, before the reader even clicks on a headline or reads the text.

However, public understanding of how AI systems work remains limited, and low levels of media and AI literacy leave people confused and at risk of making harmful decisions.

To better understand these risks, we explore three areas where AI is reshaping the news ecosystem: search, fact-checking, and personalized feeds.

AI search

If you type almost any question into Google, you no longer see just a list of links. Instead, the first response is an AI-generated overview produced by Gemini, Google’s AI tool powered by large language models (LLMs). These overviews gather information from across the web and present it as a ready-made summary.

Used well, these LLM-powered search engines, along with AI chatbots that have web search capabilities, can help people quickly understand complex topics or stay up to date during emergencies.

However, these systems are far from reliable. Research shows that AI search tools frequently make factual errors, amplify bias, and misattribute information. Some early output was particularly problematic: Google’s AI Overviews famously advised users to put glue on pizza to make the cheese stick and, in another case, suggested that eating rocks might have health benefits. These errors circulated widely as examples of the technology’s unreliability, highlighting not only factual inaccuracies but also the risks when such systems dispense seemingly authoritative advice.

In response to these initial issues, Google introduced a series of targeted fixes: better detection of nonsensical queries, less weight given to satire and user-generated content, stronger safeguards around sensitive domains such as health and news, and limits on misleading snippets being surfaced.

But even with these adjustments, concerns remain. Even when tools provide citations, the attributions are often inaccurate or misleading. In effect, these tools borrow the authority and credibility of existing news organizations without meaningfully supporting them.

Despite these concerns, public attitudes are changing. According to the Digital News Report 2025, one in five Australians (21 per cent) say they are satisfied with news that is primarily produced by AI. This is a higher level of acceptance than in many other countries.

Among those interested in AI-generated news, news summaries (29 per cent) and story recommendations (22 per cent) are the most appealing uses, and paid online news subscribers are the group most likely to use AI chatbots for news.

Only 30 per cent said they did not want AI to personalize their news.

These numbers show a country that is divided but increasingly open to AI news, even as concerns about accuracy, transparency and accountability continue to grow.

AI fact-checking

AI is also increasingly being used for fact-checking. Human fact-checkers are still doing the work, but machines are taking over many important tasks to speed up the verification process.

This is called automated fact-checking (AFC), and it helps in three ways: detecting claims worth checking (such as “the world is flat”), retrieving evidence that supports or refutes a claim, and verifying the claim itself.

AFC uses natural language processing to sift through large amounts of text and sort it into relevant categories. Different AI models are trained for different tasks, so one model might be trained to find claims while another focuses on verification.
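As a rough illustration of how these stages fit together, here is a minimal sketch of an AFC pipeline. The keyword-based rules below are toy stand-ins for the trained NLP models described above; they are illustrative only, not how any production fact-checking system actually decides.

```python
# Minimal sketch of an automated fact-checking (AFC) pipeline:
# claim detection -> evidence retrieval -> verification.
# The heuristics here are placeholders for separately trained models.

from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    evidence: list[str]
    label: str  # "supported", "refuted", or "not enough evidence"

def detect_claims(text: str) -> list[str]:
    """Stage 1: flag sentences that look like checkable factual claims."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Toy heuristic: treat declarative sentences containing "is"/"are" as claims.
    return [s for s in sentences if " is " in f" {s} " or " are " in f" {s} "]

def retrieve_evidence(claim: str, corpus: list[str]) -> list[str]:
    """Stage 2: pull documents that share vocabulary with the claim."""
    claim_words = set(claim.lower().split())
    return [doc for doc in corpus if claim_words & set(doc.lower().split())]

def verify(claim: str, evidence: list[str]) -> Verdict:
    """Stage 3: decide whether the evidence supports or refutes the claim."""
    if not evidence:
        return Verdict(claim, evidence, "not enough evidence")
    claim_words = set(claim.lower().split())
    # Toy rule: if the evidence explicitly negates a word from the claim, refute it.
    refuted = any(f"not {w}" in doc.lower() for doc in evidence for w in claim_words)
    return Verdict(claim, evidence, "refuted" if refuted else "supported")

if __name__ == "__main__":
    corpus = ["Satellite imagery shows the Earth is an oblate spheroid, not flat."]
    for claim in detect_claims("The world is flat. I like coffee."):
        print(verify(claim, retrieve_evidence(claim, corpus)))
```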

But we are not rushing toward a fully automated fact-checking future. These models take considerable time to build and, even when deployed, they are not completely reliable. They are also typically built to handle only one type of content (such as written or spoken text) rather than working across multiple modes (such as photographs, data visualizations, or video).

Personalization and recommendations

AI is also used to deliver news to people through algorithms. We often think about Facebook’s News Feed or TikTok’s algorithms, but even the simplest news websites use algorithms to choose what content to show you, especially if you’re logged in.

Web pages may be curated primarily by humans, but the “top 10 most viewed” list is a simple example of an algorithm selecting news for display, as the short sketch below shows.
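Even a “most viewed” box is an algorithm: rank articles by a measured signal and display the top few. The article data below is invented for the example; real sites would pull these numbers from their analytics systems.

```python
# Sketch of a "top 10 most viewed" module: rank articles by view count.
# Headlines and view counts are invented for illustration.

articles = [
    {"headline": "Council approves new transit plan", "views": 48_200},
    {"headline": "Local team wins championship", "views": 91_500},
    {"headline": "Explainer: how the new tax rules work", "views": 12_300},
]

def most_viewed(items, n=10):
    """Return the n articles with the highest view counts."""
    return sorted(items, key=lambda a: a["views"], reverse=True)[:n]

for article in most_viewed(articles):
    print(article["views"], article["headline"])
```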

People have worried about filter bubbles for some time, but there is limited evidence that people are actually trapped in information silos. Recent empirical research shows that platforms expose people to a range of news and that personalization is not as strong as once feared. Research is also underway into whether social media actually amplifies conflict.

Social media algorithms will continue to optimize for engagement, and news websites will keep chasing popularity, but a small yet growing group of news organizations is focused on building better news algorithms in the public interest.

For example, Swedish public service organizations have developed public value indicators. This means their algorithms promote not just what is popular, but what matters most to the general public.
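A hedged sketch of what such a ranking could look like in code follows. The “public value” scores and the weighting are invented for illustration; they are not the actual indicators used by Swedish public service media.

```python
# Sketch: re-ranking news by blending popularity with an editorial
# "public value" score. Scores and weights here are illustrative only.

articles = [
    {"headline": "Celebrity gossip roundup", "views": 95_000, "public_value": 0.1},
    {"headline": "How the new housing policy affects renters", "views": 22_000, "public_value": 0.9},
    {"headline": "Election fact-check: the main claims", "views": 40_000, "public_value": 0.8},
]

def ranking_score(article, max_views, value_weight=0.6):
    """Blend normalized popularity with a 0-1 public value score."""
    popularity = article["views"] / max_views
    return (1 - value_weight) * popularity + value_weight * article["public_value"]

max_views = max(a["views"] for a in articles)
for a in sorted(articles, key=lambda x: ranking_score(x, max_views), reverse=True):
    print(f"{ranking_score(a, max_views):.2f}  {a['headline']}")
```

With the weight tilted toward public value, the explainer and the fact-check outrank the most-clicked item, which is the point of such indicators.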

Enhancing journalism, not replacing it

AI brings real efficiencies. It can accelerate newsroom workflows, help audiences make sense of complex issues, and expand access to important local information. But it also has the power to obscure stories, distort facts, and produce confident errors that can pass unnoticed.

The future of trusted news depends on how wisely the industry responds to this moment. Used well, AI can support journalism and strengthen democracy. Used poorly, it risks undermining the very foundations of informed citizenship.

Republished from 360info.org on November 30, 2025


