Several large-scale studies have examined how journalists use artificial intelligence, but some have primarily surveyed early adopters, while others have not distinguished between current use and planned future use.
We therefore decided to survey a representative sample of British journalists. We asked them about their actual use of AI in their newsrooms and how they perceive and approach it. The findings were published in a recent report by the Reuters Institute for the Study of Journalism at the University of Oxford.
Overall, the results show that most journalists (56%) use AI professionally at least weekly, and 27% use it daily. Only 16% of journalists say they have never used AI in their work.
The three most frequently used applications are language-processing tasks: transcription, translation, and grammar checking. These tasks likely top the list because accuracy issues with AI output are less of a concern in these contexts than in tasks such as fact-checking. Nevertheless, our findings clearly show that journalists are also using AI for substantive journalistic tasks. More than a fifth use AI for “story research” at least monthly, and 16% use it for “idea generation” and for “generating parts of text articles.” By contrast, AI is rarely used to generate still images or videos.
Who is using AI more often?
Male journalists reported slightly higher levels of AI use than their female colleagues, and younger journalists reported using AI more frequently than older journalists. Our findings also show that AI use increases with managerial responsibility. Part of the explanation may be that those with more management responsibility face fewer restrictions on their use of AI than those without. AI use also varied with the topics journalists covered: we found that business journalists use AI much more frequently than journalists reporting on lifestyle topics.
The survey asked journalists which media formats they produced in. We found that photojournalism was associated with less frequent use of AI. In contrast, journalists involved in producing “graphics, comics, illustrations, or animation” were likely to use AI more often. We also found that the more of these media formats journalists produce, the more often they use AI. They may be turning to AI to ease the pressure of producing journalism in multiple formats, or AI may be enabling them to produce in more formats than they otherwise would.
AI and job satisfaction
It is often said that the use of AI in journalism frees journalists from low-level tasks, giving them time to tackle more complex and creative work. Our findings are not consistent with this proposal. We found that those who use AI more often are more likely to feel they spend too much time on low-level tasks. One explanation could be that using AI introduces new low-level tasks of its own, such as cleaning data and checking AI output. Another could be that journalists whose work already involves many low-level tasks use AI more frequently to alleviate that part of their workload.
Our research also shows that the more people use AI, the less satisfied they are with the time they have for complex and creative tasks such as in-depth interviews and surveys. In fact, the most satisfied journalists are those who do not use AI at all.
Opportunity or threat?
We found that a clear majority of journalists (62%) believe AI is a big or very big threat to journalism, while only a minority (15%) say it is a big or very big opportunity. Even though younger journalists are more likely to use AI, they are no more likely to see it as an opportunity than as a threat.
However, management responsibilities make a difference. Those in more senior roles are more positive about AI, but still see it as more of a threat than an opportunity. Journalists with higher levels of AI knowledge are also more likely to see AI as an opportunity.
But the biggest difference has to do with how often AI is used. Those who use AI daily are one of the few groups without an overwhelmingly pessimistic view of its potential impact on journalism. This highlights how using a technology can shape attitudes toward it.
Ethical concerns
One of the perceived threats AI poses to journalism is the ethical concerns it raises. To dig deeper into this issue, we asked journalists how concerned they are about a variety of potential ethical implications. The overall level of concern is very high. For example, more than half said they were very concerned about negative impacts on public trust in journalism, on accuracy, and on originality.
Most groups of journalists share these ethical concerns. However, there are some differences. Those with a higher level of AI knowledge tend to have more concerns, while those who use AI daily tend to be less concerned.
Integrating current and future newsrooms
40% of the journalists we surveyed reported that AI is not integrated at all into core newsroom processes. Where integration had taken place, it was largely limited. Only 11% of journalists described their organization’s AI integration as “moderate,” and very few said it was extensive or complete.
News organizations that are part of conglomerates are more integrated than independent news organizations. Independent organizations may be more flexible, allowing journalists to adopt AI in an ad hoc manner, but conglomerates, with dedicated AI staff and more resources, are better placed to deploy AI company-wide.
British newsrooms have so far achieved only limited AI integration, but journalists believe this will change. They overwhelmingly expect AI integration at their main news organization to increase, a belief that is stronger among journalists whose primary outlet is part of a conglomerate than among those at independents.
Integrating AI into newsrooms involves a variety of practical and organizational issues that news organizations must consider. So how are news organizations approaching issues like AI guidelines, tool selection, and training?
Around 40% of UK journalists reported that their main news organization has established guidelines on most of the issues we asked about, including human oversight, data privacy, and transparency. However, few respondents said they had guidelines on AI bias.
Almost a third (32%) of UK journalists report that their news organizations offer AI training. Journalists working for conglomerates are more likely to say their news organization offers AI training (50%) than journalists working for independent news organizations (14%), likely reflecting increased resources.
The study differentiated between internally developed and third-party AI tools. Given the skills and resources required to develop AI, it is no surprise that 57% of journalists say their main news organization uses only third-party tools. Independent outlets are more likely than conglomerates to use only third-party tools.
Overall, our data shows a gap between independent media and conglomerates, and journalists expect that gap to widen. Unless action is taken, resource-poor news organizations may continue to integrate less AI, provide less AI training to their staff, and rely more heavily on third-party AI tools.
You can read the full report here.
Neil Thurman is Professor of Communication in the School of Media and Communication at LMU Munich and Senior Honorary Fellow in the School of Journalism at City St George’s, University of London. Sina Thäsler-Kordonouri is a faculty member and researcher at the Department of Media and Communication at LMU Munich. Richard Fletcher is Director of Research and Deputy Director of the Reuters Institute for the Study of Journalism.
