Artificial Intelligence and Journalism | Opinions

Applications of AI


AI applications continue to expand rapidly into all areas of life. They transform processes and workflows across many domains, creating new opportunities. Alongside these contributions, however, AI also poses a range of risks, from data security breaches and threats to individual safety to reinforcing bias, deepening inequality, and generating misinformation. These risks vary in scale and nature depending on the particular characteristics of the field to which AI is applied.

Journalism is among the fields most influenced by AI, and its impact is felt across a wide spectrum, including data analysis, content creation, content personalization, and editorial processes. It has become a particularly valuable ally of investigative journalism. AI now contributes to every stage of the news cycle, including newsgathering, reporting, storytelling, and news distribution. In regions where digitalization is widespread, AI functions as a force for change. Given that journalism is one such field, many researchers argue that AI is not just a tool for journalism but a transformative force reshaping the profession itself.

The widespread adoption of machine learning has opened up new perspectives, particularly for investigative journalism. Given the specific details of a topic, it is now possible to rapidly analyze big data and identify underlying patterns within it. This capability has greatly improved the quality of investigative journalism and news production, particularly in complex areas such as elections, health, education, financial markets, and sports. Thanks to AI, newsworthy information and complex narratives that were previously difficult to detect because of structural complexity have been uncovered and presented to the public. As a result, AI has significantly increased news production capacity. This increased capacity offers a great advantage in both public influence and economic benefits, especially for the press.

Meanwhile, it is now possible to conduct detailed public opinion analyses through social media and other digital platforms. In this way, readers' and viewers' responses to news content can be assessed more comprehensively. It has also become common practice to analyze user preferences on news platforms and recommend new content accordingly, extending the time users spend on these platforms.

One of the most important contributions of AI is its ability to enable personalized content production. AI is already widely used to generate personalized educational content, and it is likewise beginning to be applied in journalism to collect, evaluate, and distribute content tailored to individual users. In short, AI technologies are making an increasingly important contribution to productivity and efficiency in journalism. The expectation is that the time saved through this increased productivity will be used to improve the overall quality of journalism.

Findings on the impact of AI on employee productivity suggest that efficiency and output gains are particularly pronounced among low- and medium-skilled workers. In other words, AI technology can help compensate for skill gaps among these groups. When used in journalism in this way – complementing people rather than replacing them – AI can increase productivity without a major negative impact on employment. At the same time, it can free up additional time for journalists to focus on improving the quality of their reporting.

However, there is a clear risk that AI could take over journalistic roles entirely, starting with routine tasks such as producing standard news reports and performing data analysis. Moreover, as noted above, integrating AI into journalism as a transformative force requires practitioners to quickly acquire new skills suited to a changing industry. It is therefore extremely important to improve AI literacy and skills among journalism professionals. Without investment in developing these capabilities, many journalists could risk losing their current positions.

Meanwhile, the biggest risks associated with personalized news content are reduced content diversity and the reinforcement of informational comfort zones, as users are steered into echo chambers. As a result, individuals are increasingly exposed to information that supports their existing beliefs and attitudes, while access to differing opinions and news is limited. This makes it harder for people to encounter diverse content, and the interpretation of events begins to vary widely depending on the boundaries of each echo chamber. One of the biggest risks facing modern society is the sorting of the public into distinct groups confined within such echo chambers. As AI further enhances the personalization of news content, it could accelerate their formation. This poses a serious threat to the overall health and cohesion of modern society.

AI can analyze big data and detect patterns, but the lack of transparency about how these analyses are performed – owing to the “black box” nature of many AI systems – raises serious concerns, especially in news content production and investigative journalism. The opacity of AI-generated analyses and content can produce news that lacks transparency and accountability. This raises important questions, because AI itself cannot be held responsible for the content it generates. Can journalists using AI in this way be held responsible for non-transparent content and analyses? This issue is being actively debated in academia.

For example, as generative AI tools began to be used to produce scientific articles and were sometimes even listed as co-authors, academic journal editorial boards faced intense debate over whether AI could be credited as an author. Leading journals such as Science have taken a firm stance, ruling not only that AI cannot be listed as an author, but also that AI-generated content such as text and graphics should not be used in academic articles. However, more flexible policies are gradually emerging. Under these, AI still cannot be considered a co-author, but if it contributes to the quality of a scientific article, its role in the production process must be clearly disclosed within the article. At the heart of all these discussions and attempted solutions is the fundamental issue that AI cannot be held responsible for its contributions or its actions. Similar safeguards should be implemented in journalism.

Another major concern about the widespread use of AI in journalism is the risk of perpetuating bias. Because AI systems generate content through prediction and optimization over historical data, the training data effectively functions as a form of memory. This “memory” includes biased judgments and linguistic patterns related to religion, race, gender, and other characteristics of various social groups – biases that can be reproduced directly in new content. As a result, journalistic content generated by AI could replicate these same biases, leading to a surge in biased news. Furthermore, such biased content circulates within echo chambers and is repeatedly interpreted through a partial lens, increasing the risk of deepening social inequality. The same dynamics apply to culturally embedded content generated by AI. As explained in a previous article entitled “The Powerful Wave of Orientalism Driven by Artificial Intelligence,” AI applications continue to generate content with an Orientalist tone. These systems, detached from the cultural realities they portray, effectively claim the right to represent the “East” from a Western, White-centered perspective.

Furthermore, advances in artificial intelligence technology have made the production of highly realistic yet false video content (deepfakes) increasingly widespread. The ease of creating such manipulative and misleading content not only fuels social unrest but also threatens individual safety. In this context, another risk with negative implications for journalism is the possibility that AI will generate false content. As is well known, generative AI can produce information that appears internally consistent but is in fact incorrect – a phenomenon known as “hallucination” or “confabulation.” Relying entirely on AI to produce news content increases the risk of misinformation as well as biased reporting. Editorial oversight is therefore essential to mitigating these risks, and to provide it, editorial teams must possess a strong level of AI literacy and continuously update it.

In summary, AI applications are having a transformative and far-reaching impact on journalism. The opportunities they offer have already significantly transformed processes and workflows in this domain, yielding substantial economic benefits. However, this transformation clearly poses many risks, from negative effects on employment in journalism to the production of biased content. As in other disciplines, the most human-centered approach in journalism is to use AI technology in a way that complements human effort rather than replacing it. Otherwise, the economic benefits of AI could be concentrated in the hands of a small group, while the risks it poses affect much wider segments of society. Furthermore, the risks associated with AI make editorial oversight more important than ever. In this context, increasing AI literacy and supporting the development of related skills raises the likelihood of benefiting from these technologies in a balanced and responsible way.
