Australian media calls for AI policy to fight misinformation

A new study on AI-generated images shows that only just over a third of the media organizations surveyed had image-specific AI policies in place at the time of the study.

The study, led by RMIT University in collaboration with Washington State University and the QUT Digital Media Research Centre, interviewed 20 photo editors, or people in related roles, at 16 major public and commercial media organizations across Europe, Australia and the US about their perceptions of generative AI in visual journalism.

Lead researcher and RMIT senior lecturer Dr TJ Thomson said most staff interviewed were concerned about the impact of generative AI on misinformation and disinformation, but that factors such as the speed and scale at which content is shared on social media, along with algorithmic bias, complicate the issue.

“Photo editors want to be transparent with their audiences when generative AI techniques are used, but media organizations can’t control how people behave or how other platforms display information,” said Dr Thomson, from RMIT’s School of Media and Communication.

“Viewers don’t always click through to learn more about an image’s context or attribution. We saw this happen when an AI image of the Pope wearing Balenciaga went viral. Many believed it was real because it was a photorealistic close-up shared without context.”

“The photo editors we interviewed also said that because the images they receive don’t always clearly state what kind of editing has been done, AI images can easily be published on news sites without anyone realising, undermining trust in those outlets,” he said.

Thomson said putting in place policies and processes detailing how generative AI can be used across various forms of communication would help combat misinformation and disinformation, and could prevent incidents such as the altered image of Victorian MP Georgie Purcell.

“More media organizations need to make their policies transparent so that viewers can trust that their content is created or edited exactly as the organization says it is,” he said.

Banning the use of generative AI is not the answer

The study found that five of the outlets surveyed prohibited staff from using AI to generate images, and three of those prohibited only photorealistic images. Some allowed AI-generated images if the story was about AI.

“Many news organizations’ generative AI policies are general and abstract. When creating AI policies, news organizations need to consider all forms of communication, including images and videos, and provide more specific guidance,” Thomson said.

“A complete ban on generative AI would likely result in a competitive disadvantage and would be nearly impossible to enforce.

“It would also deprive media workers of the benefits of technology that uses AI to recognize faces and objects in visuals, enhance metadata, and assist with captioning.”

Mr Thomson said Australia remained a “latecomer” when it came to AI regulation, with the US and EU leading the way.

“Australia’s much smaller population limits our resources, but it can also allow us to be flexible and adaptable,” he said.

“But there is also a wait-and-see attitude, watching what other countries do so they can improve or copy their efforts.

“I think it’s good to be proactive, whether you’re a government or a media organization. Showing that you’re proactive in making the internet a safer place demonstrates leadership and helps shape the conversation around AI.”

Algorithmic biases that affect trust

The survey found journalists are concerned that algorithmic bias can perpetuate stereotypes about gender, race, sexuality and ability, creating reputational risk and mistrust in the media.

“In the study, photo editors entered detailed prompts into a text-to-image generator to show a South Asian woman wearing a top and pants,” Thomson said.

“Despite the detailed description of the woman’s clothing, the generator insisted on creating an image of a South Asian woman wearing a sari.”

“These issues stem from a lack of diversity in the training data, which raises questions about how representative that data is, and about who is, and isn’t, represented in the material used to train these algorithms.”

Copyright was also a concern for photo editors, as many text-to-image generators did not make clear the origin of the source material.

Although generative AI copyright cases have made their way to court, such as the New York Times' case against OpenAI, Thomson said this is still an evolving field.

“Being more conservative, and only using third-party AI generators trained on proprietary data, or using them only for brainstorming and research rather than for publishing, will reduce legal risk until the courts resolve these copyright questions,” he said.

“Another option is to train a model on your organization’s own content, which gives you confidence that the organization owns the copyright to the resulting output.”

Generative AI isn’t all bad

The survey found that despite concerns about misinformation and disinformation, most photo editors saw many opportunities to use generative AI, including for brainstorming and idea generation.

While many were comfortable using AI to generate non-photorealistic illustrations, others were happy to use AI-generated images when no good stock images existed.

“For example, existing stock images of Bitcoin are all very similar, so generative AI could help fill the gaps in stock image catalogues,” Thomson said.

While there are concerns that photojournalism jobs will be taken over by generative AI, one editor interviewed said he could imagine using AI for simple photography tasks.

“Employed photographers will be able to do more creative projects and spend less time shooting things against white backgrounds,” the editor said.

“Some might argue that these tasks are easy and quick for photographers, but they can also be a headache at times.”

“Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policy” was published in Digital Journalism. (DOI: 10.1080/21670811.2024.2331769)

Co-authors are TJ Thomson (RMIT University), Ryan Thomas (Washington State University) and Phoebe Matich (Queensland University of Technology).

Thomson is a visiting researcher at the German Internet Institute in Berlin, which allowed him to complete the European part of the study.
