Editor’s flawed human ramblings about AI, journalism, and the disappearance of truth.



This editor’s note was sent to members of the Times of Israel community early Wednesday as ToI’s weekly update email. To receive these Editor’s Notes when they are released, join the ToI community here.

About a year ago, two very smart young men gave a presentation to our staff about how rapidly advancing AI technology could be used to select and write stories.

As I understood it, they had developed a process that scans the internet, identifies the most important stories in the global news cycle, and synthesizes the content that other news organizations and writers have published on those stories, allowing media clients to quickly and cheaply produce "unique" versions of material that has already been published elsewhere. The output can be tailored to the needs of a given outlet: a specific style, register, political orientation, and so on.

Again, I should stress that this is only what I understood them to be offering, and that they reported a growing roster of satisfied customers. (It was an informative presentation, not a hard sell; we haven't spoken since.) So I may have it a little wrong. But I was so awed and horrified by what they described that my thoughts immediately turned to the devastating impact such technology could have on journalism, democracy, and truth. And those concerns have only intensified in the year since.

The most vocal objection I raised during our conversation was that their technology, and indeed any AI technology used in this way, would be cannibalizing material originally reported and written by working journalists. Any profit-driven media owner, or at least the many media owners who ultimately prioritize profit over authoritative content, would be able to fill their publications, websites, and social media platforms with ostensibly authoritative material tailored to their readerships, without having to employ a staff of talented reporters and editors to actually go out and see things, write about them, talk to people, and film them.

Furthermore, I argued, over time the very process of economically incentivizing the customizing and synthesizing of other journalists' original content means that fewer and fewer journalists will be employed to produce original content. AI tools will therefore be scanning, summarizing, and reconstituting an ever-dwindling repository of real words and pictures reported by humans.

Gradually, in turn, AI technology will simply be regurgitating versions of previously regurgitated material, to be published by a growing number of media outlets ever less involved in direct journalism. And by extension, the clear and precise picture of the original news event (what happened, where, to whom, and why) will become increasingly distorted and eventually completely obscured, as news events and developments of all kinds come to be less and less reported by actual humans in the first place.

As I write this, I am conscious that it sounds like the incoherent rambling of an aging newspaper-turned-news-site editor. I am aware that my concerns may prove naive, or merely premature, or grossly misplaced. Perhaps, with enough rigor and determination to document human activity through journalism, and a sense of obligation to keep the public informed in order to protect the well-being of society, profit-driven indifference to the flow and accuracy of information may yet be kept at bay. But in my view, the signs are not encouraging.

To take one example, it has attracted attention, though not the global chorus of concern it deserves, that Google, the internet's primary search engine through which we seek information, has in recent months been encouraging the vast numbers of humans seeking knowledge online to use its "AI Mode" as a first resort. Increasingly, though not always, an AI-generated answer appears at the top of Google's search results page, where, in the old days of, say, two years ago, travelers on the internet were directed to actual websites containing ostensibly original, human-created, and well-founded material.

Much of humanity may still be a little wary of relying on AI when searching for trustworthy information. Many people know that AI chatbots such as ChatGPT can insist that Donald Trump is a former president of the United States, invent fictitious legal precedents that lawyers have then relied on in court, and generate best-books-of-the-year lists featuring volumes that were never actually written. But as the technology advances and the temptation grows to embrace AI as the primary means of obtaining simple, one-click knowledge, such wariness will fade.

Remember, for example, the quaint days of five years ago, when Wikipedia was considered an unreliable enemy of academic rigor? Today it ranks among the more reliable sources of information, despite constant political battles over the accuracy of its entries.

To bring this back to a personal and journalistic context: several of the job applications I have received over the past few weeks bear every indication of not having been written by the applicant alone. That is, they are not plagiarized, but they have plainly been drafted or edited with the help of AI. In one case, the applicant stated outright that this was what they had done. While refreshingly honest, that didn't do much to establish whether the applicant was actually capable of producing original work of their own.

Similarly, an opinion piece submitted for publication was scored by AI-detection tools at 0% human, meaning it had almost certainly been generated by AI.

I also read a very long paper, published in an academic journal, by someone whose writing I knew well. After momentarily marveling at this person's extraordinary leap in research and writing skills, I concluded pretty quickly that there was no way they could have covered this subject with that level of informed content, referencing, and stunning clarity without relying on AI. That reliance was only recently revealed.

You may naturally wonder why any of this matters. AI is a great tool, and if it allows for greater speed, dexterity, consistency, and even accuracy in journalism, academia, and probably just about every other field, surely that's a good thing.

And perhaps it is…but isn’t there a danger of gradually shrinking humanity’s skill set, rigor, individuality, and interest in truth?

In addition, here at ToI in recent days and weeks, we published a news agency article about the presence of US troops at the US-Israel Gaza coordination headquarters in Kiryat Gat, and atop the article we placed a photo from the same agency showing a uniformed American soldier chatting with a reporter while eating falafel in Kiryat Gat. It's a great photo. It also never happened. The photo was AI-generated, as the agency's caption stated. I didn't notice the caption until after we had used the photo, and I removed it as soon as I did. (It was published elsewhere too, sometimes without the caption.)

The still itself was taken from an AI-generated TikTok video disseminated by the city of Kiryat Gat (we were careful to explain this in the article), which apparently had no problem broadcasting lighthearted content depicting things that never happened in real life.

Should we care about that?

And what should we make of the internet uproar over the stares Miss Israel allegedly directed at her Miss Palestine rival during a recent beauty pageant? A video surfaced appearing to show Melanie Shiraz giving Nadine Ayoub a nasty look at a Miss Universe event in Thailand, which Shiraz denies. Was the video doctored? Was she even standing, at the moment the video shows, in a position to deliver that dirty stare? We updated the story as we tried to establish what we were actually seeing.

Now an inflammatory AI-created video is circulating online that purports to show Israeli soldiers burning the American flag. Most people with even a superficial knowledge of Israel and some common sense would naturally doubt its authenticity and discount it, assuming it was created with AI technology. But not all. And for some, the inclination to believe the worst about Israel will lodge somewhere in the parts of the brain where beliefs and worldviews are formed and modified.

We all know that President Trump did not dump sewage from a plane onto No Kings protesters last month, or sit on a Gaza Riviera beach with Prime Minister Benjamin Netanyahu in February, even though the White House posted AI videos to that effect. The real trouble begins when the lines between fact and AI-generated fiction blur.

I said above that this would ramble. People smarter than me will be better able to gauge the impact of the shrinking of original journalism and other sources of wisdom and knowledge, and of the exponential proliferation of unreliable material. So I'm doing my best here simply to sound the alarm, so that others can think it through better than I can.

In my own world of journalism, I see a gradual decline in direct reporting, because it is difficult and expensive. Less investigative reporting in the public interest, exposing corruption. Fewer reporters covering city councils and parliamentary committees. Fewer writers spending days, weeks, and months poring over data in the watchdog role that democracy requires. Fewer relationships cultivated with law enforcement sources. Fewer politicians feeling obligated to subject themselves to difficult encounters with knowledgeable reporters. And growing use of AI tools by media organizations, not merely to assist in editing content, but to produce it.

More than that, I worry that I won't be able to tell what is real and what is not on my phone or computer screen, and that over time, indifference to the distinction will deepen.

We see people of all generations turning to social media instead of traditional journalism for their news. And I know that social media owners are gradually moving away from delivering even short news posts to their users, in favor of, let's call it, "fact-independent" material, because it is far more provocative, persuasive, and potentially profitable.

I want to see tremendous resources and intelligence dedicated to harnessing AI and other emerging technologies for good: minor matters like tackling global warming, resolving conflicts, and ensuring the protection of human life and the planet.

But I worry that we are losing the ability to establish the truth of what is happening around us. And we may be moving in a direction where we don’t even try.

This piece was written amid the usual daily fatigue, in all my trademark overloaded sentences and idiosyncratic phrasings. All flaws and mistakes are human. I did use a spell-checker.




