Journalism students are more skeptical about AI than you think

My colleagues and I are engaged in the complex and ever-changing process of figuring out how to use artificial intelligence in journalism in ways that are both productive and ethical. There is a reasonable middle ground between "let students write stories with AI" and "ban AI altogether," and we are trying to figure out what it is.

Our students learn from us, and we learn from our students. Keep in mind, though, that we have yet to see so-called "AI natives" in our classrooms. Today's students in their late teens and early twenties came of age before the current moment. In the not-too-distant future, however, there will be students who can't remember a world without ChatGPT, Claude and their ilk.

I recently taught a class on AI in my graduate ethics seminar, a small group of five students, one of them a senior undergraduate. I was surprised to find that they were at least as skeptical of AI as I am. Probably more so, since I regularly use Claude to find sources and background information, brainstorm interview questions, and create summaries for "What Works," the experimental local news podcast that Ellen Clegg and I host. (Here's a recent example.) I thought I'd share what I did in the AI class in the hope that others will find it useful, and perhaps offer some advice on how to make it better.

I started with a lecture on AI and journalism, drawing on anecdotes about the good, the bad, and the ugly. The ugliest of all: a shady local news site that, a few years ago, published an article filled with AI hallucinations about a murder in New Jersey. We then moved on to an in-class exercise with Claude, which we chose because Northeastern has an agreement with Anthropic to provide an enterprise version. I wanted us to see what we could do with an interview with journalist Tracy Baim, director of the LGBTQ+ Media Mapping Project and of Press Forward's Chicago chapter.

Baim was a guest on "What Works" last fall, so to prepare for class I downloaded the episode and transcribed the conversation with Otter, an AI-powered online service. I spent some time cleaning it up, though the result is still not publication-ready. We decided it was good enough for our purposes. The transcript can be found here.

Next, we divided the class into teams of two and three. I asked one team to have Claude create five bullet points from the transcript, each with a few sentences of explanation, and the other team to have Claude create a 600-word summary. I then asked each team to look at the results and consider four questions.

  • Is it accurate, or are there mistakes?
  • If you were writing an article about this interview, how would you use the AI summary?
  • Is this helpful? Or do you think you could work more efficiently without this step?
  • Do you think it is ethically necessary to disclose that you used AI to identify the most important points?

Refine and iterate

The students who had Claude create bullet points found the results too long to be useful, so they refined the prompt, telling Claude to limit each point to a single sentence. You can see both results in the linked document.

Then we switched tasks. This time, one team asked Claude to write a 600-word news article, and the other asked for an 8-12 word headline and a social media post. The discussion questions about the news article were:

  • Is it accurate, or are there mistakes?
  • Do you think the AI did a better or worse job than you would have?
  • Given that human writers can produce something more nuanced and interesting, is it worth using AI to save time and free up reporters for more productive work?
  • Presumably everyone would agree that news organizations need to disclose when their articles are written by AI. What should that disclosure say?

Here are some discussion questions about headlines and social media posts:

  • Are they accurate?
  • What would you do with the results? As a human editor, do they help you write better headlines and social media posts, or would you just run with them as is?
  • Do you think this use requires disclosure?

What I didn't expect was that when we prompted Claude to write a news article, it automatically created a headline as well, so we ended up with two. I think the headline we specifically requested ("LGBTQ+ news outlets struggle to survive amid advertising collapse and staff shortages") was better than the one automatically added to the news story ("LGBTQ+ media faces structural crisis, finds new mapping project"), but your opinion may differ.

We had a lively discussion in class, but I wanted something more lasting, so I created an online discussion topic asking for 200-300 word reflections on the ethical use of AI in journalism. I asked the students to refer to the documents Claude created during class, and I had Claude generate another outline they could use to put together a 600-word story about the interview. Here is the outline Claude created at my request; I don't think it's all that different from what the students were looking at. Here are the questions I asked my students to answer:

  • Did you find this outline helpful, or would you have been better off compiling it yourself? Why? Does it cover all the important points? Did it leave something important out, or get the nuance or emphasis wrong?
  • Do you think it is ethically necessary for a news organization to disclose that AI was used in putting together a story?
  • Consider Chris Quinn's policy at Cleveland.com and The Plain Dealer, which I discussed in my lecture. (Here's what he wrote.) He has his reporters do only the reporting. They hand their notes over to an AI, which writes the story, checked by a human editor before publication, he says. As long as this practice is disclosed to readers, do you think it's an ethical way to practice journalism? Would you want to work in such an environment?

The students’ answers were thoughtful, nuanced, and more skeptical about AI than I expected. I would like to reproduce everything they said, but here are some excerpts.

"I believe this summary is useful because it is structured in detail, with separate sections and bullet points explaining each one," one student wrote, adding that he still thinks even the use of AI-generated summaries should be disclosed. As for Quinn, this student believes that using AI to write stories based on reporters' notes is unethical. "He claims it's checked by human editors, but I believe it takes away the human aspect of journalism and the use of journalistic skills. It feels lazy."

Another student, who was particularly observant, checked the outlines against the transcript and found they weren't an entirely accurate gloss. "I can see how this could be useful for someone who wants to organize their thoughts or summarize an article. Maybe," she wrote. "But I noticed that the quotes were mixed up or left out things that should have been included. So it's not useful to me."

A conscientious objector

Probably my favorite comment came from a student who went much further than I would, but I appreciated his reasoning.

"I haven't used any of it yet because I'm scared of generative AI," he wrote, adding: "I don't create outlines, and I don't think the exercises we did in class were really helpful. If you don't have the time or creativity to put together an article or come up with a tweet on your own, this field may not be for you."

He also said: "When a robot writes, something is always lost. Robots learn creativity from humans, and stolen ideas are not creative. Something is always lost. And second, when someone is replaced by a computer, it is a hard-working human being who has spent their life getting to the point of being replaced."

Here's another student's assessment: "Overall, I think this outline is helpful. I might have emphasized different details, but it gave me a fair account of the central points of the conversation. Given that I wasn't in the room when this discussion took place, it takes a great deal of brain power to pull the central points out of the transcript. Baim is eloquent, but a recorded conversation is not as concise as written text." She added: "Journalists are supposed to be in the field and talking to people. … AI should be used sparingly and only when it is urgently needed. Its primary use should be cleaning and coding data, and any publication should always come with a disclaimer. In my opinion, generative AI has no place in written stories."

And finally: "When you're running out of time or facing writer's block, AI-generated outlines are definitely a useful tool for synthesizing and organizing ideas. In my experience, though, they're most effective when they provide just a few bullet points here and there. Sometimes something in an AI-generated outline sparks a new idea in my head, or a sentence raises a point I hadn't thought of before. It's essential to use AI as a supplementary assistant rather than a tool that replaces everything. Personally, I figured that if I was going to have to substantially rework what I was getting from the AI chat, I might as well do it myself."

Obviously this is not an issue that will be settled anytime soon. On Sunday, I attended a welcome event for high school seniors entering Northeastern University. Not surprisingly, one of the questions we received was from a father who wanted to know about AI. My colleague Jeff Howe and I talked about the university's arrangement for Claude and about how we're trying to help students learn what is ethical and what is not. Like me, Jeff has found that his students are more skeptical than we are, especially given that one of the reasons they enrolled in journalism school was to learn how to write.

Generative AI is better than it was a year ago, and much better than it was two years ago. It will keep improving. The temptation for news organizations to use AI in aggressive and unethical ways will persist as well. I'm not a fan of what Chris Quinn is doing in Cleveland, but at least he's still using real reporters and being up front about how his news organization uses AI.

What I really don't like is the proliferation of AI-written articles based on recordings of government meetings pulled from YouTube. Ethics aside, the essence of journalism, especially local journalism, is to connect with the public and foster community while providing people with the news and information they need to govern themselves in a democracy.

Stories reported and written by AI can't build those relationships. And my students get that.


