When Hoodline, a company that runs hyperlocal news sites in cities across the country, first launched, it seemed promising. Its approach of combining data with in-depth local reporting caught the attention of my colleagues at Nieman Lab in 2015, when the site was focused on San Francisco. We wrote about it again in 2018, when it reinvented itself as a mostly automated local news outlet. As Shan Wang wrote for us at the time, the people who ran Hoodline stressed that “robots will never replace the work of human journalists.”
Apparently, that has changed. In recent weeks, Bloomberg, CNN, and the San Francisco Chronicle have published stories about Hoodline's heavy use of AI in generating its articles. “Some of the stories Hoodline creates look a lot like stories from other outlets,” Chase DiFeliciantonio wrote in the Chronicle, but Hoodline's CEO argues that they do much the same thing as human news aggregators. If you visit Hoodline's website now, you'll see a little “AI” badge next to almost every byline. Research for this article found only two active bylines that at least appear to belong to real people, judging by the presence of social media pages and personal websites: Cheryl L. Guerrero and Steven Bracco. Dozens of others are AI-generated.
Hoodline isn't the only publisher using AI to generate content. CNET, Sports Illustrated, and many others have used fake writers to pad their sites. Notably, Hoodline began adding its AI badges only after another San Francisco publication, the Gazetteer SF, asked about the company's use of AI.
What really catches my attention about Hoodline, though, is the names.
Hoodline San Francisco's front page is filled with articles by “Leticia Ruiz,” “Nina Singh-Hudson,” “Eileen Vargas,” “Eric Tanaka,” and “Tony Ng.” It's a diverse slate of names befitting San Francisco; too bad none of these writers actually exist. Until recently, according to Bloomberg, these bylines were accompanied by a photo and a bio. (Singh-Hudson's bio reads, “Longtime writer and Bay Area native.” The code that generated that bio was probably written nearby.)
As Maggie Harrison Dupré reported in Futurism, both Sports Illustrated and The Street published names and photos of non-existent people of color. At Sports Illustrated, for example, the name “Sora Tanaka” appeared next to a photo from an AI headshot marketplace, under a listing for “a cheerful young adult Asian female with long brown hair and brown eyes.”
Zach Chen, CEO of Impress3, the company that owns both Hoodline and the local news site SFist, told me in an email that Hoodline's AI personas are randomly generated by AI, and that their regions and cities are also randomly assigned. Any impression that they are people of color is unintentional, he told Bloomberg. And Chen stressed to me that the AI doesn't work alone. “We have a team of dozens of (human) journalistic researchers who are involved in information gathering, fact-checking, source identification, background research, etc.,” he wrote. These researchers suggest stories, which the AI then analyzes “for viability and suitability for local readers” before a human editor gives final approval.
But after I spent some time reading many of these stories, questions began to arise.
I scoured each city’s Hoodline site, taking note of the writers’ names and clicking into articles to see whether they were written by an AI. Each city has between three and six bylined personas, but I quickly began to wonder about the randomness of the names. On the Hoodline site for Boston, where 13.9% of residents reported being of Irish descent in the 2022 census, “Will O’Brien” and “Sam Cavanaugh” take the place of “Leticia Ruiz” and “Eric Tanaka.” Memphis, Tennessee, has six AI personas: “Alicia Freeman,” “Andre Washington,” “Bob Norris,” “Sophia Garcia Jones,” “Caleb Powell,” and “Elena Nguyen.” On the Washington, D.C. site, one of the personas, “Mike Johnson,” shares a name with the Speaker of the House.
None of the AI writers seem to have a particular beat, but they’re all keen to cover what might best be described as “police successes.” Hoodline’s sites are packed with articles about both arrests and police PR events, and Hoodline’s AI tool appears to rely heavily on press releases from local police departments.
Take Hoodline Dallas, for example, which published a story about a sergeant's retirement on June 2. That same day, the same persona, “Nate Simmons,” published a story about an officer being praised for his work in an “unspecified incident.” (The incident was actually quite specific; details were included in an image posted by the Tarrant County District Attorney, but the AI tools Hoodline uses apparently couldn't parse them.) Many stories simply describe police pursuing or arresting suspects, often accompanied by mugshots culled from law enforcement social media posts or press releases. There's very little original reporting, and in perhaps the saddest example of Hoodline aggregation, a recent story about a restaurant closing in San Francisco linked to a Hoodline article from 2014, when the site first launched.
Even if humans are involved before the AI begins its work, it doesn’t appear that anyone carefully checks the work it produces. In multiple articles, I found mistakes and what appeared to be obvious hallucinations. A recent article about a community event the Boston Police Department held with a senior citizens group noted that the department’s press release “highlighted ongoing efforts to foster a sense of partnership between law enforcement and residents,” but the four-sentence press release the article linked to made no mention of any such ongoing efforts. An article about an arrest described the suspect’s heart as “beating hard to the sound of a fugitive’s drum, but this drum was a ghost gun, with no traceable past and a loaded magazine of bullets being the immediate concern.” Literary qualities aside, the AI appears to have latched onto a small detail in the press release, that the suspect was “breathing heavily,” and run with it.
Chen did not respond to questions about the personas' apparent beats or the differing racial makeup of the names in each city. Setting aside the factual errors for a moment, what I didn't understand was why Hoodline decided to give its AI personas human-sounding names at all. Why not put a byline like “Hoodline San Francisco” on each article? Chen told me that's because they envision the personas eventually evolving into short-form news videos. “Like AI news anchors,” he wrote. “These are inherently suited to having personas… It doesn't make sense for an AI news anchor to be named 'Hoodline San Francisco.'”
There's something particularly insidious about this. The news industry is already overwhelmingly white, but you'd never know it from the AI bylines and videos Chen envisions. The very existence of these AI personas, simulated people of color reporting the news, distorts the truth, creating an illusion of greater diversity in the press corps than actually exists. And AI-generated articles under fake bylines will only become more common as more media companies sign deals with companies like OpenAI, creating an ouroboros of easily aggregated content with no original reporting behind it: just press releases.
Despite these issues, Hoodline is at least somewhat transparent about its use of AI: it puts a badge next to each AI byline and carries an AI disclaimer on its site, though the disclaimer's description of that use (AI “helping in the background” with a “human-centered approach”) seems plainly wrong. Still, that's better than Sports Illustrated or CNET managed. Chen also says the publisher hopes that using AI to write stories will generate revenue to hire human journalists who can handle more complex stories, but I'm not hopeful.
When Chen told me that Hoodline's editorial process involves human journalists, I asked if there was a masthead (standard practice at most publications) where I could find the names of some of those people, rather than clicking into articles to see which bylines don't include AI badges, as I did to find Guerrero and Bracco. “We don't have such a masthead at the moment,” Chen wrote. “But it would be a good idea to build one in the near future.”
A collage of Hoodline's AI-bylined articles, created by Nieman Lab editor and real person Laura Hazard Owen.