- A woman asked me to tell a story about the mastectomy tattoo she got after she had cancer.
- After a few introductory emails, I realized that the texts and photos she sent were AI-generated.
- I caught on early, but experts say that as the technology improves, more journalists could be fooled.
“Seeing the scar on my chest in the mirror always reminded me of what I had lost,” Kimberly Shaw, 30, told me in a moving email.
She reached out to me through Help a Reporter Out (HARO), a service journalists use to find sources. I was writing an article about hiding acne scars with tattoos and was using the site to find people to interview.
Then I read Shaw’s account of her breast cancer diagnosis: how she knew a mastectomy was her only viable route to recovery, how emotionally painful it was, how she worked carefully with a tattoo artist to find the right design, and how it helped her heal.
“I felt like I was taking back control of something that cancer had taken away from me and taking back my body,” she told me.
Julia Pugachevsky
Shaw’s experience may not have had anything to do with my acne story, but it tapped into the same sense of empowerment and control I wanted to explore. I replied, thinking it might lead to a story.
But after days of back-and-forth, something about Shaw’s emails started to make me feel a little sick. When my boyfriend mused that she might be fake, he suggested running the emails through an AI text checker.
The results were clear: Shaw’s emails were machine-generated. I had been interviewing an AI the whole time.
Seeming human enough
In approaching Kimberly Shaw, I followed my standard journalistic procedure: I start with basic questions over email to see whether a subject’s backstory fits what I’m working on, then ask to move to a phone interview.
My questions were simple: How old was she? When did she have cancer? What did the tattoo look like? What was the collaboration with the artist like? Would she mind sharing a photo?
Shaw answered my questions clearly and concisely. She told me she’d had cancer two years earlier and gotten the tattoo six months after going into remission. It incorporated both a chest-reconstruction design and an intricate lotus.
All she left out was her age and the photo I’d asked for. But she made a request: in exchange for participating, she wanted me to mention her role as the founder of several websites, a Dictionary.com knockoff and some online gaming pages, and ideally link to them.
Emotionally open yet untraceable
The request wasn’t all that unusual. Many HARO sources are entrepreneurs who want a business plug in exchange for an interview, often a link to a personal website, LinkedIn profile, or social handle. I usually decline to include links that aren’t relevant to the story, but the ask itself didn’t strike me as strange.
What was odd was that I couldn’t find her online. The companies she mentioned were too obscure to turn up in searches. Her email didn’t appear in Google results and was a Proton account, meaning it was encrypted. Her phone number had an 898 area code, which, as far as I could tell, doesn’t exist.
She wasn’t on LinkedIn. Her website looked, well, embarrassing: a poorly designed spam page. Still, I didn’t want to judge how anyone made their money, especially a breast cancer survivor.
She then sent me a picture of herself.
“Kimberly Shaw” never did send me a photo of her tattoo (or even its original design). But she did send this headshot.
Kimberly Shaw/AI
Something was off, but I couldn’t say exactly what. Her hair? Her teeth?
I messaged my editor to tell him I was pausing the story.
Connecting the pixels
Despite the red flags, I felt guilty about suspecting a cancer survivor, especially one who had been so vulnerable with me.
But one night my boyfriend and I were discussing his new favorite topic of conversation: AI and how it will change all of our jobs. I joked that he was overly preoccupied with ChatGPT, then paused. The person I’d been talking to did sound a little robotic.
“Would you like to scan the text?” he said as he cleared the plates.
I hurried to my bedroom, googled “AI text checker,” and landed on Writer, a free tool that estimates what percentage of a passage was written by a human. It’s designed to help people who use AI writing tools make their content sound more human. A score of 100% indicates the text was likely written by a real person; a low score, say 40%, indicates the AI did most of the work.
Then I pasted in part of Shaw’s answers.
“No way,” my partner said, walking around the room.
To cross-check the tool, I tested some of my own writing for Insider.
Increasingly convinced I’d been duped, I opened the photo again and zoomed in. Once you know what you’re looking for, the glitches are everywhere: an off-center ear piercing, a phantom second eyebrow.
On closer inspection, “Kimberly’s” skin texture was inconsistent, with what appeared to be a faint second eyebrow (left) and oddly placed earrings (right).
Julia Pugachevsky/Kimberly Shaw
Based on a Medium post about deepfake images, I speculated that the photo was created with StyleGAN, a neural network that generates faces of people who don’t exist by learning from photos of real ones.
I don’t know how many real faces went into making “Kimberly Shaw.” But I’d bet none of the people whose likenesses she borrows know they’ve been used to promote junk websites and a cancer hoax.
Why would someone try to scam me?
It was clear that whoever was behind “Kimberly Shaw” wanted to leverage Insider’s platform, and its strong Google rankings, to boost the profile of their spammy websites.
Google has strict policies against spam links, making it difficult for scammers to reach a wide audience on their own. But a link from Insider, which averages about 85 million visits a month, may be worth the trouble, even if it means fooling people who are paid to ask a lot of questions.
“It must have been such a huge win for them to target a journalist with such a skill set,” said Jeff Hancock, a professor of communication at Stanford University who studies how people use technology to deceive.
Hancock said spammy gaming sites like the ones I was asked to link to often collect and sell user data. To play the “free” games online, users must first enter their name, email address, and phone number, information that then gets passed along to scammers.
As shocking as it was that someone would use cancer to slip past my defenses, Hancock said he believed it was a deliberate choice. Faking cancer is so taboo that most people would never suspect it, he said, and that is exactly what scammers rely on. If anything, he added, it’s an indicator that the person on the other end really is a criminal.
A low-cost learning curve for criminals
Scammers have powerful tools at their disposal. Merve Hickok, the founder of AIethicist.org and a lecturer in data-science ethics at the University of Michigan, said AI technology now makes it “very easy” for anyone to create convincing AI-generated text, images, video, and even voices.
That’s dangerous for journalists. Trust in the media is already near record lows: in a 2022 Gallup poll, only 7% of American respondents said they had “a great deal” of trust in the news.
My usual protocol of vetting sources and requesting phone interviews would likely have exposed the faker even without my boyfriend’s suggestion. But scammers may soon be able to use voice- and video-generation tools to pass those tests, Hickok said.
Hancock said fraudsters will likely adapt as new technology emerges: “It’s a one-time thing for victims, but a learning process for criminals.”
That makes it harder for reporters to spot lies, which can lead to legal liability, reputational damage, and the unintentional spread of fake news.
“It undermines trust in sources, what you see and what you hear,” Hickok said. “It undermines trust in journalism and institutions and ultimately in democracy.”
I’ll be more careful from now on, but the threat will only grow
After confirming the source was fake, I immediately contacted HARO, which said the account had already been banned two days earlier. A spokesperson said the issue is a “high priority” and that the service uses both technology and human reviewers to identify fake and harmful content.
I put together a document about my experience for the entire Insider newsroom. We’ve sharpened our protocols and are more vigilant than ever about vetting sources in advance, insisting on phone interviews, and running email correspondence through text checkers.
I caught the deception before I’d typed a line of the story, but I was still disappointed in myself. Deep down I knew that first email was full of clichés, and it nagged at me even as I responded.
All I can do now, Hickok said, is stay abreast of advances in AI and hope that tools for identifying AI fakes develop as quickly as the fakes themselves.
Hickok and I also agreed that governments and AI companies should reckon with how risky it is to bring these technologies to market before considering the consequences. Many tech leaders have already spoken out on the issue.
Next time, “Kimberly Shaw” might wave at me on a video call or make a compelling, impassioned case over the phone. I’m getting ready for the day she does.