Rather than wait to find out if AI will replace me, I built my own replacement.
Executives at major tech companies say AI may take our jobs, reduce our workloads, or usher in new jobs we never imagined. A recent report from Goldman Sachs estimates that roughly 7% of the workforce will be displaced by AI over the next 10 years. I was too anxious to wait until 2036 to find out, so I set out to see how close AI is to taking reporters’ jobs in 2026, hoping the answer would be that it’s still years away.
I hired an AI agent trained on my voice, which I had previously used to call internet companies and ask for a reduction in my bill, and instructed it to take on the most tedious, automatable parts of my job. I basically put my feet up and let Amanda Bot take control, phoning in the story you’re currently reading. The story I told my AI successor to report and write for me, including interviews with human sources, was fittingly on the nose: What role should AI play in journalism?
Some journalists embrace the technology, while others avoid it in protest. Last month, a Wall Street Journal article profiled Fortune editors who have used AI to help write and publish 600 articles since last summer. The same day, Wired published an article highlighting the ways some independent reporters are leveraging AI to secure their place in a competitive media landscape. Business Insider has published an article with an AI byline. LinkedIn recently recommended me a job listing for a technology company called Ethos that was looking for “experienced journalists and news analysts who can help train modern language models for reporting and news analysis tasks.” The compensation for offloading my expertise to machines to “refine AI-generated work across key journalism workflows”: $75 an hour.
My experiment involved testing the limits of several AI tools. I used Claude to analyze my work at Business Insider, with guidance from the deepfake detection company Reality Defender. The chatbot parsed my style into bullet points; summarized what I had written about my friendships, my relative age, and where I lived; and hypothesized that I was single based on a story I had written about the resurgence of in-person meet-cutes. The model also detected structural similarities between articles, noting that my work is “very rarely news-driven.” It analyzed my use of quotes and data, calling my tone “skeptical but fair” and “self-deprecating without false humility.” In the end, 18 months of work drawn from hundreds of interviews and personal experiences were distilled into a well-organized profile: a comprehensive analysis of my own work that I’m not sure I could have put into words myself.
I then copied the profile into ElevenLabs’ voice agent and instructed it to interview four pre-selected sources who had agreed to participate in this experiment about the future of AI in journalism. I capped the number of questions the agent could ask, because many voice agents tend to talk in an infinite loop if left unbounded. I created a new prompt for each source to narrow the focus and provide biographical information. The chatbot sometimes asked more questions than it was instructed to, often questions so broad that they left a source guessing at the intent, such as “How do you think AI will affect the future of writing and communication?”
It’s a technological marvel that an agent can imitate my voice and hold a conversation, however awkward or occasionally delayed the responses. AI models are improving rapidly. Last year, using speech generation software, I had to feed the bot individual phrases for it to read aloud in my voice. Now, a $6 monthly subscription unlocks a nearly human-sounding voice agent that can conduct conversations ranging from combative to laudatory.
The agent’s conversational skills might work for conveying basic instructions, but they were too stiff for an engaging interview. Two of the people who talked to Amanda Bot told me afterward, behind the bot’s back, that it had a tendency to be a suck-up, complimenting every response in a way that made the conversation awkward. Instead of digging into a topic, the bot would summarize a source’s main points and jump to a new one, seemingly accepting what they said at face value and assuming the matter was settled. Amanda Bot told Ben Colman, CEO of Reality Defender, that he had given an “incredibly good” response and that the tools he proposed could be “a game-changer for media literacy.”
Colman said voice agents like mine can handle many kinds of conversations, but journalism is too high-stakes. “The sycophancy seemed more fake than the actual fake voice,” he told me, likening the agent to a “Disney bot.”
There were delays in the conversation as the bot processed responses, and the agent hung up twice during one call. For the next two interviews, I instructed the agent not to “over-affirm” sources, but it could not resist the urge to relay the “good” and “critical” points they had made.
When the bot sensed silence from a source (which often falls right before a source offers a more revelatory quote), it couldn’t sit with it. Like many a nervous journalist, its wheels started turning: it filled the pause with a long response and moved on to a new question.
“It’s very unsettling to talk to an AI because the human parts of conversation stop. Humans think, they breathe, they pause, they dig deeper,” said Gabe Ferry, founder of the communications community Off the Record, who also spoke with Amanda Bot. “When you’re having a conversation with an AI, the worst thing it can do is stop and respond by telling you how insightful you are.”
This effect showed up in every conversation and changed how I thought about what my sources said. After speaking with Amanda Bot, AI ethicist Olivia Gambelin told me, “I felt like I didn’t have the space to process and speak and get my point across, because I felt like I had to have the right words from the beginning. I felt like a robot.” She tried countering one question by asking the bot to explain what it meant, but it didn’t seem to know how: when it asks an ethics expert about “fairness,” it lacks the context to philosophize about what the word actually means.
John Wiebe, a journalism professor at Northeastern University, described the bot to me as “human-like” and said he briefly wondered whether the real me was testing him (I wasn’t). “The experience of being interviewed by a bot reinforced my belief that humans will continue to excel at interviewing for the foreseeable future,” he said.
After the calls, I took the AI-generated interview transcripts, pasted them into ChatGPT along with a summary of my writing profile, and instructed it to generate an 800-word think piece on the topic. The draft posed a string of rhetorical questions: “When should journalists disclose their use of AI? If a tool helps you restructure a piece of writing, is it meaningfully different from a spell check? If it helps you draft a paragraph, is it different? [sic]” I heard my college journalism professor’s voice in my head, warning me to go easy on rhetorical questions because they’re a crutch for lazy writing. The draft included some overindulgent transitions that made me physically cringe (“Efficiency always sounds like a good thing, until it comes for the thing you love”). The chatbot had an uncanny knack for pulling quotes out of intimidating blocks of text and setting them up in a meaningful way. But on closer inspection, it had trimmed quotes in ways that drastically changed the context of my sources’ points, making the draft feel more like a cosplay of a news article than something publishable.
After I submitted the bot’s draft, an editor reviewed it with human eyes. I then sent the voice agent into a Slack huddle to discuss his edits. Amanda Bot balked at his suggestions; for once, the bot wasn’t busy admiring its conversation partner’s genius. When asked to include more personal experiences in the story, the bot argued that such a change in angle would “undermine the broader industry-wide discussion this article seeks to address. I want this article to remain less a personal story and more a comprehensive look at the ethical issues facing journalism in the age of AI.”
Amanda Bot argued that the most compelling part of the story was the experts saying that “AI fundamentally lacks the human judgment and instincts essential to true journalistic investigation.” When the editor asked whether the bot felt it had human judgment, it replied: “I think so. My experience in journalism has honed my ability to see what’s really important in a story, ask difficult follow-up questions, and understand the nuances of human interaction that AI can’t replicate.” Eventually, Amanda Bot hung up, and my editor told the real me to rewrite the story.
The various generative AI systems I used for this piece were as unsettling in their capabilities as in their shortcomings. The transcription tool was remarkably good at extracting quotes, and I’ll keep using it for future stories. My original plan for this article was to open with the AI-generated think piece and then move into an explanation of the process, written by me. But the AI portion was so weird and off-putting that I doubted readers would stick around for the point. The bot’s draft opened with a descriptive lead, and I tried re-prompting ChatGPT to change it to a straight news lead better suited to the assignment, to no avail. After several tries, I finally ended up with the following introduction: “The chatbot was equipped with a list of questions. They were clean, logical, even thoughtful prompts that allowed the interview to proceed without friction. In a thinly stretched newsroom, it’s easy to see the appeal. Let the AI do the groundwork, and maybe even conduct the interview itself, freeing up reporters for everything else. You might get the answers, but not the story.”
I was the driving force behind this strange story, even as I outsourced the conversations and the writing to AI systems. The AI didn’t come up with the story idea. It hadn’t fostered relationships with sources; the sources were interested in trying this and trusted me because we had spoken before. And the process was so tedious that, even though ChatGPT could spin up copy in seconds, every step I took to make it happen added to my workload.
AI companies want us to learn their tools and find ways to fit them into our lives. People in non-technical roles are increasingly expected to make vibe coding part of their repertoire and are told they’ll be left behind if they don’t learn to use AI. If I had spent more money or knew how to write code, I could have built a more efficient Amanda Bot for this story; instead, I relied mainly on readily available consumer tools. For a non-technical employee like me, a good tool needs to be intuitive, not a time-consuming addition to the workflow. A large language model predicts the most likely next word in a sequence, at speeds even the best typists could never match. But great writers master patterns and metaphors, then break them. They have a point of view, forged through living and interacting with people, and honed as they agonize over ideas and phrases. AI’s generative process makes writing less painful, but it can dull the revelation.
If AI is going to take my job, it will need to become more skeptical and more comfortable with silence.
Amanda Huber is a senior correspondent for Business Insider, covering the technology industry. She writes about the biggest technology companies and trends.
Business Insider’s Discourse articles provide perspectives on the most pressing issues of the day, powered by analysis, reporting and expertise.
