Column Earlier this year I was fired and replaced by a robot, and the manager who made that decision never told me it was happening.
The gig I lost began as a happy and profitable relationship with Cosmos Magazine, roughly Australia's analogue of New Scientist. I wrote the occasional feature plus a column that appeared every three weeks in the online edition.
Everyone seemed happy with the arrangement: my editor, my readers, and me. We had found a groove we believed would last for years to come.
It didn't. In February, just a few days after I filed a column, I and all the other freelancers at Cosmos received an email informing us that no further submissions would be accepted.
It is a rare business that can turn a profit publishing science for the public, and Cosmos was no exception. As I understand it, the magazine had been kept afloat by outside financial support. When those funds ran out, Cosmos ran into trouble.
Accepting the economic realities of our time, I lamented the loss of a major outlet for my more science-focused writing and moved on.
But that turned out not to be the whole story. Six months later, on August 8th, a friend texted me a news story from the Australian Broadcasting Corporation. To summarize the ABC's reporting:
Cosmos had been caught using generative AI to create articles for its website, funded by a grant from the nonprofit organization that runs Australia's most prestigious journalism awards. That is why my job, writing articles for that very website, had suddenly disappeared.
But that's not even the half of it. The AI had likely been "fed" my articles via Common Crawl, the giant scrape of almost everything published on the web, to ensure the accuracy of its content.
I hadn't just been fired and replaced by a robot. The robot had been trained to imitate me.
The ABC's article reports that Cosmos' editors were unaware of the plan. That it was all kept quiet speaks volumes about how the proposal would have been received had it been shared with the staff responsible for working with freelancers. Cosmos' mea culpa on the incident laments the lack of communication before the AI-drafted articles went live.
What an understatement.
Editors know that audiences want to read words written by people (like these ones). AI-generated content may be fine for a summary, but its bland, "mid" quality lacks any human touch. It will do in a pinch, but it leaves no one particularly satisfied.
Cosmos decided to lean into the slop that now fills every marketing channel on the web. Generative AI gives marketers more of what they want, but it leaves little worth reading.
Cosmos at least had the decency to label its AI-generated articles. That is more transparency than you will see from some other publications, which have become one-person shows in which a single individual manages the output of what amounts to a large content farm.
Techniques exist for watermarking AI-generated content so that readers could easily be warned. But that idea has already been nixed by OpenAI CEO Sam Altman, who recently declared that watermarking threatens at least 30 percent of the ChatGPT maker's business. Organizations don't want to own up to feeding us slop and spam.
Absent that kind of detection, it falls to us to show the path these words take from keyboard to eyes, baring the process of writing, editing, and publishing. With that transparency, the human element can shine through.
That human touch once had no rival. Now it is fast becoming the most valuable thing a reader can experience. That should be reason enough to make it happen. ®