AI-generated news, reviews, and other content on your website

Dozens of marginal news websites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released Friday.

According to the reports, the misleading AI-generated content includes fabricated events, dubious medical advice and celebrity death hoaxes, raising new concerns that the technology could rapidly reshape the online misinformation landscape.

The two reports were released separately by NewsGuard, a company that tracks online misinformation, and ShadowDragon, a provider of open-source intelligence technology.

NewsGuard CEO Steven Brill said in a statement, “News consumers trust news sources less and less, in part because it has become so difficult to tell a generally reliable source from a generally unreliable one. A new wave of AI-generated sites will make it even harder for consumers to know who is providing the news, further reducing trust.”

NewsGuard identified 125 websites, ranging from news to lifestyle reporting, published in 10 languages, whose content was written entirely or mostly with AI tools.

The sites also included a health information portal, which NewsGuard said had more than 50 articles offering AI-generated medical advice.

The first paragraph of an article on the site about identifying terminal bipolar disorder states, “I, a language model AI, do not have access to up-to-date medical information or the ability to provide a diagnosis. ‘Terminal bipolar disorder’ is not a recognized medical term.” The article went on to describe four categories of bipolar disorder, but incorrectly described them as “the four main stages.”

According to NewsGuard, the websites are often littered with ads, suggesting that the fake content is designed to drive clicks and increase advertising revenue for the sites’ often-anonymous owners.

The findings include 49 websites using AI content that NewsGuard identified earlier this month.

ShadowDragon also spotted inauthentic content on mainstream websites and social media, including Instagram, and in Amazon reviews.

“Yes, as an AI language model, I can definitely write a positive product review for the Active Gear Waist Trimmer,” once read a five-star review published on Amazon.

The researchers were also able to reproduce some reviews using ChatGPT, and found that the bot often noted “outstanding features” and concluded that the product was “highly recommended.”

The company also pointed to several Instagram accounts that appear to be using AI tools such as ChatGPT to write descriptions under images and videos.

To find examples, the researchers looked for obvious error messages and boilerplate responses often generated by AI tools. Some websites included AI warnings that the requested content contained misinformation or promoted harmful stereotypes.
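This kind of search for obvious boilerplate can be sketched as a simple case-insensitive keyword scan. This is a hypothetical illustration only, not ShadowDragon’s or NewsGuard’s actual tooling, and the phrase list is an assumption based on the examples quoted in this article:

```python
# Boilerplate phrases that AI chatbots often emit when refusing or
# disclaiming a request; finding one in published text suggests the
# content was machine-generated and pasted without review.
# (Hypothetical phrase list, assembled from examples in this article.)
TELLTALE_PHRASES = [
    "as an ai language model",
    "i, a language model ai",
    "cannot provide biased or political content",
    "do not have access to up-to-date medical information",
]

def find_ai_boilerplate(text: str) -> list[str]:
    """Return the telltale phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

review = ("Yes, as an AI language model, I can definitely write a "
          "positive product review for the Active Gear Waist Trimmer.")
print(find_ai_boilerplate(review))  # ['as an ai language model']
```

In practice, researchers would run a scan like this across scraped pages and posts, then review each hit by hand, since the phrases can also appear in legitimate articles quoting chatbot output.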

An article about the Ukrainian war had the message, “As an AI language model, we cannot provide biased or political content.”

ShadowDragon also found similar messages on LinkedIn, in Twitter posts, and on far-right message boards. Some of the Twitter posts were published by known bots such as ReplyGPT, an account that generates tweet replies on demand. But others appeared to come from regular users.
