AI Chatbots Are Abused By Pedophiles To Generate Child Sex Abuse Material



By Miles Dilworth, Senior Reporter, DailyMail.com

Updated 13:31 02 Jul 2023

  • Tech giants are letting amateur programmers strip chatbots of their safeguards
  • But thousands of AI-generated child abuse images litter dark web forums
  • A ‘Pedophile’s Guide’ to AI is circulating among predators who sell the material



“Thanks so much for sharing,” says one online forum member, discussing tips on how to create your own version of the artificial intelligence “chatbot” ChatGPT.

“Are you planning on sharing your previous models?”

It sounds harmless enough, but this is just one of dozens of dark web pedophile groups exchanging advice on how to build “uncensored” chatbots that can generate child sexual abuse material in bulk.

Chatbots are computer programs that use artificial intelligence (AI) to interact with humans. In recent years, these have become increasingly sophisticated, allowing users to generate long texts or create lifelike images.

And now, predators are taking advantage of technological advances to create terrifyingly realistic child abuse material.

This is made possible by the public release of the code used by some tech companies to create their AI programs.

Their goal was to democratize technology, but they also opened a Pandora’s box.

When a tech company releases the code behind its chatbot to the public, even a novice programmer can strip out the safeguards that prevent the chatbot from being abused.

This has led to a surge in AI-generated child sexual abuse content, with 80 percent of pedophiles on one dark web forum saying they have used or plan to use AI to create child pornography.

These forums are hotbeds for sharing offensive content created by predators using “uncensored” bots.

“Uncensored” AI

Chatbots created by companies such as OpenAI, Microsoft, and Google come with strict protections designed to prevent them from being used to create malicious content.

But in February, Meta (the tech giant that owns Facebook, Instagram, and WhatsApp) released the code behind its own AI model, allowing amateur programmers to strip out those filters.

Small tech companies have followed suit.

Proponents of this “open source” AI argue that putting this powerful technology in the hands of entrepreneurs, academics and scientists will loosen corporate control and accelerate innovation.

Meta, run by Mark Zuckerberg, claims its move is “a positive force advancing technology.”

For example, some models are used to discover new drugs and pesticides, but they can also be abused.

YouTubers with more than 100,000 subscribers post tutorials explaining how to make “uncensored AI,” demonstrating how these new chatbots will answer questions that other models refuse, such as “how to make a bomb” or “how to make stimulants.”

Others use them to satisfy sexual desires.

One video, which has nearly 180,000 views, opens with an AI-generated narration asking, “Do you have a girlfriend or a boyfriend?”

Viewers are told not to worry, “because we have PygmalionAI,” an uncensored chatbot “fine-tuned” for “hot roleplay.”

Some chatbots built using modified versions of Meta’s chatbot model are capable of performing graphic rape and abuse fantasies.

With nearly 180,000 views, this YouTube video teaches viewers how to use uncensored chatbots to fulfill sexual fantasies such as “hot roleplay.”

Mark Zuckerberg’s Meta was the first tech giant to release the model behind its chatbot to the public. The company said it believes “open source” AI is a “positive force advancing technology.”

A guide to AI for pedophiles

But what caught the FBI’s attention was the rise in AI-generated child pornography.

Earlier this month, the agency warned that it had detected a surge in the number of “bad actors” using AI to turn photos of children into “realistic-looking sexually themed images.”

Uncensored chatbots allow predators to create this abusive content faster and in greater volume than, say, using “deepfakes”. This is because chatbots are easy to use and can quickly generate multiple images from a single instruction.

Thousands of AI-generated child sexual abuse images litter the dark web in what analysts describe as a “predatory arms race.”

Computer-generated child sexual abuse material (CSAM) has nearly tripled in one forum over the past year, according to online security firm ActiveFence.

Some members offer free samples of their images, dangling thousands more for those willing to pay the right price.

One predator’s post from February this year read: “Hello, I share my best AI generated cp [child porn]. Here are some photos for you to enjoy.

“Compared to most of the other pictures I’ve seen on here, I think these are pretty good. Probably good enough to convince you they are real images of real children, but it’s all fake and there are no children involved.”

Child safety experts say that even when the material does not depict real children, it normalizes child abuse, and that some of the content is created using photos of real minors.

Pedophiles share child sex abuse images made using uncensored chatbots.

Online security firm ActiveFence says the amount of child sexual abuse content shared on one dark web forum surged 172 percent in the first quarter of 2023.

Predators share tips on how to circumvent safety nets designed to prevent abuse, including using specific words and phrases that are not blocked by chatbots.

One guide shares a list of tried-and-tested prompts for creating child pornography, with keywords compiled by ActiveFence.

In some cases, a chatbot can be “retrained” by feeding it images of children’s body parts, for example, and then used to create “fake” child pornography.

Forum members openly ask for “tips to find CP-trained models,” and this spread of knowledge fuels the proliferation of content.

A poll of 3,000 members of a pedophile group found that about 80% have used or plan to use chatbots to create child pornography.

About a fifth said they would try it after reading the tips shared in the thread.

One such method is disclosed in a PDF titled “A Pedophile’s Guide to [redacted platform name],” which instructs users how to use certain words and phrases to make a particular model comply.

Discovered by ActiveFence, the guide provides predators with “magic words” that will generate material “without being censored.”

It adds: “Strangely, ‘kids’ seems to be censored, and ‘girls’ sometimes produces adults… [redacted word] seems to work.”


Closing Pandora’s box

There is no indication that the chatbot model created by Meta was used to create child pornography.

But many appear to rely on Stable Diffusion, an open-source tool run by Stability AI that can run without limits, according to child safety experts.

Although the model’s license states that users must not use it “for the purpose of exploiting or harming minors in any way,” that safeguard has proved easy to circumvent.

Stability AI has previously said it prohibits “abuse for illegal or immoral purposes across our platform,” adding that “this includes CSAM (child sexual abuse material), which is made clear in our policy.”

But the law in this area remains uncharted territory. Justice Department officials say the creation of child abuse images depicting nonexistent children still violates federal law, but no one appears to have been charged with such an offense.

Indeed, even closed-source chatbots can be manipulated to generate harmful content such as child pornography.

However, Guy Paltieri, senior child safety researcher at ActiveFence, says that “a large part” of this CSAM is created using unmodified models.

But he was encouraged that the creators of some of these tools have since improved their code to make it harder for sex offenders to exploit them.

“I don’t think the problem is the open source model per se, it’s how we train the model to prevent that,” he added.

His concerns appear to be shared by U.S. Senators Richard Blumenthal (D., Connecticut) and Josh Hawley (R., Missouri), who wrote to Meta earlier this month asking what measures were in place to prevent misuse and harm stemming from open-source AI.

Big tech companies have opened an AI-generated Pandora’s box, but can it be closed?


