Illegal trafficking of AI child sexual abuse images exposed



  • By Angus Crawford, Tony Smith
  • BBC News

Paedophiles are using artificial intelligence (AI) technology to create and sell lifelike child sexual abuse material, the BBC has revealed.

Some pay subscription fees to accounts on mainstream content sharing sites such as Patreon to access images.

Patreon said it had a “zero tolerance” policy for such images on its site.

The National Police Chiefs’ Council said it was “outrageous” for some platforms to not take “moral responsibility” despite making “huge profits”.

The creators of the abuse images use AI software called Stable Diffusion, which was intended to generate images for use in art and graphic design.

AI enables computers to perform tasks that normally require human intelligence.

Stable Diffusion allows users to describe the image they want with a word prompt, and the program then creates it.

However, the BBC has found it is being used to create lifelike images of child sexual abuse, including the rape of infants and toddlers.

A UK police online child abuse investigation team says it has already encountered such content.

Image caption: Journalist Octavia Sheepshanks says there has been a “massive” influx of AI-generated images

Freelance researcher and journalist Octavia Sheepshanks has been investigating the issue for several months. She contacted the BBC via the children’s charity the NSPCC to highlight her findings.

“Since AI-generated images became possible, there has been a massive flood… it’s not just very young girls; [paedophiles] are talking about toddlers,” she said.

Computer-generated ‘pseudo-images’ depicting child sexual abuse are treated the same as real images and are illegal to possess, publish or pass on in the UK.

Ian Critchley, child protection lead at the National Police Chiefs’ Council (NPCC), said it was wrong to argue that no one is harmed because such “synthetic” images do not depict real children.

He warned that paedophiles “can move along a criminal scale, from thought to synthetic to real-life child abuse”.

Abuse images are shared through a three-step process.

  • Paedophiles use AI software to create the images
  • They promote the images on platforms such as Pixiv, a Japanese picture-sharing website
  • Those accounts carry links directing customers to more explicit images, which people can pay to view via accounts on sites such as Patreon

Some of the image creators post on Pixiv, a popular Japanese social media platform used mainly by artists to share manga and anime.

However, because the site is hosted in Japan, where sharing sexualised cartoons and drawings of children is not illegal, the creators use it to promote their work in groups and via hashtags, which index topics using keywords.

A Pixiv spokesperson said the company was very focused on addressing the issue. On 31 May, it announced a ban on all graphic depictions of sexual content involving minors.

The company said it is actively enhancing its surveillance system and allocating significant resources to address issues related to AI development.

Sheepshanks told the BBC her research suggested users appeared to be creating child abuse images on an industrial scale.

“Because of the sheer volume, people [creators] will say, ‘We aim to produce at least 1,000 images per month,’” she said.

Comments on individual images on Pixiv make clear that those posting them have a sexual interest in children, and some users even offer to supply abuse images and videos that were not generated by AI.

Sheepshanks monitors several groups on the platform.

“Within a group of 100 members, people will share, ‘Oh, here’s a link to the real thing,’” she says.

“The worst thing was that I hadn’t even known things like that [the descriptions] existed.”

Many Pixiv accounts include links in their bios that direct people to so-called “uncensored content” on US-based content-sharing site Patreon.

Patreon is valued at about $4bn (£3.1bn) and says it has more than 250,000 creators, most of them legitimate accounts belonging to well-known celebrities, journalists and authors.

For as little as $3.85 (£3) a month, fans can support creators with a subscription that gives access to blogs, podcasts, videos and images.

However, our research found Patreon accounts selling photorealistic AI-generated child sexual abuse images, at different price levels depending on the type of material requested.

“I train my daughters on my PC,” one person wrote on their account, adding that they showed “obedience”. Another user offered “exclusive uncensored art” for $8.30 (£6.50) a month.

The BBC sent an example to Patreon, which the platform admitted was “semi-realistic and violates our policies.” The account was immediately deleted.

Patreon said it had a “zero tolerance” policy, stressing that “creators cannot fund sexually-themed content involving minors”.

The company said the rise in AI-generated harmful content on the internet was “real and disastrous”, adding that it “identified and removed an increasing amount” of this content.

The company said it had “already banned AI-generated synthetic child exploitation material” and was “very aggressive”, with dedicated teams, technology and partnerships to “keep teens safe”.

Image caption: The NPCC’s Ian Critchley said it was a “pivotal moment” for society

The AI image generator Stable Diffusion was created as a global collaboration between academics and a number of companies, led by the British firm Stability AI.

Several versions have been released, with restrictions written into the code controlling the kinds of content that can be created.

But last year, an earlier “open-source” version was made public, allowing users to remove the filters and train it to generate any image, including illegal ones.

Stability AI told the BBC it “prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM (child sexual abuse material)”.

“We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes.”

As AI continues to develop rapidly, questions are being raised about the future risks it may pose to people’s privacy, human rights, and safety.

The NPCC’s Ian Critchley also said he was concerned that the proliferation of realistic AI or “synthetic” images could slow down the process of identifying real abuse victims.

He explained: “It creates additional demand, in terms of policing and law enforcement, to identify where real children are being abused, wherever in the world, as opposed to artificial or synthetic children.”

Critchley said he believes this is a pivotal moment for society.

“We can ensure that the internet and technology provide great opportunities for young people, or the internet could become a much more toxic place,” he said.


