AI of the Week: OpenAI considers allowing AI porn

Keeping up with an industry as rapidly changing as AI is a challenge. So until AI can do it for you, here's a quick recap of recent stories in the world of machine learning, as well as notable research and experiments that we couldn't cover on our own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we're upping the cadence of our semiregular AI column from twice a month (or so) to weekly, so keep an eye out for more editions to come.

In the AI space this week, OpenAI revealed that it is exploring ways to "responsibly" generate AI porn. Yes, you read that right. OpenAI announced the possible NSFW policy in a document meant to peel back the curtain on, and gather feedback about, its guidelines for how and where the company allows explicit images and text in its AI products. OpenAI said it aims to start a conversation.

"We want to give people maximum control as long as they don't violate the law or other people's rights," Joanne Jang, a member of OpenAI's product team, told NPR. "There are creative cases in which content involving sexuality or nudity is important to our users," she added.

This isn't the first time OpenAI has signaled a willingness to wade into controversial territory. Earlier this year, Mira Murati, the company's chief technology officer, told The Wall Street Journal that she "wasn't sure" whether OpenAI would eventually allow its video generation tool, Sora, to be used to create adult content.

So what should we think about this?

There is a future in which OpenAI opens the door to AI-generated porn and it all turns out fine. I think Jang is right that there are legitimate forms of adult artistic expression, and forms of that expression that can be created with AI-powered tools.

But I don't know if I can trust OpenAI, or any generative AI vendor for that matter, to do it right.

For example, consider the creator rights angle. OpenAI's models are trained on vast amounts of public web content, some of which is arguably pornographic in nature. But OpenAI didn't license all of that content, and until relatively recently it didn't even let creators opt out of training (and even then, only for certain forms of training).

It's already hard to make a living from adult content, and creators will face even more competition if OpenAI makes AI-generated porn mainstream. Competition built, no less, on the backs of those creators' own work.

Another issue on my mind is the fallibility of current safeguards. OpenAI and its competitors have spent years refining their filtering and moderation tools. Yet users constantly discover workarounds that let them abuse those companies' AI models, apps, and platforms.

Just this January, Microsoft was forced to make changes to Designer, its image creation tool that uses OpenAI models, after users found a way to create nude images of Taylor Swift. And on the text generation side, it's easy to find chatbots built on supposedly "safe" models, like Anthropic's Claude 3, that readily spew erotica.

AI is already enabling new forms of sexual abuse. Elementary and high school students are using AI-powered apps to "undress" photos of their classmates without their consent. And a 2021 poll conducted in the UK, New Zealand and Australia found that 14% of respondents aged 16 to 64 had been victimized by deepfake imagery.

New laws in the US and elsewhere aim to counter this. But the jury is out on whether the justice system, which already struggles to stamp out most sex crimes, can regulate an industry changing as rapidly as AI.

Frankly, it's hard to imagine any risk-free approach to AI-generated pornography. Perhaps OpenAI will reconsider its stance. Or perhaps, against the odds, it will figure out a better way. Whatever the outcome, it seems we'll find out sooner rather than later.

Here are some other notable AI stories from the past few days.

  • Apple's AI plans: Apple CEO Tim Cook revealed some tidbits about the company's plans to advance AI during an earnings call with investors last week. Sarah has the full story.
  • Enterprise GenAI: Drew Houston and Dylan Field, CEOs of Dropbox and Figma, respectively, invested in Lamini, a startup building generative AI technology along with a generative AI hosting platform for enterprise organizations.
  • AI for customer service: Airbnb is launching a new feature that lets hosts choose AI-powered suggestions to answer guest questions, such as sending guests a property checkout guide.
  • Microsoft limits AI use: Microsoft reaffirmed its ban on US police departments using generative AI for facial recognition. It also barred law enforcement agencies worldwide from applying facial recognition technology to body cameras and dashcams.
  • Money for the cloud: Alternative cloud providers like CoreWeave are raising hundreds of millions of dollars as the generative AI boom drives demand for low-cost hardware to train and run models.
  • RAG has its limits: Hallucinations are a big problem for companies looking to integrate generative AI into their operations. Some vendors claim they can eliminate them using a technique called RAG. But those claims, it turns out, are greatly exaggerated.
  • Vogels' meeting summarizer: Amazon CTO Werner Vogels has open sourced a meeting summarization app called Distill. As you might imagine, the app relies heavily on Amazon products and services.
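For readers unfamiliar with RAG (retrieval-augmented generation), mentioned in the bullet above: the core idea is simply to fetch relevant documents and stuff them into the model's prompt so its answer is grounded in real text. This is a minimal toy sketch of that idea, not any vendor's actual implementation; the keyword-overlap retriever and the `docs` examples are illustrative stand-ins (real systems use vector embeddings and an actual LLM call), which is part of why RAG alone can't guarantee hallucination-free output.

```python
import re

def tokenize(text):
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents):
    """Return the document sharing the most word tokens with the query.

    A stand-in for the retrieval step; production systems rank by
    embedding similarity instead of raw keyword overlap.
    """
    query_tokens = tokenize(query)
    return max(documents, key=lambda doc: len(query_tokens & tokenize(doc)))

def build_prompt(query, documents):
    """Augment the user's question with retrieved context (the 'A' and 'G' in RAG)."""
    context = retrieve(query, documents)
    return (
        f"Context: {context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )

# Illustrative corpus; in practice this would be a company's document store.
docs = [
    "Distill is a meeting summarization app open sourced by Werner Vogels.",
    "CoreWeave is an alternative cloud provider focused on GPU capacity.",
]

print(build_prompt("What is Distill?", docs))
```

Note the failure mode implied above: if retrieval surfaces the wrong document, or the model ignores the context, the answer can still be wrong. Grounding reduces hallucination; it doesn't eliminate it.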
