The Meta blunder regarding AI-driven imagery

The entire tech industry seems to be in turmoil: on one side, groups of office nerds are building ever more capable models to generate text, images, and videos, while on the other, different groups are figuring out how to let users know that such content wasn't actually made by humans.

Of course, one could argue that the latter is the ethically correct thing to do, but when both groups of nerds pledge allegiance to the same big tech company, it makes you wonder about the inherent absurdity of the whole exercise. The most recent example comes from Meta, which first tagged images as “Made with AI” and has now changed the tag to “AI info.”

A rose by any other name smells just as sweet, but it's unclear whether changing a few words on a tag makes the idea any less ridiculous. Meta changed its tags after photographers complained that the company was labeling real photos that had merely been touched up with basic editing tools.

What did Meta do and then revert?

User feedback and the confusion caused by the “Made with AI” tag show how such rushed responses rarely produce good results, and they raise a question: how much AI is enough for a platform to consign a good photo to the non-creative bin?

If this sounds absurd, what Meta (owner of Facebook, Instagram, and WhatsApp) has done to fix it seems even more so: the company will now tag images with “AI info” across all of its apps. Why the change? Because the previous tag didn't make it clear whether an image was created by AI or merely edited with AI-powered tools.

“Like other companies across industries, we found that labels based on these indicators didn't always align with people's expectations and didn't provide enough context. For example, content that included small AI-based changes, like retouching tools, carried industry-standard indicators and was labeled 'Made with AI,'” the company said in a blog post.

How far can stupidity go?

Sounds good? Well, not entirely. Meta hasn't said much about the underlying technology it uses to detect and label AI in photos, beyond the fact that it relies on metadata standards such as C2PA and IPTC, which carry signals from AI tools. So if you use Adobe's Generative Fill to retouch your wedding album, Meta might still tag it.
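To make the mechanism concrete, here is a minimal sketch, not Meta's actual pipeline, of how a platform could look for AI-provenance hints in an image's embedded XMP/IPTC metadata. The two vocabulary terms below come from the IPTC digital source type NewsCodes; the crude text-matching approach, the choice of terms to flag, and the file name wedding_photo.jpg are illustrative assumptions.

```python
# A minimal sketch (not Meta's actual detection system) of scanning an image's
# embedded XMP packet for IPTC digital-source-type values that hint at AI use.
# Assumes the XMP metadata is embedded as plain XML text, as JPEG and PNG
# exporters commonly do.
import re
from pathlib import Path

# Example terms from the IPTC digital source type vocabulary; which terms a
# platform actually keys on is an assumption here.
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia": "generated by AI",
    "compositeWithTrainedAlgorithmicMedia": "edited with AI tools",
}

def find_ai_hints(image_path: str) -> list[str]:
    """Scan an image's raw bytes for XMP/IPTC digital-source-type markers."""
    data = Path(image_path).read_bytes()

    # Extract the XMP packet if one is embedded (delimited by the standard
    # xpacket processing instructions); otherwise scan the whole file.
    match = re.search(rb"<\?xpacket begin=.*?\?>(.*?)<\?xpacket end=.*?\?>",
                      data, re.DOTALL)
    haystack = match.group(1) if match else data

    hints = []
    for term, meaning in AI_SOURCE_TYPES.items():
        if term.encode() in haystack:
            hints.append(f"{term}: {meaning}")
    return hints

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    for hint in find_ai_hints("wedding_photo.jpg"):
        print(hint)
```

In practice, C2PA goes further and wraps provenance in cryptographically signed manifests, which is part of why a simple text match like this is only a rough approximation of what a platform would actually do.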

So, for now, the labels don't tell you much. Meta simply hopes the new wording will help people understand that tagged images aren't necessarily created by AI, while the company works with the rest of the industry to improve the process.

This naturally raises the question: why build and roll out a system that isn't foolproof? Experts agree that the new tags don't solve much, least of all the failure to detect photos that are entirely AI-generated. What's more, they don't tell users to what extent AI was used in an image.

Ethical considerations or regulatory concerns?

For our part, we believe that such efforts by the big tech companies mostly betray their fear of the regulation that may soon be imposed on AI, especially on its ethical and creative aspects.

You may have already read that the European Commission has found that Meta's pay-or-consent offer on Facebook and Instagram in Europe does not comply with the Digital Markets Act: the binary choice Meta offers “forces users to consent to the combination of their personal data and fails to provide them with a less personalized but comparable version of Meta's social networks,” the Commission said in a press release.

Failure to comply could result in fines of up to 10% of the company's annual global turnover, rising to 20% for repeat offenses, which would be very costly for the tech giant. More importantly, it could force Meta to abandon a business model that requires users to accept tracking-based advertising as the price of admission.

Already under fire for anti-competitive practices, Meta will want to stay off regulators' radar when it comes to its haphazard use of AI. That is why its sudden move to change image tags is telling: it shows how a first-mover advantage in technology can be a double-edged sword, and how the lack of industry standards can quickly devolve into a meme fest. For now, it is little more than an embarrassment.

The only way out of this mess is for the industry to sit down and draw up guidelines that punish copycats without being unfair to creators. This labeling misadventure also raises the question of whether the makers of AI tools should be required to warn their users up front about the dangers of using the technology to hide a lack of creative thinking.


