Koo’s AI-powered content moderation tackles nudity, porn and fake news

Indian microblogging site Koo is integrating artificial intelligence (AI) and machine learning to optimize content moderation. As fake news and AI-generated media proliferate on social media, tech companies are working to build effective mechanisms against the dangers posed by misinformation, impersonation, pornography, and violent graphic content.

Koo has announced a number of AI-powered features intended to make content moderation more efficient and the platform a healthier space for all parties.

The social network’s platform looks similar to Twitter’s, but the company claims it stands out thanks to its commitment to making Koo a safe and fair space.

Indianexpress.com joined Koo’s team for an exclusive demonstration of the latest updates to its content moderation features.

Nudity and porn: As soon as a user posts a nude photo to their Koo account, they receive a notification: “This Koo has been removed for graphic, obscene or sexual content.”

The company says the process is fully automated and begins within seconds of a photo being posted. When an image is removed, the user receives another notification explaining why, and is invited to appeal via a rectification form if they believe the removal was in error. These notifications are displayed in the user’s preferred language.
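The flow described above — classify on upload, remove above a confidence threshold, and notify in the user’s preferred language with an appeal path — can be sketched roughly as follows. This is a hypothetical illustration: the function names, threshold value, and notification dictionary are assumptions, not Koo’s actual implementation.

```python
# Hypothetical sketch of an automated image-moderation pipeline like the one
# described: score the image, remove it above a threshold, and return a
# localized notification with an appeal route. All names are illustrative.

NSFW_THRESHOLD = 0.9  # assumed confidence cutoff, not Koo's actual value

NOTIFICATIONS = {
    "en": "This Koo has been removed for graphic, obscene or sexual content.",
    # localized strings for other preferred languages would live here
}

def moderate_upload(nsfw_score: float, user_lang: str = "en") -> dict:
    """Decide whether to remove an upload given a classifier score in [0, 1].

    The score would come from an NSFW image classifier; it is passed in
    directly here to keep the sketch self-contained.
    """
    if nsfw_score >= NSFW_THRESHOLD:
        return {
            "removed": True,
            "notification": NOTIFICATIONS.get(user_lang, NOTIFICATIONS["en"]),
            "appeal": "rectification-form",  # user can contest the removal
        }
    return {"removed": False}
```

In practice the classifier score, localization, and appeal workflow would each be separate services; collapsing them into one function here is purely for readability.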

“Pornography is illegal in India and in many countries around the world. Anyone posting nude photos or pornography from an Indian IP is breaking the law, and the platform should remove such content. That is the intention behind introducing these features. We are a platform of thoughts and opinions, and we want people to participate and interact with each other in a healthy way,” said Rajneesh Jaswal, Head of Legal and Policy at Koo.

Similarly, if a user posts a video containing nudity or pornography, it is removed in about five seconds, depending on the video’s length and processing time, after which Koo sends the user a notification. If a user sets a nude photo as their display image, Koo removes it using a similar mechanism.

Koo said the company’s nudity algorithm identifies actual nudity and pornography while excluding works of art.

Rahul Satyakam, senior manager of operations, compared Koo with Twitter during the demonstration. Satyakam posted similar content on the Elon Musk-owned platform, which took no action against it. He also showed a lewd post he had shared on Twitter a few days earlier and how it continues to draw attention today.

Posts containing violence: If a user shares an image containing gore or graphic violence, Koo allows it to be posted but adds a layer of caution. The image is blurred and overlaid with a warning: “This content may not be suitable for all users.” Users are still free to view it and comment.

Because some of these images may relate to news developments, Koo takes a more nuanced approach here than the automatic removal applied to obscene content.

Impersonation: The platform uses machine learning to detect instances of impersonation, and the company claims detection is improving. While detection is supported by AI, most actions are taken manually by human moderators. As part of the demonstration, Satyakam created an account with Shah Rukh Khan’s name and image.

The platform’s impersonation dashboard, accessible only to company staff, surfaces key information about offending users and the VIPs being impersonated. One of its features, soft delete, removes all impersonation-worthy details, such as names and display images. “Even if VIPs are not users on our platform, if someone tries to impersonate them, we will make sure we take the necessary steps,” Jaswal said.
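The “soft delete” idea described above — stripping the impersonation-worthy fields while leaving the account record intact — can be sketched as below. The field names are assumptions for illustration, not Koo’s actual profile schema.

```python
# Hypothetical sketch of a soft delete: clear the fields an impersonator
# relies on (name, display image) without deleting the account itself.
# Field names are illustrative, not Koo's real schema.

IMPERSONATION_FIELDS = ("name", "display_image")

def soft_delete_profile(profile: dict) -> dict:
    """Return a copy of the profile with impersonation-worthy fields cleared."""
    cleaned = dict(profile)  # leave the original record untouched
    for field in IMPERSONATION_FIELDS:
        if field in cleaned:
            cleaned[field] = None
    return cleaned
```

A soft delete like this preserves the account and its history for moderators, which matters when, as the article notes, most impersonation actions are ultimately reviewed by humans.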

After the impersonating content is removed, the platform notifies the user that their profile details have been removed: “Your profile details have been removed due to repeated violations of Koo community guidelines or legal requirements.”

Fake news: The platform runs a detection cycle every 30 minutes and claims it can act on fake news almost instantly. When users share fake news, the dashboard detects it and provides information to trace the source, giving moderators enough context to act immediately. A label is displayed above the post stating “Unverified or Fake Information: Reviewed by Fact Checkers”. Users can also request a review if they believe their content was wrongly flagged.
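The periodic scan described above — a cycle that sweeps recent posts, labels suspected fake news, and hands flagged items to moderators — might look something like this in outline. The detector itself is not public, so it is represented here by a caller-supplied function; everything else is an assumption for illustration.

```python
# Hypothetical sketch of one pass of a periodic (e.g. every-30-minute)
# fake-news detection cycle. `is_fake` stands in for the platform's actual
# detector, which is not public; post structure is illustrative.

FAKE_NEWS_LABEL = "Unverified or Fake Information: Reviewed by Fact Checkers"

def run_detection_cycle(posts: list[dict], is_fake) -> list:
    """Label suspected fake news in place and return the flagged post IDs.

    The returned IDs would feed a moderator queue with source-tracing
    information attached.
    """
    flagged = []
    for post in posts:
        if is_fake(post["text"]):
            post["label"] = FAKE_NEWS_LABEL  # shown above the post to users
            flagged.append(post["id"])
    return flagged
```

A real deployment would run this from a scheduler rather than a loop, and a user-requested review would clear the label if fact checkers overturn the flag.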

Harmful comments and spam: As part of the demo, a user posted an abusive comment on a post. Koo identifies such comments and hides them; they become visible only when the “Hidden comments” button is clicked. This feature works almost instantly. The company said outright banning comments would limit freedom of expression, so this is Koo’s way of still letting people express their opinions.
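The hide-rather-than-delete behavior described above can be sketched as a simple partition of comments behind a “Hidden comments” toggle. As before, the classifier is a stand-in and the function shape is an assumption.

```python
# Hypothetical sketch of hiding abusive comments behind a toggle instead of
# deleting them. `is_abusive` stands in for Koo's classifier.

def render_comments(comments: list[str], is_abusive, show_hidden: bool = False):
    """Split comments into visible and hidden; reveal hidden ones on demand.

    Returns (comments_to_display, hidden_count) — the count would drive the
    "Hidden comments" button in the UI.
    """
    visible, hidden = [], []
    for comment in comments:
        (hidden if is_abusive(comment) else visible).append(comment)
    if show_hidden:  # user clicked the "Hidden comments" button
        visible = visible + hidden
    return visible, len(hidden)
```

Keeping hidden comments retrievable, rather than deleting them, matches the article’s point that the goal is to limit harm without banning expression outright.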

Beyond moderation, Koo has also integrated ChatGPT for some Yellow Tick (verified) users. The AI chatbot lets users create posts on any topic from a prompt.



