TikTok Invests Additional $200,000 in AI Media Literacy Across Sub-Saharan Africa



Written by Modupe Gbadeyanka

TikTok announced at the third annual Sub-Saharan Africa Safe Internet Summit in Nairobi, Kenya that it will invest an additional $200,000 in artificial intelligence (AI) media literacy initiatives across sub-Saharan Africa.

The platform welcomed government officials, regulators, online safety partners, and industry leaders to the event, reinforcing its commitment to a collaborative approach to online safety.

This funding will be provided as advertising credits to help local organizations in the region expand AI media literacy.

The investment builds on the company’s initial $2 million AI Literacy Fund, established in November 2025, which supported 20 global nonprofit organizations that create content advancing public understanding of AI.

In sub-Saharan Africa, TikTok initially supported three organizations to improve digital literacy and fight misinformation.

“As AI advances rapidly, we are committed to educating our community online so they feel empowered to have responsible experiences with AI, whether they are viewers or creators.

“We partner with trusted local organizations that our communities already know and trust, because their expertise and deep local connections are critical to making our AI literacy program truly impactful,” said Valiant Ritchie, Global Head of Partnerships, Elections, and Market Integrity at TikTok.

Earlier, Ms. Tokunbo Ibrahim, TikTok’s Head of Government Relations and Public Policy for Sub-Saharan Africa, said: “As we host the third Safe Internet Summit here in Kenya, our mission is clear: to share learnings and insights, address common challenges, and collaboratively advance viable solutions to protect our citizens online.”

“By bringing together a diverse coalition of policymakers, technology innovators, and creators, we will ensure that the dialogue at this summit is inclusive and leads to a more resilient digital environment.”

The summit featured expert panels and discussions on important topics such as TikTok’s commitment to trust and safety, protecting young people online, and policy frameworks for responsible AI governance.

A key highlight of the event was showcasing how TikTok is using AI to transform the way people share their creativity and discover new passions, while ensuring the safety of their community through transparency and responsible AI practices.

The platform also shared more details about how recent advances in AI have improved automated moderation, empowered human teams with better moderation tools, and enabled the platform to moderate content faster and more consistently at scale.

More than 100 million pieces of content are uploaded to TikTok every day, and the company said these advancements, working in conjunction with its human moderation team, will ensure that violating content is removed faster and is less likely to be seen by users.

According to its latest Community Guidelines Enforcement Report, covering Q3 2025, TikTok removed more than 14 million videos across sub-Saharan Africa, 96.7% of which were proactively detected and removed using automated technology, underscoring the platform’s commitment to proactive moderation and swift action.
