Abu Dhabi: As AI-generated deepfakes and bots become more sophisticated, online privacy and the protection of personal information have become an urgent global concern, especially for journalists, influencers and media professionals whose lives revolve around the digital spotlight.

The growing threat of identity theft, character assassination, and organized online abuse was at the center of high-stakes conversations on the second day of the Bridge Summit in Abu Dhabi. There, regional and international leaders in technology and media tackled the complex risks surrounding digital safety, security and trust in an AI-driven world.

Adaline Hulin, Head of Media and Information Literacy at UNESCO, highlighted the risks that many people, especially children and women, face online.

While her work has long centered on promoting safe internet practices, she argued that the responsibility for protecting online privacy and security lies primarily with technology companies, which are the only actors able to keep pace with the rapid evolution of AI.

“If the technology itself was more user-centric, instead of people having to constantly adapt to the technology, that would be very important,” she said at the summit.

“You can train people to recognize deepfakes, but you can do it faster with technology.”

Big tech companies have come under fire in recent years for failing to address harassment and misinformation, prompting a wave of legislation as governments sought to rein in the growing problem.

But some companies appear to be heeding the call. Erin Relford, a senior privacy engineer at Google, said the company is working to build privacy protections into the infrastructure layer beneath its platforms.

“We want consumers to be able to choose how much data they want to share,” she said.

“The biggest challenge is making sure we have the right people in place to create these privacy-preserving platforms.”

Relford said upcoming privacy-enhancing technologies will include several tools to help users understand how their data is monetized and aggregated.

She said Google has been working on changing its parental controls and making it easier for users to understand their protections, but acknowledged it's still difficult and more education is needed.

“Most of the power is with users. It's consumers who drive what's popular. We want to encourage consumers to use the services of organizations that protect privacy, not hand power to websites that don't protect them,” she said.

Education is key

Still, Relford insisted that education is fundamental to deploying privacy tools. She said there is only so much tech companies can do without people becoming more aware online.

“The better we educate people about privacy tools, the more we fundamentally reduce the harm we do.”

Reflecting similar sentiments, Hulin promoted the idea of incorporating online literacy into school curricula, arguing that even high-profile measures such as Australia's recent headline-grabbing social media ban for under-16s will do little to reduce risk without further education.

“Even if there is a ban, misinformation and disinformation will not change. We still need to teach children about the information ecosystem,” she said.

“Parents need to take a serious interest in the news information their children are watching.”

Assel Mussagaliyeva-Tang, founder of Singapore-based startup EDUTech Future, said the AI revolution requires closer collaboration between schools, universities and families to equip children with the skills to navigate new technologies safely and responsibly.

“We need to put guardrails in place to protect our children because they don't know how the model will meet their needs,” she said.

A UNESCO survey found that 62 percent of digital creators skip rigorous fact-checking, while a 2024 YouGov survey showed that only 27 percent of young people are confident in the use of AI in education.

Mussagaliyeva-Tang said educators need to focus on preparing and developing adults who are “world ready” by incorporating ethics, data literacy and critical thinking into the curriculum.

However, universities and the wider education system continue to lag behind in adapting to emerging technologies and equipping students with the skills needed for responsible digital activities, she said.

Similarly, she said technology companies need to be transparent and inclusive, training their models on data that represents different cultures.

While global regulations on AI remain fragmented, Dr. Luca Iando, dean and distinguished professor at the Collins School of Professional Studies at St. John's University, called on educational institutions to proactively collaborate with technology platforms to help shape educational content and reduce the potential harms of AI to children, especially as the technology continues to evolve.

He warned against over-reliance on AI among young people, saying that in the long term, educators need to focus on developing “durable human skills” in students and to transform the types of tasks and lessons they teach to accommodate the new era of AI.

He said there needs to be guidelines for using AI responsibly to help students adapt to the workplace.

Highlighting the skills gap between educational institutions and the modern workplace, Mussagaliyeva-Tang said: “Employers want experts. They don't have the time or budget to retrain workers after outdated university curricula.”

She said the rise of AI requires us to rethink the true purpose of education: developing individuals who strive to make a positive impact in a rapidly evolving world.
