Digital Bridge: AI reality check — Global privacy battle — Mission ‘Critical’


POLITICO’s weekly transatlantic tech newsletter for global technology elites and political influencers.


By MARK SCOTT

Send tips here | Subscribe for free | View in your browser

WELCOME TO DIGITAL BRIDGE. I’m Mark Scott, POLITICO’s chief technology correspondent, and after a week of vacation, I’m honestly struggling to get myself up and running this week. With that in mind, here’s the pep song that has been keeping me going as I’ve written this week’s newsletter. Warning: it’ll get stuck in your head.

This week goes out to all the privacy wonks among us:

— The ChatGPT craze is overshadowing the urgent need to deal with problems linked to artificial intelligence that are already harming society.

— There’s a fight already underway between the EU and U.S. on creating global data protection rules. We’ll tell you where to look for it.

— There’s no easy cure to Washington and Brussels’ headache over so-called critical raw materials.

GENERATIVE AI: WE ALL NEED TO CALM DOWN

HERE’S MY PUBLIC SERVICE ANNOUNCEMENT OF THE WEEK: let’s cool the hype around OpenAI’s ChatGPT, Google’s Bard and the sudden tsunami of so-called generative artificial intelligence use cases that have just popped up (looking at you, Pope in a puffer coat). I get that this technology has made the often incomprehensible world of machine learning and complex algorithms accessible to the general public. But there are more immediate concerns with mundane AI uses that need policymakers’ attention. And they need it now.

Where do I begin? Law enforcement agencies worldwide now routinely use the technology to identify potential suspects via facial recognition. That has not gone well. People’s social benefits, too, are increasingly overseen by obscure algorithms that determine who should — and should not — get government support. Again, that hasn’t gone well. The business cases for AI — everything from deciding what insurance premiums people should pay to automating increasingly complicated tasks — are equally often based on flawed data, inherent coding bias and a litany of problems related to how people view the analogue world.

None of this has anything to do with using ChatGPT to write college essays or Bard to come up with dangerous disinformation. But it’s real-world harm, happening now, that is getting lost in policymakers’ anxiety to be seen doing something (anything!) about a technology that has caught the public’s attention. What is needed instead is a focus on the underlying issues related to artificial intelligence — how to ethically create these systems; how consent is given for data to be used; how to overcome underlying systemic biases — that already fall within the purview of existing rules and regulations.

“We’ve been talking about the proliferation of automated facial analysis tools for a while now and their use in law enforcement. That discussion is now getting lost in favor of sci-fi futuristic concerns,” Timnit Gebru, the former technical co-lead of Google’s ethical artificial intelligence team, told me via email. “We need to think about the actual entities building and using these systems, how they are building them, as well as how they are being used, without talking about these systems as if they’re entities on their own.”

That’s the problem with the ChatGPT hysteria. It has given life to misconceptions about what the technology can actually do, obscuring what is really going on under the hood. These aren’t sentient Skynet-style machines. They are based on (forgive me, all readers with a technical background) reams of often conflicting datasets; overseen by flawed individuals (because we all are); and used for purposes that are either opaque or can lead to unintended consequences. It’s less science fiction; more flawed science.

But I have a potential next step. Let’s do away with the regulatory overreach and focus on the basics. Existing privacy rules — even the sectoral ones in the United States — already give officials the power to hold companies to account for how they collect mostly publicly available data. Let’s start with that, and focus on the existing real-world complications. Police use of facial recognition; greater transparency on corporate data collection; and more accountability for government AI use in social benefits would, for me, be a great start.

Such transparency, according to Gebru, who founded the Distributed Artificial Intelligence Research Institute, “would 1) show us what data is being used. Was it obtained with opt-in informed consent or stolen? 2) show us what the quality of the data is. What data sources are they using?” For her, the onus should be on companies and governments to explain themselves, not on people adapting to as-yet-unknown AI use cases. “Society should be the one building technology that helps us, rather than adjusting ourselves to ‘cope’ with whatever technology comes,” Gebru added.

LET’S GET READY TO RUMBLE (OVER PRIVACY)

WHEN I SAY INTERNATIONAL DATA TRANSFERS, I can already hear you switch off. But bear with me. There’s a battle underway between the European Union and the U.S. over who — China aside — gets to set the rules for how people’s personal information is handled when it criss-crosses borders.

Currently, Brussels has the advantage via its so-called adequacy decisions, in which EU officials unilaterally decide whether other jurisdictions meet the bloc’s threshold to receive EU data. Yet Washington is in the ascendancy with its Global Cross-Border Privacy Rules (CBPRs) Declaration, which aims to set up a rival rulebook — and already has the backing of the Asia-Pacific Economic Cooperation forum, a trade body.

What this comes down to is a simple question: who is the final arbiter of (Western) privacy standards? The EU has jealously guarded its status as the world’s de facto privacy rulemaker — something the U.S. has repeatedly butted up against. Washington wants to rebalance that with a more business-friendly approach. But without federal privacy legislation, it’s a hard sell to claim that U.S.-led international data protection rules are worth the paper they are (not) written on.

The latest skirmish will come later this month, in London, when U.S. officials (and those from like-minded countries) hold their second summit to hammer out what their global data transfer playbook should look like. Three individuals told me that the three-day meeting will take place the week of April 17. Holding it in the United Kingdom is a bold choice. Britain’s rules are the closest to the EU’s (albeit ones that British policymakers are eagerly seeking to rewrite), so the possibility of the U.K. signing up to a U.S. initiative that counters that of the EU would show the world there’s a new player in town. It would also likely massively annoy Brussels policymakers.

To be fair to Washington, the current system — in which individual countries and regions unilaterally determine where their citizens’ data can be shipped — is increasingly unworkable. More than 70 jurisdictions (everyone from the EU to Turkmenistan) now have their own “adequacy” systems, based on research from the International Association of Privacy Professionals (IAPP), a trade group. “There are so many countries with data transfer rules in place,” Joe Jones, the former deputy director for international data transfers in the U.K. government (and now an IAPP official), told me.

Not surprisingly, EU officials balk at claims that the status quo is flawed. Many of the 73 existing adequacy regimes, they argue, are based on that of the 27-country bloc — a sign, they claim, of the so-called Brussels Effect, in which the EU’s rulebook becomes the de facto global standard. Of course, it doesn’t hurt that the bloc remains in the driver’s seat of this debate. Case in point: it’s currently weighing whether to grant the U.S. adequacy status, in a decision expected by July.

Policymakers are aware, though, that something needs to be done to ensure the world’s largest economies don’t end up in data silos that stop companies and governments from sharing information across borders. Germany made international data transfers a priority of its G7 presidency last year, including a meeting of member countries’ privacy regulators in Bonn. Japan has carried that work into 2023 — with the goal of creating some form of privacy baseline for how such data is moved between countries.

For a sign of where that may go, take a look at this report from the Future of Privacy Forum, a think tank, which compares how data transfer regimes across the EU, Latin America and the Asia-Pacific region stack up — and how they could (and it’s still a could) become interoperable. “It’s likely” that a quasi-benchmark will be created, Gabriela Zanfir-Fortuna, the group’s vice president for global privacy, told me. “There’s certainly an intent from authorities to work toward that.”

**Curious about upcoming legislation on gig platforms? Be sure to take part in our exclusive roundtable on platforms at POLITICO Live’s Europe Tech Summit on April 26!**

BY THE NUMBERS

[Infographic]

KEY TO NEXT TTC SUMMIT: CRITICAL RAW MATERIALS

AHEAD OF NEXT MONTH’S EU-U.S. TRADE AND TECH COUNCIL SUMMIT, one thing is clear. A transatlantic deal on so-called critical raw materials (see last week’s newsletter) is fundamental for the meeting in northern Sweden to be viewed as a success. That potential agreement would allow European automakers and suppliers to receive electric vehicle subsidies via the U.S. Inflation Reduction Act — an effort to assuage concerns among mostly French and German politicians that their American counterparts were favoring domestic automakers over those from the 27-country bloc (heaven forbid!).

But a deal is not going to be easy. Already, U.S. lawmakers led by Senator Joe Manchin are questioning the legality of allowing American dollars to be spent on European supplies in ways that push the boundaries of what is permissible under the law. A lot of this comes down to what, technically, counts as a free trade agreement (a prerequisite for IRA cash to be spent on foreign goods) versus what does not. Washington does not currently have a free trade deal with Brussels. So expect a lot of fudging to get the critical raw materials deal over the line by the end of May.

Then the question becomes: will such a critical raw materials pact stand up to the likely legal challenges?

WONK OF THE WEEK

WE’RE BACK IN WASHINGTON THIS WEEK — days after the so-called Summit for Democracy — to shine a light on Tim Maurer, director for technology and democracy at the White House’s National Security Council. He has held that position for just over a year after joining from the U.S. Department of Homeland Security, where he focused on cybersecurity and emerging technologies.

It’s fair to say that Maurer has done the rounds within Washington. He has worked with several Beltway think tanks, including the Center for Strategic and International Studies and New America — and was a member of the Biden-Harris transition team dedicated to national security and foreign policy.

“Cyber resilience and strengthened international norms can facilitate collective response through law enforcement actions or multilateral reaction with industry,” he wrote in an analysis for the International Monetary Fund in 2021. “Governments can support these efforts by establishing entities to assist in assessing threats and coordinating responses.”

THEY SAID WHAT, NOW?

“In an increasingly volatile and interconnected world, to be a truly responsible cyber power, nations must be able to contest and compete with adversaries in cyberspace,” Jeremy Fleming, head of the U.K.’s Government Communications Headquarters, the country’s signals intelligence agency, said in an overview of the country’s offensive cybersecurity capabilities.

**On April 19 at 6:30 p.m. BST, a senior cabinet minister will headline the POLITICO Tech U.K. Launch Event. Register today for online attendance.**

WHAT I’M READING

— While artificial intelligence has existed for years, the latest iteration of the technology represents a step change that could upend trillions of dollars of advertising-based businesses, argues Joe Marchese of Human Ventures.

— Twitter’s plans to remove free academic access to its reams of social media data would devastate public interest research in ways that could harm the fundamental tenets of democracy, according to an open letter from the Coalition for Independent Technology Research.

— Reddit published its transparency report for 2022. Highlights include: 69 percent of content removals were automated; the Australian government asked for the most content to be deleted; and over 650,000 accounts were removed for evading existing bans. More here.

— Meta updated how it collects data on EU citizens after Ireland’s privacy regulator found that its existing mechanism for such collection did not comply with Europe’s privacy standards. More here.

— The Atlantic Council has a breakdown on all the digital-focused announcements that were made last week during the U.S.-led Summit for Democracy. If you missed the event, it’s worth a read.

— California’s new privacy regime will come into force on July 1. For everything you need to know about those EU-style standards, the Golden State’s government has you covered.

SUBSCRIBE to the POLITICO newsletter family: Brussels Playbook | London Playbook | London Playbook PM | Playbook Paris | POLITICO Confidential | Sunday Crunch | EU Influence | London Influence | Digital Bridge | China Direct | Berlin Bulletin | D.C. Playbook | D.C. Influence | Global Insider | All our POLITICO Pro policy morning newsletters




