Apple Workshop on Privacy-Preserving Machine Learning 2025

Machine Learning


Apple believes privacy is a fundamental human right. As AI experiences become increasingly personal and a part of people's daily lives, it is important that privacy-preserving technologies advance in parallel with advances in AI capabilities.

Apple's fundamental research has consistently pushed the state of the art in using differential privacy in machine learning, and earlier this year we hosted the Workshop on Privacy-Preserving Machine Learning (PPML). This two-day hybrid event brought together Apple and members of the broader research community to discuss the state of the art in PPML, focusing on four key areas: private learning and statistics, attacks and security, foundations of differential privacy, and foundation models and privacy.
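For readers new to the area, differential privacy, the technique at the center of the workshop, bounds how much any single person's data can influence a released statistic, typically by adding calibrated noise. The short Python sketch below is purely illustrative and is not drawn from any workshop paper; the function name, example data, and epsilon value are invented here to show the classic Laplace mechanism for an epsilon-differentially-private count.

```python
import numpy as np

def laplace_count(values, predicate, epsilon):
    """Release a differentially private count of `values` matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so adding Laplace noise with
    scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: count users over 40 with a privacy budget of epsilon = 0.5.
ages = [23, 45, 31, 67, 52, 29, 41]
print(laplace_count(ages, lambda age: age > 40, epsilon=0.5))
```

Smaller values of epsilon add more noise and give a stronger privacy guarantee at the cost of accuracy.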

Presentations and discussions on these topics explored the intersection of the rapidly evolving landscapes of privacy, security, and artificial intelligence. Workshop participants discussed both the theoretical foundations and the practical challenges of building privacy-preserving AI systems. By addressing privacy and security concerns from both theoretical and practical perspectives, the workshop aimed to foster innovation while protecting user privacy.

In this post, we share recordings of selected talks and a list of the publications discussed at the workshop.

Apple Workshop on Privacy-Preserving Machine Learning 2025 Videos

Published works presented at the workshop

AirGapAgent: Protecting Privacy-Conscious Conversational Agents by Eugene Bagdasarian (Google Research), Peter Kairouz (Google Research), Ren Yi (Google Research), Marco Gruteser (Google Research), Sahra Ghalebikesabi (Google DeepMind), Sewoong Oh (Google Research), Borja Balle (Google DeepMind), and Daniel Ramage (Google Research)

A Generalized Binary Tree Mechanism for Differentially Private Approximation of All-Pair Distances by Michael Dinitz (Johns Hopkins University), Chenglin Fan (Seoul National University), Jingcheng Liu (Nanjing University), Jalaj Upadhyay (Rutgers University), and Zongrui Zou (Nanjing University)

Differentially Private Synthetic Data via Foundation Model APIs 1: Images by Zinan Lin (Microsoft Research), Sivakanth Gopi (Microsoft Research), Janardhan Kulkarni (Microsoft Research), Harsha Nori (Microsoft Research), and Sergey Yekhanin (Microsoft Research)

Differentially Private Synthetic Data via Foundation Model APIs 2: Text by Chulin Xie (University of Illinois Urbana-Champaign), Zinan Lin (Microsoft Research), Arturs Backurs (Microsoft Research), Sivakanth Gopi (Microsoft Research), Da Yu (Sun Yat-sen University), Huseyin Inan (Microsoft Research), Harsha Nori (Microsoft Research), Huishuai Zhang (Microsoft Research), Yin Tat Lee (Microsoft Research), Bo Li (University of Illinois Urbana-Champaign), and Sergey Yekhanin (Microsoft Research)

Efficient and Near-Optimal Noise Generation for Streaming Differential Privacy by Krishnamurthy (DJ) Dvijotham (Google DeepMind), H. Brendan McMahan (Google Research), Krishna Pillutla (IIT Madras), Thomas Steinke (Google DeepMind), and Abhradeep Thakurta (Google DeepMind)

Elephants Do Not Forget: Differential Privacy with State Continuity for Privacy Budget by Jiankai Jin (University of Melbourne), Chitchanok Chuengsatiansup (University of Melbourne), Toby Murray (University of Melbourne), Benjamin I. P. Rubinstein (University of Melbourne), Yuval Yarom (Ruhr University Bochum), and Olga Ohrimenko (University of Melbourne)

Improved Differentially Private Continual Observation Using Group Algebra by Monika Henzinger (ISTA Austria) and Jalaj Upadhyay (Rutgers University)

Instance-Optimal Private Density Estimation in the Wasserstein Distance by Vitaly Feldman, Audra McMillan, Satchit Sivakumar (Boston University), and Kunal Talwar

Leveraging Model Guidance to Extract Training Data from Personalized Diffusion Models by Xiaoyu Wu (Carnegie Mellon University), Jiaru Zhang (Purdue University), and Steven Wu (Carnegie Mellon University)

Local Pan-Privacy for Federated Analytics by Vitaly Feldman, Audra McMillan, Guy N. Rothblum, and Kunal Talwar

Nearly Tight Black-Box Auditing of Differentially Private Machine Learning by Meenatchi Sundaram Muthu Selva Annamalai (University College London) and Emiliano De Cristofaro (University of California, Riverside)

On the Price of Differential Privacy for Hierarchical Clustering by Chengyuan Deng (Rutgers University), Jie Gao (Rutgers University), Jalaj Upadhyay (Rutgers University), Chen Wang (Texas A&M University), and Samson Zhou (Texas A&M University)

Operationalizing Contextual Integrity in Privacy-Conscious Assistants by Sahra Ghalebikesabi (Google DeepMind), Eugene Bagdasaryan (Google Research), Ren Yi (Google Research), Itay Yona (Google DeepMind), Ilia Shumailov (Google DeepMind), Anesh Pappu (Google DeepMind), Chongyang Shi (Google DeepMind), Laura Weidinger (Google DeepMind), Robert Stanforth (Google DeepMind), Leonard Berrada (Google DeepMind), Pushmeet Kohli (Google DeepMind), Po-Sen Huang (Google DeepMind), and Borja Balle (Google DeepMind)

PREAMBLE: Private and Efficient Aggregation via Block-Sparse Vectors by Hilal Asi, Vitaly Feldman, Hannah Keller (Aarhus University; work done while at Apple), Guy N. Rothblum, and Kunal Talwar

Privacy Amplification by Random Allocation by Vitaly Feldman (Apple) and Moshe Shenfeld (Hebrew University of Jerusalem)

Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss by Jason Altschuler (MIT) and Kunal Talwar

Private Estimation of Single Parameters by John Duchi (Stanford University), Hilal Asi, and Kunal Talwar

Scalable Private Search with Wally by Hilal Asi, Fabian Boemer, Nicholas Genise, Muhammad Haris Mughees, Tabitha Ogilvie, Rehan Rishi, Guy N. Rothblum, Kunal Talwar, Karl Tarbe, Ruiyu Zhu, and Marco Zuliani

Shifted Composition I: Harnack and Reverse Transport Inequalities by Jason Altschuler (University of Pennsylvania) and Sinho Chewi (IAS)

Shifted Interpolation for Differential Privacy by Jinho Bok (University of Pennsylvania), Weijie Su (University of Pennsylvania), and Jason Altschuler (University of Pennsylvania)

Tractable Agreement Protocols by Natalie Collina (University of Pennsylvania), Surbhi Goel (University of Pennsylvania), Varun Gupta (University of Pennsylvania), and Aaron Roth (University of Pennsylvania)

Tukey Depth Mechanisms for Practical Private Mean Estimation by Gavin Brown (University of Washington) and Lydia Zakynthinou (University of California, Berkeley)

User Inference Attacks on Large Language Models by Nikhil Kandpal (University of Toronto and Vector Institute), Krishna Pillutla (Google), Alina Oprea (Google and Northeastern University), Peter Kairouz (Google), Christopher A. Choquette-Choo (Google), and Zheng Xu (Google)

Universally Instance-Optimal Mechanisms for Private Statistical Estimation by Hilal Asi, John C. Duchi (Stanford University), Saminul Haque (Stanford University), Zewei Li (Northwestern University), and Feng Ruan (Northwestern University)

“What Do You Want from Theory Alone?” Experimenting with Tight Auditing of Differentially Private Synthetic Data Generation by Meenatchi Sundaram Muthu Selva Annamalai (University College London), Georgi Ganev (University College London and Hazy), and Emiliano De Cristofaro (University of California, Riverside)

Acknowledgments

Many people contributed to the workshop, including Hilal Asi, Anthony Tibetta, Vitaly Feldman, Haris Mughees, Martin Pelikan, Rehan Rishi, Guy Rothblum, and Kunal Talwar.


