
Today, Apple published a selection of recordings from the 2024 Human-Centered Machine Learning (HCML) workshop on its Machine Learning Research blog, highlighting work on responsible AI development. Almost three hours of content are now available.
Originally held in August 2024, the event brought together Apple researchers and academic experts to explore topics ranging from model interpretability to accessibility, as well as strategies for predicting and preventing large-scale negative outcomes of AI development.
Here is the complete list of videos now available:
- “Better UI Engineering in Collaboration with Screen-Aware Foundation Models” by Kevin Moran (University of Central Florida)
- “Understanding the UI” by Jeff Nichols (Apple)
- “AI-Resilient Interfaces” by Elena Glassman (Harvard University)
- “Small but Powerful: Human-Centered Research to Support Efficient On-Device ML” by Mary Beth Kery (Apple)
- “Speech Technology for People with Speech Disabilities” by Colin Lea and Dianna Yee (Apple)
- “AI-Powered AR Accessibility” by Jon Froehlich (University of Washington)
- “Customizing Vision-Based Hand Gestures from a Single Demonstration” by Cori Park (Apple)
- “Creating Superhearing: Augmenting Human Auditory Perception with AI” by Shyam Gollakota (University of Washington)
Apple doubles down on responsible AI development
The event took place almost a year ago, but the talks remain highly insightful. That is because they focus primarily on the human and responsible aspects of machine learning development, rather than on frontier technology itself.

The blog post also highlights Apple's focus on responsible AI development, including the set of principles that guides the development of its AI tools:
- Empower users with intelligent tools: Identify areas where AI can be used responsibly to create tools that address specific user needs, and respect how users choose to use these tools to accomplish their goals.
- Represent our users: Build deeply personal products with the goal of representing users around the globe authentically, and work continuously to avoid perpetuating stereotypes and systemic biases across AI tools and models.
- Design with care: Take precautions at every stage of the process, including design, model training, feature development, and quality evaluation, to identify how AI tools may be misused or lead to potential harm, and continuously and proactively improve AI tools with the help of user feedback.
- Protect privacy: Protect users' privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute, and do not use users' private personal data or user interactions when training foundation models.
Do you work in machine learning development? How often is responsible development a major part of the conversation? Let us know in the comments.
FTC: We use income earning auto affiliate links. More.

