Apple to present AI research and demos at NeurIPS 2025

Machine Learning


Today, Apple announced the research it will present at the 39th Annual Conference on Neural Information Processing Systems (NeurIPS) in San Diego. Here are the details:

This year’s NeurIPS will be held in San Diego from December 2nd to 7th, with a satellite event in Mexico City from November 30th to December 5th.

At the San Diego event, Apple plans to present several papers, including “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,” which drew criticism from industry researchers earlier this year.

In addition to its research presentations, Apple is sponsoring several affinity groups, including Women in Machine Learning, LatinX in AI, and Queer in AI, whose events Apple employees will also attend.

Apple’s presentations cover a range of machine learning topics, including privacy, the strengths and limitations of reasoning models, and new approaches to generative AI. Apple has published the complete list of research it will present at NeurIPS, some of which 9to5Mac has covered in the past.

At the event, Apple will also have a booth (#1103) where attendees can see live demos of several of the company’s machine learning initiatives, including:

  • MLX – An open-source array framework designed for Apple silicon that enables fast, flexible machine learning and scientific computing on Apple hardware. The framework is optimized for Apple silicon’s unified memory architecture and can leverage both the CPU and GPU. Visitors can try two MLX demos:
    • Image generation using a large diffusion model on an iPad Pro with the M5 chip
    • Distributed computing with MLX and Apple silicon: visitors can explore text and code generation in Xcode using a trillion-parameter model running on a cluster of four Mac Studios, each powered by an M3 Ultra chip with 512 GB of unified memory
  • FastVLM – A family of mobile-friendly vision-language models built with MLX. The models use a hybrid CNN/Transformer architecture for vision encoding, designed specifically to handle high-resolution images while striking a strong balance between accuracy and speed. Attendees can try a real-time visual Q&A demo on an iPhone 17 Pro Max.

More details about Apple’s presence at NeurIPS are available on the company’s Machine Learning Research website.



