Photonic neural networks offer a potentially revolutionary path to faster and more energy-efficient machine learning, but simulating and training them remains a significant hurdle, especially when scaling up to complex architectures. Together with their colleagues, Tzamn Melendez Carmona of Politecnico di Torino, Federico Marchesin of Ghent University and imec, and Marco P. Abrate of University College London are tackling this challenge with LuxIA, a new framework for photonic neural network training. LuxIA employs a new technique called slicing, which greatly reduces the computational demands of simulating the behavior of light within these networks and allows researchers to train significantly larger and more complex systems than previously possible. Demonstrating clear speed and scalability improvements across benchmark datasets, including those used for image recognition, this research represents an important step toward realizing the full potential of photonic AI hardware and accelerating innovation in the field.
Photonic neural networks tackle AI challenges
Photonic neural networks (PNNs) are a promising approach to overcoming the limitations of traditional electronic computing in artificial intelligence, especially deep learning. By using light instead of electrons, PNNs offer the potential for faster and more energy-efficient computation, but simulating and training these networks poses considerable challenges. Researchers are actively investigating different PNN architectures and comparing their strengths and weaknesses to optimize performance. To work around the memory constraints of simulating large-scale PNNs, scientists have adopted techniques such as Docker containerization and leveraged tools such as Photontorch and transfer-matrix-method solvers.
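The transfer-matrix method mentioned above describes a photonic circuit by multiplying together the matrices of its individual elements. As a minimal illustration (not the paper's implementation), the sketch below cascades a few 2×2 Mach-Zehnder interferometer stages using one common parameterization of the MZI transfer matrix; conventions for the phases vary between tools.

```python
import numpy as np

def mzi_transfer(theta: float, phi: float) -> np.ndarray:
    """2x2 unitary transfer matrix of a Mach-Zehnder interferometer,
    with internal phase theta and external phase phi (one common
    parameterization; sign and phase conventions differ across tools)."""
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    return 1j * np.exp(1j * theta / 2) * np.array([
        [np.exp(1j * phi) * s, c],
        [np.exp(1j * phi) * c, -s],
    ])

# Cascading elements multiplies their transfer matrices; light
# traverses stage 0 first, so the product is taken in reverse order.
rng = np.random.default_rng(0)
stages = [mzi_transfer(*rng.uniform(0, 2 * np.pi, 2)) for _ in range(5)]
total = np.linalg.multi_dot(stages[::-1])

# A cascade of unitaries is itself unitary (lossless circuit).
assert np.allclose(total.conj().T @ total, np.eye(2))
```

Because each stage is unitary, the cascaded transfer matrix stays unitary, which is what makes such meshes suitable for lossless linear processing.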
Optimizing Photonic Neural Network Simulations with Slicing
Researchers have developed slicing, a new transfer-matrix approach that overcomes limitations in simulating and training large-scale photonic neural networks (PNNs). Current simulation tools struggle to cope with the computational demands of transfer-matrix calculations, resulting in high memory usage and long processing times. The team integrated slicing into a unified simulation and training framework named LuxIA, significantly reducing both memory requirements and execution time. The core of this work lies in optimizing the computation of transfer matrices by dividing large matrices into smaller, more manageable slices.
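The general idea of partitioning a large matrix computation into slices can be sketched as follows. This is an illustrative NumPy example of slice-wise application of a transfer matrix, not LuxIA's actual algorithm, whose slicing scheme and memory layout may differ.

```python
import numpy as np

def sliced_matmul(u: np.ndarray, x: np.ndarray, slice_size: int = 256) -> np.ndarray:
    """Apply a large transfer matrix u to inputs x one row-slice at a
    time, so that only a slice-sized intermediate is processed at once.
    Illustrative sketch only; LuxIA's actual slicing scheme may differ."""
    out = np.empty((u.shape[0], x.shape[1]), dtype=np.result_type(u, x))
    for start in range(0, u.shape[0], slice_size):
        stop = min(start + slice_size, u.shape[0])
        out[start:stop] = u[start:stop] @ x  # one slice at a time
    return out

rng = np.random.default_rng(1)
u = rng.standard_normal((1000, 1000))
x = rng.standard_normal((1000, 8))

# Slice-wise computation matches the full product exactly.
assert np.allclose(sliced_matmul(u, x), u @ x)
```

Processing slices sequentially trades a single large intermediate for many small ones, which keeps peak memory bounded by the slice size rather than the full matrix dimension.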
This technique enables efficient backpropagation, a key algorithm for training neural networks, within the photonic domain. The scientists modeled the behavior of the unitary transfer matrix using a photonic unit mesh, a circuit built from programmable optical elements such as Mach-Zehnder interferometers, and employed singular value decomposition to factorize the weight matrix. Extensive experiments on standard datasets such as MNIST, Digits, and Olivetti Faces demonstrate that LuxIA outperforms existing tools in both speed and scalability. This advance will enable the exploration and optimization of larger and more complex PNN designs, driving broader adoption of photonics technology and accelerating innovation in AI hardware.
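The singular value decomposition mentioned above is the standard recipe for mapping an arbitrary weight matrix onto photonic hardware: W = U·diag(s)·Vh, where the unitary factors U and Vh can be realized as programmable MZI meshes and the diagonal as per-channel gain or attenuation. A minimal sketch with NumPy (illustrative, not the paper's code):

```python
import numpy as np

# Factorize an arbitrary weight matrix into two unitaries and a
# diagonal: W = U @ diag(s) @ Vh. U and Vh are implementable as
# lossless MZI meshes; diag(s) as per-channel attenuation/gain.
rng = np.random.default_rng(0)
w = rng.standard_normal((6, 6))

u, s, vh = np.linalg.svd(w)

# Both unitary factors satisfy U^H U = I, hence are mesh-realizable.
assert np.allclose(u.conj().T @ u, np.eye(6))
assert np.allclose(vh @ vh.conj().T, np.eye(6))

# Recomposing the factors recovers the original weights.
assert np.allclose(u @ np.diag(s) @ vh, w)
```

Since any real or complex weight matrix admits such a decomposition, this factorization lets trained network weights be programmed onto unitary photonic meshes without loss of expressivity.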
Slicing improves photonic neural network simulations
Scientists have developed slicing, a new method that significantly improves the simulation and training of large-scale photonic neural networks (PNNs). Recognizing the computational demands of calculating transfer matrices in complex PNNs, the team integrated slicing into a unified framework called LuxIA designed for scalable PNN research. This method addresses memory usage and execution time limitations and allows exploration and optimization of larger and more complex PNN architectures. Experiments conducted across various photonic architectures and standard datasets, such as MNIST, Digits, and Olivetti Faces, demonstrate that LuxIA consistently outperforms existing tools in both speed and scalability.
This advance makes it practical to explore and refine increasingly complex PNN designs that were previously limited by computational resources; measurements confirm that slicing significantly reduces both memory and processing time during training. The framework proved effective across a variety of visual recognition tasks while offering greater flexibility and ease of use for researchers. Future work will focus on extending it to even larger datasets and architectures, and the team expects these capabilities to foster wider adoption of photonics in artificial intelligence hardware and to accelerate innovation in this rapidly evolving field.
👉 More information
🗞 LuxIA: A lightweight unitary matrix-based framework built on iterative algorithms for photonic neural network training
🧠 arXiv: https://arxiv.org/abs/2512.22264
