…That is, if you are a researcher in physics or astrophysics and you are working on machine learning.
At the end of the summer, from September 23 to 25, we will meet in Valencia, Spain to discuss the latest developments in the application of deep learning to the optimization of fundamental science experiments. This is the fourth workshop of the MODE collaboration, which focuses on an emerging area of deep learning applications: the co-design and end-to-end optimization of experiments, and the tools to make that happen.

If you’ve only been doing basic science research for a decade or so, you may not have witnessed how dramatically the field has changed since two serendipitous breakthroughs in computer science and particle physics in 2012. The former was the advent of deep learning, which for the first time dramatically surpassed all previous approaches to image classification. The latter was the discovery of the Higgs boson, made possible in part by machine learning technology. From that limited perspective, you might think that machine learning has always been a tool that researchers have eagerly adopted. But that wasn’t the case. A few researchers were working with neural networks forty years ago, but their work was considered strange and outside the scope of basic research, and it was marginalized and underappreciated.
Today, the opposite is true. And for good reason: we cannot ignore the power these algorithms have put in our hands. But with power always comes the responsibility to put it to good use. And we have tried to do so in good faith. In fact, since 2012 we have been using deep learning for all kinds of supervised learning tasks we face in data analysis: classification of physical processes, of high-energy jets at particle accelerators, and of stars and galaxies in astronomical data; identification of the flavor of neutrino interactions in neutrino telescopes; regression of parameters of interest in multidimensional data; and everything in between. More recently, semi-supervised and unsupervised learning have started to be leveraged for more complex applications, such as generative models for fast simulation and anomaly detection. Each of these topics has since become its own subfield, with dedicated workshops on its application to fundamental science research problems.
But using deep learning only for data-analysis tasks is very limiting. It's like owning a dune buggy but only using it to commute to work every morning. Our problem is that we are daunted by the sheer scale of today's experiments. It is now possible, at least in principle, to study systematically and continuously the interplay of detection hardware and reconstruction software, as a function of the thousands of parameters that determine how an experiment is built and how information is extracted from its output; but that idea sits well outside our comfort zone. Yet that is precisely where innovation is possible.
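To make the idea concrete, here is a toy sketch, not drawn from any real experiment, of what such a joint optimization could look like under the illustrative assumption that both the detector response and the reconstruction step are differentiable: a hardware design parameter and a software calibration parameter are updated together by gradient descent on a single figure of merit. The names, the smearing model, and the numbers are all hypothetical.

```python
# Toy sketch of jointly optimizing a detector design parameter ("thickness")
# and a reconstruction parameter ("calibration") by gradient descent.
# The smearing model and the objective are invented for illustration only.
import jax
import jax.numpy as jnp

def simulate(thickness, true_energy, key):
    # Hypothetical differentiable surrogate of the detector response:
    # resolution improves with thickness, at the price of a growing bias.
    resolution = 0.5 / jnp.sqrt(thickness)
    bias = 0.02 * thickness
    noise = resolution * jax.random.normal(key, true_energy.shape)
    return true_energy * (1.0 + bias) + noise

def reconstruct(calibration, measured):
    # A one-parameter "reconstruction": a learned calibration factor.
    return calibration * measured

def objective(params, true_energy, key):
    measured = simulate(params["thickness"], true_energy, key)
    estimate = reconstruct(params["calibration"], measured)
    return jnp.mean((estimate - true_energy) ** 2)

params = {"thickness": 4.0, "calibration": 1.0}
true_energy = jnp.linspace(1.0, 10.0, 256)
key = jax.random.PRNGKey(0)
grad_fn = jax.grad(objective)

for step in range(200):
    key, subkey = jax.random.split(key)
    grads = grad_fn(params, true_energy, subkey)
    # Hardware and software parameters are updated in the same loop,
    # against the same physics-motivated figure of merit.
    params = {k: params[k] - 0.01 * grads[k] for k in params}
```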
Hardware and software co-design is happening more and more in market-driven applications. Of course, there are large resources there, and large profits to be made. In basic science there are large gains to be made as well, but they are not money flowing into our pockets. Rather, the reward is far more intangible, and it correlates much more loosely with our ability to deploy resources to pursue it. This is the challenge we now face, and the reason for the existence of the MODE Collaboration, an effort towards a more effective and widespread use of deep learning in basic science.
The MODE workshop brings together experts in computer science and physics, as well as mathematicians. In fact, one of the keynote speakers is Professor Andrea Walther, a well-known applied mathematician who co-authored, with Andreas Griewank, the 2008 book “Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation”. This book is the main reference for the engine behind most deep learning algorithms, a method capable of formulating and solving optimization problems with millions of free parameters.
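For readers unfamiliar with it, algorithmic (automatic) differentiation lets a program compute exact gradients of an arbitrary numerical function without deriving them by hand. A minimal sketch using the JAX library, on a toy least-squares loss chosen purely for illustration, shows the idea:

```python
# Minimal illustration of algorithmic differentiation with JAX.
# The model, loss, and data below are toy examples, not workshop material.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # A linear model with a mean-squared-error loss.
    prediction = jnp.dot(x, params["w"]) + params["b"]
    return jnp.mean((prediction - y) ** 2)

# jax.grad builds the exact gradient function automatically,
# for any number of parameters, by tracing the computation.
grad_loss = jax.grad(loss)

params = {"w": jnp.zeros(3), "b": 0.0}
x = jnp.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
y = jnp.array([1.0, 2.0])

print(grad_loss(params, x, y))  # gradients with the same structure as params
```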
The other two keynote speakers come from the fields of computer science and physics: Danilo Rezende, Head of Generative Modeling at DeepMind, whose talk will cover the latest developments from the research frontier, and Professor Riccardo Zecchina, a theoretical physicist at Bocconi University in Milan, who works at the intersection of physics and computer science.
The workshop will feature sessions on different areas of scientific research, each of which will consider the latest developments and use cases of optimization tools. Thus, in addition to sessions on applications in high energy physics, we will also discuss astroparticle and neutrino physics, nuclear physics, and even muography and medical applications. The sessions dedicated to computer science developments will cover the technical aspects of the tools used for these optimization problems.
We will also have a poster session where young participants can present their recent research. The best posters will receive a prize and a certificate. Prospective participants are encouraged to submit their poster abstracts. We hope to meet many young researchers in Valencia: these workshops are a great place to enrich your research network and establish the connections needed to prepare for larger projects.
