At the bottom of the figure, we can see that L2O utilizes a set of training problem instances from the target optimization problem class to acquire knowledge. This knowledge helps identify algorithms (configurations) that perform well for unseen problem instances. Credit: Science China Press
Optimization algorithms are critical to machine learning and artificial intelligence (AI) in general. For a long time, it has been widely believed that designing or configuring optimization algorithms relies heavily on human expertise and must be customized to each specific problem.
However, with the increasing demand for AI and the emergence of new complex problems, this manual design paradigm faces significant challenges. If machines could automatically or semi-automatically design optimization algorithms, these challenges would be greatly alleviated and the horizons of AI substantially expanded.
In recent years, researchers have been exploring ways to automate the algorithm composition and design process by learning from a set of training problem instances. These efforts, called Learn to Optimize (L2O), take a large number of optimization problem instances as input and attempt to train, within a configuration space (or code space), an optimization algorithm that generalizes to unseen instances.
Results across fields such as SAT solving, machine learning, computer vision, and adversarial example generation show that automatically or semi-automatically designed optimization algorithms can perform as well as, or better than, manually designed ones. This suggests that the field of optimization algorithm design may be at the dawn of "replacing humans with machines."
This article reviews three main approaches to L2O: training a performance prediction model, training a single solver, and training a portfolio of solvers. We also discuss theoretical guarantees for the training process, successful application cases, and L2O generalization issues. Finally, this article points to promising future research directions.
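The first of these approaches, training a performance prediction model, can be illustrated with a minimal sketch. The example below is a hypothetical illustration, not the paper's method: it fits a least-squares model per algorithm that maps instance features to runtime, then selects the algorithm with the lowest predicted runtime for a new instance. The feature dimensions, synthetic runtimes, and the `select_algorithm` helper are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: learn a per-algorithm performance model from
# training instances, then pick the predicted-best algorithm for an
# unseen instance (the algorithm-selection flavor of L2O).

rng = np.random.default_rng(0)

n_train, n_features = 200, 5
X = rng.normal(size=(n_train, n_features))  # instance features
# Synthetic "true" runtimes of two algorithms on each training instance.
runtime_a = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 5.0
runtime_b = X @ np.array([0.0, 0.0, 1.5, 0.2, 0.0]) + 4.0
Y = np.stack([runtime_a, runtime_b], axis=1)

# Least-squares performance model per algorithm: runtime ~ X @ w + b.
X1 = np.hstack([X, np.ones((n_train, 1))])  # append a bias column
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)

def select_algorithm(features):
    """Predict each algorithm's runtime and return the index of the best."""
    pred = np.append(features, 1.0) @ W
    return int(np.argmin(pred))

x_new = rng.normal(size=n_features)
print("predicted-best algorithm:", select_algorithm(x_new))
```

In practice the predictor can be any regression or ranking model, and the features are problem-class specific; the key idea is only that prediction replaces running every candidate algorithm on every new instance.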
The research is published in the journal National Science Review.
The top diagram shows training a performance prediction model, which can be used to predict which algorithm will perform best on an unseen problem instance. The middle diagram shows training a single solver that is applied directly to unseen problem instances. The bottom diagram shows training a portfolio of solvers, likewise applied directly to unseen problem instances. Credit: Science China Press
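The portfolio approach can also be sketched in a few lines. The toy solvers and objective below are illustrative assumptions: a trained portfolio would contain complementary solvers tuned on the training instances, but the mechanism at test time is the same, i.e. run the portfolio members on the unseen instance and keep the best result.

```python
import random

def hill_climb(f, x0, step=0.1, iters=100):
    """Greedy local search over a 1-D objective."""
    x = x0
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

def random_search(f, lo, hi, samples=200, seed=0):
    """Pure random sampling over the interval [lo, hi]."""
    rnd = random.Random(seed)
    return min((rnd.uniform(lo, hi) for _ in range(samples)), key=f)

def portfolio_solve(f, lo, hi):
    """Apply every solver in the portfolio and keep the best solution."""
    candidates = [
        hill_climb(f, x0=(lo + hi) / 2),
        random_search(f, lo, hi),
    ]
    return min(candidates, key=f)

# Toy objective with its minimum at x = 2.
f = lambda x: (x - 2.0) ** 2
print(portfolio_solve(f, -5.0, 5.0))
```

The benefit of a portfolio is robustness: no single member needs to dominate across the whole problem class, as long as the members complement each other.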
"L2O is expected to grow into an important technology to alleviate the increasingly burdensome human labor in AI," Tang says. However, he also points out that ensuring reasonable generalization remains a challenge for L2O, especially when dealing with complex problem and solver classes.
“Many real-world scenarios may require a second stage of fine-tuning,” Tang suggests. “The learned solver can be seen as a base model for further fine-tuning.”
He believes that building synergy between base-model training and fine-tuning will be a key direction for future work to realize the full potential of L2O.
More information:
Ke Tang et al., "Learn to Optimize—A Brief Overview," National Science Review (2024). DOI: 10.1093/nsr/nwae132
