Awarded every three years by the Society for Industrial and Applied Mathematics (SIAM) and the Mathematical Optimization Society, the Lagrange Prize in Continuous Optimization recognises research work in the field of mathematical optimisation.
"Researchers do not put themselves forward as candidates for this award; they are nominated by their peers, which adds to the prize's prestige," underlines Francis Bach, head of the Sierra team. He is the winner of the 2018 edition, together with two former post-doctoral researchers: Mark Schmidt, now a professor at the University of British Columbia, and Nicolas Le Roux, now a researcher with Google Brain in Montreal. The paper that earned the three researchers this prize, a benchmark in the world of applied mathematics, is entitled "Minimizing Finite Sums with the Stochastic Average Gradient".
An algorithm to save time
"The paper was published last year, based on work from 2012, Francis Bach continues. It focuses on an optimisation algorithm called SAG (for Stochastic Average Gradient). This algorithm represents an evolution of the method known as stochastic gradient optimisation, which was developed around 60 years ago but still appeals to researchers in the context of large-scale learning, for example for applications such as click-through rate prediction for digital advertising or sales forecasts for Web sellers." What is the advantage of this algorithm? Significant computing time savings thanks to much quicker convergence, whilst maintaining a good predictive performance. "Ultimately our method is very simple, since it can be expressed in two lines of code...However, it enables an improvement of factor 10 to 100! In fact, it also allows for bigger problems to be tackled and the testing of more methods."
Implemented within scikit-learn, the statistical learning library managed by Inria, the method developed by Francis Bach, Nicolas Le Roux and Mark Schmidt has since been enriched and developed further by the Sierra team, as well as by other researchers convinced of its potential. "Sierra's work now focuses on guaranteeing the future performance of these algorithms, with the goal of developing fast learning methods able to cope with major changes of scale without a significant loss of efficacy," Francis Bach concludes. "Learning and optimisation are very broad fields, and many avenues still remain to be explored."
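In scikit-learn, SAG is exposed as a solver option on several linear models, so using it takes a single parameter. A brief sketch, assuming a synthetic classification dataset generated for the example:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Select the Stochastic Average Gradient solver via the `solver` parameter.
clf = LogisticRegression(solver="sag", max_iter=200, random_state=0)
clf.fit(X, y)
score = clf.score(X, y)
```

The `sag` solver converges fastest when features are on a similar scale, so standardising inputs beforehand is generally advisable.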
>>> For more information on Francis Bach's career
A graduate of the École Polytechnique in 1997, Francis Bach obtained his PhD from Berkeley in 2005 under the guidance of Michael Jordan, the "high priest of artificial intelligence". He has won numerous prizes and distinctions, including two ERC grants (a Starting Grant in 2009 and a Consolidator Grant in 2016), the 2012 Inria Young Researcher Prize and the ICML (International Conference on Machine Learning) Test of Time Award in 2014. He joined Inria in 2007, initially with the Willow team, and has been head of the Sierra team since its creation in 2011, following on from the ERC project of the same name.