Award

Nathaly Mermet - 24/05/2013

On the future of accelerators for energy efficiency

Olivier Temam, leader of the ByMoore exploratory action

Olivier Temam, senior research scientist at Inria and leader of the ByMoore exploratory action, has received a Google Research Award for his research into alternative computing architectures. The trend is towards designing accelerators (specialised circuits) based on neural networks, intended for embedded systems or data centers, particularly for machine learning applications.

What is the background to your work?

The background is the obsolescence of Moore's Law*, a prediction from the 1970s that the size of transistors would be halved roughly every 18 to 24 months... which proved to be the case for 40 years, but has now reached its limit. Through two key effects - an increase in the number of transistors on a given surface area and an increase in switching speed - computer performance improved over four decades. Since 2004, however, several technological obstacles have impeded continued improvement in the power of processor architectures, owing in particular to the energy consumed and therefore the heat emitted: the faster transistors switch, the more heat they dissipate in a given time.
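
To give a sense of how quickly that trend compounds, here is a minimal back-of-the-envelope sketch in Python; the 24-month doubling period and the 10/20/40-year horizons are illustrative assumptions, not figures from the interview:

    # Illustrative only: assumes transistor density doubles every 24 months,
    # the upper end of the 18-24 month range mentioned above.
    def density_multiplier(years, doubling_period_years=2.0):
        """Cumulative growth in transistor density after a given number of years."""
        return 2 ** (years / doubling_period_years)

    for years in (10, 20, 40):
        print(f"after {years} years: x{density_multiplier(years):,.0f}")
    # after 10 years: x32
    # after 20 years: x1,024
    # after 40 years: x1,048,576

Sustained over four decades, such doubling multiplies density by roughly a million, which is why the end of this trend weighs so heavily on processor design.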

What are the current challenges and trends?

In 2004, the idea was to move towards multi-core processors (i.e. chips made up of several processor cores). Since 2010, new energy constraints have made simultaneous use of all the transistors impossible, forcing a move away from multi-core processors towards more specialised circuits, which are more efficient in energy terms.
The key issue is now finding a compromise between a circuit's flexibility and its efficiency. This therefore means striking a delicate balance between the best efficiency and the lowest energy consumption.

Why this Google Research Award?

The SPIDER architecture, from the Arch2Neu project

A vast range of applications (voice recognition, music, image processing, GPS navigation) could benefit from the idea of specialised circuits, which can deliver gains of a factor of around 100 in energy efficiency while running much faster. ByMoore has focused on the introduction of neural network accelerators for several reasons, including the very large number of applications based on machine learning, the existence of new, highly effective machine learning algorithms (deep neural networks), and neural networks' tolerance of manufacturing defects. Now that the ByMoore exploratory action has finished, it will be continued as the NIAC project (Neuro Inspired Accelerators for Computing), with this as its particular focus.

What does this support from the Google giant actually give you?

Beyond financial help, Google gives us two things: firstly, validation of the potential offered by this unconventional avenue of research, and secondly, valuable feedback about the most important properties and features of these accelerators.

How is the future shaping up?

If we manage to demonstrate sufficient energy and/or performance gains with these neural network accelerators on real applications (such as Google's), we will increase the probability that they will one day be integrated into computation systems, helping to transform today's processor-only systems into a combination of processors and accelerators.

* A law formulated by Gordon Moore, co-founder of Intel

Keywords: Inria Saclay - Île-de-France, Google Research Award, Olivier Temam, Neural network, Processor
