Research team seminars
Programming and scheduling parallel computations on multicore systems
- Date: 7/11/2011
- Location: Inria Paris - Rocquencourt, Alan Turing amphitheater, building 1
- Speaker(s): Arthur Charguéraud, MPI-SWS

While multicore computers have become mainstream, multicore programming remains full of pitfalls. Using P cores to speed up a program by a factor close to P turns out to be far trickier than it sounds. Beyond the immediate difficulty of parallelizing existing algorithms, one faces the problem of amortizing the cost of task creation, the challenge of dynamic load balancing, the hazards of concurrent programming, and the headaches associated with weak memory models.

However, with the right programming language abstractions and a well-designed scheduler, multicore programming suddenly becomes much more accessible.
In this talk, I will present multicore programming techniques that I have developed recently together with Umut Acar and Mike Rainey at the Max Planck Institute for Software Systems (MPI-SWS). In particular, I will discuss the following:
- High-level language constructs for describing parallelism; these include spawn/sync constructs in the style of Cilk, but also more general constructs that allow one to build arbitrary computation DAGs and to help the scheduler execute them efficiently.
- An efficient synchronization-free variant of the work-stealing scheduler; our scheduler can be implemented without any atomic operation (on x86), thereby avoiding weak memory model issues and offering a lot of flexibility.
- A general approach to controlling granularity, and thereby avoiding the creation of too many small tasks; this approach relies both on user-provided asymptotic complexity functions and on lightweight runtime profiling.