Prasanth Chatarasi, Polyhedral Optimizations of Explicitly Parallel Programs

A key challenge for optimizing compilers is keeping up with the increasing complexity of locality and parallelism in modern computers, especially as vendors move toward new designs for extreme-scale processors and exascale systems. A synergy of explicitly parallel programming models, compilers, and runtime systems is required to achieve high performance across this variety of architectures. Given the rapid growth of parallel software, however, there is a need for increased attention to compiler frameworks that optimize explicitly parallel programs.

We extend polyhedral compiler frameworks to enable the analysis and transformation of OpenMP programs that contain both explicit parallelism and unanalyzable data accesses, targeting higher performance and portability across a variety of architectures, including multicore processors and accelerators. Our work focuses on explicitly parallel programs that specify potential logical parallelism rather than actual parallelism. This focus enables a larger set of optimizations (by mitigating conservative dependences) than would be possible if the input program were sequential.
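As an illustration of logical parallelism, consider a minimal C/OpenMP sketch (a hypothetical kernel, not taken from the talk): the `omp parallel for` annotation asserts that the iterations *may* execute in parallel, but a compiler or runtime remains free to run them sequentially or to transform them further, since the annotation grants permission rather than mandating parallel execution.

```c
/* Hypothetical kernel: the pragma expresses potential (logical)
 * parallelism. The result is identical whether the loop runs on one
 * thread, on many threads, or after further loop transformations,
 * so a compiler may freely restructure it. */
void saxpy(float *y, const float *x, float a, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];   /* independent iterations */
}
```

If compiled without OpenMP support, the pragma is simply ignored and the loop runs sequentially with the same semantics.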

As a first step in this direction, we introduced an approach that removes spurious dependences reported by conservative dependence analysis by intersecting them with the happens-before relations derived from parallel constructs that satisfy the serial-elision property. The resulting set of dependences can then be passed to a polyhedral transformation tool, such as PLuTo or PolyAST, to enable transformations of explicitly parallel programs.
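The dependence-intersection idea can be sketched on a small example (hypothetical, assuming a C/OpenMP setting): the indirect subscript below defeats static analysis, so a conservative analysis must assume dependences among all iterations. The `omp parallel for` annotation, however, implies that no iteration happens-before another, so intersecting the conservative dependences with the happens-before relation leaves none. Erasing the pragma yields the same sequential program, i.e. the construct satisfies the serial-elision property.

```c
/* Hypothetical kernel, not from the talk: the write A[B[i]] has a
 * statically unanalyzable subscript, so conservative analysis reports
 * a dependence between every pair of iterations. The pragma asserts
 * no happens-before ordering among iterations, so the intersection of
 * the two relations is empty, and the loop can be transformed freely.
 * Removing the pragma leaves a valid sequential program
 * (serial-elision property). */
void scatter(int *A, const int *B, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        A[B[i]] = i;   /* indirect write: unanalyzable at compile time */
}
```

When `B` holds a permutation, the iterations really are independent, which is exactly the information the explicit annotation conveys and the conservative analysis cannot recover on its own.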