CnC-CUDA — a programming model for CPU-GPU hybrid parallelism, Alina Sbirlea

A recent trend in mainstream desktop systems is the use of graphics processing units (GPUs) to obtain order-of-magnitude performance improvements relative to general-purpose CPUs. Computing is evolving from many-core CPUs toward "co-processing" on the CPU and GPU; however, hybrid programming models that support the interaction between the two are not widely accessible to mainstream programmers and domain experts who have a real need for such resources.

In this talk, we extend past work on Intel's Concurrent Collections (CnC) programming model to address the problem of hybrid programming while preserving the ease of programming that CnC introduces. We focus on a C-based flavor of CnC in which CPU steps interact with CUDA GPU steps.

The extensions presented in this talk include the introduction of tag functions in the graph file specification, a comparison of two scheduling approaches for CnC steps, the definition of steps for execution on GPUs, and the automatic generation of data and control flow between the CPU and GPU. Experimental results show that this approach can yield significant performance benefits from hybrid CPU/GPU execution.
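To make the idea of a GPU step and the generated CPU/GPU data movement concrete, below is a minimal, hypothetical sketch in plain CUDA; it is not the actual CnC-CUDA API, and all names (gpuStep, runGpuStep, N) are illustrative assumptions. The kernel plays the role of a GPU step body, and the host wrapper stands in for the data and control flow that the CnC-CUDA tooling would generate automatically from the graph specification.

```cuda
// Hypothetical sketch of one GPU step and its host-side glue, assuming a
// graph relation along the lines of  [in: i] -> (gpuStep: i) -> [out: i].
#include <cuda_runtime.h>
#include <stdio.h>

#define N 1024  // number of item instances handled by one step invocation

// GPU step body: each thread consumes one input item and produces one output item.
__global__ void gpuStep(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * in[i] + 1.0f;
}

// Host-side wrapper standing in for the auto-generated data/control flow:
// copy inputs to the device, launch the step, copy results back.
static void runGpuStep(const float *hostIn, float *hostOut, int n) {
    float *devIn = NULL, *devOut = NULL;
    cudaMalloc((void **)&devIn,  n * sizeof(float));
    cudaMalloc((void **)&devOut, n * sizeof(float));

    cudaMemcpy(devIn, hostIn, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    gpuStep<<<blocks, threads>>>(devIn, devOut, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hostOut, devOut, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(devIn);
    cudaFree(devOut);
}

int main(void) {
    float hostIn[N], hostOut[N];
    for (int i = 0; i < N; ++i) hostIn[i] = (float)i;

    runGpuStep(hostIn, hostOut, N);

    printf("out[0]=%f out[%d]=%f\n", hostOut[0], N - 1, hostOut[N - 1]);
    return 0;
}
```

In the CnC-CUDA setting, the programmer would write only the step body; the boilerplate in runGpuStep is the kind of data movement and kernel-launch control flow that the system generates from the graph file.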