TY - GEN
T1 - Transparent CPU-GPU collaboration for data-parallel kernels on heterogeneous systems
AU - Lee, Janghaeng
AU - Samadi, Mehrzad
AU - Park, Yongjun
AU - Mahlke, Scott
PY - 2013
Y1 - 2013
AB - Heterogeneous computing on CPUs and GPUs has traditionally used fixed roles for each device: the GPU handles data-parallel work by taking advantage of its massive number of cores, while the CPU handles non-data-parallel work, such as sequential code or data transfer management. Unfortunately, this work distribution can be a poor solution as it underutilizes the CPU, has difficulty generalizing beyond a single CPU-GPU combination, and may waste a large fraction of time transferring data. Further, CPUs are performance-competitive with GPUs on many workloads, so simply partitioning work based on fixed roles may be a poor choice. In this paper, we present the single kernel multiple devices (SKMD) system, a framework that transparently orchestrates collaborative execution of a single data-parallel kernel across multiple asymmetric CPUs and GPUs. The programmer is responsible for developing a single data-parallel kernel in OpenCL, while the system automatically partitions the workload across an arbitrary set of devices, generates kernels to execute the partial workloads, and efficiently merges the partial outputs together. The goal is performance improvement by maximally utilizing all available resources to execute the kernel. SKMD handles the difficult challenges of exposed data transfer costs and the performance variations GPUs exhibit with respect to input size. On real hardware, SKMD achieves an average speedup of 29% on a system with one multicore CPU and two asymmetric GPUs compared to a fastest-device execution strategy for a set of popular OpenCL kernels.
UR - http://www.scopus.com/inward/record.url?scp=84887437601&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84887437601&partnerID=8YFLogxK
DO - 10.1109/PACT.2013.6618821
M3 - Conference contribution
AN - SCOPUS:84887437601
SN - 9781479910212
T3 - Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
SP - 245
EP - 255
BT - PACT 2013 - Proceedings of the 22nd International Conference on Parallel Architectures and Compilation Techniques
T2 - 22nd International Conference on Parallel Architectures and Compilation Techniques, PACT 2013
Y2 - 7 September 2013 through 11 September 2013
ER -