Adaptive Cooperation of Prefetching and Warp Scheduling on GPUs

Yunho Oh, Keunsoo Kim, Myung Kuk Yoon, Jong Hyun Park, Yongjun Park, Murali Annavaram, Won Woo Ro

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

This paper proposes a new architecture, called Adaptive PREfetching and Scheduling (APRES), which improves the cache efficiency of GPUs. APRES relies on the observation that GPU loads tend to have either high locality or strided access patterns across warps. APRES schedules warps so that as many cache hits as possible are generated before any cache miss occurs. Without directly predicting future cache hits or misses for each warp, APRES creates a group of warps that will execute the same static load shortly and prioritizes the grouped warps. If the first executed warp in the group hits the cache, the grouped warps are likely to access the same cache lines. Otherwise, APRES considers the load to be of a strided type and generates prefetch requests for the grouped warps. In addition, APRES includes a new dynamic L1 prefetch and data cache partitioning scheme to reduce contention between demand-fetched and prefetched lines. In our evaluation, APRES achieves a 27.8 percent performance improvement.
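
The decision rule described in the abstract can be illustrated with a small software model. The sketch below is only a paraphrase of that description, not the authors' hardware design: the names (L1Cache, Warp, WarpGroup, schedule_group) and the externally supplied stride are assumptions introduced here for illustration, and the toy cache collapses APRES's dynamic prefetch/data partitioning into a single structure.

```cpp
// Hypothetical sketch of the APRES-style decision logic described in the
// abstract. All types and names are illustrative, not from the paper.
#include <cstdint>
#include <cstdio>
#include <unordered_set>
#include <vector>

// Toy L1 cache model: tracks which line addresses are resident.
struct L1Cache {
    static constexpr uint64_t kLineSize = 128;
    std::unordered_set<uint64_t> lines;

    bool probe(uint64_t addr) const { return lines.count(addr / kLineSize) != 0; }
    void fill(uint64_t addr)        { lines.insert(addr / kLineSize); }
    void prefetch(uint64_t addr)    { fill(addr); }  // APRES keeps prefetched lines in a separate partition
};

struct Warp {
    int id;
    uint64_t pending_load_addr;  // address this warp will access at the shared static load
};

// Warps grouped because they will execute the same static load (same PC) shortly.
struct WarpGroup {
    uint64_t load_pc;
    std::vector<Warp> warps;
};

// Core idea from the abstract: execute the leading warp of the group first.
// If it hits, the grouped warps are expected to reuse the same cache lines,
// so they are simply kept prioritized. If it misses, the load is treated as
// strided and prefetches are issued for the remaining warps.
void schedule_group(WarpGroup& g, L1Cache& l1, int64_t stride) {
    if (g.warps.empty()) return;
    const Warp& leader = g.warps.front();

    if (l1.probe(leader.pending_load_addr)) {
        std::printf("PC %#llx: locality case, prioritize group without prefetch\n",
                    (unsigned long long)g.load_pc);
        return;
    }

    l1.fill(leader.pending_load_addr);  // leader's demand miss is serviced
    for (size_t i = 1; i < g.warps.size(); ++i) {
        uint64_t addr = leader.pending_load_addr + (uint64_t)(stride * (int64_t)i);
        l1.prefetch(addr);
        std::printf("PC %#llx: prefetch %#llx for warp %d\n",
                    (unsigned long long)g.load_pc, (unsigned long long)addr, g.warps[i].id);
    }
}

int main() {
    L1Cache l1;
    WarpGroup g{0x400, {{0, 0x1000}, {1, 0x1200}, {2, 0x1400}, {3, 0x1600}}};
    schedule_group(g, l1, 0x200);  // leader misses -> strided case, prefetch for the group
    schedule_group(g, l1, 0x200);  // lines now resident -> locality case, no prefetch
    return 0;
}
```

In the real mechanism these decisions are made in hardware per static load, and the stride would come from a learned per-load pattern rather than being passed in as a parameter.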

Original language: English
Article number: 8515055
Pages (from-to): 609-616
Number of pages: 8
Journal: IEEE Transactions on Computers
Volume: 68
Issue number: 4
DOIs
Publication status: Published - 2019 Apr 1

Bibliographical note

Publisher Copyright:
© 1968-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computational Theory and Mathematics

