Integrating 3D Resistive Memory Cache into GPGPU for Energy-Efficient Data Processing

Jie Zhang, David Donofrio, John Shalf, Myoungsoo Jung

Research output: Contribution to journal › Conference article › peer-review

3 Citations (Scopus)

Abstract

General-purpose graphics processing units (GPUs) have become a promising solution for processing massive data by taking advantage of multithreading. Thanks to thread-level parallelism, GPU-accelerated applications improve overall system performance by up to 40 times [1], [2], compared to a CPU-only architecture. However, data-intensive GPU applications often generate a large number of irregular data accesses, which results in cache thrashing and contention problems [11], [12]. The cache thrashing in turn can introduce a large number of off-chip memory accesses, which not only wastes tremendous energy moving data between the on-chip cache and the off-chip global memory, but also significantly limits system performance due to many stalled load/store instructions [18], [21].

In this work, we redesign the shared last-level cache (LLC) of GPU devices by introducing non-volatile memory (NVM), which can address the cache thrashing issues with low energy consumption. Specifically, we investigate two architectural approaches: one employs a 2D planar resistive random-access memory (RRAM) as our baseline NVM-cache, and the other a 3D-stacked RRAM technology [14], [15]. Our baseline NVM-cache replaces the SRAM-based L2 cache with RRAM of similar area; a memory die consists of eight subarrays, each of which is a small memristor island constructed as a 512x512 matrix [13]. Since the feature size of an SRAM cell is around 125 F² [19] (while that of an RRAM cell is around 4 F² [20]), the NVM-cache can offer around 30x larger storage capacity than the SRAM-based cache. To make our baseline NVM-cache denser, we propose a 3D-stacked NVM-cache, which stacks four memory layers, each with a single pre-decode logic block [16], [17].
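The density argument above follows directly from the cited cell-area figures. The sketch below is a back-of-the-envelope check, not part of the paper's evaluation: it works out the roughly 30x RRAM-over-SRAM density advantage from the 125 F² and 4 F² cell areas, and the raw cell count of one 512x512 memristor island. The one-bit-per-cell assumption is ours, not the paper's.

```python
# Rough density sketch for the quantities quoted in the abstract.
# Cell areas come from the cited references; one bit per memristor
# cell is a simplifying assumption made only for this illustration.

SRAM_CELL_AREA_F2 = 125   # approximate SRAM cell area [19]
RRAM_CELL_AREA_F2 = 4     # approximate RRAM cell area [20]

# Density advantage of RRAM over SRAM within a similar die area.
density_ratio = SRAM_CELL_AREA_F2 / RRAM_CELL_AREA_F2
print(f"RRAM vs. SRAM density advantage: ~{density_ratio:.0f}x")  # ~31x, i.e. "around 30x"

# Each memristor island is a 512x512 matrix [13].
cells_per_island = 512 * 512
print(f"Cells per island: {cells_per_island} "
      f"(= {cells_per_island // 8 // 1024} KiB at 1 bit/cell)")

# The 3D-stacked variant piles up four such memory layers,
# scaling capacity within roughly the same footprint.
LAYERS = 4
print(f"Capacity scaling from 3D stacking: {LAYERS}x per footprint")
```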

Original language: English
Article number: 7429338
Pages (from-to): 496-497
Number of pages: 2
Journal: Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
DOIs
Publication status: Published - 2015
Event: 24th International Conference on Parallel Architecture and Compilation, PACT 2015 - San Francisco, United States
Duration: 2015 Oct 18 - 2015 Oct 21

Bibliographical note

Publisher Copyright:
© 2015 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
