A cache hierarchy aware thread mapping methodology for GPGPUs

Bo-Cheng Lai, Hsien Kai Kuo, Jing Yang Jou

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

The recently proposed GPGPU architecture has added a multi-level hierarchy of shared cache to better exploit the data locality of general purpose applications. The GPGPU design philosophy allocates most of the chip area to processing cores, and thus results in a relatively small cache shared by a large number of cores when compared with conventional multi-core CPUs. Applying a proper thread mapping scheme is crucial for benefiting from constructive cache sharing and avoiding resource contention among thousands of threads. However, due to the significant differences in architectures and programming models, the existing thread mapping approaches for multi-core CPUs do not perform as effectively on GPGPUs. This paper proposes a formal model to capture both the characteristics of threads and the cache sharing behavior of multi-level shared cache. With appropriate proofs, the model forms a solid theoretical foundation for the proposed cache hierarchy aware thread mapping methodology for multi-level shared cache GPGPUs. The experiments reveal that the three-staged thread mapping methodology can successfully improve the data reuse on each cache level of GPGPUs and achieve an average of 2.3× to 4.3× runtime enhancement when compared with existing approaches.
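The core idea the abstract describes, grouping threads so that those sharing data end up behind the same level of shared cache, can be illustrated with a minimal, hedged sketch. This is not the paper's three-staged methodology; it is a hypothetical greedy grouping in which each thread's set of cache lines (`footprints`) is known, and threads with the most overlapping footprints are packed into the same block so they would share a per-SM L1 cache. All names here (`map_threads_to_blocks`, `footprints`) are illustrative assumptions, not from the paper.

```python
def map_threads_to_blocks(footprints, block_size):
    """Hypothetical greedy sketch of locality-aware thread mapping.

    footprints: list of sets, where footprints[i] is the set of cache
    lines thread i accesses. Threads whose footprints overlap most are
    grouped into the same block, so they could share an L1 cache.
    This illustrates the general idea only, not the paper's algorithm.
    """
    unassigned = set(range(len(footprints)))
    blocks = []
    while unassigned:
        # Seed a new block with the lowest-numbered remaining thread.
        seed = min(unassigned)
        unassigned.remove(seed)
        block = [seed]
        while len(block) < block_size and unassigned:
            # Union of cache lines already touched by this block.
            union = set().union(*(footprints[t] for t in block))
            # Add the thread whose footprint overlaps that union most,
            # maximizing intra-block data reuse.
            best = max(unassigned, key=lambda t: len(footprints[t] & union))
            unassigned.remove(best)
            block.append(best)
        blocks.append(block)
    return blocks
```

For example, with four threads where threads 0 and 1 share line 1, and threads 2 and 3 share line 11, a block size of 2 groups the sharers together, so each pair's common lines would stay resident in one shared cache rather than being duplicated across two.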

Original language: English
Article number: 6747979
Pages (from-to): 884-898
Number of pages: 15
Journal: IEEE Transactions on Computers
Volume: 64
Issue number: 4
DOIs
State: Published - 1 Apr 2015

Keywords

  • Multithreaded processors
  • cache memories
  • performance analysis and design aids
  • shared memory
