Files in this item

File: Nasser_Anssari.pdf (2MB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title:Using hybrid shared and distributed caching for mixed-coherency GPU workloads
Author(s):Anssari, Nasser
Advisor(s):Hwu, Wen-Mei W.
Department / Program:Electrical & Computer Engineering
Discipline:Electrical & Computer Engineering
Degree Granting Institution:University of Illinois at Urbana-Champaign
Degree:M.S.
Genre:Thesis
Subject(s):Graphics Processing Unit (GPU) Computing
Cache Coherence
Memory Consistency
Sharing Tracker
High Performance Computing (HPC) Workloads
Abstract:Current GPU computing models support a mixture of coherent and incoherent classes of memory operations. Workloads using these models typically have working sets too large to fit in an economical SRAM structure. Still, GPU architectures include last-level caches that primarily fulfill two functions: eliminating redundant DRAM accesses when requests from different L1 caches target the same line, and maintaining on-chip memory coherence for the coherent class of memory operations. In this thesis, we propose an alternative memory system design for GPU architectures that is better suited to their workloads. Our design features a directory-like sharing tracker that allows the incoherent private L1 caches to directly satisfy remote requests for shared data. It also retains a shared L2 cache with a customized caching policy to support coherent accesses on-chip and to better serve non-coalesced requests that contend aggressively for cache lines. This thesis characterizes the novel tradeoffs among the components of the proposed memory system in area, energy, and performance. We show that the proposed design achieves a 22% average reduction in DRAM data demand over a standard GPU architecture with a 1MB L2 cache, leading to an average 28% reduction in memory system energy consumption. Alternatively, the DRAM data demand of the proposed design with a 256KB L2 cache is on par with that of a standard GPU architecture with a 1MB L2 cache, at smaller area overhead and leakage power. While our results are motivated by the GPU realm, they are not architecture-specific and extend to other throughput-oriented many-core organizations.
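The directory-like sharing tracker the abstract describes can be illustrated with a minimal sketch: a table mapping cache-line addresses to the set of private L1 caches holding a copy, consulted so a remote request can be forwarded to an on-chip L1 instead of going to DRAM. All names, the line size, and the bookkeeping interface below are illustrative assumptions, not the thesis's actual design.

```python
# Hypothetical sharing-tracker sketch: line address -> set of L1 cache ids.
# Line size and method names are assumptions for illustration only.
LINE_SIZE = 128  # bytes per cache line (illustrative)

class SharingTracker:
    def __init__(self):
        self.owners = {}  # cache-line index -> set of L1 ids holding the line

    def record_fill(self, addr, l1_id):
        """An L1 cache brought this line in; note it as a potential supplier."""
        line = addr // LINE_SIZE
        self.owners.setdefault(line, set()).add(l1_id)

    def record_evict(self, addr, l1_id):
        """The L1 dropped the line; it can no longer forward it."""
        line = addr // LINE_SIZE
        holders = self.owners.get(line)
        if holders:
            holders.discard(l1_id)
            if not holders:
                del self.owners[line]

    def lookup(self, addr, requester):
        """Return an L1 id that can forward the line, or None (fetch from DRAM)."""
        line = addr // LINE_SIZE
        for l1_id in self.owners.get(line, ()):
            if l1_id != requester:
                return l1_id
        return None

# Usage: L1 #0 fills a line; a later request from L1 #3 for the same
# 128-byte line is serviced on-chip instead of from DRAM.
tracker = SharingTracker()
tracker.record_fill(0x1000, l1_id=0)
print(tracker.lookup(0x1040, requester=3))  # 0: same line, forward from L1 #0
print(tracker.lookup(0x2000, requester=3))  # None: untracked line, go to DRAM
```

Note that the tracker only records who *might* hold a line; because the L1 caches are incoherent, a real design must handle the case where the forwarded-to L1 has silently dropped or modified the line, which this sketch does not model.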
Issue Date:2013-02-03
URI:http://hdl.handle.net/2142/42361
Rights Information:Copyright 2012 Nasser Salim Anssari
Date Available in IDEALS:2013-02-03
Date Deposited:2012-12

