|Title:||Shared Cache Organization for Multiple-Stream Computer Systems|
|Department / Program:||Electrical Engineering|
|Degree Granting Institution:||University of Illinois at Urbana-Champaign|
|Subject(s):||Engineering, Electronics and Electrical|
|Abstract:||Organizations of shared two-level memory hierarchies for parallel-pipelined multiple-instruction-stream processors are studied. Multiple-copy data consistency problems are eliminated entirely by sharing the caches. All memory modules are assumed to be identical, and cache addresses are interleaved by sets. For a parallel-pipelined processor of order (s,p), which consists of p parallel processors, each a pipelined processor with degree of multiprogramming s, there can be up to sp cache requests from distinct instruction streams in each instruction cycle. The cache memory interference and shared-cache hit ratio in such systems are investigated.
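The interference the abstract refers to arises when several of the sp simultaneous requests map to the same set-interleaved cache module and must be serialized. A minimal Monte Carlo sketch of this effect, assuming (as a simplification not stated in the thesis) that each request targets one of the modules uniformly and independently, estimates how many modules are kept busy per cycle; the function name and uniform-traffic assumption are illustrative, not the author's model.

```python
import random

def expected_busy_modules(num_modules, num_requests, trials=10000):
    """Monte Carlo estimate of the expected number of distinct cache
    modules hit per cycle when `num_requests` independent instruction
    streams each issue one request to a uniformly random module.
    Requests landing on the same module collide and are serialized,
    so busy modules < requests indicates access interference."""
    total = 0
    for _ in range(trials):
        # Set of distinct modules addressed this cycle.
        hit = {random.randrange(num_modules) for _ in range(num_requests)}
        total += len(hit)
    return total / trials
```

Under these uniform-traffic assumptions the estimate should match the closed form m(1 - (1 - 1/m)^n); the gap between the result and n requests is the per-cycle bandwidth lost to interference.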
The study shows that the set-associative mapping mechanism, the write-through-with-buffering update scheme, and the no-write-allocate block fetch strategy are suitable for shared-cache systems. For private-cache systems, by contrast, the write-back-with-buffering update scheme and the write-allocate block fetch strategy are considered in this thesis.
Performance analysis is carried out using discrete Markov chains and probability-based theorems. Performance is evaluated as a function of the hit ratio h, the processor order (s,p), and the cache organization, characterized by the number of lines l, the number of modules per line m, the cache cycle time c, and the block transfer time T. Results show that for reasonably large l, high performance can be obtained for a shared cache with small (1-h)T. Shared-cache systems may perform better than private-cache systems if the shared cache achieves a higher hit ratio than private caches. The shared-cache memory organization is suitable for single-pipelined-processor systems because of the low access interference, and access interference in shared-cache systems may be reduced to extremely low levels with a reasonable choice of system parameters.
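The emphasis on small (1-h)T follows from the standard first-order model of a two-level hierarchy: every access pays the cache cycle time c, and a miss (probability 1-h) additionally pays the block transfer time T. A minimal sketch of that textbook estimate, which deliberately ignores the access interference the thesis models in detail (the function name is illustrative):

```python
def effective_access_time(h, c, T):
    """First-order effective access time of a two-level hierarchy:
    cache cycle time c on every access, plus the block transfer
    time T weighted by the miss probability (1 - h).
    Interference between concurrent streams is not modeled here."""
    return c + (1 - h) * T
```

For example, with h = 0.95, c = 1 cycle, and T = 20 cycles, the estimate is 1 + 0.05 * 20 = 2 cycles per access, showing why keeping the miss penalty product (1-h)T small dominates the design.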
Some design tradeoffs are discussed, and examples are given to illustrate the wide variety of design options available. Performance differences due to alternative architectures are also shown by comparing shared and private caches over a wide range of parameters.
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1981.
|Date Available in IDEALS:||2014-12-12|
This item appears in the following Collection(s)
Dissertations and Theses - Electrical and Computer Engineering
Graduate Dissertations and Theses at Illinois