Files in this item

File: 9702609.pdf (application/pdf, 7MB). Restricted to U of Illinois. No description provided.

Title: Data prefetch mechanisms for accelerating symbolic and numeric computation
Author(s): Mehrotra, Sharad
Doctoral Committee Chair(s): Padua, David A.
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Engineering, Electronics and Electrical; Computer Science
Abstract: Despite rapid increases in CPU performance, control and data hazards remain the primary obstacles to achieving higher performance in contemporary processor organizations. Primary data cache misses are responsible for the majority of the data hazards. With CPU primary cache sizes limited by clock cycle time constraints, the performance of future CPUs will effectively be limited by the number of primary data cache misses whose penalty cannot be masked.
To address this problem, this dissertation takes a detailed look at memory access patterns in complex, real-world programs. A simple memory reference pattern classification is introduced that applies to a broad range of computations, including pointer-intensive and numeric codes. To exploit the new classification, a data prefetch device called the Indirect Reference Buffer (IRB) is proposed. The IRB extends data prefetching to indirect memory address sequences while also handling dense scientific codes, and is distinguished from previous designs by its seamless integration of linear and indirect address prefetching. The behavior of the IRB on a suite of programs drawn from Spec92, Spec95, and public domain codes is measured under a variety of abstract models.
Next, a detailed hardware design for the IRB that can be easily integrated into modern CPUs is presented. In this design, the IRB is decomposed into a recurrence recognition unit (RRU) and a prefetch unit (PU). The RRU is tightly coupled to the CPU pipelines, and monitors individual load instructions in executing programs. The PU operates asynchronously with respect to the processor pipelines, and is coupled only to the processor's bus interface. This division has two important ramifications. First, it allows the PU to pull ahead of the processor as a program executes. Second, it makes it possible to tune the IRB for processors with varying memory subsystems, simply by redesigning the PU. An early embodiment of the design is evaluated via detailed timing simulation on our benchmark suite.
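The abstract does not give the RRU's actual detection algorithm. As a rough illustration only, the hypothetical sketch below classifies a trace of executed loads into the two pattern classes the IRB targets: linear (constant-stride) recurrences and indirect (pointer-chasing) recurrences, where a load's address was produced by the data returned by an earlier load. All names here (`classify_loads`, the trace tuple format) are invented for this sketch and do not come from the dissertation.

```python
def classify_loads(trace):
    """Classify static loads from a trace of (pc, address, value) tuples.

    A load PC is tagged "LINEAR" once it repeats the same address stride,
    and "INDIRECT" when its address matches a value fetched by a prior
    load (pointer chasing). This is a software caricature of what a
    hardware recurrence-recognition table might track per load PC.
    """
    last_addr = {}         # pc -> address of that load's previous execution
    stride = {}            # pc -> last observed address stride
    kinds = {}             # pc -> "LINEAR" | "INDIRECT"
    loaded_values = set()  # values recently returned by any load
    for pc, addr, value in trace:
        if addr in loaded_values:
            kinds[pc] = "INDIRECT"           # address came from prior load data
        elif pc in last_addr:
            s = addr - last_addr[pc]
            if stride.get(pc) == s:
                kinds.setdefault(pc, "LINEAR")  # stride repeated: linear walk
            stride[pc] = s
        last_addr[pc] = addr
        loaded_values.add(value)
    return kinds


# Example: a strided array walk at pc 0x10, a linked-list walk at pc 0x20.
trace = [(0x10, 100, 0), (0x10, 108, 0), (0x10, 116, 0), (0x10, 124, 0),
         (0x20, 1000, 2000), (0x20, 2000, 3000), (0x20, 3000, 0)]
kinds = classify_loads(trace)
```

A real RRU would of course observe loads in the pipeline rather than a recorded trace, and would bound its state with finite tables, but the same two recurrence tests are the core idea.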
The dissertation concludes by outlining compile-time transformations to enhance IRB performance and offering suggestions for possible extensions to this work.
Issue Date: 1996
Rights Information: Copyright 1996 Mehrotra, Sharad
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9702609
OCLC Identifier: (UMI)AAI9702609
