Reducing list access time for LISP execution
Sung, Shu-Hui Helen
Doctoral Committee Chair(s): Davidson, Edward S.
Department of Study
Degree Granting Institution: University of Illinois at Urbana-Champaign
Prefetching items into the cache can either increase or decrease memory access time, depending on how well the prefetching algorithm matches the memory reference pattern. A popular prefetching algorithm is one-block lookahead (OBL). Caches improve system performance mainly because of the locality that exists in most program executions. List reference patterns exhibit several locality characteristics: both car and cdr pointer distances are likely to be small; pointers can point in either the forward or backward direction; linearization often shortens these distances; and cdr coding is an efficient way to represent lists. Two algorithms, two-way reflection prefetch (TRP) and forward one-block-lookahead and backward reflection prefetch (FOBRP), are designed to exploit these characteristics and thereby reduce the list access time lost to cache misses.
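To make the baseline concrete, here is a minimal sketch of OBL prefetching on a direct-mapped cache: on a demand miss for block i, the cache also fetches block i+1. The block size, cache capacity, prefetch-on-miss trigger, and the `OBLCache` name are illustrative assumptions, not parameters from this thesis; TRP and FOBRP extend this idea to backward-pointing references, whose details the abstract does not specify.

```python
# Sketch of one-block-lookahead (OBL) prefetching on a direct-mapped
# cache. All parameters are assumed for illustration.

BLOCK_SIZE = 16      # bytes per cache block (assumed)
NUM_BLOCKS = 64      # cache capacity in blocks (assumed)

class OBLCache:
    def __init__(self):
        self.tags = [None] * NUM_BLOCKS   # direct-mapped tag store
        self.misses = 0                   # demand misses
        self.fetches = 0                  # bus transfers (demand + prefetch)

    def _fill(self, block):
        """Bring one block into the cache, counting a bus transfer."""
        idx = block % NUM_BLOCKS
        if self.tags[idx] != block:
            self.tags[idx] = block
            self.fetches += 1

    def access(self, addr):
        block = addr // BLOCK_SIZE
        if self.tags[block % NUM_BLOCKS] != block:  # demand miss
            self.misses += 1
            self._fill(block)
            self._fill(block + 1)   # OBL: prefetch the next block
```

Replaying a purely sequential trace through this cache shows why OBL suits forward-running references: every odd-numbered block arrives before it is demanded, halving the demand misses while doubling the transfers per miss.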
Prefetching algorithms have usually been evaluated and selected according to the cache miss ratio and bus transfer ratio they achieve. However, bus contention is not properly reflected by miss ratio and transfer ratio alone. Thus, a timing model is designed and used to report performance in terms of elapsed access time. This model estimates the individual delay components, namely miss delay, contention delay, load-through completion wait, and prefetching completion wait, so that bus contention can be analyzed appropriately. The timing model judges the efficiency of a prefetching algorithm by its effective access time and its individual delay components, as well as by its miss ratio and transfer ratio.
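One plausible way to read this decomposition is as an average over the trace: each delay component is a total accumulated across the run, and the effective access time is the base hit time plus the per-reference share of those delays. The formula below is an assumption for illustration, not the thesis's exact timing model; all names and units (cycles) are hypothetical.

```python
# Hypothetical illustration of the delay decomposition described above.
# Each delay argument is a total over the whole trace, in cycles.

def effective_access_time(n_refs, hit_time, miss_delay,
                          contention_delay, load_through_wait,
                          prefetch_wait):
    """Average time per reference: base hit time plus the per-reference
    share of the four accumulated delay components."""
    total_delay = (miss_delay + contention_delay +
                   load_through_wait + prefetch_wait)
    return hit_time + total_delay / n_refs
```

Separating the components this way makes the trade-off visible: a prefetching algorithm can lower miss delay while raising contention delay, which miss ratio and transfer ratio alone would conceal.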
The performance of the proposed prefetching algorithms is evaluated by running several Lisp benchmarks, both with and without linearization, on a Lisp interpreter and recording their memory traces. These traces are then fed to the timing model to evaluate the effectiveness of the proposed algorithms. Timing simulation shows that: (1) increasing the cache size is the best way to achieve better performance when the cache is small, (2) TRP is the best prefetching algorithm for all data accesses if a moderately sized cache is provided, (3) FOBRP is the best prefetching algorithm for list prefetching, and (4) judicious application of linearization enhances performance regardless of whether prefetching is used.