Run-time parallelization: A framework for parallel computation

Files in this item

File: 9624468.pdf (7 MB; restricted to U of Illinois)
Description: (none provided)
Format: PDF
Title: Run-time parallelization: A framework for parallel computation
Author(s): Rauchwerger, Lawrence
Doctoral Committee Chair(s): Padua, David
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Computer Science
Abstract: The goal of parallelizing, or restructuring, compilers is to detect and exploit parallelism in sequential programs written in conventional languages. Current parallelizing compilers do a reasonable job of extracting parallelism from programs with regular, statically analyzable access patterns. However, if the memory access pattern of the program is input data dependent, then static data dependence analysis, and consequently parallelization, is impossible. Moreover, in this case the compiler cannot apply privatization and reduction parallelization, the transformations that have proven most effective at removing data dependences and increasing the amount of exploitable parallelism in a program. Typical examples of irregular, dynamic applications are complex simulations such as SPICE for circuit simulation, DYNA-3D for structural mechanics modeling, DMOL for quantum mechanical simulation of molecules, and CHARMM for molecular dynamics simulation of organic systems. Since irregular programs represent a large and important fraction of applications, an automatable framework for run-time parallelization is needed to complement existing and future static compiler techniques.

In this thesis we present several original techniques that together sketch how automatic compilation can go beyond statically analyzable codes. The methods described are fundamentally efficient, scalable, and general: rather than relying on heuristics with a wide performance distribution across their input domain, they are algorithms that can be analytically proven to produce speedups given the necessary resources and available parallelism. We introduce the idea of testing only for full parallelism in the presence of run-time transformations, rather than computing an execution schedule. We introduce the aggressive strategy of speculative parallel execution, scalable to any parallel system, from micros to multiprocessors. We describe a new technique for analyzing and scheduling loops that are only partially parallel. Additionally, we present a framework for parallelizing loops that contain recurrences and have an unknown iteration space.

We believe that, within the domain of automatic parallelization, the true importance of this work lies in breaking the barrier at which automatic parallelization had stopped: regular, well-behaved programs.

We also attempt to convey a few general ideas and dispel some older ones. Namely, optimizing at run time implies overhead but can actually reduce overall execution time through better exploitation of resources. Speculating about optimizations, more specifically parallelism, may be an attractive and more generally applicable alternative to the "inspect first and execute later" strategy. Thus we view this thesis as a first step toward the full integration of these techniques into commercial parallelizing compilers.
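To make the "speculate, then test" strategy from the abstract concrete, the following C sketch is a minimal illustration, not the thesis's exact algorithm: it executes an irregular loop while marking every read and write in shadow arrays, then applies a deliberately conservative post-execution test for full parallelism, rolling back to a sequential re-execution if the test fails. All identifiers (A, body, rd, wcnt, and the subscript arrays) are illustrative assumptions, and the pass/fail condition is stricter than a full run-time test would need to be.

/*
 * Minimal sketch (an illustration, not the thesis's algorithm): run an
 * irregular loop speculatively, mark every access in shadow arrays, then
 * test after the fact whether the loop was fully parallel.
 */
#include <stdio.h>
#include <string.h>

#define N 8                       /* loop iterations              */
#define M 16                      /* size of the shared array A   */

static double A[M];
static unsigned char rd[M];       /* shadow: element was read       */
static int wcnt[M];               /* shadow: times element written  */

/* Candidate loop body: A[w[i]] = 2 * A[r[i]]. The subscripts come from
 * arrays, so the access pattern is unknown until run time. */
static void body(int i, const int *r, const int *w)
{
    rd[r[i]] = 1;                 /* mark the read                        */
    wcnt[w[i]]++;                 /* mark the write (atomic if parallel)  */
    A[w[i]] = 2.0 * A[r[i]];
}

int main(void)
{
    /* Input-dependent subscripts; these happen to make the loop a doall. */
    int r[N] = {0, 1, 2, 3, 4, 5, 6, 7};
    int w[N] = {8, 9, 10, 11, 12, 13, 14, 15};
    double saved[M];

    memcpy(saved, A, sizeof A);   /* checkpoint A for possible rollback */
    memset(rd, 0, sizeof rd);
    memset(wcnt, 0, sizeof wcnt);

    /* Speculative phase: with atomic marking, this loop could run fully
     * in parallel (e.g. one iteration per processor). */
    for (int i = 0; i < N; i++)
        body(i, r, w);

    /* Test phase: conservatively accept only if no element was both read
     * and written, and none was written twice; no cross-iteration flow,
     * anti, or output dependence on A is then possible. */
    int fully_parallel = 1;
    for (int j = 0; j < M; j++)
        if ((wcnt[j] && rd[j]) || wcnt[j] > 1) { fully_parallel = 0; break; }

    if (!fully_parallel) {        /* speculation failed: undo and redo */
        memcpy(A, saved, sizeof A);
        for (int i = 0; i < N; i++)
            A[w[i]] = 2.0 * A[r[i]];
    }
    printf("loop %s fully parallel\n", fully_parallel ? "was" : "was not");
    return 0;
}

If the test passes, the speculation paid off at the cost of the marking and one scan of the shadow arrays; if it fails, the checkpoint makes sequential re-execution safe. This is the overhead-versus-payoff trade the abstract argues is worthwhile. The techniques in the thesis go further, handling privatization, reduction parallelization, partially parallel loops, and recurrences, none of which this sketch attempts.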
Issue Date: 1995
Type: Text
Language: English
URI: http://hdl.handle.net/2142/23071
Rights Information: Copyright 1995 Rauchwerger, Lawrence
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9624468
OCLC Identifier: (UMI)AAI9624468