|Title:||Compile-Time Analysis of Explicitly Parallel Programs|
|Doctoral Committee Chair(s):||Harrison, Williams Ludwell, III|
|Department / Program:||Computer Science|
|Degree Granting Institution:||University of Illinois at Urbana-Champaign|
|Abstract:||Explicit parallelism not only complicates the semantics of a programming language, but also invalidates most current compiler techniques for program analysis and optimization. The problem stems from the inadequacy of control flow graphs for describing information flow in a parallel program, and the insufficiency of data dependences and control dependences for describing the constraints of a correct execution. Because of these difficulties in analyzing explicitly parallel programs, most compilers simply sidestep the problem, either by restricting data sharing among concurrent activities or by not optimizing parallel code. Some even produce incorrect code that violates the program's semantics.
This work proposes a general framework for analyzing parallel programs where concurrent activities interact through shared variables and presents the analyses of side effects, data dependences, object lifetimes, and concurrent expressions for a language with dynamic allocations, pointers, first-class functions, and cobegin parallelism.
This framework is based on abstract interpretation, a semantics-based analysis technique. We present a formal semantics of the language, based on a labeled transition system. Because every possible interaction between concurrent activities has to be considered, the state space of the analysis must be reduced. We approach this problem in two ways: by eliminating redundant interleavings, which do not contribute to final results, and by abstract interpretation, which provides systematic methods for combining related states. An iterative algorithm is presented for state space reduction in the presence of pointers and closures. We then present the abstract domains and the analyses performed under them, and prove the correctness and termination of the abstract analysis. The results of the analyses can facilitate many applications, such as program optimization, memory management, and race detection. A graph representation of parallel programs useful for general compiler transformations is presented. Finally, an implementation of this framework and its preliminary results are described.
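The redundant-interleaving problem the abstract alludes to can be illustrated with a small sketch (not taken from the thesis; the thread bodies and variable names here are hypothetical). Two concurrent activities, as in a cobegin block, each perform a short sequence of updates; enumerating every order-preserving interleaving of their steps shows the state-space growth, and also that when the activities touch disjoint shared variables, every interleaving reaches the same final state, so all but one are redundant:

```python
from itertools import combinations

# Hypothetical example: thread A runs x = 1; x = x + 1
# and thread B runs y = 2; y = y * 3 on a shared store.
thread_a = [lambda s: s.__setitem__('x', 1),
            lambda s: s.__setitem__('x', s['x'] + 1)]
thread_b = [lambda s: s.__setitem__('y', 2),
            lambda s: s.__setitem__('y', s['y'] * 3)]

def interleavings(a, b):
    """Yield every order-preserving merge of the two step lists."""
    n, m = len(a), len(b)
    # Choose which positions in the merged schedule belong to thread A.
    for a_slots in combinations(range(n + m), n):
        merged, ai, bi = [], 0, 0
        for pos in range(n + m):
            if pos in a_slots:
                merged.append(a[ai]); ai += 1
            else:
                merged.append(b[bi]); bi += 1
        yield merged

final_states = set()
count = 0
for schedule in interleavings(thread_a, thread_b):
    state = {'x': 0, 'y': 0}
    for step in schedule:
        step(state)
    final_states.add(tuple(sorted(state.items())))
    count += 1

# C(4, 2) = 6 interleavings, but the threads write disjoint
# variables, so every schedule ends in x == 2, y == 6: a single
# representative interleaving suffices for the analysis.
print(count, len(final_states))  # → 6 1
```

The combinatorial count grows as C(n+m, n) in the step counts, which is why the thesis pairs this pruning with abstract interpretation to combine the states that remain.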
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1993.
|Date Available in IDEALS:||2014-12-17|