Title: Symbolic analysis techniques for effective automatic parallelization
Author(s): Blume, William Joseph
Doctoral Committee Chair(s): Padua, David A.
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Computer Science
Abstract: To effectively translate real programs written in standard, sequential languages into parallel programs, parallelizing compilers need advanced techniques such as powerful dependence tests, array privatization, generalized induction variable substitution, and reduction parallelization. All of these techniques require, or can benefit from, symbolic analysis.
To determine what kinds of symbolic analysis techniques can significantly improve the effectiveness of parallelizing Fortran compilers, we compared the automatically and manually parallelized versions of the Perfect Benchmarks. The techniques identified include: data dependence tests for nonlinear expressions, constraint propagation, interprocedural constant propagation, array summary information, and run time tests. We have developed algorithms for two of these identified symbolic analysis techniques: nonlinear data dependence analysis and constraint propagation.
For data dependence analysis of nonlinear expressions (e.g., A(n*i + j), where 1 ≤ j ≤ n), we developed a data dependence test called the Range Test. The Range Test proves independence by determining whether certain symbolic inequalities hold for a logical permutation of the loop nest. We use a technique called Range Propagation to prove these symbolic inequalities.
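The core idea can be illustrated with a toy check (a hedged sketch, not the thesis implementation): for accesses of the form A(n*i + j) with 1 ≤ j ≤ j_hi inside an i-loop, the outer loop carries no dependence if the largest index touched in iteration i stays below the smallest index touched in iteration i+1. The function name and parameters below are illustrative only.

```python
def iterations_independent(n, j_hi, i_max):
    """Check, for 0 <= i < i_max, that the index ranges touched by
    successive iterations of the outer i-loop do not overlap."""
    for i in range(i_max):
        hi_this = n * i + j_hi         # max of n*i + j, since j <= j_hi
        lo_next = n * (i + 1) + 1      # min of n*(i+1) + j, since j >= 1
        if hi_this >= lo_next:
            return False               # ranges may overlap: possible dependence
    return True
```

With j bounded by n the per-iteration index ranges tile the array without overlap, so the outer loop can run in parallel; loosening the bound to n+1 lets adjacent iterations touch a common element, and the check fails.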
For constraint propagation, we developed a technique called Range Propagation. Range Propagation computes the range of values that each variable can take at each point of a program. A range is a symbolic lower and upper bound on the values taken by a variable. Range Propagation also includes a facility to compare arbitrary expressions under the constraints imposed by a set of ranges. We have developed both a simple but slow algorithm and a fast, demand-driven, but more complex algorithm to compute these ranges.
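A heavily simplified sketch of the range idea follows: the thesis tracks symbolic bounds, whereas this toy uses numeric intervals only, and all names here are hypothetical. It shows a range as a lower/upper bound pair, propagation through addition, and the comparison facility that decides inequalities under range constraints.

```python
class Range:
    """A lower and upper bound on the values a variable may take
    (numeric here; symbolic in the actual technique)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # The sum of two bounded values is bounded by the sums of the bounds.
        return Range(self.lo + other.lo, self.hi + other.hi)

def definitely_less(a, b):
    """a < b is provable under the ranges whenever every value
    a can take lies below every value b can take."""
    return a.hi < b.lo
```

For example, if i lies in [1, 10] and j in [1, 4], then i + j lies in [2, 14], so i + j < x is provable for any x known to be at least 15.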
The Range Test and Range Propagation have been fully implemented in Polaris, a parallelizing compiler being developed at the University of Illinois. We have found that these techniques significantly improve the effectiveness of automatic parallelization. We have also found that these techniques are reasonably efficient.
Issue Date: 1995
Rights Information: Copyright 1995 Blume, William Joseph
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9624292
OCLC Identifier: (UMI)AAI9624292
