|Title:||Evaluation of Programs and Parallelizing Compilers Using Dynamic Analysis Techniques|
|Author(s):||Petersen, Paul Marx|
|Doctoral Committee Chair(s):||Padua, D.A.|
|Department / Program:||Computer Science|
|Degree Granting Institution:||University of Illinois at Urbana-Champaign|
|Abstract:||The dynamic evaluation of parallelizing compilers and the programs to which they are applied is a field of abundant opportunity. Observing the dynamic behavior of a program provides insights into the structure of a computation that may be unavailable to static analysis methods.
A program may be represented by a dataflow graph generated from the dynamic flow of information between the operations in the program. The minimum parallel execution time of the program, as it is written, is the longest (critical) path through the dynamic dataflow graph. An efficient method of finding the length of the critical path is presented for several parallel execution models. The inherent parallelism is defined as the ratio of the total number of operations executed to the number of operations in the critical path. The effectiveness of a commercial parallelizing compiler is measured by comparing, for each program in the Perfect Benchmarks, the inherent parallelism with the parallelism explicitly recognized by the compiler.
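The definitions above can be illustrated with a minimal sketch (not the thesis's efficient implementation): given a dynamic trace of unit-time operations, each listing its dependence predecessors, the critical path length is the longest dependence chain, and the inherent parallelism is total operations divided by that length. The trace format and function name are assumptions for illustration.

```python
def critical_path_and_parallelism(ops):
    """ops: list of (op_id, [predecessor_ids]) in execution order.
    Operations are assumed to take unit time (a hypothetical format,
    not the thesis's instrumentation)."""
    depth = {}  # longest dependence chain ending at each operation
    for op, preds in ops:
        depth[op] = 1 + max((depth[p] for p in preds), default=0)
    t_inf = max(depth.values(), default=0)  # critical path length
    t_1 = len(ops)                          # total operations executed
    parallelism = t_1 / t_inf if t_inf else 0.0
    return t_inf, parallelism
```

For a three-operation serial chain this yields a critical path of 3 and parallelism 1.0; for three independent operations, a critical path of 1 and parallelism 3.0.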
The general method of critical path analysis produces results for an unlimited number of processors. Upper and lower bounds of the inherent parallelism, for the case of limited processors, can be derived from the processor activity histogram, which records the number of concurrent operations during each time period.
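Under the assumption of unit-time operations, the histogram-based bounds can be sketched as follows: with p processors, no schedule can finish faster than the critical path or than total work divided by p (a lower bound on time), while executing each histogram level in ceil(h/p) steps gives a valid, possibly suboptimal schedule (an upper bound on time). These time bounds translate into upper and lower bounds on the achievable parallelism. The function names are illustrative, not the thesis's.

```python
from math import ceil

def time_bounds(histogram, p):
    """histogram[t] = number of concurrent operations at unlimited-
    processor time step t. Returns (lower, upper) bounds on execution
    time with p processors, assuming unit-time operations."""
    t_1 = sum(histogram)    # total operations
    t_inf = len(histogram)  # critical path length
    lower = max(t_inf, ceil(t_1 / p))            # no schedule beats this
    upper = sum(ceil(h / p) for h in histogram)  # level-by-level schedule
    return lower, upper

def parallelism_bounds(histogram, p):
    """Bounds on speedup with p processors, derived from the time bounds."""
    t_1 = sum(histogram)
    lo_t, hi_t = time_bounds(histogram, p)
    return t_1 / hi_t, t_1 / lo_t
```

For example, a histogram of [3, 1, 2] with 2 processors gives time bounds (3, 4), hence parallelism between 1.5 and 2.0.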
Stress analysis is a derivative of critical path analysis that determines the locations in a program that contribute most to the critical path. Inductions are computations that introduce an internal stress. A specific method is presented that measures the effect on the inherent parallelism of removing the serializing effect of inductions.
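A small example (mine, not from the thesis) of why inductions serialize: each iteration's induction value depends on the previous iteration's, creating a dependence chain as long as the loop, even when the rest of the loop body is independent. Substituting the induction's closed form removes the cross-iteration dependence and shortens the critical path.

```python
def loop_with_induction(n, k0=0):
    # The update k = k + 3 makes each iteration depend on the previous
    # one: a length-n chain on the dynamic critical path.
    k, out = k0, []
    for i in range(n):
        k = k + 3
        out.append(k * k)  # otherwise-independent work, serialized by k
    return out

def loop_closed_form(n, k0=0):
    # Induction replaced by its closed form k0 + 3*(i+1):
    # iterations are now mutually independent.
    return [(k0 + 3 * (i + 1)) ** 2 for i in range(n)]
```

Both versions compute the same values; only the dependence structure, and hence the inherent parallelism, differs.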
Dependence analysis is crucial to the effective operation of parallelizing compilers. A static and dynamic evaluation of the effectiveness of compile-time data dependence analysis is presented; the evaluation compares the existing techniques against each other and against the theoretically optimal results. Special attention is paid to the dependences that serialize interprocedural parallelism. In addition to evaluating the static dependence analysis techniques, a method for finding dynamic dependences is presented that includes a record of the dependence distances present during an execution.
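One common way to find dynamic dependences and their distances (a minimal sketch under assumed conventions, not the thesis's instrumentation) is shadow memory: for each address, remember the iteration of the last write and the reads since then; each new access then yields flow (write-to-read), anti (read-to-write), or output (write-to-write) dependences, with the distance being the difference between the sink and source iterations.

```python
from collections import defaultdict

def dynamic_dependences(trace):
    """trace: list of (iteration, 'R' or 'W', address) events in program
    order (a hypothetical event format). Returns a map from
    (kind, address) to the set of observed dependence distances."""
    last_write = {}                 # address -> iteration of last write
    last_reads = defaultdict(list)  # address -> reads since last write
    deps = defaultdict(set)
    for it, access, addr in trace:
        if access == 'R':
            if addr in last_write:  # flow dependence: write -> read
                deps[('flow', addr)].add(it - last_write[addr])
            last_reads[addr].append(it)
        else:
            if addr in last_write:  # output dependence: write -> write
                deps[('output', addr)].add(it - last_write[addr])
            for r in last_reads[addr]:  # anti dependence: read -> write
                deps[('anti', addr)].add(it - r)
            last_write[addr] = it
            last_reads[addr] = []
    return dict(deps)
```

For a loop like `a[i] = a[i-1] + c`, every iteration reads the location written by the previous one, so the sketch reports a flow dependence of distance 1 at each such address.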
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1993.
|Date Available in IDEALS:||2014-12-17|