SIMD (Single Instruction stream, Multiple Data stream) and VLIW (Very Long Instruction Word) architectures have been successful as targets for parallelizing compilers in large part because they resolve synchronization statically, at compile time, so that synchronization between processes incurs zero run-time cost. However, because SIMD and VLIW machines support only a single thread of control flow, they cannot execute distinct loops, conditional statements, subroutine calls, or even variable-execution-time instructions in parallel. Much program parallelism must therefore be ignored, and the resulting serialization of code can dramatically reduce speedup and efficiency. Recently, a new parallel architecture called barrier MIMD (Multiple Instruction stream, Multiple Data stream) has been introduced; it retains the static scheduling advantages of SIMD and VLIW while also being able to execute nearly any programming language construct in parallel. As a result, a parallelizing compiler can exploit far more program parallelism and execute it efficiently on a barrier MIMD. The objective of this project is to improve and advance the compiler technology capable of exploiting barrier MIMD hardware. Instruction and data memory hardware designs that exploit static, compile-time knowledge to dramatically increase barrier MIMD performance will also be investigated.
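To make the distinction concrete, the following is a minimal, purely illustrative sketch (assuming POSIX threads on a conventional shared-memory machine, not the project's hardware barrier mechanism): two threads run distinct loops, something a single-control-thread SIMD or VLIW machine cannot express, and then align at a barrier. In a barrier MIMD, the compiler would insert such barriers so that relative timing after each barrier is statically known; pthread_barrier_wait here merely stands in for that hardware synchronization.

    /* Illustrative sketch only: software barriers between threads running
     * distinct code, the kind of cross-thread alignment a barrier MIMD
     * resolves in hardware at compiler-chosen points. Build with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 8
    static pthread_barrier_t barrier;
    static double a[N], b[N];

    static void *worker0(void *arg) {
        (void)arg;
        for (int i = 0; i < N; i++)      /* distinct loop body per thread:   */
            a[i] = i * 2.0;              /* not expressible on SIMD or VLIW  */
        pthread_barrier_wait(&barrier);  /* both threads align here          */
        printf("a[3] + b[3] = %g\n", a[3] + b[3]);
        return NULL;
    }

    static void *worker1(void *arg) {
        (void)arg;
        for (int i = 0; i < N; i++)
            b[i] = i + 0.5;              /* different work, same barrier     */
        pthread_barrier_wait(&barrier);
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_barrier_init(&barrier, NULL, 2);
        pthread_create(&t0, NULL, worker0, NULL);
        pthread_create(&t1, NULL, worker1, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }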

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Application #: 9110261
Program Officer: Yechezkel Zalcstein
Project Start:
Project End:
Budget Start: 1991-07-15
Budget End: 1993-12-31
Support Year:
Fiscal Year: 1991
Total Cost: $63,296
Indirect Cost:
Name: University of Minnesota Twin Cities
Department:
Type:
DUNS #:
City: Minneapolis
State: MN
Country: United States
Zip Code: 55455