Computers with traditional architectures suffer from an inherent limitation known as the "von Neumann bottleneck." Instructions are retrieved from memory and then executed sequentially, one by one, by the Central Processing Unit (CPU). Because the CPU is generally capable of higher speeds than the memory subsystem in which the program resides, the system as a whole is limited by the rate at which instructions can be "fetched." Furthermore, many algorithms contain portions that could be executed at the same time, or in "parallel," if facilities to do so existed. Except for a few expensive special-purpose machines, computers have been Single Instruction Single Data (SISD) processors, performing commands one at a time without the possibility of concurrency. Some applications cannot be computerized at all without the simultaneous execution of many instructions operating on separate vectors or partitions of the data.

A modern Electrical Engineering curriculum should include the application of parallel techniques to problems at the frontiers of computing speed, including image and speech analysis, high-speed arithmetic, waveform synthesis and signal processing. This project therefore establishes a laboratory dedicated to parallel computing, using general-purpose PCs as hosts for multiprocessor boards. This approach provides maximum flexibility in configuring systems, the opportunity for incremental growth and the immediate ability for undergraduates to take advantage of new and emerging technology. The award is being matched by an equal amount from the principal investigator's institution.