This research is based on the theme of using algorithmic techniques to enhance the performance and flexibility of parallel architectures. Specific projects span a spectrum from traditional topics, such as task scheduling and load balancing, to less familiar ones, such as using emulations to achieve architectural flexibility (e.g., reconfigurability); and from foundational questions concerning the potential and the limitations of various modalities of computation and communication (e.g., the limitations of bufferless communication), to more immediately applicable studies of algorithmic techniques that endow a parallel architecture with facilities that would be prohibitively expensive to provide in hardware (e.g., a multigauging capability).

The program is motivated by three observations: (1) the speed of processors in parallel architectures is increasing faster than the speed of the communication subsystem; (2) the potential of parallel computation is realized only when programmers are liberated from having to manage the detailed physical resources (number of processors, interconnection topology, etc.) of their machines; (3) technological developments are continually creating new modalities of parallel computation. All three phenomena increase the importance of developing algorithms that enhance the computing power of parallel architectures. Observation (1) suggests that chores which previously demanded the speed, and the concomitant cost, of hardware implementation can now be relegated to software. Observation (2) points to the increasing need for algorithms that enhance the portability of parallel programs. Observation (3) suggests the need to understand emerging modalities of parallel computation in order to exploit them effectively.