The ultimate goal of this research is to develop sophisticated compilers for parallel machines. Memory latency is the focus of our research because it is the bottleneck that drives many design decisions in parallel architectures today. As systems grow larger, it becomes increasingly difficult to keep all memory close to all processors, which increases effective memory latency. Current solutions for reducing memory latency can be categorized as either dynamic or static. Dynamic solutions include multi-caches and shared virtual memory. These methods rest on the assumption that programs exhibit a high degree of locality, so the cost of servicing a cache miss or a page fault can be amortized over many references to that cache line or page. Current static solutions rely on the programmer to use memory efficiently. Three related research directions are proposed: a static solution to the problem of reducing effective memory latency that requires much less detailed knowledge of the machine by the programmer than present solutions; the combination of static and dynamic techniques for reducing memory latency; and static and dynamic methods for exploiting locality in recursively defined dynamic data structures.
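The amortization argument above can be illustrated with a toy miss-counting model: under spatial locality, a sequential scan misses only once per cache line, so the miss penalty is spread over one line's worth of references, while a large-stride scan pays a full miss on every access. This is a sketch under our own simplifying assumptions (cold cache, fixed line size); the function names and parameters are illustrative, not from the proposal.

```python
def misses_sequential(n_words, line_words):
    """Cold-cache misses for a sequential scan of n_words:
    one miss per cache line touched."""
    return (n_words + line_words - 1) // line_words


def misses_strided(n_words, line_words, stride_words):
    """Cold-cache misses for a strided scan over n_words.
    When the stride is at least a full line, every access
    lands on a fresh line and therefore misses."""
    if stride_words >= line_words:
        return n_words // stride_words  # one miss per access
    return misses_sequential(n_words, line_words)


if __name__ == "__main__":
    N, LINE = 1024, 16
    seq = misses_sequential(N, LINE)     # 1024 refs, 64 misses:
                                         # cost amortized over 16 refs/miss
    bad = misses_strided(N, LINE, LINE)  # 64 refs, 64 misses:
                                         # no amortization at all
    print(seq, bad)
```

Under this model the sequential scan makes sixteen references per miss, while the line-sized stride makes one, which is the locality assumption that dynamic schemes such as caching and shared virtual memory depend on.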