This work continues earlier research on interprocedural data-flow analysis and optimization in a programming environment. The new work will emphasize three main directions.

1. Algorithms for interprocedural analysis that produce sharper information than standard techniques. This includes formulations of the side-effect problems that determine the largest subsection of an array involved in a call, improvements to constant propagation that retain valuable information discarded by current methods, and approximation methods for flow-sensitive sets. (An illustrative sketch of interprocedural constant propagation appears below.)

2. Incremental methods for returning a program to a consistent state after a change to one or more of its constituent parts. In a compiler that uses interprocedural information, three distinct problems arise: correcting the interprocedural information to reflect the new state of the program, determining which modules must be recompiled because of changes to that information, and regenerating compiled code for those modules. (A sketch of the recompilation test also appears below.)

3. A global approach to optimization that uses interprocedural information to direct the optimization of the program as a whole. In such a scheme, the compiler examines the entire program to plan the application of optimizations like linkage tailoring or activation-record merging, and determines what level of optimization is merited for each entry point.

The setting for this research will be the compilation system of the Rn FORTRAN Programming Environment. The environment has been designed as a test bed for research into whole-program analysis and optimization. It is currently being extended to perform incremental dependence analysis in support of parallel programming research.
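To make the constant-propagation direction concrete, the following is a minimal sketch, not the Rn system's algorithm, of the kind of information such an analysis gathers. It assumes a simplified representation in which each call site records the actual value passed for every formal parameter of the callee; a formal that receives the same constant at every site stays constant and can be folded inside the callee.

```python
# Illustrative sketch: interprocedural constant propagation for formal
# parameters.  Each formal starts at "top" (no call site seen) and is
# lowered by meeting the actual argument passed at every call site.

UNDEF = "undefined"       # top of the lattice: no call site seen yet
NONCONST = "nonconstant"  # bottom: passed different values somewhere

def meet(a, b):
    """Combine the information from two call sites for one formal."""
    if a == UNDEF:
        return b
    if b == UNDEF:
        return a
    return a if a == b else NONCONST

def propagate_constants(formals, call_sites):
    """formals:    {procedure: [formal parameter names]}
       call_sites: list of (callee, {formal: constant or NONCONST}) pairs.
       Returns {(procedure, formal): lattice value}."""
    values = {(p, f): UNDEF for p, fs in formals.items() for f in fs}
    for callee, bindings in call_sites:
        for formal, actual in bindings.items():
            values[(callee, formal)] = meet(values[(callee, formal)], actual)
    return values

# Example: N is 100 at every call to P, so N can be treated as a constant
# inside P; M receives two different values and becomes nonconstant.
formals = {"P": ["N", "M"]}
sites = [("P", {"N": 100, "M": 1}),
         ("P", {"N": 100, "M": 2})]
print(propagate_constants(formals, sites))
```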
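The recompilation question in the second direction can be phrased as a simple containment test. The sketch below is one plausible form of such a test, not the Rn implementation; it assumes that each compiled module records the "may" summary sets (for example, the set of variables each called procedure may modify) that it was compiled against, and that a module remains valid as long as every recomputed set is still contained in the set it assumed.

```python
# Illustrative sketch: decide which modules need recompilation after the
# interprocedural information has been brought up to date.  A module's
# optimizations remain safe as long as no called procedure acquired a
# side effect the module did not anticipate.

def modules_to_recompile(dependences, assumed, recomputed):
    """dependences: {module: set of procedures whose summaries it used}
       assumed:     {procedure: frozenset of facts assumed at compile time}
       recomputed:  {procedure: frozenset of facts after reanalysis}
       Returns the modules whose compiled code may now be invalid."""
    stale = set()
    for module, used in dependences.items():
        for proc in used:
            new = recomputed.get(proc, frozenset())
            old = assumed.get(proc, frozenset())
            if not new <= old:   # a side effect appeared that was not anticipated
                stale.add(module)
                break
    return stale

# Example: an edit adds Y to the variables SUB1 may modify, so MAIN
# (compiled assuming SUB1 touches only X) must be recompiled.
deps = {"MAIN": {"SUB1"}, "AUX": {"SUB2"}}
old  = {"SUB1": frozenset({"X"}), "SUB2": frozenset()}
new  = {"SUB1": frozenset({"X", "Y"}), "SUB2": frozenset()}
print(modules_to_recompile(deps, old, new))   # {'MAIN'}
```

Under this test, shrinking a summary set never forces recompilation; it only means an optimization opportunity may be missed until the module is recompiled for some other reason.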