Significant numerical computations now commonly require vast processing power. At the core of many of these problems is the solution of large, sparse systems of equations. A host of specialized high-speed processors has been developed to attack such problems. Systolic architectures have proven to be excellent candidates for a range of tasks, and they promise to remain powerful computing tools for some time to come. But without appropriate algorithm development, these specialized processors will never reach their full potential. The most conspicuous of the missing systolic algorithms are the sparse matrix methods, including the conjugate gradient techniques, arguably the most important advance in large-scale scientific computing of the past decade and likely of the next. Phase I will develop a conjugate gradient method for systolic architectures. Such a method will greatly aid the use of these architectures: machine cycles will be used more efficiently, as will the time of the scientists and engineers who use them. During Phase II, the theoretical concepts will be extended to include nonsymmetric conjugate gradient methods and preconditioners.
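
For reference, the following is a minimal sketch of the classical conjugate gradient iteration for a symmetric positive-definite system, written here in Python; it is not the systolic formulation the proposal targets, but it illustrates the property that makes the method so well suited to sparse systems: each iteration needs only one matrix-vector product plus a few vector operations. The operator A, the tridiagonal test matrix, and the tolerance are illustrative assumptions, not part of the proposal.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Classical conjugate gradient for a symmetric positive-definite A.

        A may be any object supporting A @ x (dense array, sparse matrix,
        or a custom operator); only matrix-vector products are required,
        which is what makes the method attractive for sparse systems.
        """
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x          # initial residual
        p = r.copy()           # initial search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)      # optimal step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:      # residual small enough: converged
                break
            p = r + (rs_new / rs_old) * p  # next A-conjugate search direction
            rs_old = rs_new
        return x

    # Illustrative test: a sparse-structured SPD system (1-D Laplacian).
    n = 100
    A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))  # residual norm, near machine tolerance

Because the iteration is dominated by a single matrix-vector product and inner products, a systolic realization can, in principle, pipeline those kernels across the processor array, which is the adaptation Phase I pursues.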