Many problems in science and engineering require the solution of symmetric positive definite systems of linear equations. Although practical systems can be huge, each equation often involves only a small number of the variables. Such systems are called sparse, and there has been considerable recent interest in solving them on parallel message-passing machines such as the hypercube. However, efficient parallel solvers that run significantly faster than standard sequential codes do not yet exist. Several approaches to remedying this situation will be examined. By studying the relationship between sparsity, elimination tree height, and parallelism, good algorithms can be devised for reordering the coefficient matrix to permit efficient parallel factorization. Since the height of the elimination tree is one measure of the available parallelism, an algorithm for reducing this height will be developed. In addition, methods will be investigated for limiting the communication overhead on message-passing multiprocessors and for distributing the load more evenly among the processors. Together, these techniques will reduce the current difficulties with factoring sparse matrices in parallel and enable an efficient implementation.
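To make the connection between the elimination tree and parallelism concrete, the following sketch builds the elimination tree of a sparse symmetric matrix and measures its height. The tree construction follows Liu's standard algorithm with path compression; the function names and the SciPy CSC representation are illustrative choices, not part of the proposal itself.

    import numpy as np
    from scipy.sparse import csc_matrix

    def elimination_tree(A):
        # Liu's algorithm with path compression: parent[j] is the
        # parent of column j in the elimination tree (-1 marks a root).
        A = csc_matrix(A)
        n = A.shape[0]
        parent = np.full(n, -1, dtype=int)
        ancestor = np.full(n, -1, dtype=int)
        for j in range(n):
            for i in A.indices[A.indptr[j]:A.indptr[j + 1]]:
                while i < j:                 # climb from i toward the root
                    nxt = ancestor[i]
                    ancestor[i] = j          # path compression
                    if nxt == -1:
                        parent[i] = j        # i had no parent yet: link to j
                        break
                    i = nxt
        return parent

    def tree_height(parent):
        # Longest leaf-to-root path, counted in nodes. Columns on the same
        # level of the tree can be eliminated simultaneously, so the height
        # bounds the number of sequential elimination steps.
        depth = np.zeros(len(parent), dtype=int)
        for j in range(len(parent)):         # parent[j] > j, so one pass suffices
            if parent[j] != -1:
                depth[parent[j]] = max(depth[parent[j]], depth[j] + 1)
        return int(depth.max()) + 1

For example, a tridiagonal matrix in its natural ordering yields an elimination tree that is a chain of height n, offering no parallelism, whereas a nested-dissection reordering of the same matrix yields a tree of height O(log n); this is the kind of gap the proposed reordering algorithms aim to exploit.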