The central focus of this work is to develop a computational theory of numerical techniques for parallel processing and to apply state-of-the-art computing machinery and methods to large systems problems. More specifically, the objectives are to develop and extend the theory of sparse vector methods and factorization paths, and to apply it to the design of parallel algorithms for matrix operations, including applications to large system analysis using supercomputer technology. In many instances these algorithms may represent dramatic improvements over present techniques; three immediate applications of the parallel sparse vector methods are parallel triangular factorization, inversion, and partitioning of matrices.

A graph-theoretic analysis of the factorization path graph has not yet been carried out, and such an analysis is essential to understanding these methods. Effective ordering procedures are likewise fundamental to efficient conventional and parallel sparse vector operations, and new ordering methods are proposed in this research. The proposed methods will be tested on the particular problem of power systems using supercomputers.

The essential problem that arises with vector computers is matching the corresponding matrix elements from source and target rows when performing operations with sparse vector techniques. This work elaborates on the earlier proposed Scatter-Gather and Explicit-Enumeration approaches and brings out their relative advantages. Arguments are given for unifying both techniques in an efficient sparse solver, and it is expected that an Overlap-Scatter sparse matrix representation will be used to achieve this goal. This research also treats the statistical properties of parallel sparse vector methods in detail.
This includes the computational complexity of conventional and parallel sparse vector applications in matrix triangular factorization, matrix inversion, refactorization, partitioning, network reduction, state estimation, and Hessian optimization algorithms, as well as the computational complexity of ordering schemes. Parallel sparse vector methods are completely general, and because of their significant savings in time and space they have many potential applications in other studies. The expected result of this research is a set of powerful new parallel methods for a variety of large matrix problems.
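To make the factorization path idea concrete, the following minimal Python sketch computes the parent relation of the elimination tree from a given filled lower-triangular sparsity pattern and follows it to enumerate a factorization path. The function names and the 5x5 example pattern are illustrative assumptions, not taken from the proposal; production solvers use far more efficient tree-construction algorithms.

```python
# Sketch: factorization paths via the elimination tree, assuming the
# filled sparsity pattern of the factor L is already known. The parent
# of column i is the first subdiagonal nonzero in column i of L; the
# factorization path of i is its chain of ancestors -- exactly the
# columns a sparse forward solve with a single nonzero at i must touch.

def elimination_tree(L_pattern, n):
    """parent[i] = min{j > i : L[j, i] != 0}, or None at a root.

    L_pattern: set of (row, col) positions of nonzeros below the
    diagonal of L (illustrative format, not a solver data structure).
    """
    parent = [None] * n
    for i in range(n):
        below = [r for (r, c) in L_pattern if c == i and r > i]
        parent[i] = min(below) if below else None
    return parent

def factorization_path(i, parent):
    """Columns visited when the sparse right-hand side is nonzero only at i."""
    path = [i]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

# hypothetical 5x5 filled pattern of L (subdiagonal nonzeros only)
pattern = {(2, 0), (3, 1), (3, 2), (4, 3)}
par = elimination_tree(pattern, 5)
print(factorization_path(0, par))  # -> [0, 2, 3, 4]
```

The union of such paths over the nonzeros of a sparse vector gives the factorization path graph whose graph-theoretic analysis the proposal calls for.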
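The element-matching problem behind the Scatter-Gather approach can be sketched as follows. This is a simplified illustration under assumed data structures (dict-based sparse rows and a dense work array); the function name and example values are hypothetical, and real vectorized solvers use packed index/value arrays rather than Python dicts.

```python
# Sketch of the scatter-gather technique for matching corresponding
# elements of a source and a target sparse row during triangular
# factorization: scatter the target row into a dense work array indexed
# by column, accumulate the scaled source row, then gather the result
# back to sparse form. No searching is needed to align the two patterns.

def scatter_gather_update(target, source, multiplier, n):
    """Compute target := target - multiplier * source for sparse rows.

    target, source: dicts mapping column index -> value (illustrative
    sparse-row format). n: matrix dimension, sizing the work array.
    """
    work = [0.0] * n                    # dense accumulator, one slot per column
    for col, val in target.items():     # scatter the target row
        work[col] = val
    for col, val in source.items():     # accumulate the scaled source row
        work[col] -= multiplier * val
    # gather over the union of both sparsity patterns
    pattern = set(target) | set(source)
    return {col: work[col] for col in pattern if work[col] != 0.0}

row_t = {0: 4.0, 2: 1.0}
row_s = {2: 2.0, 3: 5.0}
updated = scatter_gather_update(row_t, row_s, 0.5, 4)
# exact cancellation at column 2 drops it from the gathered result
```

Explicit-Enumeration, by contrast, precomputes the matching positions so the dense work array is unnecessary; the trade-off between the two motivates the unified Overlap-Scatter representation mentioned above.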