This project continues an ongoing research program in unconstrained and constrained optimization. The project has three main parts. First, limited memory methods will be developed for large bound-constrained and nonlinearly constrained optimization problems. A new algebraic representation of the limited memory update, developed in earlier work, will greatly facilitate the use of the limited memory approach in constrained optimization and should yield very efficient methods for large constrained problems. Second, tensor methods will be developed for large, sparse nonlinear problems, including nonlinear equations, nonlinear least squares, and unconstrained optimization, as well as for nonlinearly constrained optimization problems. This approach shows great promise of producing methods that are robust and efficient on both nonsingular and singular problems and that, in the sparse case, can efficiently use any desired direct or iterative linear solver. Third, new trust region methods will be developed and analyzed for nonlinearly constrained optimization problems with inequality and equality constraints. These methods are expected to have strong global convergence properties even in the presence of linearly dependent constraint gradients, and to perform robustly and efficiently in practice. In addition, research will be carried out on a modified Cholesky factorization for large sparse constrained optimization and on efficient methods for implicit nonlinear least squares problems.
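The abstract does not state the algebraic representation itself; presumably it is the compact matrix form of the limited memory BFGS update, and the following sketch is written under that assumption. With correction pairs $s_i = x_{i+1} - x_i$ and $y_i = \nabla f(x_{i+1}) - \nabla f(x_i)$ collected as $S_k = [s_0, \ldots, s_{k-1}]$ and $Y_k = [y_0, \ldots, y_{k-1}]$, the approximate Hessian can be written as

\[
B_k \;=\; B_0 \;-\;
\begin{bmatrix} B_0 S_k & Y_k \end{bmatrix}
\begin{bmatrix} S_k^{T} B_0 S_k & L_k \\ L_k^{T} & -D_k \end{bmatrix}^{-1}
\begin{bmatrix} S_k^{T} B_0 \\ Y_k^{T} \end{bmatrix},
\]

where $D_k = \operatorname{diag}(s_0^{T} y_0, \ldots, s_{k-1}^{T} y_{k-1})$ and $(L_k)_{ij} = s_{i-1}^{T} y_{j-1}$ for $i > j$, zero otherwise. Because $B_k$ differs from the simple initial matrix $B_0$ by a low-rank outer product, the products with constraint Jacobians and the projections that dominate constrained optimization algorithms can be formed with work proportional to the number of variables, which is what makes the approach attractive for large constrained problems.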
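Tensor methods augment the linear (Newton) local model with a low-rank second-order term. As a sketch of the standard formulation for nonlinear equations $F(x) = 0$, assuming the usual interpolation-based construction of the tensor term, the local model at the iterate $x_k$ is

\[
M_T(x_k + d) \;=\; F(x_k) + F'(x_k)\, d + \tfrac{1}{2}\, T_k\, d\, d,
\]

where the three-dimensional tensor $T_k$ is chosen in the low-rank form $T_k\, d\, d = \sum_{j=1}^{p} a_j \, (s_j^{T} d)^2$, with $s_j$ the steps to a small number $p$ of recent past iterates, so that the model interpolates the function values at those iterates. Because $p$ is small, forming and minimizing the model adds little to the cost of a Newton iteration, while the second-order term supplies information that the linear model lacks when $F'(x_k)$ is singular or ill-conditioned.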
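The trust region methods of the third part act on subproblems of roughly the following form; the details of how the constraints are relaxed distinguish individual methods, so this is only a generic template:

\[
\min_{d}\ \nabla f(x_k)^{T} d + \tfrac{1}{2}\, d^{T} B_k\, d
\quad \text{s.t.} \quad
c_i(x_k) + \nabla c_i(x_k)^{T} d = 0,\ i \in \mathcal{E}, \qquad
c_i(x_k) + \nabla c_i(x_k)^{T} d \ge 0,\ i \in \mathcal{I}, \qquad
\|d\| \le \Delta_k .
\]

The linearized constraints and the trust region bound may be jointly inconsistent, particularly when the constraint gradients are linearly dependent, so practical methods replace the constraints by a relaxed or penalized form; the planned global convergence analysis targets exactly this situation.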
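A modified Cholesky factorization, in the sense of the Gill-Murray and Schnabel-Eskow algorithms, computes

\[
P\,(A + E)\,P^{T} \;=\; L L^{T},
\]

where $P$ is a permutation and $E$ is a nonnegative diagonal perturbation that is zero when $A$ is safely positive definite and is otherwise kept small. In the large sparse setting the permutation must also limit fill-in, which competes with the pivoting used to keep $E$ small; reconciling these two objectives is presumably the difficulty the proposed work addresses.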