This grant provides funding to study efficient algorithms for solving stochastic and large-scale convex programming (CP) problems. The development of CP algorithms now faces several new challenges. First, the incorporation of uncertainty, and possibly dynamics, has considerably increased the difficulty of solving CP problems. Second, the size of CP problems arising from many emerging applications, such as statistical learning, far exceeds the capability of existing polynomial-time solvers. In this project, we will study a new and promising class of stochastic first-order methods for solving stochastic CP and establish their convergence and large-deviation properties. Moreover, using these stochastic methods as the driving force, we aim to enhance our capability for solving extremely large-scale CP through the development of novel randomization schemes. Finally, we will advance optimization solvers for various statistical learning models, which are crucial for transforming data into knowledge in the information age.

If successful, the results of this research will lead to algorithmic innovations for solving large-scale and stochastic CP problems arising from diverse areas in logistics, finance, energy, medicine, defense, and science. In green energy systems, stochastic programming models are used for long-term operations planning of hydro-thermal power systems to cope with stochastic streamflows. In medicine, statistical learning techniques can be applied to diagnose breast cancer by using characteristics of individual cells to discriminate benign from malignant breast lumps. These are just a few examples of the stochastic programming (SP), large-scale CP, and statistical learning applications to be studied in this research that will impact society. Additionally, this research will contribute to open research infrastructure in optimization through the development of CP-solver software. We will also engage students in our research efforts.

Project Report

Stochastic optimization problems arise naturally when one intends to minimize or maximize a performance measure of a complex system under uncertainty. Recently, they have also found important applications in statistical learning, where one intends to extract useful information from huge data sets under uncertainty. This project supported the PI's research on the design and analysis of stochastic first-order methods that exhibit the optimal (unimprovable) rate of convergence for solving different classes of stochastic optimization problems. With the support of this grant, the PI published 8 journal papers and submitted another 8 papers from 2013 to 2014. Most of these papers have been accepted by or submitted to top optimization journals, including Mathematical Programming and SIAM Journal on Optimization. The main research results obtained in 2013-2014 under the support of this NSF grant are briefly summarized as follows.

1. The PI devised novel stochastic primal-dual methods and showed that they are optimal for solving an important class of saddle-point problems.

2. The PI devised a novel stochastic accelerated mirror-prox method and showed that it achieves the optimal rate of convergence for solving a class of composite variational inequality problems.

3. The PI devised a novel randomized block mirror descent method and demonstrated that it can significantly reduce the iteration cost of existing stochastic approximation algorithms.

4. The PI developed new stochastic gradient methods for solving general (not necessarily convex) stochastic nonlinear optimization problems.

5. The PI invented novel gradient sliding techniques that can skip the computation of gradients for solving an important class of composite optimization problems.
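To illustrate the flavor of the stochastic first-order methods summarized above, the following is a minimal sketch of stochastic gradient descent with iterate averaging (the classical robust stochastic approximation scheme underlying these methods). The toy objective, step size, and function names are illustrative assumptions for exposition only, not the PI's actual algorithms or code.

```python
import random

def averaged_sgd(grad_oracle, x0, steps, step_size):
    """Stochastic approximation sketch: step along a noisy (unbiased)
    gradient estimate and return the running average of the iterates."""
    x = list(x0)
    avg = list(x0)
    for k in range(1, steps + 1):
        g = grad_oracle(x)
        x = [xi - step_size * gi for xi, gi in zip(x, g)]
        # running (ergodic) mean over iterates x0, ..., xk
        avg = [ai + (xi - ai) / (k + 1) for ai, xi in zip(avg, x)]
    return avg

# Toy problem (an assumption for illustration): minimize
# E[0.5 * ||x - target||^2] given only noisy gradient samples,
# so the minimizer is the noise-free target vector.
random.seed(0)
target = [1.0, -2.0]

def noisy_grad(x):
    # unbiased stochastic gradient of 0.5 * ||x - target||^2
    return [xi - ti + random.gauss(0.0, 0.1) for xi, ti in zip(x, target)]

sol = averaged_sgd(noisy_grad, [0.0, 0.0], steps=2000, step_size=0.05)
```

Averaging the iterates, rather than returning the last one, damps the gradient noise and is what yields the optimal convergence rates analyzed in this line of work.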
In addition, the PI obtained various new results on conditional gradient methods and alternating direction methods of multipliers, and pursued applications in image processing, machine learning, and national security. Four Ph.D. students and one master's student were involved in this research. The PI and his students have given numerous invited talks at academic conferences such as the INFORMS Annual Meeting, the SIAM Conference on Optimization, and the International Conference on Continuous Optimization. Some research outcomes have been incorporated into the PI's course "Introduction to Nonlinear Optimization". The PI has also been invited to give seminars at various universities, including MIT, Carnegie Mellon University, University of California at Los Angeles, Rice University, and Peking University. The PI also created a website containing the software and test problems related to this research (www.ise.ufl.edu/glan/computer-codes/).

Budget Start: 2010-05-01
Budget End: 2014-04-30
Fiscal Year: 2010
Total Cost: $200,000
Name: University of Florida
City: Gainesville
State: FL
Country: United States
Zip Code: 32611