Convex optimization methods are important in control, signal processing, machine learning, and many other fields of engineering and applied science. Advances in algorithms for convex optimization over the last twenty-five years have resulted in reliable, user-friendly software tools that are widely used in academic research and industry. The most popular software packages are based on a separation between a modeling front end and a general-purpose solver for semidefinite optimization (a matrix extension of classical linear programming, and a special case of conic linear optimization). The task of the modeling front end is to translate the optimization problem into the canonical form required by the semidefinite optimization solver. The techniques used in this translation step are the outcome of extensive research on how to represent nonlinear convex constraints in the semidefinite optimization format. The algorithms used in the solvers are primal-dual interior-point methods for semidefinite optimization, which reached a high level of maturity in the early 2000s.

Each of these two layers limits scalability. The reduction to semidefinite optimization in the modeling step often requires the introduction of auxiliary variables and constraints, which can increase the size of the optimization problem considerably. In addition, the semidefinite optimization algorithms commonly used in convex optimization solvers are second-order methods that require, at each iteration, the solution of large and often dense sets of linear equations; this further limits the size of the problems that can be solved. This proposal is motivated by the increasing demand for large-scale convex optimization algorithms in control, signal processing, and system identification. The project focuses on developing specialized methods for two types of constraints that underlie some of the most important convex optimization applications in these areas, and that have recently found new applications in statistical signal processing and machine learning.
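To make the translation step concrete, the following sketch (NumPy only; the matrix A, the bound t, and the tolerance are illustrative choices, not part of the proposal) verifies the standard reduction of a spectral-norm constraint ||A||_2 <= t to the linear matrix inequality [[t*I, A], [A^T, t*I]] >= 0. The reformulation is exact, but it replaces a constraint on an m x n matrix with an (m+n) x (m+n) semidefinite block, illustrating how auxiliary variables and constraints can enlarge the problem considerably.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))   # illustrative data matrix
t = 2.0                           # illustrative norm bound

# By a Schur complement argument, the nonlinear constraint ||A||_2 <= t
# holds if and only if this (m+n) x (m+n) block matrix is PSD:
M = np.block([[t * np.eye(m), A],
              [A.T, t * np.eye(n)]])

lmi_holds = np.linalg.eigvalsh(M).min() >= -1e-9
norm_holds = np.linalg.svd(A, compute_uv=False).max() <= t
print(lmi_holds == norm_holds)    # True: the two conditions agree
```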

The first class of problems consists of convex optimization problems involving convex cones of nonnegative Popov functions. This class includes nonnegative matrix polynomials and trigonometric polynomials, and is of fundamental importance in linear system theory, control, and signal processing. The second class includes system identification methods based on minimizing the nuclear norm (trace norm) of structured matrices. The focus on these two problem classes is motivated by several considerations: first, their central position in system theory and signal processing; second, the well-known difficulties in solving them with general-purpose semidefinite optimization software; and third, their importance in recently developed techniques that extend 1-norm optimization methods for sparse signal recovery to sparse recovery problems over continuous domains and to matrix rank minimization. Two algorithmic approaches will be considered for each problem class: interior-point methods for nonsymmetric conic optimization, which handle the constraints directly without embedding them in a much larger semidefinite optimization problem, and first-order proximal algorithms based on operator splitting and decomposition techniques.
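As a small illustration of the first problem class, the sketch below (NumPy only; the degree n and the random Gram matrix are hypothetical choices for the example) uses the well-known trace parameterization of nonnegative trigonometric polynomials: every positive semidefinite Gram matrix Q yields coefficients x_k (sums along the diagonals of Q) of a polynomial that is nonnegative on the unit circle. The semidefinite embedding replaces n+1 coefficients with an (n+1) x (n+1) matrix variable, which is exactly the kind of growth that methods handling the cone directly are meant to avoid.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
G = rng.standard_normal((n + 1, n + 1))
Q = G @ G.T                                   # an arbitrary PSD Gram matrix

# Trace parameterization: x_k is the sum of the k-th diagonal of Q.
x = np.array([np.trace(Q, offset=k) for k in range(n + 1)])

# Evaluate p(w) = x_0 + 2*sum_k x_k*cos(k*w) on a fine grid.
w = np.linspace(0.0, 2.0 * np.pi, 2000)
p = x[0] + 2.0 * sum(x[k] * np.cos(k * w) for k in range(1, n + 1))
print(p.min() >= -1e-9)                       # True: p is nonnegative
```

For the second class, first-order proximal algorithms repeatedly evaluate the proximal operator of the nuclear norm, which has a closed form: soft-thresholding of the singular values. A minimal sketch (NumPy only; the matrix sizes, noise level, and threshold tau are illustrative):

```python
import numpy as np

def prox_nuclear(B, tau):
    """argmin_X 0.5*||X - B||_F^2 + tau*||X||_* via singular value soft-thresholding."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank-2 signal
B = L + 0.1 * rng.standard_normal((20, 20))                      # noisy observation
X = prox_nuclear(B, tau=1.0)
print(np.linalg.matrix_rank(B), np.linalg.matrix_rank(X, tol=1e-6))  # full rank -> low rank
```

Within an operator splitting method, a step like this would alternate with a projection onto the structured (e.g., Hankel) matrices, keeping each subproblem cheap.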

Project Start:
Project End:
Budget Start: 2015-09-15
Budget End: 2020-08-31
Support Year:
Fiscal Year: 2015
Total Cost: $343,483
Indirect Cost:
Name: University of California Los Angeles
Department:
Type:
DUNS #:
City: Los Angeles
State: CA
Country: United States
Zip Code: 90095