Most optimization problems arising in practice involve uncertainty: some parameters of interest may arise from a future process (such as market forces affecting the demand for a good), or from an imprecise measurement, or may be "fudged" on purpose in the interest of privacy. However, one can usually obtain some limited distributional information about such parameters, for example through past observations of the same process or through repeated measurements. How can this limited information be utilized to find solutions to the optimization problem that work well most of the time? This question forms the crux of stochastic optimization. Recent theoretical work has introduced novel techniques for dealing with uncertainty and has exposed challenges unique to stochastic optimization. A primary focus of this research is to further this theory of approximability of stochastic optimization problems.
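As a toy illustration of this idea (not drawn from the project itself), consider the classic newsvendor problem: a vendor must commit to an order quantity before random demand is realized, knowing the demand distribution only through past samples. The sketch below uses sample average approximation, choosing the quantity that maximizes average profit over the observed samples; all names and parameter values are illustrative assumptions.

```python
import random

def saa_newsvendor(demand_samples, price, cost, max_q):
    """Pick the order quantity maximizing average profit over past
    demand observations (sample average approximation)."""
    def avg_profit(q):
        # Revenue from units sold (capped by realized demand) minus purchase cost.
        return sum(price * min(q, d) - cost * q
                   for d in demand_samples) / len(demand_samples)
    return max(range(max_q + 1), key=avg_profit)

random.seed(0)
# Limited distributional information: 1000 past demand observations.
samples = [random.randint(50, 150) for _ in range(1000)]
q = saa_newsvendor(samples, price=3.0, cost=1.0, max_q=200)
print(q)  # an order quantity near the 2/3 quantile of observed demand
```

With enough samples, the chosen quantity approaches the optimum for the true distribution, which is the spirit of using limited distributional information to find solutions that work well most of the time.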
The PI will investigate optimization problems arising in scheduling, robot motion planning, network design, and resource allocation, with emphasis on the following issues: (1) How, and for which problems, can we transform algorithms that work well in the full-information setting into algorithms that work well in the stochastic setting? (2) How much information do we require about input distributions in order to obtain a good approximation? (3) Optimal solutions to multi-stage stochastic problems can be complex, exponential-size decision diagrams; for which problems can such complex solutions be approximated by much simpler ones? (4) For which multi-stage stochastic problems is it inherently hard to obtain approximation factors independent of the number of stages? Another area of focus for this project is the design of approximately optimal algorithms for Bayesian mechanism design and pricing problems arising in contexts such as online retailing and sponsored search auctions. This research program will involve students at all levels, including undergraduate projects aimed at experimentally evaluating algorithms. In addition, the PI plans to revamp algorithms courses at the University of Wisconsin-Madison, adding new undergraduate course content related to practical applications of algorithms, and new graduate courses based on advanced applications of algorithms such as algorithmic game theory.
Most optimization problems arising in practice involve uncertainty -- it is often not possible to determine all the parameters affecting a problem precisely. For example, the input may arise from a future process (such as market forces affecting the demand for a good), or from an imprecise measurement, or may be "fudged" on purpose in the interest of privacy. However, one can usually obtain some limited distributional information about unknown inputs, for example, through past observations of the same optimization process or through repeated measurements. The goal of this project was to understand how we can use this limited information to find solutions to the optimization problem that work well most of the time. We studied several such "stochastic" or "online" optimization problems and developed algorithms for them. For the problems that we studied, it is computationally infeasible to obtain optimal solutions even with complete information. Our algorithms are computationally efficient and produce solutions that are provably close to optimal, in particular, with a performance guarantee that is within a small constant factor of optimal. One class of problems that we studied arises as packet routing problems in communication networks. Another class that we studied arises in economic contexts where a seller is offering multiple items or services to buyers with varied preferences. As part of this project, the PI also developed a series of graduate courses in algorithms at the University of Wisconsin-Madison covering advanced topics and applications such as approximation algorithms, algorithmic game theory, and randomness in algorithm design. The lecture notes for these courses are freely available to the public from the websites of these courses.