Increasingly, automated economic transaction systems at eBay, Google, and other institutions negotiate, buy, and sell goods, services, advertisements, and more. These systems use auctions to make decisions about pricing, allocation, and optimization. As a result, we now have auction systems far greater in scale than the traditional "human scale" of negotiations and specialized auctions, and they affect our lives in sophisticated ways. These systems face many algorithmic challenges at the interface of Economics, Learning Theory, and Optimization, which is the focus of this project.

In this project, researchers will (a) design and analyze models of the various parties (users, auctioneers, buyers, and sellers) and their impact on auctions; (b) design and analyze mechanisms in the presence of parties with mixed utilities that go beyond traditional linear profit; (c) quantify the impact of budgets, which have traditionally been underemphasized, on truthfulness, equilibria, and utilities in mechanisms; (d) study the effect of bounded computational power and bounded rationality on mechanisms; (e) design richer mechanisms for futures and combinatorial goods, as well as for dynamic settings; (f) study the privacy, security, and verifiability of auction mechanisms; and (g) study the learning and optimization problems that are fundamental to the tasks above.

This project ultimately addresses how various parties, each with natural control knobs (budgets, utilities), interact with automated economic transaction systems; how information is learned, used, and controlled in such systems; and how these systems will evolve over the long term. The project explores these questions via the specific research tasks above, as well as by training undergraduate and graduate students to work at the interface of Economics, Optimization, and other areas.

Project Report

This project addressed research problems at the confluence of computer science and decision-making, with a focus on game theory, optimization, data analysis, and the underlying mathematical foundations, for a variety of applications including Internet advertising systems, privacy, social networks, and science writing. The project produced more than 20 publications in high-quality computer science conferences and journals. We describe some notable results in more detail below.

"Bandit" problems arise in decision-making contexts where an agent must trade off exploitation against exploration: only partial information is known about the different choices, and they must be at least partly explored in order to learn more. The project developed the first known approximation algorithm for bandit problems in which the payoff functions are adaptive and submodular. We studied the setting where the expected gain is initially unknown and is learned by interacting repeatedly with the optimized function, and we developed an efficient algorithm whose expected cumulative regret grows logarithmically with time and which captures an inherent property of submodular maximization: earlier mistakes are more costly than later ones.

We also considered a multi-armed bandit problem in which payoffs are a linear function of an observed stochastic context variable. When there is a gap between optimal and suboptimal rewards, several algorithms are known to achieve O(log T) regret after T time steps; however, these methods either have per-iteration computational cost that scales linearly with T or incur regret that grows linearly with the number of contexts |X|. We developed an ε-greedy-style algorithm that overcomes both limitations: when contexts are d-dimensional real-valued variables, it has per-iteration computational cost of O(poly(d)), independent of T, and achieves regret of O(poly(d) log T) even when |X| = Ω(2^d). Unlike previous algorithms, its space complexity scales as O(Kd^2) and does not grow with T (see the sketch below). We also designed and analyzed a recommendation policy that models the recommendation decision as a multi-armed bandit problem and showed that it has logarithmic regret, with regret depending linearly on d, the size of the underlying persistent group. We evaluated the policy on movie recommendations over the MovieLens and MoviePilot datasets.
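To make the ε-greedy linear-payoff approach concrete, the following Python is a minimal sketch, not the project's exact algorithm: it assumes Gaussian contexts, a 1/t-decaying exploration rate, and per-arm least-squares estimates, and all parameter values (d, K, the reward noise scale) are hypothetical.

    # Illustrative sketch (not the project's exact algorithm): an epsilon-greedy
    # contextual bandit with linear payoffs. Each arm a keeps a d x d Gram
    # matrix A[a] and a response vector b[a]; solving A[a] theta = b[a] gives a
    # least-squares estimate of that arm's weight vector. Total space is
    # O(K d^2) and does not grow with the horizon T.
    import numpy as np

    rng = np.random.default_rng(0)
    d, K, T = 5, 3, 5000                          # context dim, arms, horizon (hypothetical)

    # Hypothetical ground truth: one unknown weight vector per arm.
    true_theta = rng.normal(size=(K, d))

    A = np.stack([np.eye(d) for _ in range(K)])   # regularized per-arm Gram matrices
    b = np.zeros((K, d))                          # per-arm response sums
    regret = 0.0

    for t in range(1, T + 1):
        x = rng.normal(size=d)                    # stochastic d-dimensional context
        if rng.random() < min(1.0, 10.0 / t):     # exploration rate decaying like 1/t
            arm = int(rng.integers(K))            # explore: uniform random arm
        else:
            est = np.array([np.linalg.solve(A[a], b[a]) for a in range(K)])
            arm = int(np.argmax(est @ x))         # exploit current estimates
        reward = true_theta[arm] @ x + rng.normal(scale=0.1)
        A[arm] += np.outer(x, x)                  # rank-one statistics update
        b[arm] += reward * x
        regret += float(np.max(true_theta @ x) - true_theta[arm] @ x)

    print(f"cumulative pseudo-regret after {T} steps: {regret:.1f}")

For brevity the sketch solves a linear system per arm at O(d^3) cost; maintaining each inverse incrementally via Sherman-Morrison updates would bring the per-step cost toward the O(poly(d)) regime the report describes.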
We also studied privacy in data analysis, with the twin goals of enabling decision-making applications that preserve the privacy of the underlying data and understanding the potential costs of providing such privacy. We developed new sketch-based methods for pan privacy. These results are quite general, applying wherever linear sketches find use, and thereby help establish pan privacy as a distributed decision-making tool. Previously known pan-private algorithms for basic counting tasks such as distinct counts and heavy hitters are nontrivial and rely on sampling; we reexamined these tasks using sketching and obtained improved algorithms. Using noisy decoding, we also presented the first known lower bounds for pan privacy with respect to a single intrusion. These lower bounds show that, even with unbounded memory, pan-private algorithms for distinct counts cannot be significantly more accurate than ours.

We further studied privacy for a general class of data analyses useful in monitoring: computing predicate sums over sliding windows (and other well-known decay models of data, such as exponential and polynomial decay) in a model that extends the continual-privacy model. We presented ε-differentially private algorithms for accurately estimating decayed sums; for window and exponential decay sums, the algorithms are accurate up to additive 1/ε and polylogarithmic terms in the range of the computed function (a simple illustrative baseline appears after this report). We also proved lower bounds that are tight within polylogarithmic factors and tight in their dependence on the probability of error.

Educational activity focused on developing a graduate course covering the full gamut of ad systems, from game theory to optimization and data analysis; developing and presenting tutorials on the area to the broader research community; and training postdoctoral researchers, graduate students, and undergraduate students. The project contributed to the training of 7 graduate students, leading to employment at Google, Technicolor, and Wellesley College, as well as undergraduate students and postdocs. Several tutorials on project-related topics were given at major conferences and are available online.
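As referenced above, the following Python gives a concrete feel for private decayed sums. It is a deliberately simple baseline, not the project's algorithm (it does not achieve the additive 1/ε-plus-polylog accuracy described in the report), and the stream, block size, and parameter values are hypothetical.

    # Illustrative baseline (not the project's algorithm): an eps-differentially
    # private sliding-window count over a 0/1 stream. The stream is split into
    # disjoint blocks of size B, and each completed block count is released with
    # Laplace(1/eps) noise. Every stream item affects exactly one released count
    # (sensitivity 1), so the full release is eps-DP by parallel composition.
    # A window of W items is estimated by summing the last W/B noisy blocks;
    # the additive error is O(sqrt(W/B)/eps), plus up to B when the window does
    # not align with block boundaries (here W is a multiple of B for simplicity).
    import numpy as np

    rng = np.random.default_rng(1)
    eps, B, W, T = 0.5, 32, 256, 2048        # privacy, block size, window, stream length

    stream = rng.integers(0, 2, size=T)      # hypothetical 0/1 update stream
    noisy_blocks = []                        # one noisy count per completed block
    acc = 0                                  # running count within the current block

    for t, bit in enumerate(stream, start=1):
        acc += int(bit)
        if t % B == 0:                       # block complete: privatize and store
            noisy_blocks.append(acc + rng.laplace(scale=1.0 / eps))
            acc = 0

    window_estimate = sum(noisy_blocks[-(W // B):])
    true_window = int(stream[-W:].sum())
    print(f"true window count = {true_window}, private estimate = {window_estimate:.1f}")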

Agency
National Science Foundation (NSF)
Institute
Division of Computer and Communication Foundations (CCF)
Application #
0916782
Program Officer
Balasubramanian Kalyanasundaram
Budget Start
2009-09-01
Budget End
2013-08-31
Fiscal Year
2009
Total Cost
$300,000
Name
Rutgers University
City
New Brunswick
State
NJ
Country
United States
Zip Code
08901