People try to choose the best among many options, for example, selecting items at a grocery store based on their cost, their value, and a budget. Sometimes choosing the truly best option requires solving hard computational problems.

Economic theory has mostly ignored these computational issues or has used very simple cost models of computation. The field of computational complexity has developed sophisticated models for handling computational costs, and this project will apply those models and tools to economic situations, addressing a variety of questions such as:

- How do we give an economic justification for a cryptographic protocol?

- How do we model complex interactive games such as chess?

- How do people choose from a large range of possibilities (such as choosing a restaurant in a large city)?

- Is there an economic difference between true randomness and pseudorandom number generators?

With the vast number of options and the amount of data available on the ever-growing Internet, the limited computational power of humans, and even of our computers, cannot hope to give a complete analysis of the best choices. This research will generate tools that help us understand the limits of our decision-making ability and guide the techniques we need to make choices in our ever-more-complex society.

Project Report

Consider going to the supermarket and choosing a set of goods to buy based on prices, your food preferences, and your budget. Traditional economics assumes that you will make the optimal choices, but it is usually not feasible to find the absolute best selection of items to buy. Computational complexity provides the tools to understand the actions of buyers who face computational costs in the decisions they make. Traditional economic theory gives supermarkets a method to develop a preference profile for their buyers, assuming the buyers make optimal decisions. This research extends that work to the case where buyers make non-optimal but still good decisions based on simpler, easy-to-compute algorithms.

In another example, markets can aggregate people's beliefs into a prediction of future events. Consider a security on whether Jones will win a senate seat that pays $100 if Jones wins and $0 if he does not. If the security can be bought and sold (with short sales), its price will capture an aggregate probability that Jones will win. In several earlier works, the PI has shown how to view these prediction markets as a computational model that aggregates probabilities. Prediction markets do not work well in low-liquidity situations. Robin Hanson describes a market-scoring-rule mechanism that works even with low liquidity if someone seeds the market; roughly, a buyer moves the price of a security in the direction of their true belief. This research shows that the analogy breaks down in multiple dimensions, for example in a market predicting the latitude and longitude of a tornado strike: a buyer might move the market not in a straight line but along a circuitous route toward their belief, so these markets may not be as efficient (in the economic sense) as originally thought.

This research also examined several questions on the power of efficiently verifiable computation, toward a better understanding of the famous P versus NP question, which asks whether we can efficiently find a solution to any problem whose solutions can be efficiently verified.
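The market-scoring-rule mechanism referenced above is Hanson's logarithmic market scoring rule (LMSR). As a minimal illustration of how such a rule prices a binary security like the Jones contract, the Python sketch below implements the standard LMSR cost function and its prices; the liquidity parameter b and the trade sizes are illustrative assumptions, not figures from this project.

```python
import math

def lmsr_cost(q, b):
    """Hanson's LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous prices; price of outcome i is the market's current probability estimate."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def trade_cost(q, delta, b):
    """Amount a trader pays to buy delta[i] shares of each outcome, given outstanding shares q."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# Two-outcome market: "Jones wins" vs. "Jones loses", with an assumed liquidity b = 100.
b = 100.0
q = [0.0, 0.0]                        # no shares outstanding; prices start at 0.5 / 0.5
print(lmsr_prices(q, b))              # [0.5, 0.5]
cost = trade_cost(q, [50.0, 0.0], b)  # buy 50 shares of "Jones wins"
q = [50.0, 0.0]
print(round(cost, 2), [round(p, 3) for p in lmsr_prices(q, b)])  # trade cost and updated prices
```

In this one-dimensional setting, a trader who buys shares of "Jones wins" pushes its price directly toward their believed probability; the multidimensional markets discussed above are exactly where that straight-line intuition can fail.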

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Type: Standard Grant (Standard)
Application #: 1255900
Program Officer: Balasubramanian Kalyanasundaram
Budget Start: 2012-09-01
Budget End: 2014-08-31
Fiscal Year: 2012
Total Cost: $152,000
Name: Georgia Tech Research Corporation
City: Atlanta
State: GA
Country: United States
Zip Code: 30332