Decision theory and game theory have proved to be remarkably useful as guides to decision making and strategic thinking. However, people often behave in predictable and systematic ways that are inconsistent with the utility-maximization models of decision theory. Moreover, players who follow the recommendations of game theory can sometimes do far worse than those who do not. These predictive failures of current theories stem from a number of sources, including:
1) current theories implicitly assume that agents are perfect reasoners who can compute the consequences of their actions;
2) current solution concepts assume that computation is free, and thus do not take the cost of computation into account (a toy illustration of this point appears after the list);
3) equilibrium concepts implicitly assume that agents know what all other agents are doing;
4) current approaches do not take language into account: they implicitly assume that all agents playing a game (or making a decision) describe the game (or decision problem) in the same way, and have the same theory of the world;
5) all agents are assumed to be rational, as judged by the modeler of the game/decision problem.
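To make point 2 concrete, the following is a minimal Python sketch (purely illustrative; the payoffs, costs, and strategies are hypothetical and not part of the proposal) of how, once computation is charged for, a cheap heuristic can yield higher net utility than exact maximization.

    import random

    random.seed(0)

    ACTIONS = list(range(10))
    # Hypothetical payoffs, revealed to the agent only by computing them.
    PAYOFF = {a: random.uniform(0.0, 1.0) for a in ACTIONS}

    EXACT_COST = 0.5       # assumed cost of computing the exact best action
    HEURISTIC_COST = 0.05  # assumed cost of a cheap rule of thumb

    def exact_strategy():
        # Evaluate every action; the payoff is maximal, but the search is costly.
        best = max(ACTIONS, key=PAYOFF.get)
        return PAYOFF[best] - EXACT_COST

    def heuristic_strategy():
        # Evaluate only a small sample of actions; cheap but approximate.
        sampled = random.sample(ACTIONS, 3)
        best = max(sampled, key=PAYOFF.get)
        return PAYOFF[best] - HEURISTIC_COST

    print("net utility, exact maximization:", round(exact_strategy(), 3))
    print("net utility, cheap heuristic   :", round(heuristic_strategy(), 3))

Depending on the relative costs, the seemingly "irrational" heuristic player can come out ahead; this is exactly the kind of behavior that a solution concept charging for computation should be able to capture.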
Most of these observations are not new (references to previous work are provided in later sections). The time is now ripe to put all these observations together and come up with more realistic foundations for game theory and decision theory. The goal of this project is to construct such a foundation, using a broad range of techniques from logic, cryptography, and robust distributed computing, and then to apply the ideas to the problem of robust mechanism design (i.e., rules for making choices).
The project explores new research directions in both computer science and game theory, and the resulting synthesis should enrich both fields. It should have a broader societal impact as well, including more robust design of mechanisms for auctions; better design of peer-to-peer networks; more secure wireless networks; better software agents that make decisions more comprehensible to users and that better take into account the actions of other users; and, more generally, a deeper understanding of the actions of interacting agents.