Utilitarian reasoning has been an area of emerging interest in multi-agent systems. The research supported by this award falls in this area, though it will also draw on more classical concepts of symbolic reasoning. This project will investigate how non-cooperative game-theoretic tools and methods can be usefully applied to model the interactions of artificial, computer-based agents. Since such agents must be programmed, the research must provide explicit models of how agents reason their way to a solution, a task that traditional game theory has not addressed. Game theory must therefore be supplemented with representations in which agents' actions, preferences, and rationality can be defined, as well as with mechanical procedures that allow agents to compute solutions to their decision problems. This project views agents as simple inferential machines that can derive a solution from a set of axioms and inference rules. Since agents may interact repeatedly, and thus have a chance to learn from their environments, they must also be equipped with belief-revision tools embedded in viable computational procedures. The results of this research will form the basis for studying how cooperation and coordination can be attained in multi-agent systems.
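
To make the notion of a mechanical solution procedure concrete, the following minimal sketch (in Python; the function names, the game representation, and the choice of procedure are illustrative assumptions, not specified by the award) shows an agent computing a solution to a two-player normal-form game by iterated elimination of strictly dominated strategies, one standard procedure of this kind:

    # Illustrative sketch: iterated elimination of strictly dominated
    # strategies as a mechanical solution procedure an agent could run.
    # All names and the payoff representation are assumptions.

    def dominated(payoff, own, other):
        """Return a strategy in `own` strictly dominated under `payoff`,
        or None. payoff(s, t) is this player's payoff when it plays s
        and the opponent plays t; `own`/`other` are surviving sets."""
        for s in own:
            for t in own:
                if t != s and all(payoff(t, j) > payoff(s, j) for j in other):
                    return s  # s is strictly dominated by t
        return None

    def iterated_elimination(pay1, pay2, rows, cols):
        """Alternately delete strictly dominated strategies for both
        players until no further deletion is possible."""
        rows, cols = set(rows), set(cols)
        while True:
            s = dominated(pay1, rows, cols)
            if s is not None:
                rows.discard(s)
                continue
            # Flip argument order so pay2 is viewed from the column player.
            s = dominated(lambda j, i: pay2(i, j), cols, rows)
            if s is not None:
                cols.discard(s)
                continue
            return rows, cols

    # Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
    P1 = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
    P2 = {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1}
    print(iterated_elimination(lambda i, j: P1[(i, j)],
                               lambda i, j: P2[(i, j)],
                               [0, 1], [0, 1]))   # ({1}, {1}): both defect

Procedures of this shape can be read as inference rules ("delete any strategy for which a strictly better reply exists against every surviving opponent strategy") applied repeatedly to a set of axioms describing the game, matching the view of agents as inferential machines.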
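Likewise, the following sketch (again with assumed names and an assumed opponent model) illustrates one simple belief-revision procedure for repeated interaction: the agent maintains a probability distribution over hypothesized opponent types and updates it by Bayes' rule after each observed action.

    # Illustrative sketch of belief revision over repeated interactions.
    # The opponent types and their behavioral models are assumptions.

    def revise(beliefs, likelihoods, observed):
        """Bayesian update of beliefs over opponent types.

        beliefs: {type: prior probability}
        likelihoods: {type: {action: P(action | type)}}
        observed: the action the opponent just played."""
        posterior = {t: p * likelihoods[t].get(observed, 0.0)
                     for t, p in beliefs.items()}
        total = sum(posterior.values())
        if total == 0.0:       # observation ruled out by every type:
            return beliefs     # keep the prior rather than divide by zero
        return {t: p / total for t, p in posterior.items()}

    # Two hypothesized opponent types: a cooperator and a defector.
    beliefs = {"cooperator": 0.5, "defector": 0.5}
    models = {"cooperator": {"C": 0.9, "D": 0.1},
              "defector":   {"C": 0.2, "D": 0.8}}
    for action in ["D", "D", "C"]:     # stream of observed opponent moves
        beliefs = revise(beliefs, models, action)
    print(beliefs)   # posterior shifts toward "defector"

A revision step of this kind is cheap enough to embed in an agent's decision loop, which is the sense in which belief-revision tools must be packaged as viable computational procedures.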