9514908 Easley

This project studies the following important question: What does an individual do when confronted with a new, potentially complex, stochastic decision problem? According to standard expected utility theory, before making any choices the individual determines all possible stochastic processes that could be driving the states of the world and assesses a prior over these processes. As appealing as this theory is in relatively simple settings in which no further learning need occur, it is difficult to accept in complex settings.

This project develops an alternative, evolutionary foundation for static expected utility maximization. Within this alternative framework individuals do not assess beliefs over all possible stochastic processes; they do not even attempt to determine the set of possible processes. Instead, they act and learn from their experience. Here an individual begins with a set of candidate decision rules, places initial weights on these rules, and selects an initial rule using these weights as probabilities. She then acts according to the recommendation made by the selected rule. After observing the realized state and the payoff achieved, she updates the weights on the rules using an evolutionary dynamic, selects the rule to use at the next date based on these updated weights, and so on.

Based on the work completed so far, it is known that under three natural axioms on the dynamic (symmetry, monotonicity, and independence) the individual eventually chooses a rule that is a subjective expected utility maximizer with correct expectations. One part of the project is to determine whether these axioms are minimal for this conclusion. The project also extends this process-oriented model of choice to richer settings and to an analysis of interaction among multiple evolutionary individuals.
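The act-observe-update loop described above can be sketched in a minimal simulation. The abstract does not fix a particular evolutionary dynamic (only the three axioms), so the multiplicative, replicator-style weight update below, the two candidate rules, and the matching payoff function are purely illustrative assumptions. In this toy example the state is an i.i.d. biased coin and each candidate rule commits to a single action; the weights come to concentrate on the rule with the higher expected payoff.

```python
import random

random.seed(0)

P_H = 0.7                        # true probability of state "H" (unknown to the agent)
RULES = ["always_H", "always_T"] # hypothetical candidate decision rules

def payoff(rule, state):
    """Payoff 1 if the rule's committed action matches the realized state, else 0."""
    return 1.0 if rule[-1] == state else 0.0

# Initial weights on the candidate rules.
weights = {r: 1.0 / len(RULES) for r in RULES}

for t in range(2000):
    # Select a rule with probability proportional to its current weight,
    # and act on its recommendation.
    rule = random.choices(RULES, weights=[weights[r] for r in RULES])[0]

    # Observe the realized state (and hence the payoff of the action taken).
    state = "H" if random.random() < P_H else "T"

    # Replicator-style update (an illustrative choice of dynamic): since the
    # state is observed, every rule's weight is scaled by 1 + its payoff
    # against that state, then the weights are renormalized.
    for r in RULES:
        weights[r] *= 1.0 + payoff(r, state)
    z = sum(weights.values())
    for r in RULES:
        weights[r] /= z

print(weights)
```

Under this dynamic the weight on "always_H" grows by a factor of 2 relative to "always_T" in each "H" period and shrinks by the same factor in each "T" period; with P_H = 0.7 the better rule comes to carry essentially all of the weight, mirroring the abstract's claim that the individual eventually settles on a single rule.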