This is the first year of funding of a 4-year continuing award. Preference models determine which one of several plans to prefer. It is important that planners use the same preference models as human decision makers: a planner that does not make the same decisions as its human users is of little use. The PI will investigate how to build planners that fit the preference models of human decision makers better than current planners do, by combining constructive methods from artificial intelligence with more descriptive methods from utility theory, in order to take advantage of the strengths of both decision-making disciplines and to extend the applicability of AI planners. The PI will study optimal versus good or near-optimal ("satisficing") planning with a variety of preference models. He will explore how to exploit the structure of complex sequential planning tasks to solve them efficiently for realistic preference models suggested by utility theory, with an emphasis on preference models in high-stakes decision situations. To this end, he will focus on representation changes that make use of existing planners from AI by transforming planning tasks with nonlinear utility functions into tasks that these planning methods can solve, and will study the errors that result for the original planning task when satisficing planning methods are used instead. The research will be performed in the context of managing environmental crisis situations, such as cleaning up marine oil spills.
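To make the role of nonlinear utility functions concrete, the following is a minimal illustrative sketch (not the PI's actual method; the plans, costs, and the exponential utility function are hypothetical). It shows how a risk-averse nonlinear preference model can reverse a planner's ranking of two plans relative to a risk-neutral expected-cost model, which is exactly the kind of discrepancy that matters in high-stakes decision situations:

```python
import math

# Hypothetical example: two plans, each a list of (probability, cost)
# outcomes. The numbers are made up for illustration only.
plan_a = [(1.0, 10.0)]                 # safe: always costs 10
plan_b = [(0.9, 5.0), (0.1, 50.0)]    # risky: usually cheap, rarely disastrous

def expected_cost(plan):
    """Risk-neutral preference model: rank plans by expected cost."""
    return sum(p * c for p, c in plan)

def expected_utility(plan, gamma=0.1):
    """Risk-averse nonlinear model: u(c) = -exp(gamma * c).

    The convex disutility of cost penalizes the rare 50-unit
    disaster far more than a linear model would.
    """
    return sum(p * -math.exp(gamma * c) for p, c in plan)

# The risk-neutral model prefers the risky plan (expected cost 9.5 < 10),
# but the risk-averse utility model prefers the safe plan.
assert expected_cost(plan_b) < expected_cost(plan_a)
assert expected_utility(plan_a) > expected_utility(plan_b)
```

A planner that optimizes only expected cost would thus disagree with a risk-averse human decision maker; representation changes that fold such a nonlinear utility into a transformed planning task are one way to let existing AI planners produce the human-preferred plan.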