The investigators work on three aspects of the foundations of Bayesian statistics and decision theory: (i) They develop measures of incoherence that quantify the extent to which non-Bayesian procedures violate the principles of subjective expected utility. (ii) They implement algorithms for assessing partially ordered preferences through their representation as sets of agreeing probability/utility pairs; such preferences are relevant for modeling the Pareto consensus of a group of coherent (Bayesian) expert decision makers. (iii) They explore connections between merely finitely additive probability and valid computations with "improper" priors, in the context of the so-called marginalization paradoxes.
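For concreteness, here is a minimal sketch of the idea behind item (ii), assuming a finite set of states and treating each expert as a probability/utility pair; the numerical data, the helper names (experts, expected_utility, pareto_prefers), and the use of Python with numpy are illustrative assumptions, not the investigators' algorithms. One act is ranked above another in the group's order only when every expert's expected utility agrees, so the resulting Pareto consensus is a partially ordered preference in which some acts are incomparable.

    import numpy as np

    # Toy data (assumed for illustration): two experts, each given as a
    # probability/utility pair over three states; acts are payoff vectors
    # indexed by state.
    experts = [
        (np.array([0.6, 0.3, 0.1]), lambda x: x),            # risk-neutral expert
        (np.array([0.2, 0.3, 0.5]), lambda x: np.sqrt(x)),   # risk-averse expert
    ]

    def expected_utility(prob, util, act):
        # Expected utility of an act: sum over states of prob * utility(payoff).
        return float(np.dot(prob, util(act)))

    def pareto_prefers(X, Y):
        """True iff every expert gives act X strictly higher expected utility than Y."""
        return all(expected_utility(p, u, X) > expected_utility(p, u, Y)
                   for p, u in experts)

    X = np.array([4.0, 1.0, 0.0])
    Y = np.array([0.0, 1.0, 4.0])
    print(pareto_prefers(X, Y), pareto_prefers(Y, X))   # False False

In this example the risk-neutral expert prefers X while the risk-averse expert prefers Y, so the two acts are incomparable in the consensus order.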
Bayesian statistics aims to put statistics on a sound theoretical footing by modeling a rational statistical decision maker as if he or she were a bookie, announcing probabilities that act as prices at which risky bets can be bought or sold. An important general result is that such a bookie either behaves as a Bayesian or else would accept a series of bets that makes the bookie a sure loser (a so-called Dutch book). The investigators explore extensions of this theory to account for non-Bayesian behavior, groups of bookies, and more general interpretations of probability.
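The sure-loss result can be made concrete for a finite state space with a small linear program: given the bookie's announced prices for a family of events, it computes the largest gain a gambler can guarantee using stakes bounded by one unit per event. That guaranteed gain is zero exactly when the prices agree with some probability distribution, and its size gives one rough way to quantify incoherence, in the spirit of item (i) above. This is a hedged sketch, not the investigators' formal measure; the helper name guaranteed_gain and the example prices are hypothetical, and numpy and scipy are assumed available.

    import numpy as np
    from scipy.optimize import linprog

    def guaranteed_gain(indicator, prices):
        """Largest gain a gambler can guarantee against the announced prices,
        using stakes bounded by one unit per event (hypothetical helper)."""
        n_states, n_events = indicator.shape
        payoff = indicator - prices              # gambler's per-unit payoff in each state
        # Decision variables: stakes alpha_1..alpha_n, then the guaranteed gain eps.
        c = np.zeros(n_events + 1)
        c[-1] = -1.0                             # linprog minimizes, so maximize eps
        # Require payoff @ alpha >= eps in every state.
        A_ub = np.hstack([-payoff, np.ones((n_states, 1))])
        b_ub = np.zeros(n_states)
        bounds = [(-1.0, 1.0)] * n_events + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[:-1], -res.fun              # optimal stakes, guaranteed gain

    # Three mutually exclusive, exhaustive events priced at 0.5, 0.4, 0.3; the
    # prices sum to 1.2, so no probability distribution agrees with them.
    stakes, gain = guaranteed_gain(np.eye(3), np.array([0.5, 0.4, 0.3]))
    print(stakes, gain)   # stakes near [-1, -1, -1]: sell each bet; sure gain of 0.2

In the example, selling one unit of each bet collects 1.2 from the bookie and pays out 1 no matter which event occurs, so the gambler gains 0.2 in every state; this is exactly a Dutch book against the bookie.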