All statistical procedures result in an inference: a statement based on both the observed data and the assumed model. Since model assumptions are often difficult (or impossible) to check, it is desirable for inferences to be robust to departures from those assumptions. The inferences we are most concerned with are post-data accuracy measures for confidence sets and hypothesis tests. Such measures are often derived through decision-theoretic or Bayesian arguments, and hence rest on assumptions about sampling distributions, loss functions, and prior distributions. Here we investigate the performance of accuracy estimators, using both frequentist and Bayesian criteria, as the underlying assumptions are relaxed. We are most interested in procedures constructed from default priors, as these tend to perform well under both frequentist and Bayesian scrutiny. One way we will address sampling robustness is to combine default priors with an empirical likelihood to obtain a (somewhat) automatic robust inference procedure. Robustness of inference will be judged using a variety of criteria, such as ranges of posterior probabilities, multiple and distance-penalizing loss functions, and a new (and promising) theory of dilation of probabilities.
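To illustrate the "ranges of posterior probabilities" criterion, the following is a minimal sketch under purely hypothetical assumptions (none taken from the text above): a normal mean with known variance, a class of conjugate normal priors indexed by their prior mean, and the posterior probability of the one-sided hypothesis theta <= 0 computed over that class. A wide range of posterior probabilities across the class signals that the inference is sensitive to the prior assumption.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def posterior_prob_nonpositive(xbar, n, sigma2, m, tau2):
    # Conjugate normal-normal update: prior N(m, tau2), sample mean xbar
    # with sampling variance sigma2 / n; returns P(theta <= 0 | data).
    s2 = sigma2 / n
    post_var = (tau2 * s2) / (tau2 + s2)
    post_mean = (tau2 * xbar + s2 * m) / (tau2 + s2)
    return norm_cdf((0.0 - post_mean) / math.sqrt(post_var))

def posterior_range(xbar, n, sigma2, prior_means, tau2):
    # Range of the posterior probability as the prior mean varies over
    # the (hypothetical) class; a wide range flags non-robust inference.
    probs = [posterior_prob_nonpositive(xbar, n, sigma2, m, tau2)
             for m in prior_means]
    return min(probs), max(probs)

if __name__ == "__main__":
    # Hypothetical class of priors: N(m, 1) with m on a grid in [-1, 1].
    grid = [i / 10.0 for i in range(-10, 11)]
    lo, hi = posterior_range(xbar=0.5, n=10, sigma2=1.0,
                             prior_means=grid, tau2=1.0)
    print(f"P(theta <= 0 | data) ranges over [{lo:.3f}, {hi:.3f}]")
```

If the interval [lo, hi] is narrow, the posterior statement is stable across the prior class; if it is wide, the reported accuracy measure depends heavily on an assumption the data cannot check.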