Vis-à-vis the comments on sigfpe’s last post, Krzysztof Burdzy has a booklet online, Probability is Symmetry: On Foundations of the Science of Probability, which introduces, discusses, and critiques the Frequentist and Subjectivist foundational positions of Probability Theory. For any of our readers unfamiliar with either the terms or the arguments behind these philosophies of Probability, this book forms an excellent primer on the subject, in addition to arguing for Burdzy’s own interpretation.

Another useful place to look is this article at the Stanford Encyclopedia of Philosophy. Since I’ve come to probability from the philosophical side, I’ve been surprised that frequentism is a position taken so seriously elsewhere. In philosophy the debate is primarily between Bayesians and “propensity” theorists (and, to betray which side I prefer, I’ll admit that I can’t understand what a propensity is supposed to be), though there are a few other positions as well, all detailed in that article. More philosophers also seem willing to admit that both subjective and objective interpretations of probability can be correct, and that they are just two applications of the same formal system (just as distances and probabilities are two applications of the system of real numbers).

One reason that many of us who are not philosophers reject Bayesianism is that it is a theory defined in terms of subjective assignment of probabilities. How can one construct an objective science on the basis of subjective probabilities? The Bayesian answer is that, in theory, Bayesian updating from conflicting subjective prior probabilities should converge to common posterior probabilities. But exactly how does this happen in practice when there are thousands (perhaps scores of thousands) of scientists working in a particular domain, each with access to different information and each valuing the information they receive differently?
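To make the convergence claim concrete, here is a minimal sketch (the scenario, names, and numbers are my own invention, not from the post): two Bayesians start with sharply conflicting Beta priors over the bias of the same coin, then update on a shared stream of flips via the conjugate Beta-Binomial rule. Note that this idealizes away exactly the complication raised above, since both observers see identical data.

```python
import random

random.seed(0)

# Hypothetical setup: true coin bias, unknown to both observers.
true_p = 0.7

# Conflicting Beta priors, expressed as (alpha, beta) pseudo-counts:
# the "optimist" expects heads, the "skeptic" expects tails.
priors = {"optimist": (8.0, 1.0), "skeptic": (1.0, 8.0)}

# A shared stream of 5000 coin flips (1 = heads).
flips = [1 if random.random() < true_p else 0 for _ in range(5000)]
heads = sum(flips)
tails = len(flips) - heads

# Conjugate Beta-Binomial update: just add the observed counts
# to the prior pseudo-counts.
posteriors = {name: (a + heads, b + tails) for name, (a, b) in priors.items()}

# Posterior mean of a Beta(a, b) is a / (a + b).
means = {name: a / (a + b) for name, (a, b) in posteriors.items()}
print(means)
```

After 5000 shared observations the two posterior means differ by only the 14 prior pseudo-counts spread over ~5000 data points, so they nearly coincide near the sample frequency, despite the wildly conflicting starting points. The catch the comment raises survives the sketch: real scientists do not share one data stream or one likelihood.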

In truth, I think that most practicing statisticians are pragmatists, using Bayesian methods where they seem suitable and reverting to frequentist approaches where not (as when testing model fit, an inherent limitation of Bayesian methods).
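As an illustration of the kind of frequentist model-fit check the comment has in mind (my own toy example, not the commenter’s): a chi-square goodness-of-fit test asking whether observed die rolls are consistent with a fair-die model, computed by hand from the standard statistic.

```python
# Hypothetical data: face counts from 120 rolls of a die.
observed = [18, 22, 16, 25, 20, 19]
n = sum(observed)                # 120 rolls
expected = n / 6                 # 20 per face under the fair-die model

# Pearson chi-square statistic: sum of (O - E)^2 / E over the faces.
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Critical value of the chi-square distribution with 5 degrees of
# freedom at alpha = 0.05 (standard table value).
critical = 11.07
fits = chi2 < critical           # fail to reject the fair-die model
print(chi2, fits)
```

Here the statistic is 2.5, well below the critical value, so the fair-die model is not rejected. The point is that the test evaluates the model itself against the data, a question the Bayesian machinery answers only relative to some alternative model.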

This pragmatism is not true of many Bayesians, who seem to have an ideological fervour rarely seen elsewhere in the academy. This fervour, IMO, is a key reason why Bayesian methods have risen to prominence (that, and military funding, but that’s a topic for another post).

I suppose one reason people reject Bayesianism is that they have different goals for what they want to do with probabilities. In philosophy, an important use of probabilities is to model someone’s subjective degrees of belief, and there are arguments that such degrees of belief should form a probability function, but there’s no reason to suppose that they have anything to do with the facts in the world. David Lewis proposed what he called the “principal principle” to connect Bayesian and objective probabilities: if we know the objective probability of some event, then we should set our subjective probability equal to it. And this sort of reasoning is what justifies Bayesians in using objective, “scientific” methods to set their subjective, “non-scientific” function. However, in many cases, even if we believe in objective probabilities of some sort or another, we have no evidence about what those objective probabilities are. The Bayesian is then free to use some subjective means to set her priors, and these cases are exactly the ones that are most controversial among Bayesians, from what I understand.