# Use Bayesian inference

“When the facts change, I change my opinion. What do you do, sir?” — John Maynard Keynes

In many situations, we don’t follow Keynes’ approach. In fact, in light of new evidence, we usually don’t update our initial beliefs as much as we should. Using Bayesian inference can help us become better at updating our beliefs.

The central motivation is simple. The work of University of Pennsylvania psychologist Philip Tetlock on forecasting has shown that people who think in terms of probabilities rather than certainties tend to do much better at forecasting and anticipating events.

Bayesian inference allows you to think in terms of probabilities by helping you revise the likelihood of a hypothesis h (the prior) in light of a new item of evidence (or datum, d) to arrive at a posterior. The posterior P(h|d) equals the prior P(h) multiplied by the conditional probability of the evidence given the hypothesis, P(d|h), divided by the probability of the evidence, P(d). That is,

P(h|d) = P(d|h) x P(h) / P(d) (Bayes’ rule)

It may look and sound scary but it really isn’t that bad. Let’s take a look.
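As a minimal sketch, the rule translates directly into a one-line Python function (the function name and the example numbers below are my own illustration, not from the text):

```python
def posterior(prior, likelihood, evidence):
    """Bayes' rule: P(h|d) = P(d|h) * P(h) / P(d)."""
    return likelihood * prior / evidence

# Illustrative values only: prior P(h) = 0.3, P(d|h) = 0.9, P(d) = 0.45.
# The evidence is twice as likely under h as overall, so the belief doubles.
print(posterior(0.3, 0.9, 0.45))  # posterior ≈ 0.6
```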

## Bayesian inference, a very short introduction

Facing a complex situation, it is easy to form an early opinion and to fail to update it as much as new evidence warrants.

Consider Tversky and Kahneman’s famous example:

“A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data:

a. 85% of the cabs in the city are Green and 15% are Blue.

b. A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?”

(Tversky and Kahneman, 1982 [pp. 156–157])

Here, we are interested in the probability of the cab being Blue (this is our hypothesis, h: Cab is Blue) given that it was identified as blue (this is our evidence, d: Seen as Blue): P(Cab is Blue|Seen as Blue). To use the equation above, we need three pieces of data:

1. The prior probability of the cab being Blue. That is given by the base rate: since only 15% of the cabs in the city are Blue, P(Cab is Blue) = 0.15.
2. The probability that the cab was seen as Blue when it was indeed Blue. That is given by the court’s reliability test: the witness identifies the color of the cab correctly 80% of the time, so P(Seen as Blue|Cab is Blue) = 0.80.
3. The probability that the cab was seen as Blue. That can happen in two ways: either the witness correctly identified a Blue cab or incorrectly identified a Green cab as Blue. By the law of total probability, P(Seen as Blue) = (0.80) x (0.15) + (0.20) x (0.85) = 0.29.

So we can now calculate the posterior: P(Cab is Blue|Seen as Blue) = (0.15) x (0.80) / (0.29) ≈ 0.41, or 41%.

That is, even though the witness identified the cab as blue, it is more likely that it was green.
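The three steps above can be checked in a few lines of Python (a sketch of the same arithmetic; the variable names are my own):

```python
p_blue = 0.15           # prior: base rate of Blue cabs
p_green = 1 - p_blue    # 0.85
p_correct = 0.80        # witness reliability
p_wrong = 1 - p_correct

# P(Seen as Blue) by the law of total probability:
# a Blue cab seen correctly, or a Green cab seen incorrectly.
p_seen_blue = p_correct * p_blue + p_wrong * p_green  # 0.29

# Posterior P(Cab is Blue | Seen as Blue) by Bayes' rule.
posterior_blue = p_correct * p_blue / p_seen_blue
print(round(posterior_blue, 2))  # 0.41
```

Note how the low base rate (0.15) drags the posterior below 0.5 despite the witness being 80% reliable.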

The takeaway is that, when testing hypotheses, chances are that after you form an opinion you do not change it as much as new evidence warrants (this is a form of confirmation bias; see, for instance, Nickerson, 1998). Using Bayesian inference may help you update your thinking in a more rational way.

### References

Chevallier, A. (2016). Strategic Thinking in Complex Problem Solving. Oxford, UK, Oxford University Press, pp. 104–109.

Nickerson, R. S. (1998). “Confirmation bias: a ubiquitous phenomenon in many guises.” Review of General Psychology 2(2): 175.

Tversky, A. and D. Kahneman (1982). “Evidential impact of base rates.” In D. Kahneman, P. Slovic and A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases. New York, Cambridge University Press.

Also, for an introduction to Bayesian inference, see: McGrayne, S. B. (2011). The theory that would not die: how Bayes’ rule cracked the enigma code, hunted down Russian submarines, & emerged triumphant from two centuries of controversy, Yale University Press.


### Comments

• Hi Arnaud, I find switching the sides of the equation, and using more natural language, makes Bayes Theorem more understandable and memorable:

Pr(h before new evidence) x plausibility ratio = Pr(h after new evidence)

“plausibility ratio” is of course just my name for Pr(d|h)/Pr(d). Informally it is: how likely is it that you would see the evidence if h is true, as opposed to how likely it is that you would see d at all (i.e. if h or any other hypothesis is true).

The re-ordering lets the equation match the temporal or “narrative” structure of updating – old situation, change given new evidence, new situation. The classic format is very counter-intuitive, starting with a conditional probability.
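Applied to the cab numbers, the commenter’s rearrangement reads as prior x plausibility ratio = posterior. A quick sketch (variable names are my own; the numbers come from the example above):

```python
prior = 0.15                              # Pr(h before new evidence)
likelihood = 0.80                         # Pr(d|h)
p_evidence = 0.80 * 0.15 + 0.20 * 0.85    # Pr(d) = 0.29

# Evidence is ~2.76x more likely under h than overall, so belief scales up.
plausibility_ratio = likelihood / p_evidence
posterior = prior * plausibility_ratio    # Pr(h after new evidence)
print(round(posterior, 2))  # 0.41 -- same answer as the classic format
```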

I created a simple calculator to do Bayesian updating in this format here: http://www.vangeldermonk.com/bayescalculator.html

• Oh, nice one, Tim! I’ve never seen the equation flipped, but it makes sense. I also like your calculator. Very handy.

Arnaud
