As a statistician I'd say that winning or losing a particular bet does not in itself indicate whether or not the bet was good value. A bet is good value if you expect (on average) to make money from it. For example, suppose someone offers to let you bet that England will win their next match at odds implying a 50% chance (2 in decimal odds, evens in traditional parlance) – the bet is good value if you think the chance of that happening is greater than 50%. If the true probability is 75%, then for every £1 staked you expect to make:

0.75 × £1 − 0.25 × £1 = £0.50.
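In code, the same calculation is a single line of arithmetic. Here's a minimal sketch in Python, assuming a standard back bet where you win (odds − 1) × stake if the bet comes off and lose the stake otherwise; the function name is mine, purely for illustration:

```python
def expected_profit(decimal_odds, true_prob, stake=1.0):
    """Expected profit of a back bet: win (odds - 1) * stake with
    probability true_prob, lose the stake otherwise."""
    return true_prob * stake * (decimal_odds - 1) - (1 - true_prob) * stake

print(expected_profit(2.0, 0.75))   # 0.50 -> 50p per £1 staked, as above
print(expected_profit(5.5, 0.25))   # a long shot can still be good value
```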
I got odds of about 5.5 (18% or 9 to 2) on UKIP winning in Heywood. This seemed like a bit of a long shot, so when I saw how close the election had in fact been, it seemed like evidence that the true chance of them winning had been higher than 18%. This is obviously slightly illogical, since by the time I had that information the chance of them winning had dropped to zero.
So is there an objective way to say whether a particular bet was good value or not? For a one-off event like the Scottish referendum, or even the by-election, the answer is pretty clearly no. There are many expert opinions available about which outcomes were plausible (from both before and after the event) but these are ultimately subjective.
Purely subjective probabilities are of limited practical use in this context: if I 'expect' to make money, but I'm also an idiot, then this doesn't help me much. Such probabilities are most useful when they have some sensible long-run interpretation: e.g. if I roll a die 6,000 times, it will show a six about 1,000 times, so we'd say the probability is 1,000/6,000 = 1/6. Referenda have no such interpretation without resorting to rather contorted abstract ideas, because there aren't any other events which are the same as (or, usually, even vaguely similar to) a particular plebiscite. You can try to compare it to other elections, but you'll never know whether this one was somehow 'different'. But surely probabilities for these events aren't totally meaningless, are they?
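That long-run reading is easy to see by simulation – a toy sketch, assuming a fair die:

```python
import random

random.seed(1)                      # fixed seed so the run is reproducible
rolls = 6000
sixes = sum(random.randint(1, 6) == 6 for _ in range(rolls))
print(sixes, sixes / rolls)         # roughly 1,000 sixes, a frequency near 1/6
```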
For a probability forecaster like a weather service or a bookie, you can check whether they are 'calibrated'. Take a large number of predictions of (say) 60% rain, and check whether approximately 60% of them were followed by rain. The events don't even have to be related – you can throw in any events given a 60% probability and see if they're jointly calibrated.
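To make that concrete, here's a sketch of such a calibration check in Python. It assumes you've collected (forecast probability, outcome) pairs; the data and function name are invented for illustration:

```python
def observed_frequency(predictions, p=0.60, tol=0.01):
    """Among events forecast at probability ~p, the fraction that happened.
    `predictions` is a list of (forecast_probability, happened) pairs."""
    relevant = [happened for prob, happened in predictions if abs(prob - p) <= tol]
    return sum(relevant) / len(relevant) if relevant else float("nan")

# Unrelated events are fine, as long as each was given a ~60% forecast:
predictions = [(0.60, True), (0.61, False), (0.59, True), (0.60, True), (0.60, False)]
print(observed_frequency(predictions))  # near 0.60 for a well-calibrated forecaster
```

With only a handful of forecasts the check is noisy, of course; it only becomes meaningful over a large number of predictions.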
But for any individual prediction, there isn't much you can say about its quality from purely statistical evidence. What about a pundit? Well, I'd say they only add value if you'll make money by betting on their predictions, i.e. they're telling you something that the bookie's odds don't. If they just tell you the most likely outcome, this isn't very useful – in a sporting fixture it's usually clear who the favourite is.
For a gambler of course, the proof is in the winning. A good gambler will make money eventually, and a bad one will lose it – so far I'm up. My next wager is that the Conservatives will win the most seats at the next UK general election… I'd be quite happy to lose that bet.
(If you're interested in my reasoning for the bets above: I bet on the Scottish referendum just after a YouGov opinion poll showed 'Yes' ahead, guessing that people would over-interpret this single poll. I got odds of 1.4 (71% or 2 to 5). I bet on the Heywood and Middleton by-election after reading that Labour were nervous about the result. The odds would have been much higher if I'd waited until nearer the day.)
The article first appeared on Robin's blog, It's A Stat Life.