Imagine you are somewhere on a road that you have never been on before. Picture it. It’s peaceful and calm. A car comes down the road. As it gets to a corner, the driver appears to lose control, and the car crashes into a wall. Fortunately the driver is OK, but they can’t recall what happened.
Let’s think about what you experienced. The car crash might involve a number of variables an investigator would be interested in.
- How fast was the car going? Where were the brakes applied?
- Look on the road. Get out a tape measure. How long was the skid before the car finally stopped?
- How big and heavy was the car? How loud was the bang when the car crashed?
These are all physical variables. We are used to thinking about the world in terms of these kinds of variables: velocity, position, length, volume and mass. They are tangible: we can see and touch them, and we have physical equipment that helps us measure them.
To this list we might add variables we can’t see, such as how loud the bang was. We might not be able to see it, but we can appreciate that loudness is a variable that ranges from very quiet to extremely loud indeed! With a decibel meter we might get an accurate reading, but you were not expecting a crash, and if you are trying to explain to the police from memory how loud something was, the best you might manage is a rough-and-ready assessment.
We are also used to thinking about some other variables that might be relevant to our car crash investigation. If we were investigating on behalf of the insurance company, we might want to know the answers to some slightly less tangible variables. What was the value of the car before the accident? How wealthy is the driver? How dangerous is that stretch of road?
We are used to thinking about the world in terms of physical variables, but we are also brought up in a social world of economic value: the value of the car, the wealth of the driver. These social variables are a bit more ‘slippery’ than the physical variables. ‘Value’ can be highly subjective: the car might have been vintage, and different buyers might place a different value on it. The buyer, being canny, might resell it for a higher value. Nonetheless, everyone brought up in a world of trade and capital understands the idea that a car can be sold and, in the process, a price attached to it. Likewise, ‘wealth’ might be measured in different ways, or in different currencies. So although monetary attributes are not physical variables, we are comfortable treating them as if they were tangible.
But what about that last variable? I asked, how dangerous is that stretch of road?
This variable is a risk value. It is a probability. We can rephrase my question as “what is the probability that a car coming down this road crashes?” If we can measure this in some way, and make repeat measurements elsewhere, we could make comparisons. Perhaps we have discovered an accident ‘black spot’: somewhere where there is a greater chance of a road accident than at other locations.
But a probability cannot be calculated on the strength of a single accident. It can only be measured by a different, more patient, process of observation. We have to observe many cars driving down the road, count the ones that crash, and build up a set of observations. Probability is not a tangible variable, and it takes an effort of imagination to think about.
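This patient counting process can be sketched as a small simulation. Everything here is invented for illustration: we pretend we know the road’s underlying crash rate, generate many passing cars, and count the crashes, exactly as the observer by the roadside would:

```python
import random

random.seed(42)    # fixed seed so the illustration is reproducible

TRUE_P = 0.02      # hypothetical underlying crash rate for this corner
N_CARS = 10_000    # cars observed over a prolonged period

# Each passing car crashes with probability TRUE_P; we count the crashes.
crashes = sum(random.random() < TRUE_P for _ in range(N_CARS))

# The observed probability is simply crashes divided by cars observed.
observed_p = crashes / N_CARS
print(f"observed p = {observed_p:.4f} from {N_CARS} cars")
```

With ten thousand observations the count settles close to the underlying rate; with a handful, it would not.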
I argue that the first thing that makes the subject of statistics difficult, compared to, say, engineering, is that even the most elementary variable we use, observed probability, is not physically tangible.
Let us think about our car crash for a minute. I said that you have never been on this road before. You have no data on the probability of a crash on that road. But it would be very easy to assume, from the simple fact that you saw a crash, that conditions such as a poor road surface or rain contributed to the accident and made it more likely. You have only one data point to draw on. This kind of inference is not valid. It is an over-extrapolation. It is little more than a guess.
Our natural instinct is to form explanations in our mind, hypotheses, and to look for patterns and causes in the world. (Part of our training as scientists is to be suspicious of that inclination. Of course we might be right, but we have to be relentlessly careful and self-critical before we can conclude that we are right.)
If we wanted to make a case that this location is an accident black spot, we would need to set up equipment and monitor the road for accidents. We would need to continue to observe the road over a prolonged period of time to get the data we needed. This is called a natural experiment, where we don’t attempt to interfere with the conditions of the road but simply observe driver behaviour and car crashes.
Alternatively, we might conduct an actual experiment and drive various cars down the road to see how they handled. Either way, we would need to observe many cars going past before we could make a realistic estimate of the chance of a crash.
If probability is difficult to observe directly, this has an effect on our ability to think about it. Probability is more difficult to conceive of in the way we conceive of length, say. We all vary in our spatial reasoning abilities, but our sense of length is reinforced daily by observation, tape measures and practice. As we have seen, probability is much more elusive because it only emerges from many observations. This makes it difficult to reliably estimate probability in advance, or to reason with probabilities.
Even experienced researchers make mistakes. The psychologists Tversky and Kahneman (1971) reported the findings from a questionnaire they gave to professional psychologists. The questions concerned the decisions they would make in research based on statements about probability. They showed that not only were their expert subjects unreliable, their answers provided evidence of persistent biases in human cognition, including the one we encountered earlier – a belief in the reliability of one’s own observations, even when there are few observations on which to base a conclusion.
So, if you are struggling with statistical concepts, don’t worry. You are not alone. Indeed, I have come to the conclusion that it is necessary to struggle with probability. We have all been there, and one of my main criticisms of traditional statistics teaching is that most treatments skate over the core concepts and go straight to statistical testing methods that the experimenter, with no conceptual grounding (never mind mathematical underpinnings), simply takes on faith.
Probability is difficult to observe. It is an abstract mathematical concept that can only be measured indirectly, from many observations. And simple observed probability is just the beginning. In discussing inferential statistics I try to keep to three notions of probability and a simple labelling system: observed probability, for which I will use lower-case p; the ‘true’ population probability, capital P; and a third type, the probability that our observed probability is reliable, which we denote with α. Many people make mistakes reasoning about that last little variable. But we are getting ahead of ourselves.
The best way to get to grips with probability is to replace my thought experiment with a physical one.
But: safety first! Please don’t crash an actual car — use a Scalextric instead!
Tversky, A., and Kahneman, D. 1971. Belief in the law of small numbers. Psychological Bulletin 76(2), 105–110.