Expert Evidence part 6
It was Mark Twain who famously said that there are three kinds of lies: lies, damned lies, and statistics. Statistics are an area that has proved extremely problematic in expert evidence, particularly in criminal cases.
This has long been perceived as a problem. For example, in a case reported in 1997 the present Lord Chief Justice referred to one illustration of false reasoning: only one person in a million has a DNA profile matching that found at the scene of the crime; the defendant has a DNA profile matching the sample; therefore there is a million-to-one probability that the defendant is guilty of the crime. That does not follow at all. There might be more than 20 people in the country with such a DNA profile, and perhaps more who are temporarily in the country. The DNA profile is a useful piece of evidence, but the statistic does not of itself demonstrate anything with certainty. (Depending upon the DNA evidence there might be an absolute match which excludes other people, but the statistical point is relevant wherever the match can only be expressed in terms of statistical probability.)
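The arithmetic behind this fallacy can be made concrete. The following sketch uses purely illustrative assumptions (a one-in-a-million match rate and a pool of 60 million people who could in principle have left the sample; neither figure comes from the judgment):

```python
# Sketch of the "prosecutor's fallacy" in the DNA illustration above.
# Assumed figures for illustration only.
population = 60_000_000     # assumed pool of possible sources
match_rate = 1 / 1_000_000  # assumed profile match probability

# Expected number of people in the pool whose profile matches
expected_matches = population / 1_000_000  # 60.0

# If the defendant is simply one of those matching people and no other
# evidence distinguishes them, the probability of guilt given the match
# is roughly 1 in 60 - nothing like "a million to one on".
prob_guilt_given_match = 1 / expected_matches

print(expected_matches)  # 60.0
```

The point is that a rare profile still matches many people in a large population, so the match probability alone says little about guilt.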
A key illustration of how statistics can go badly wrong occurred in the Sally Clark murder trial. The Crown expert was Professor Meadow, who relied upon statistics in a report compiled by another professor. That report had calculated that the risk of cot death (technically, sudden infant death syndrome) in a standard non-smoking household (subject to various other relevant conditions) was one in 8,543 (a figure based on mathematical modelling rather than observation, though that is a separate issue). Professor Meadow stated that if the chance of one cot death was one in 8,543, then the chance of two was 8,543 times 8,543, roughly one in 73 million. He added that this was the equivalent of backing different horses to win the Grand National at 80 to 1 for four consecutive years and winning every time. This, however, was mathematically incorrect. You may only multiply one figure by the other where the events are genuinely independent and not causally related to one another. Two deaths in one family are unlikely to be independent, because there may be a common cause. The statistic was therefore completely wrong.
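The effect of the independence assumption can be shown numerically. In the sketch below, the only figure taken from the text is the one-in-8,543 risk; the conditional risk of a second death is a purely hypothetical number chosen to illustrate how dependence changes the result:

```python
# Why squaring the single-death figure was wrong.
p_first = 1 / 8543  # quoted risk of one cot death in such a household

# Independence assumption, as used at trial: square the odds.
odds_independent = 8543 * 8543  # 72,982,849 - the "73 million" figure

# Dependent model: a shared genetic or environmental cause may make a
# second death far more likely once one has occurred. Suppose, purely
# for illustration, the conditional risk rises to 1 in 100.
p_second_given_first = 1 / 100  # assumed, not an empirical figure

odds_dependent = 1 / (p_first * p_second_given_first)

print(f"{odds_independent:,}")       # 72,982,849
print(f"{round(odds_dependent):,}")  # 854,300
```

Under even modest dependence between the two deaths, the true odds are orders of magnitude shorter than 73 million to one, which is why the squared figure was so misleading.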
As can be imagined, such a statistic would have had a huge impact upon the jury. The figure was unimaginably large, and the comparison with successive Grand National winners made the prospect of this occurring naturally seem so remote as to be fanciful. The jury were therefore overwhelmingly likely to regard the cause of death as murder. Hence the conviction.
This illustrates a number of important points about giving expert evidence. The first is that the expert must know the limits of his or her expertise. You might be the world's greatest expert on DNA, but no expert at all on statistics or mathematical modelling. That does not mean that you cannot rely upon statistics - the courts have ruled that experts may rely upon statistical tables and the like - but the limits of any attempt to draw conclusions from those tables should be made clear. Likewise, it is important for experts, and for those making submissions on expert evidence, not to be drawn into fallacious suggestions such as the DNA illustration above. It also shows the importance of the expert giving expert opinion and not seeking to draw the ultimate conclusion. In the example above, getting the 73 million statistic wrong was one thing, but descending to the Grand National comparison should be regarded as a step too far.