
To err is human…

A smaller V60. For one cup you would use less coffee, but the errors on the measurement will always be there.

Preparing a good V60 requires 30g of coffee (for 500 ml of water)*. This can be measured using a set of kitchen scales, but if you are using whole coffee beans, a first estimate can also be obtained by timing the passage of the grind through the grinder. Using an Ascaso burr grinder, my coffee used to come through at an approximate rate of 1g/s, so that after 30 seconds I’d have the perfect amount of coffee. Recently, however, this has changed: depending on the bean, 30g sometimes takes 40 seconds, sometimes just under 30.

Clearly there is an error on my estimate of the rate at which the coffee grinds pass through the grinder. This may be influenced by factors such as the hardness of the bean (itself influenced by the degree of roast), the temperature of the kitchen, the cleanliness of the grinder and the small detail that the ‘seconds’ measured here refer to my counting to 30 in my head. Nonetheless, the error is significant enough that I need to confirm the measurement with the kitchen scales. But are the scales free of error?
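The timings above already give a feel for the size of this error. As a minimal sketch, taking the rate to be simply mass divided by time and using the figures from this post (30g in anywhere from roughly 28 to 40 seconds):

```python
def grind_rate(mass_g, time_s):
    """Grinder throughput in grams per second."""
    return mass_g / time_s

fast = grind_rate(30, 28)  # a quick day: ~1.07 g/s
slow = grind_rate(30, 40)  # a harder bean: 0.75 g/s

# If I still count to 30 assuming 1 g/s, the actual dose could land
# anywhere in this range:
low_dose = slow * 30    # 22.5 g
high_dose = fast * 30   # ~32.1 g
print(f"rate: {slow:.2f}-{fast:.2f} g/s, "
      f"dose after 30 s: {low_dose:.1f}-{high_dose:.1f} g")
```

A spread of nearly 10g on a 30g dose is more than enough to justify reaching for the scales.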

Clearly, in asking the question, we know the answer will be ‘no’. Errors could be introduced by improper zeroing of the scales (which is correctable), or by day-to-day differences in the temperature of the kitchen (not so correctable). The scales will also have a tolerance on them, meaning that the measured mass is, for example, only correct to +/- 5%. Depending on your scales, they may also only display the mass to the nearest gramme. This means that 29.6g of coffee would read the same, according to the scales, as 30.4g of coffee, which in turn means that we should be using 493 – 507 ml of water rather than our expected 500 ml (the measurement of which also contains an intrinsic error of course).
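To see where the 493 – 507 ml range comes from, scale the water in proportion to the (true) mass of coffee at the 30g-per-500-ml ratio. A quick sketch using the two masses from the paragraph above:

```python
# Water needed per gram of coffee, at the 30 g : 500 ml ratio.
ratio_ml_per_g = 500 / 30

# Three masses the scales would all display as "30 g":
for mass_g in (29.6, 30.0, 30.4):
    water_ml = mass_g * ratio_ml_per_g
    print(f"{mass_g} g of coffee -> {water_ml:.0f} ml of water")
```

The extremes give roughly 493 ml and 507 ml, matching the range in the text.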

Turkish coffee
A Turkish coffee provides a brilliant illustration of the type of particle distribution with depth that Jean Perrin used to measure Avogadro’s constant. For more information see here.

The point of all of this is that errors are an inescapable aspect of experimental science. They can also be an incredibly helpful one. Back in 1910, Jean Perrin used a phenomenon that you can see in your coffee cup in order to measure Avogadro’s constant (the number of molecules in a mole of material). Although he used varnish suspended in water rather than coffee, he was able to experimentally verify the theory that liquids are made up of molecules: his value for Avogadro’s constant was, within error, the same as that found by other, independent techniques. Errors also give us an indication of how confident we can be in our determination of a value. For example, if the mass of my coffee is 30 +/- 0.4 g, I am more confident that the value is approximately 30 g than if the error were +/- 10 g. In the latter case, I would get new scales.

But errors can also help us in more subtle ways. Experimental results can be fairly easily faked, but it turns out that the random error on the data is far harder to invent. A simple example of this was seen in the case of Jan Hendrik Schön and the scientific fraud that was uncovered in 2002. Schön had reported fantastic experimental results in the field of organic electronics (electronic devices made of carbon-based materials). The problem came when it was shown that some of these results, despite being on different materials, were the same right down to the “random” noise on the data. Two data sets were identical even to the point of the errors on them, despite being measurements of two different things.

A more recent case is a little more subtle but crucial for our understanding of how to treat Covid-19. A large study of Covid-19 patients apparently showed that the drug ivermectin reduced mortality rates enormously and improved patient outcomes. It has since been shown that there are serious problems with some of the data in the paper, including the fact that some of the patient records were duplicated, and the paper has now been withdrawn due to “ethical considerations”. A good summary of the problems can be found in this Guardian article. However, some of the more worrying problems were a little deeper in the maths behind the data. There were sets of data where supposedly random variables were identical across several patients, which suggested “that ranges of cells or even entire rows of data have been copied and pasted“. There were also cases where 82% of a supposedly random variable ended in the digits 2-5. The likelihood of this occurring for genuinely random variables can be calculated (it is not very high). Indeed, analysis of the paper suggested that these values too were either copied and pasted or “invented”, because humans are not terribly good at generating properly random numbers.
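Just how unlikely is it for 82% of random digits to end in 2-5? If the last digits were uniformly distributed, each would have a 4-in-10 chance of falling in {2, 3, 4, 5}, and the tail of a binomial distribution gives the probability of seeing 82% or more. A sketch (the sample size of 100 is illustrative, not the study’s actual patient count):

```python
from math import comb

def tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Chance that, out of 100 uniformly random last digits,
# 82 or more land in {2, 3, 4, 5} (probability 0.4 each):
print(tail_prob(100, 82, 0.4))  # vanishingly small
```

The expected count is 40 out of 100; 82 sits so many standard deviations above that the probability is essentially zero, which is why such a pattern points to invented rather than measured data.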

A gratuitous image of some interesting physics in a V60. If anyone would like to hire a physicist for a cafe, in a 21st century (physics) recreation of de Moivre’s antics at Old Slaughters, you know how to contact me…

Interestingly, a further problem, both for the ivermectin study and for the Schön data, comes when you look at the standard deviation of the data. The standard deviation is a measure of how variable the measured outcome is (e.g. the duration of time a patient spent in hospital). For the ivermectin study, analysis of the standard deviations quoted on the patient data indicated a peculiar distribution of the length of hospital stay which, in itself, would probably just be a puzzle but, in combination with the other problems in the paper, becomes a suggestion of scientific fraud. In Schön’s data, on the other hand, it was calculated that the precision given in the papers would have required thousands of measurements. In the field in which Schön worked this would have been a physical impossibility, and so again suggestive of fraud. In both cases, it is by looking at the smaller errors that we find a bigger error.
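The reasoning about Schön’s implausibly precise data can be sketched with the standard error of the mean: averaging N independent measurements shrinks the scatter by a factor of the square root of N, so a quoted precision implies a minimum number of repeat measurements. The numbers below are purely illustrative, not Schön’s actual figures:

```python
from math import ceil

def measurements_needed(sigma, quoted_error):
    """Repeat measurements N needed for the standard error of the
    mean, sigma / sqrt(N), to shrink to the quoted error."""
    return ceil((sigma / quoted_error) ** 2)

# Illustrative: if one measurement scatters with a standard deviation
# of 5 units but the mean is quoted to +/- 0.1 units, the implied
# number of repeats is (5 / 0.1)^2:
print(measurements_needed(5, 0.1))  # 2500
```

If each measurement takes hours of instrument time, a quoted precision implying thousands of repeats per data point quickly becomes physically implausible.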

This last detail would have been appreciated by Abraham de Moivre (1667-1754). As a mathematician, de Moivre was known for his work on probability distributions, the mathematics behind the standard deviation of a data set. He was also a well-known regular (the ‘resident’ mathematician) at Old Slaughters Coffee House on St Martin’s Lane in London[1]. It is recorded that between 1750 and 1754, de Moivre earned “a pittance” at Old Slaughters providing solutions to games of chance to people who came along for the coffee. I wonder if there are any opportunities in contemporary London cafes for a resident physicist? I may be able to recommend one.

*You can find recipes suggesting this dosage here or here. Some recipes recommend a slightly stronger dose; personally, I prefer a slightly weaker one. You will need to experiment to find your preferred value.

[1] “London Coffee Houses”, Bryant Lillywhite, 1963
