
To err is human…

Press Room coffee Twickenham
A smaller V60. For one cup you would use less coffee, but the errors on the measurement will always be there.

Preparing a good V60 requires 30 g of coffee (for 500 ml of water)*. This can be measured using a set of kitchen scales, but a first estimate can also be obtained, if you are using whole coffee beans, by timing the passage of the grind through the grinder. Using an Ascaso burr grinder, my coffee used to come through at an approximate rate of 1 g/s, so that, after 30 seconds, I’d have the perfect amount of coffee. Recently, however, this has changed: depending on the bean, 30 g sometimes takes 40 seconds, sometimes just under 30.
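To get a feel for how large that timing error can be, here is a minimal sketch (the grind rates are simply back-calculated from the times quoted above; they are illustrative numbers, not a calibration of any particular grinder):

```python
# How much coffee do I actually get if I count 30 seconds in my head?
# Rates back-calculated from the observations above: 30 g sometimes takes
# 40 s (0.75 g/s) and sometimes just under 30 s (about 1.0 g/s).
counted_time = 30.0                 # s, the count in my head
for rate in (0.75, 0.85, 1.0):      # g/s, plausible range of grind rates
    mass = rate * counted_time
    print(f"at {rate:.2f} g/s, {counted_time:.0f} s gives {mass:.1f} g")
# Output spans 22.5 g to 30.0 g: an error of up to about 25% on the dose.
```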

Clearly there is an error in my estimate of the rate at which coffee grinds go through the grinder. This may be influenced by factors such as the hardness of the bean (itself influenced by the degree of roast), the temperature of the kitchen, the cleanliness of the grinder and the small detail that the ‘seconds’ measured here refer to my counting to 30 in my head. Nonetheless, the error is significant enough that I need to confirm the measurement with the kitchen scales. But are the scales free of error?

Clearly, in asking the question, we know the answer will be ‘no’. Errors could be introduced by improper zeroing of the scales (which is correctable), or by day-to-day differences in the temperature of the kitchen (not so correctable). The scales will also have a tolerance on them, meaning that the measured mass is, for example, only correct to +/- 5%. Depending on your scales, they may also only display the mass to the nearest gramme. This means that 29.6 g of coffee would be the same, according to the scales, as 30.4 g of coffee. Which in turn means that we should be using 493–507 ml of water rather than our expected 500 ml (the measurement of which also contains an intrinsic error, of course).
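To make the arithmetic explicit, here is a minimal sketch of how the display resolution alone propagates into the water measurement (it ignores the +/- 5% tolerance, which would widen the range further):

```python
# Scales that display to the nearest gramme show "30 g" for any true mass
# between 29.5 g and just under 30.5 g; the 29.6 g and 30.4 g of the text
# both fall in this window.
brew_ratio = 500.0 / 30.0        # ml of water per gramme of coffee
for true_mass in (29.6, 30.4):   # g, both displayed as 30 g
    water = true_mass * brew_ratio
    print(f"{true_mass} g of coffee -> {water:.0f} ml of water")
# Prints 493 ml and 507 ml, rather than the nominal 500 ml.
```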

Turkish coffee
A Turkish coffee provides a brilliant illustration of the type of particle distribution with depth that Jean Perrin used to measure Avogadro’s constant. For more information see here.

The point of all of this is that errors are an inescapable aspect of experimental science. They can also be an incredibly helpful one. Back in 1910, Jean Perrin used a phenomenon that you can see in your coffee cup in order to measure Avogadro’s constant (the number of molecules in a mole of material). Although he used varnish suspended in water rather than coffee, he was able to experimentally verify the theory that liquids are made up of molecules, because his value for Avogadro’s constant was, within error, the same as that found by other, independent techniques. Errors also give us an indication of how confident we can be in our determination of a value. For example, if the mass of my coffee is 30 +/- 0.4 g, I am more confident that the value is approximately 30 g than if the error were +/- 10 g. In the latter case, I would get new scales.
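For the curious, the physics behind Perrin’s measurement can be written down compactly (this is the standard textbook form, not a transcription of his paper). At equilibrium, the number density n of small particles suspended in a liquid falls off exponentially with height h:

\[
n(h) = n(0)\,\exp\!\left(-\frac{N_A\, m_{\mathrm{eff}}\, g\, h}{R\,T}\right),
\qquad
m_{\mathrm{eff}} = \frac{4}{3}\pi r^{3}\left(\rho_{\mathrm{particle}} - \rho_{\mathrm{liquid}}\right)
\]

Since the particle radius r, the densities, the temperature T and the gas constant R are all known or measurable, counting particles at a few different depths gives the slope of ln n against h, leaving Avogadro’s constant N_A as the only unknown.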

But errors can also help us in more subtle ways. Experimental results can be fairly easily faked, but it turns out that the random error on the data is far harder to invent. A simple example of this was seen in the case of Jan Hendrik Schön and the scientific fraud that was discovered in 2002. Schön had shown fantastic experimental results in the field of organic electronics (electronic devices made of carbon-based materials). The problem came when it was shown that some of these results, despite being on different materials, were the same right down to the “random” noise on the data: two data sets were identical even in their errors, despite being measurements of two different things.
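As a toy illustration of why copied noise is such a giveaway (the curves and noise here are invented for the purpose; this is not Schön’s actual data or the actual analysis):

```python
# Two supposedly independent measurements should have uncorrelated noise.
# Subtract a smooth trend from each data set and correlate what remains:
# copied noise correlates almost perfectly, honest noise hardly at all.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
shared_noise = rng.normal(scale=0.05, size=x.size)

data_a = np.sin(2 * np.pi * x) + shared_noise         # "measurement" A
data_b = 0.5 * np.sin(2 * np.pi * x) + shared_noise   # B: new curve, SAME noise
honest = np.sin(2 * np.pi * x) + rng.normal(scale=0.05, size=x.size)

def residuals(y):
    """Remove a smooth polynomial trend, leaving only the 'random' noise."""
    return y - np.polyval(np.polyfit(x, y, deg=7), x)

r_copied = np.corrcoef(residuals(data_a), residuals(data_b))[0, 1]
r_honest = np.corrcoef(residuals(data_a), residuals(honest))[0, 1]
print(f"copied noise: r = {r_copied:.2f}; independent noise: r = {r_honest:.2f}")
```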

A more recent case is a little more subtle but crucial for our understanding of how to treat Covid-19. A large study of Covid-19 patients apparently showed that the drug ivermectin reduced mortality rates enormously and improved patient outcomes. Recently it has been shown that there are serious problems with some of the data in the paper, including the fact that some of the patient records were duplicated, and the paper has now been withdrawn due to “ethical considerations”. A good summary of the problems can be found in this Guardian article. However, some of the more worrying problems were a little deeper in the maths behind the data. There were sets of data where supposedly random variables were identical across several patients, which suggested “that ranges of cells or even entire rows of data have been copied and pasted“. There were also cases where 82% of the values of a supposedly random variable ended in the digits 2–5. The likelihood of this occurring for genuinely random variables can be calculated (it is not very high). Indeed, analysis of the paper showed that it was likely that these values too were either copied and pasted or “invented”, because humans are not terribly good at generating properly random numbers.
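Just how unlikely can be sketched in a couple of lines. If final digits were genuinely random, each value would end in 2, 3, 4 or 5 with probability 0.4 (four digits out of ten). The sample size below is an assumption for illustration only, not the study’s actual count:

```python
# Chance that at least 82% of n random values end in a digit from 2 to 5,
# when a uniform final digit lands in that range with probability 0.4.
from scipy.stats import binom

n = 100                       # assumed number of values, for illustration
p = 0.4                       # P(final digit is 2, 3, 4 or 5)
k = int(0.82 * n)             # the observed 82%
print(binom.sf(k - 1, n, p))  # P(X >= k); of order 1e-17 for these numbers
```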

A gratuitous image of some interesting physics in a V60. If anyone would like to hire a physicist for a cafe, in a 21st-century (physics) recreation of de Moivre’s antics at Old Slaughters, you know how to contact me…

Interestingly, a further problem, both for the ivermectin study and for the Schön data, comes when you look at the standard deviation of the data. Standard deviation is a measure of how variable the measured outcome is (e.g. the duration of time a patient spent in hospital). For the ivermectin study, analysis of the standard deviations quoted on the patient data indicated a peculiar distribution of the lengths of hospital stay which, in itself, would probably just be a puzzle but, in combination with the other problems in the paper, becomes a suggestion of scientific fraud. In Schön’s data, on the other hand, it was calculated that the precision given in the papers would have required thousands of measurements. In the field in which Schön worked this would have been a physical impossibility and so, again, suggestive of fraud. In both cases, it is by looking at the smaller errors that we find a bigger error.
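The back-of-envelope behind that last step is the standard relationship between the scatter of individual measurements and the precision of their average (a textbook result, offered here as context rather than a reconstruction of the published analysis):

\[
\mathrm{SE} = \frac{\sigma}{\sqrt{N}}
\quad\Longrightarrow\quad
N = \left(\frac{\sigma}{\mathrm{SE}}\right)^{2}
\]

A quoted precision (standard error, SE) much smaller than the typical measurement-to-measurement scatter σ therefore implies a number of measurements N that grows as the square of the ratio: a ratio of 50, for instance, already requires N = 2500.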

This last detail would have been appreciated by Abraham de Moivre (1667-1754). As a mathematician, de Moivre was known for his work on probability distributions, the mathematics behind the standard deviation of a data set. He was also a well-known regular (the ‘resident’ mathematician) at Old Slaughters Coffee House on St Martin’s Lane in London[1]. It is recorded that between 1750 and 1754, de Moivre earned “a pittance” at Old Slaughters providing solutions to games of chance to people who came along for the coffee. I wonder if there are any opportunities in contemporary London cafes for a resident physicist? I may be able to recommend one.

*You can find recipes suggesting this dosage here or here. Some recipes recommend a slightly stronger brew; personally, I prefer a slightly weaker dosage. You will need to experiment to find your preferred value.

[1] Bryant Lillywhite, “London Coffee Houses”, 1963

The importance of going slow

journals in a library
How can we assess the work of scientists? Should we count the number of papers that they write?

In the past few weeks there has been a bit of a media storm about the state of science. A paper that had been published in the journal Science was retracted because it turned out that the study had, quite possibly, been faked. The retraction highlighted the problem of “publish or perish”, which has been a concern for many scientists of late. A second article, this time an editorial in Nature, took a different and perhaps surprising perspective on things. Apparently the public trust scientists much more than scientists think they do. Why would that be the case?

These two stories should concern us because they lie at the heart of a current problem in science. According to the dictionary, ‘science’ is “systematic and formulated knowledge”. Such knowledge takes time to develop; it takes us time to understand what goes on, both on an individual level and as a society. The ‘publish or perish’ culture acts in opposition to this. Within such a culture, the more papers you have, especially ones that get cited and are published in (apparently) good journals, the more successful you will be in your career and in your ability to get research funding. It is essential to publish “high impact” papers merely to survive in science. In more extreme cases this has led to data being faked and to subsequent retractions of the papers (if the fraud is ever discovered). Active faking of data, though, is only the tip of the iceberg. The pressure to publish high-profile papers quickly can lead to the original work not being investigated thoroughly enough. In fact, there are even motivations to publish too quickly. Firstly, if you are wrong, you just publish a second paper a few months later: two papers, two sets of citations. Secondly, publishing early means getting there first, i.e. more citations. It has got to the point where it is more advantageous to publish poor-quality research quickly, with hyped keywords, than it is to do a thorough job and perhaps be beaten to publication by a less complete work. This cannot be good for science or our society, and it suggests that, in order to have a scientific career, you must, to a greater or lesser extent, cease to behave scientifically. It is perhaps for this reason that scientists themselves doubt why the public would trust them: they no longer trust themselves.

lilies on water
Is there symbolism here? There’s certainly a lot of physics.

The ‘publish or perish’ culture has come about partly as a consequence of needing a metric by which to judge the worth of research. In itself this is understandable, but it does suggest that we are no longer confident of our ability, as a society, to measure the ‘good’ of something. To judge something as ‘good for society’ necessarily involves many different inputs from many disciplines. Assessing something as good is a value judgement. To redefine ‘good’ purely as something that we measure (by profit, or by number of papers) is to artificially reduce what is good for society to an arbitrary, but apparently scientific, method. Rather than admit that questions over what is ‘good science’ are, essentially, value judgements, we try to give a false ‘scientific’ measure of their worth, one based on citations and publications. We still have our biases, but we have become less conscious of them and instead try to hide them behind a false scientism.

How could we change this? How else can we assess who is a ‘good’ scientist or which research will benefit society? This is, I think, where it is important for everyone to get involved and to slow down. It is open to everyone to investigate, for themselves, what they think would make a ‘good’ society. Clearly the quest for knowledge, and in particular scientific knowledge, will form part of that good, but for us, as a society, to realise what is good we need to stop and think about it. There is a need to encourage clear methods of thinking, but at the same time everyone must feel eligible to be a part of this natural philosophy, purely as a consequence of being a citizen of society. On a practical level, this can be achieved by maintaining a sense of awe and wonder at the beauty of the world, and the society, around us. In my own field of magnetism, for example, to know the physics behind magnetic attraction is to make it more beautiful. And that is in essence what I am trying to communicate with Bean Thinking: just as an artist does with a painting, I am attempting to share the beauty that I see as a result of seeing physics all around me. The saying of Pierre Duhem that “physical theory is a mathematical painting of reality” can be taken on many levels. As a scientist, I am, to a certain extent, an artist.

rain drops on a tulip
A tulip in spring. The water droplets on the petals suggest some very deep physics. As the flower opens into the sunshine, each layer (physical and metaphorical) of petals reveals a new level of beauty.

Of course, there is no immediate connection between appreciating the beauty of knowledge and allocating research funds. Yet if we, as a society, appreciate science and beauty where we see them, we will slowly move back to a more sustainable, scientific way of doing science. “By learning to see and appreciate beauty, we learn to reject self-interested pragmatism”¹. By allowing ourselves to assess the good of society across many measures, we recover science. Denying that what qualifies as ‘good’ is ultimately a value judgement, and instead covering it in false metrics, imperils science. It is in the history of humanity to ask ‘why’. Moving to a predominantly technology-driven quasi-science does not enrich us as a species. Good art, good music, great science can. Great discoveries of the past have not been obtained by chasing the latest chimera of a device; they have been uncovered through an insatiable curiosity, a demand to know ‘why’ things are the way they are. We are destroying the very science we are so keen to promote if we conform to the keyword-, hype- and technology-driven ‘publish or perish’ culture. It has got to the point where, in order to save science, it is imperative that we, as a society, recover our ability to appreciate the beauty in science.

I hope that Bean Thinking prompts at least some people to question the world around them. It is not important to agree with what is written in Bean Thinking; indeed, with some things it is perhaps more important to disagree. The key thing is to notice the world around us. The practice of slowing down and noticing things is the reasoning behind the cafe-physics reviews; as much as anything, they enable me to practise slowing down and noticing too. To slow down and to appreciate what is there will mean that slowly, imperceptibly perhaps, we challenge the culture of ‘publish or perish’. To do so may not be too far short of a need to recover our humanity; to quote Laudato Si’ again, “[w]e need to see that what is at stake is our own dignity”².

¹ Pope Francis, Laudato Si’ (2015), #215

² Ibid., #160

Further thoughts:

Michael Polanyi “Science, Faith and Society”, Sapientia Press, 1964

Michael Polanyi “Personal Knowledge: Towards a post-critical philosophy”, University of Chicago Press, 1974