How Useful Are “R” Values?

We in the UK are 60 days into a lockdown which has devastated our economy in an attempt to address a Covid-19 outbreak that is by now largely in retreat. But like a rabbit in the headlights, we are stuck with a policy which, although its justification has shifted each time the previous criteria ceased to apply, has changed little in substance. It has been suggested that on current form the UK may be the last country in the world to return to normality, possibly with more damage to our economy (and all that this entails) than any other. Nonetheless the policy remains popular. But being popular is different from being effective, and different again from being justified.

To recap briefly, the policy was brought in on the basis that the NHS faced being overwhelmed, as health services in Wuhan and Lombardy had been. Modelling by a team at Imperial College suggested the possibility of over half a million deaths unless a lockdown was imposed. And the first day of warm, sunny weather, on Mother’s Day, which saw large crowds descend on beauty spots, was cited as evidence that the British people as a whole were incapable of acting responsibly unless forced to by the police.

The lockdown was therefore introduced to “flatten the curve”, slowing the progress of the virus to a rate which would not overwhelm the health service. Emergency hospitals and ventilators were commissioned to deal with the possible emergency. The peak of viral infections arrived in London (the only place where there was ever any credible risk of hospitals being overwhelmed) ten days later (see Fig. 1 below). The emergency hospitals have now been decommissioned. “Protect the NHS” has been dropped from the government’s messaging. The modelling on which the 500,000+ prediction was based has been thoroughly discredited. The daily rate of new infections in the capital is almost down to single figures, with an expected associated monthly death rate also in single figures, a number which would in normal circumstances be undetectable. In short, the entire evidential basis supporting the lockdown policy lies in tatters at our feet. Yet the lockdown remains in place. Why?

Fig. 1: Daily new Covid-19 infections in London.

There are many possible answers to this question, most of which I would suggest have more to do with human psychology than with any hard epidemiological facts (are there any such things?), but I will leave others more knowledgeable about such matters to do the speculating. There is so much we don’t know about this virus. But what seems most remarkable about our response to it is the degree to which we have based policy on things which appear to be intrinsically unknowable and even illusory; and then pointed to the level of uncertainty thus introduced to justify adopting extreme measures and putting barriers in the path of policy change.

Not least among these things is the “R” value, or transmission rate, which seems to have been at the centre of much policy discussion. There is general agreement that the lower the value, the safer we are. But how low does it have to be to justify ending a lockdown? To reopen schools? To end social distancing? All important questions, but no answers. Part of the problem is of course that this elusive quantity is very hard to measure. No surprise there, since estimating it requires us to keep track of the rate of arrival of new infections and the number of current infections, neither of which is directly observable.
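
By way of illustration, and emphatically not as a description of the method used by SAGE or any official body, the sketch below shows the sort of calculation involved: a so-called renewal-equation estimate, which divides each day’s new cases by a weighted sum of recent cases. The case counts and serial-interval weights are entirely invented.

import numpy as np

# Hypothetical daily new cases for a declining outbreak (invented numbers).
cases = np.array([500, 480, 440, 400, 350, 320, 290, 260, 230, 200,
                  180, 160, 140, 125, 110, 100, 90, 80, 70, 60])

# Assumed serial-interval weights: w[s-1] is the share of transmission
# occurring s days after infection. Any such assumption moves the estimate.
w = np.array([0.05, 0.15, 0.22, 0.22, 0.16, 0.10, 0.06, 0.04])
w = w / w.sum()

def r_estimates(incidence, weights):
    """Crude renewal-equation estimate of R for each day with a full window."""
    k = len(weights)
    out = []
    for t in range(k, len(incidence)):
        # "Infection pressure" exerted by recent cases on day t
        pressure = np.dot(incidence[t - k:t][::-1], weights)
        out.append(incidence[t] / pressure)
    return np.array(out)

print(np.round(r_estimates(cases, w), 2))

Even in this toy setting the answer shifts appreciably if the assumed weights are changed; with real data, where the relationship between reported cases and true infections is itself unknown and drifting, the difficulty is far greater.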

So there are serious problems with determining the “R” value; not least that one of the few things we do know with a fair degree of certainty is that, insofar as it can be assessed, it varies from place to place: in mid-May, for example, it was estimated to be 2.5 times higher in North-West England than in London. There is the further problem that it is even more sensitive to local context: high in care homes, hospitals, busy public transport and private homes, and very low outdoors. Yet we insist on treating aggregates as if they can tell us something meaningful about local transmission processes, and look to apply the same controls in all contexts.
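
To see why this matters, consider a purely illustrative split (the numbers are invented): region A has 900 active infections and a local “R” of 0.8, while region B has 100 active infections and a local “R” of 2.0. The case-weighted national figure is (900 × 0.8 + 100 × 2.0) / 1000 = 0.92, comfortably below one, even though the outbreak in region B is doubling with every generation of infections. A single aggregate number, and a single set of national controls, cannot respond to that difference.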

Oscar Dimdore-Miles and David Miles, in a peer-reviewed scholarly article entitled “Assessing the spread of the novel coronavirus in the absence of mass testing”, point out that there is very little understanding of how the rate of transmission differs between symptomatic and asymptomatic cases, a crucial detail in understanding and predicting the virus’s progress. This lack of understanding makes it very hard to make credible assertions about the rate of spread of the virus or the impact of policy thereon. As they put it:

The degree of uncertainty about that asymptomatic rate [“R”] is large enough to mean that neither 0.3 or 0.9 is outside the range of plausible values, though the implications of those two numbers are very different.

But there are even more serious problems with the idea of basing policy on an “R” value. One of them is the main reason the Imperial College modelling failed so spectacularly as a tool for policy guidance: the “R” value is in large degree determined by the policy society adopts, which fact is the main justification for the policy. Yet the impact of the policy is largely an assumption of the model rather than a conclusion. So when a policy is adopted we cannot know with any certainty, in advance or even with hindsight, its impact on the transmission rate, not least because there is a time delay before the impact of a policy is felt, during which other factors can come into play (such as population immunity levels increasing, or the arrival of warmer weather less conducive to the virus’s proliferation). Nor, crucially, do we know what the impact will be of relaxing or changing the policy subsequently.
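
To make the point concrete, here is a deliberately crude sketch, bearing no relation to the actual Imperial College code, of how a lockdown typically enters a simple SIR-type projection: as an assumed multiplier applied to the reproduction number. Every parameter below is invented, and the headline death toll that comes out is largely a restatement of the multiplier that was put in.

def sir_deaths(r0, policy_multiplier, population=60e6, ifr=0.009,
               infectious_days=5, start_infected=1000, days=365):
    """Crude daily-step SIR projection; every number here is an assumption."""
    s, i, r = population - start_infected, float(start_infected), 0.0
    beta = (r0 * policy_multiplier) / infectious_days   # the policy effect enters here
    gamma = 1.0 / infectious_days
    for _ in range(days):
        new_inf = beta * s * i / population
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
    return ifr * r                                       # deaths implied by total infections

for mult in (1.0, 0.6, 0.3):   # "no measures", "moderate measures", "hard lockdown"
    print(f"assumed R multiplier {mult}: roughly {sir_deaths(2.4, mult):,.0f} deaths")

The model will faithfully report dramatic benefits for whichever multiplier the modeller chooses; it cannot tell us whether that choice reflects what people actually do.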

Compounding this problem is the fact that it is not the regulations themselves which directly affect the transmission rate but the behavioural changes that result from them. It is clearly better for all that a crisis be managed by voluntary measures than by heavy-handed legislation, which can be difficult to back out of (as we now know) and which tends to have undesirable and often unforeseen consequences (as we also now know). But if we do not know how much of the behavioural change would have happened anyway, and how much would persist in the absence of the regulations, we cannot even begin to assess the effectiveness of the regulations. Then there is the related problem that, as we have seen, the government having taken responsibility for addressing the threat away from the public, it is very difficult to give it back again; the government is left to micromanage the way back to normality, with the media and opposition parties all but willing it from the sidelines to trip up and fail.

Given the above, it is hard to see how, in the context of a Covid-19 pandemic, the prediction of “R” can be considered a remotely scientific enterprise; nor how any good can come of a government trying to avoid responsibility for difficult and politically risky decisions by appealing to such “science”. Indeed it is striking that now, when the government is looking to take some decisive steps towards easing the lockdown and needs more explicit support from scientific opinion, the hoped-for endorsements are not forthcoming in any great abundance.

I would suggest the problem here is related to the high transmission rate of Covid-19 and its high degree of asymptomatic infection. Whereas the classical epidemiological model and the “R” concept are built around the idea of a virus propagating in a homogeneous medium which becomes progressively less supportive of its proliferation, a better model might be that of an opportunistic hunter-gatherer tribe, which strips an area bare or hunts its prey until the prey have all hidden themselves away, after which forays begin into neighbouring regions. Such a tribe will prosper exponentially for a time; growth then stalls, only to pick up again later in a different region. This paradigm seems to fit better the pattern observed in practice, and to explain why, after an initial period of apparently exponential growth in the larger conurbation(s), a period of roughly linear growth follows, with a fairly constant infection rate as the virus migrates to new areas, before the numbers finally start to decline. It may also help shed light on an otherwise inexplicable phenomenon: the rate of newly reported infections worldwide has remained almost perfectly constant since the end of March, with the graphs for many other countries where the virus has been able to take hold looking remarkably similar. Yet when one digs deeper one sees that this masks a myriad of smaller transitory peaks beneath, such as the one shown for London above.
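
The contrast can be illustrated with a toy calculation, using entirely invented numbers: give each of a number of regions its own short, sharp outbreak, starting at a staggered time as the virus reaches it, and add the regions up.

import numpy as np

rng = np.random.default_rng(1)
days, regions = 200, 40
national = np.zeros(days)

for _ in range(regions):
    start = rng.integers(0, 120)          # day the virus reaches this region
    size = rng.uniform(0.5, 2.0)          # relative size of the local outbreak
    t = np.arange(days) - start
    # A transitory local peak: rapid rise, then decline over a few weeks
    national += np.where(t >= 0, size * t * np.exp(-t / 12.0), 0.0)

for day in range(0, days, 20):            # crude text plot of the national total
    print(f"day {day:3d}: {national[day]:8.1f}")

Each region shows a transitory spike of the kind seen in the London data, while the national total, after its initial rise, sits on a rough plateau for weeks before declining, much as the worldwide figures have done.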

I would further suggest, in conclusion, that a policy approach more commensurate with the notion of being guided by science (not The Science) would be to set out the risks and benefits associated with the various policy options, along with the best expert assessments (possibly averaged) of the likelihood of various outcomes, and to make a judicious choice among the options on offer, made public together with the rationale which drove the decision. A commitment should also be made in advance as to the expected outcomes over a given time-frame, expressed in terms of variables we are able to measure with some confidence, such as antibody test results, Covid-19 hospital admissions in a given geographical area or Covid-19-related fatalities; if those outcomes are not realised in line with the expectations set out, the models should be reviewed and recalibrated and a determination made as to what policy changes are needed. Such an approach makes no pretence of guaranteeing success (as is invariably the case in a scientific enterprise), but it would have a much better chance of achieving something resembling progress than the present shambles.
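
To illustrate, in purely hypothetical terms, what such a pre-commitment might look like: an expected range for a measurable quantity is published in advance, the observed figures are checked against it, and a model review is triggered if they fall outside. The quantity, the range and the observations below are all invented.

# Hypothetical: a range for weekly Covid-19 hospital admissions in one region,
# published in advance for the fourth week after a policy change.
expected_range = (400, 900)
observed_weekly_admissions = [1250, 1100, 980, 940]   # invented observations

week4 = observed_weekly_admissions[-1]
low, high = expected_range
if low <= week4 <= high:
    print(f"Week-4 admissions of {week4} are within the published range {expected_range}.")
else:
    print(f"Week-4 admissions of {week4} fall outside {expected_range}: "
          "review and recalibrate the model, and revisit the policy choice.")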

By Colin Turfus

Colin Turfus is a quantitative risk manager with 16 years’ experience in investment banking. He has a PhD in applied mathematics from Cambridge University and has published research in fluid dynamics, astronomy and quantitative finance.
