How to Communicate in a Crisis
In the middle of the Fukushima nuclear crisis – an event which overshadowed, and was caused by, the more significant and damaging earthquake and tsunami – I was lucky enough to be one of the people who, a month and a half previously, had bought tickets to the British Library’s TalkScience event. Titled “Communicating risk and scientific advice during emergencies: Don’t Panic?”, the event was so topical that its two contributors, Professor Sir John Beddington, the Government Chief Scientific Adviser, and Mark Henderson, Science Editor of The Times, were late because they had been busy dealing with the crisis.
Unsurprisingly, much of the early discussion focussed on what was happening in the Fukushima Dai-ichi reactors, and then moved on to encompass other crises. This was interesting because it provided us with a snapshot case study to which we would not normally have access. Most of what is written here is taken from the evening, but some of it is my own observation and thoughts. Where I have attributed a statement, it is a direct quotation.
A note on the timing: I did write this two days after the event, on 15 March, but lost it to a software problem. However, I think the added perspective has improved my review.
The Challenge of Communicating to the Public
Mark Henderson first discussed (before Sir John arrived) the problems of reporting the situation in a balanced and informative way. Obvious difficulties included Japanese infrastructure failures (for some reason, they were a bit distracted), the rapid evolution of the situation, and linguistic and cultural differences. Rapid evolution meant that no sooner had copy been written than it needed to be completely revised (Mark highlighted that he had been changing copy until 0100 the night before). Some information is clearly going to lose something in translation, but some of the paucity of information may have been down to cultural expectations. Indeed, the information shortage meant that the science team at The Times were being careful to ensure that speculation was clearly identified.
Among the points of focus were differentiating between “explosion in a nuclear facility” and “nuclear explosion” (something other outlets might usefully have learned!) and avoiding Chernobyl as an analogous event. I wonder if the latter might have been counter-productive. On the one hand, one doesn’t want to repeat a misconception (evidence suggests that repeating a fallacy, even to debunk it, can result in the fallacy being reinforced), but at the same time I think that Chernobyl is very much the archetype for “nuclear incident” in the public’s consciousness. There’s a danger that in ignoring – or even actively excluding – such an archetype, key fears may not be addressed.
The issue really is that the news media have a responsibility to report the story even when the information quality is poor. The trick is doing this in a way that clearly shows the information quality. One technique The Times have been using is a question-and-answer format, which allows them to address questions clearly without pretending to be omniscient – something the public don’t expect anyway. I thought this was a key point; the public tend to have realistic expectations about the limits of knowledge, and pretending to have all the answers undermines the reputation of both the individual reporters and the publication.
The flipside of information quality is whether the public can be overloaded with information. With information volume, the focus is contextualisation – talking about contamination in μSv or mSv doesn’t provide any information unless the reader understands Sieverts. This reminded me of a line regularly used by the pastor in the church of my childhood: “text without context is a pretext.”
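To make the contextualisation point concrete, here is a minimal sketch of the kind of conversion a reporter might do: expressing a one-off dose reading as “days of typical background exposure”. The 2.4 mSv/year figure is the commonly cited worldwide average natural background dose, used here purely as an illustrative assumption.

```python
# Illustrative sketch: putting a dose reading into everyday context.
# ANNUAL_BACKGROUND_MSV is an assumed average natural background dose.
ANNUAL_BACKGROUND_MSV = 2.4  # mSv per year (illustrative figure)

def dose_in_context(reading_usv: float) -> str:
    """Express a one-off dose in µSv as days of typical background exposure."""
    reading_msv = reading_usv / 1000.0
    daily_background = ANNUAL_BACKGROUND_MSV / 365.0
    days = reading_msv / daily_background
    return f"{reading_usv} µSv ≈ {days:.1f} days of normal background dose"

print(dose_in_context(100))  # a hypothetical 100 µSv reading
```

The output (“100 µSv ≈ 15.2 days of normal background dose”) gives the reader a frame of reference that a raw number in Sieverts does not.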
Communicating Science in Government: Fukushima Perspectives
Sir John started by highlighting that traditionally April is the worst month for the Government Chief Scientific Adviser (GCSA), what with the swine flu outbreak in April 2009 and the Icelandic volcano eruption in April 2010. This year, Sir John booked his holidays for April – so clearly the Japanese disaster had to happen in March instead.
Sir John focussed purely on the nuclear situation, and had been asked on the Sunday whether there were any concerns to be communicated to British nationals living in the greater Tokyo area. To address this question, a Scientific Advisory Group for Emergencies (SAGE) was formed. SAGE is an ad hoc committee, chaired by the GCSA and/or the lead government department, which is formed of appropriate government scientific advisers and academic and independent scientists. In this case, scientific advisers from the Department of Health, the Health and Safety Executive and the Health Protection Agency were joined by an academic (who phoned in from a ski trip) and an independent scientist from the National Nuclear Laboratory; later on they also had input from the Met Office. They actively ask whether any expertise is missing, but there’s a fine line between getting the right people and getting so many people that the meetings become unmanageable.
The job of SAGE is to identify the consensus, challenge assumptions and validate conclusions. In this case, they looked at the reasonable worst case and the most likely case. The most likely case followed from what was being done at the time – sea-water cooling of the reactors; if the cooling was unsuccessful, pressure and explosive materials such as hydrogen would build up (as we now know happened). This technique is known as feed-and-bleed – pump in sea water and release the excess pressure. The best case scenario was that the reactor was kept cool. The reasonable worst case was that the base of the containment chamber melts, leading to material reacting with concrete, producing steam and gas, and to the containment chamber “blowing its top” and leaking contaminants into the atmosphere. This would have been a radiological explosion (i.e. a dirty bomb), not a nuclear explosion.
Given the worst case scenario, the Met Office advised on the weather to look at probable fall-out patterns; the worst possible outcome would be rain over greater Tokyo. The weather patterns were such that materials would go up to 500 m; this is not very high at all, and it would have had a local impact (INES 4). In comparison, Chernobyl, the world’s only ever INES 7 event, blew its top off as a result of the graphite moderator burning and then, over a sustained period, poured contaminants into the atmosphere up to 30,000 feet.
Given this, the 30 km exclusion zone was entirely appropriate; Chernobyl’s exclusion zone was also 30 km, and research has shown that there was no direct contamination outside that zone. (It should be noted that there was indirect contamination via milk, vegetables and water, which resulted in radiation sickness and increased cancer rates.)
Once SAGE had drawn conclusions, Sir John reported into COBRA [as an aside, this always makes me think of the bad guys in G.I. Joe instead of the slightly more pertinent Cabinet Office Briefing Room A] that even in the reasonable worst case there was no danger as far as the residents of Tokyo were concerned. There was likely to be some contamination, but none that could be considered very serious.
The quality of information being fed into the advice process was not of the minute-by-minute variety (James Naughtie asked Sir John at 0700 whether he knew if the fuel pond was on fire, to which Sir John replied “no”; James Naughtie said it had only happened three minutes previously…). However, they were getting the obligatory information being supplied to the IAEA and other international coordinating organisations. Indeed, given the circumstances – complete infrastructure collapse and tens of thousands of deaths – the amount of communication that came out of the Japanese government was impressive.
After speaking to Today, at 1000 there was another COBRA briefing, followed by a SAGE meeting in the afternoon. In the SAGE meeting, they cross-checked the meteorological information from the Met Office with that coming out of the USA, Japan and China, which confirmed that the weather was now headed to sea and would minimise contamination. A major issue for Japan is that sea water cooling renders the reactors beyond use, so they will be short of energy generation options.
Communicating Science in Government: A Broader View
Recently, the Science and Technology Committee (STC) released a report which was critical of the Government Office for Science’s (GOS, which the GCSA heads) involvement in the review and publication of the National Risk Assessment (NRA). In particular, the STC was critical of the fact that the volcanic ash issue wasn’t included in the NRA, despite having been discussed as a risk by the GOS.
The key issue seems to be that the production of the NRA (coordinated by the Civil Contingencies Secretariat in the Cabinet Office) was less formal than it could have been, so although the GOS contributed, it wasn’t invited to review inclusions and exclusions. Discussions happen across the government to identify risk, which is then assessed on a biaxial scale (impact vs likelihood). This allows the government to identify where to focus preparatory and preventative resources; volcanic ash cloud should have been medium/medium and failure to include it was a mistake.
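The biaxial assessment described above can be sketched as a simple scoring grid. The 5-point scales, band names and thresholds below are illustrative assumptions, not the actual NRA methodology.

```python
# Sketch of a biaxial (likelihood vs impact) risk grid.
# Scales, thresholds and band names are illustrative assumptions only.

def risk_band(likelihood: int, impact: int) -> str:
    """Classify a risk scored 1-5 on each axis into a priority band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores must be in the range 1..5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Volcanic ash cloud should have been medium likelihood, medium impact:
print(risk_band(3, 3))  # medium
```

Plotting risks on such a grid is what lets planners see at a glance where preparatory and preventative resources should be concentrated.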
They now ask questions like “what would keep you awake at night?” to help to identify black swan events – what I would probably think of as an Outside Context Problem – low likelihood, high impact events to which most people are blind. They also have outside experts (academics and independents) now advising them on OCPs.
In addition, whilst acknowledging that they got the volcanic ash cloud wrong, the STC’s interpretation of the process was also not necessarily helpful. For example, the STC’s comments on the handling of the ‘flu pandemic suggested that it should not have been a high/high risk. This is an easy judgement to make with the benefit of hindsight in the context of the swine ‘flu outbreak, but at the start of the outbreak, all that was clear was that lots of people were dying. There was no information on what the number of deaths represented as a percentage of the total number of infections; indeed, for every 1 person who was symptomatic, 8 people were infected but asymptomatic. (Sir John joked that 100 years of socialism had failed to close Eton, but the swine ‘flu pandemic shut it down! However, the Eton outbreak also provided vital information on symptomology ratios by allowing them to test blood sera in a closed, well-documented community.)
Another complication of the swine ‘flu outbreak was the way in which the reasonable worst case (RWC) scenario was presented by news media. The RWC suggested a large number of people being incapacitated and/or dying, resulting in an infrastructure collapse. However, instead of being presented as the worst case, it was presented as the most likely scenario and the basis on which the contingency plans were rolled out. This meant that when assessing the effectiveness of the plans, people were saying that the severity was nowhere near the expected levels, so the plans were inappropriate and a needless expense. The outbreak could have been much, much worse – and probably would have been if it hadn’t been for the contingency plans being activated.
In addition, future outbreaks will benefit from a live run-through with what turned out to be a relatively low-mortality disease. As a result of any event, information is fed back into the planning process, for example, Chris Huhne has asked the Nuclear Inspectorate to report on the current and planned stations with “what if” scenarios. Nuclear reactors in the UK are hardened against 1:10,000 year events; tsunamis in the UK are nearly impossible, but tidal surges are possible and are hardened against 1:1,000 year events.
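It is worth unpacking what a “1:10,000 year” hardening standard implies over a station’s operating life. Assuming independent years, the chance of at least one design-exceeding event in n years is 1 − (1 − 1/N)^n; the 60-year lifetime below is an illustrative assumption.

```python
# Sketch: cumulative chance of exceeding a 1-in-N-year design standard.
# Assumes each year is independent; the 60-year lifetime is illustrative.

def chance_of_exceedance(return_period_years: float, lifetime_years: int) -> float:
    """Probability of at least one design-exceeding event over the lifetime."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** lifetime_years

print(f"{chance_of_exceedance(10_000, 60):.3%}")  # roughly 0.6% over 60 years
```

So even a 1:10,000-year standard leaves a small but non-zero residual risk across a plant’s life, which is why the “what if” scenario reviews remain valuable.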
The role of the GCSA isn’t to dictate policy, but to ensure that policy is as informed as it can be. SAGE feeds scientific information into COBRA, and COBRA has to consider the scientific, legal, financial, political and social impacts of its decisions. The GCSA provides high quality advice and identifies areas of disagreement, uncertainty and consensus; the politicians then make the decisions. This is unlikely to change as a result of the change of government; after all, the difference between top-down and libertarian paternalism is a political decision.
When talking to the embassy, one of the diplomatic wives [why has nobody made that TV show yet?] asked whether there would be a problem eating seafood. This hadn’t occurred to SAGE, and they asked the Food Standards Agency, the Department for Environment, Food and Rural Affairs, the Department of Health and the Health Protection Agency to look into it. For me this was an interesting point: situated knowledge had insights which hadn’t occurred to the professionals, and had we had time, I would have liked to explore how SAGE ensures appropriate situated knowledge is provided and considered as part of the advice-giving process.
There was a question about whether likelihood/impact were the appropriate axes, when academia tends to use likelihood/vulnerability. However, the effect of switching can be quite drastic, and includes the European tendency to ban pretty much anything which has the potential to have toxic effects (for example, Sir John mentioned that endocrine disruptors can have health impacts, but banning them would result in a 30% reduction in crop yield; he also mentioned that caffeine is a carcinogen in very high doses). The key issue is to compare risk against hazard.