Chernobyl and its Cyber Lessons – Part 2

HBO’s recent ‘Chernobyl’ series, which retold the story of the nuclear accident that threatened much of Europe in 1986, made for compelling viewing. The accident was said to have helped prompt the fall of the Eastern Bloc and bring about a fundamental shift in global politics.

On April 26th 1986, reactor number 4 exploded, throwing radioactive material into the night sky. We may never know how many people suffered as a result of this accident. The official death toll was 31. Or 54. Or several thousand. Or 93,000.

I looked at the contributing factors in my previous blog, but I thought it would be interesting to examine the incident response that followed the explosion and what we can learn from it in a cyber sense. Please bear with me – there is a lot to cover!

We should, of course, note that the Soviet Union was a very closed society. Mistakes were considered impossible in a Communist state and were often covered up; those who were involved could end up in Siberia (or worse). Therefore, expectations of transparency would be misplaced. However, what can we learn about incident management from Chernobyl?

What happened?

It took three minutes from the first explosion for the fire alarm to be raised. There was confusion in the control room as to the scale of the problem, with the Chief in Charge insisting that the reactor was intact – dismissing the debris as parts of the emergency tank.

Staff were sent to see what condition the reactor was in, and despite reporting back that the reactor had been destroyed, their findings were dismissed.

Telephone lines were down. The firefighters knew little about radiation and were ill-prepared for what they found, arriving without protection. It was reported that they picked up the graphite scattered around the site.

Thirty minutes after the explosions, the reactor was still thought to be intact.

A crisis meeting was convened and police assistance was sought to seal off the town – thousands of police arrived, without protective clothing or dosimeters, nor information on how to handle radiation or radioactive material.

Three hours after the initial explosion, it was still reported that the reactor was intact. Further staff were sent to survey the reactor. They reported that it had been destroyed. This report was also dismissed as being inaccurate.

All fires were extinguished by around 0635 – except the fire within the remains of the reactor.

At 8am, the shift changed and 286 men arrived to continue building the 5th and 6th reactors.

Some 18 hours after the accident, a government committee was established.

On Sunday 27th April, helicopters started dropping sand, boron and lead into the stricken reactor.

Monday 28th April. A nuclear power plant in Sweden detected high levels of radiation as part of a routine check on the soles of employees’ shoes.

Moscow TV announced that there had been an accident at the nuclear plant – “Measures are being taken to eliminate consequences of the accident. Aid is being given to those affected. A government commission has been set up”.

A nuclear research laboratory announced that a “maximum credible accident” had occurred at Chernobyl and mentioned a complete meltdown of one of the reactors and that radioactivity had been released.

On Tuesday 29th April, an American satellite captured images of Chernobyl, showing the roof blown from the reactor, just as the Soviets released photos of the disaster – doctored to remove the smoke.

I could go on – I find this story both fascinating and horrific in equal measure.

What does this tell us?

When an incident happens, particularly a big one that impacts critical services, confusion reigns. Other complications often come into play, such as senior members of staff being out of contact – it’s an unpleasant and stressful place to be.

Information comes at you in waves, some good, some bad, some reliable, some not, and sometimes you don’t get information at all.

The incident ripples throughout the organisation and rumours start. These rumours are passed off as fact.

The response can be inadequate when staff undertake roles that they are not prepared for, nor trained in.

And perhaps worst of all, our customers tell us the extent of the problem, further damaging our reputation.

What can we learn?

Have an incident management plan that includes communication. You should know who your stakeholders are – and what their primary interest is – and then use this to formulate that communication plan. Test the plan in as realistic a way as possible. A leisurely stroll through it over coffee and cakes is unlikely to stress the component parts effectively – including staff.
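To make that concrete, a communication plan can be captured as structured data rather than a document nobody reads under pressure. The sketch below is purely illustrative – the stakeholder names, interests and channels are hypothetical assumptions, not a prescribed list:

```python
# Hypothetical sketch: a stakeholder communication plan as data.
# All stakeholder names, interests and channels below are illustrative
# assumptions - replace them with your organisation's real ones.
STAKEHOLDERS = {
    "customers": {"interest": "service availability", "channel": "status page"},
    "regulator": {"interest": "notification deadlines", "channel": "email"},
    "board":     {"interest": "business impact",       "channel": "phone"},
    "press":     {"interest": "public narrative",      "channel": "press office"},
}

def comms_actions(incident_summary: str) -> list[str]:
    """One communication action per stakeholder, so none is forgotten."""
    return [
        f"Notify {name} via {info['channel']}: {incident_summary} "
        f"(focus: {info['interest']})"
        for name, info in STAKEHOLDERS.items()
    ]

for action in comms_actions("We are investigating; limited information at this time."):
    print(action)
```

Driving a tabletop exercise from a structure like this makes gaps obvious: if a stakeholder has no entry, they have no channel, and that is exactly the failure the exercise should surface.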

Ensure the incident management plan clearly defines roles and responsibilities, and use the right people to do the right jobs wherever possible. This helps to reduce the risk of misinformation. Trust your staff and believe the information they give you – if you have recruited effectively and provided training, they will give you the information you need.
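One simple way to keep roles and responsibilities honest is to check, before any exercise or real incident, that every role the plan requires actually has a named holder. A minimal sketch, with hypothetical role names and placeholder people:

```python
# Hypothetical sketch: roles the plan requires vs. people assigned to them.
# Role names and assignees are illustrative assumptions.
REQUIRED_ROLES = {"incident_lead", "technical_lead", "comms_lead", "scribe"}

assignments = {
    "incident_lead": "A. Example",
    "technical_lead": "B. Example",
    "comms_lead": "C. Example",
}

def unfilled_roles(assignments: dict[str, str]) -> set[str]:
    """Roles the plan requires but nobody currently holds."""
    return REQUIRED_ROLES - assignments.keys()

missing = unfilled_roles(assignments)
if missing:
    print(f"Plan gap: unfilled roles {sorted(missing)}")
```

Here the check would flag that no scribe has been assigned – a small gap on paper, but in a live incident it means no reliable record of decisions.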

You won’t know everything about the incident straight away, but generally it is best to be open, honest and transparent about the fact you have an incident. If you don’t, others may well do so. By disclosing it, you control the communication and the narrative. You may not know everything yet, but, as the police say, “we have limited information at this time.” You can always provide more information later – social media makes this quick and easy.

If you are putting forward a member of senior management to speak to the media, ensure they are effectively briefed about cyber and at least have a grasp of the jargon associated with the incident. And if this is not possible, support that executive by having the CISO alongside, who can add context and handle questions of a technical nature. This does not signify weakness on behalf of the executive.

Once the incident is over, learn well from it, ensuring that what you learn is embedded across all areas where a similar event could occur – and let your stakeholders know that you’ve learnt.

That incidents happen is a fact of cyber life. However, if you prepare properly, you can at least manage reputational damage.

If you missed ‘Chernobyl and its Cyber Lessons – Part 1’, you can read it here.

About the author

Simon Lacey
Principal Consultant, CRMG
Former Cyber Security Policy Manager, Bank of England
Industry of Expertise: Banking, Healthcare
Areas of Specialism: Cyber Security Governance & Policy Management