“The companies involved in the Gulf of Mexico oil spill made decisions to cut costs and save time that contributed to the disaster”
National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling
“The lessons to be learned from the loss of Nimrod XV230 are profound and wide-ranging. Many of the lessons to be learned are not new. The organisational causes of the loss of Nimrod XV230 echo other major accident cases, in particular the loss of the Space Shuttles Challenger and Columbia, and cases such as the Herald of Free Enterprise, the King’s Cross Fire, the Marchioness Disaster and BP Texas City.”
Haddon-Cave Report into the loss of RAF Nimrod XV230, which was lost on 2 September 2006 with 14 service personnel on board
The past year has seen two major reports into different accidents with remarkably similar conclusions about the root causes. The first is the Deepwater Horizon disaster, with the loss of 11 workers and huge environmental, economic and social impact; the second is the loss of RAF Nimrod XV230 over Afghanistan, which cost 14 lives. Reading the reports on these accidents is a sobering exercise for any project manager or engineer working with complex safety-critical systems.
In both cases the risks were well understood in the industries in which they operated, and multiple levels of precautions had been developed to prevent such accidents occurring. However, in both cases commercial pressures of time and cost caused these precautions to be circumvented. For example, the procurement of seals from a cheaper non-aviation supplier and a lamentable attempt to complete the safety case were among the likely causes of the Nimrod accident, and a flat battery in the blowout preventer was one of the contributing factors to the Deepwater Horizon accident.
In his report on the Nimrod failure, Charles Haddon-Cave QC said: “The Nimrod Safety Case process was fatally undermined by a general malaise: a widespread assumption by those involved that the Nimrod was ‘safe anyway’ (because it had successfully flown for 30 years) and the task of drawing up the Safety Case became essentially a paperwork and ‘tickbox’ exercise.”
Similarly, the presidential commission investigating the Deepwater Horizon Macondo disaster reached the same conclusion. “Whether purposeful or not, many of the decisions that BP, Halliburton, and Transocean made that increased the risk of the Macondo blow-out clearly saved those companies significant time (and money),” the presidential panel wrote. “BP did not have adequate controls in place to ensure that key decisions in the months leading up to the blow-out were safe or sound from an engineering perspective.”
The sad point is that many of these lessons are the same as those identified by the Columbia Accident Investigation Board in 2003:
“The organizational causes of this accident are rooted in … history and culture, … years of resource constraints, fluctuating priorities, schedule pressures …. Cultural traits and organizational practices detrimental to safety were allowed to develop, including: reliance on past success as a substitute for sound engineering practices …; organizational barriers that prevented effective communication of critical safety information and stifled professional differences of opinion; lack of integrated management across program elements; and the evolution of … informal … decision-making processes that operated outside the organization’s rules. …”
Columbia Accident Investigation Board Report, Volume 1, Chapter 1, page 9
Both of these reports are highly recommended reading for anyone working in safety-critical industries. Key questions are:
Questions
Was the accident on the Deepwater Horizon rig just bad luck or poor management?
To what extent were the risks associated with the project identified and managed?
What, if any, systemic root causes exist for the accident?
Could this happen in your organisation?