Bjorn Andersson

The cost of safety

Blog Post created by Bjorn Andersson Employee on Nov 25, 2013

I recently had a reason to take another look at the BP Macondo / Deepwater Horizon catastrophe. It happened several years ago by now, but even this long after the initial cleanup, the legal process churns away to determine potential compensation and fines. It's been extremely costly for BP and the other companies involved in what happened. The impact on the environment is immeasurable.


The National Commission that was appointed to investigate the accident published a detailed report well over a year ago. This report identified a series of events and unfortunate decisions that ultimately led to the explosion, the loss of life and the disastrous spill of approximately 5 million barrels of oil.


Could it have been prevented? Can we avoid a repeat?


Among the factors leading up to the accident was the apparent inability to detect and recognize indications that, in hindsight, showed something very bad was about to happen. If these indications had been detected and understood early on, and appropriate action had been taken, then it's very likely that the catastrophic outcome could have been avoided or dramatically lessened.


For example, instruments were showing pressure readings that, when put into the context of what was happening and considering how they evolved over time, could have provided the warning needed to prevent the disaster. A case can be made that not only should these instruments have highlighted these signs more clearly, but that automatic trend analysis of the sensor data could have sounded the alarm and also provided guidance for corrective action.


This is essentially a Big Data problem, with data coming from many sensors and needing automatic analysis to detect trends and correlate events in close to real time. Minute differences can be hard to discover manually because they are very small, develop over a long time, or appear in unexpected combinations that in themselves can hint at something developing. Data needs to be analyzed at the speed of the events taking place, or to match application workflows. It's a pattern matching exercise against a knowledge database of what to look for.
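To make the idea concrete, here is a minimal sketch in Python (not taken from any Hitachi product) of one common streaming technique: comparing each new sensor reading against a rolling baseline of recent history and flagging large deviations. The window size, threshold and pressure values are invented for illustration.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling baseline.

    readings  -- iterable of sensor values, in arrival order
    window    -- how many recent values form the baseline
    threshold -- how many standard deviations count as anomalous
    """
    history = deque(maxlen=window)   # sliding window of recent values
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu = mean(history)
            sigma = stdev(history)
            # Flag values far outside the recent normal range
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alerts.append((i, value))
        history.append(value)
    return alerts

# Hypothetical pressure trace: stable with small noise, then a sudden spike
pressures = [100.0, 101.0] * 10 + [150.0]
print(detect_anomalies(pressures))  # the spike at index 20 is flagged
```

A real deployment would of course use richer models (trend slopes, cross-sensor correlation, a learned library of known-bad patterns), but the core loop — continuous comparison of live data against an expected baseline — is the same.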


This is one of the things that makes it interesting to be at Hitachi. The Hitachi group of companies has broad experience both as a user and as a vendor. I regularly discover more technology inside the company that can be applied to various solutions, much of it from outside of IT. Hitachi Data Systems (HDS), where I am, for example provides IT platforms today to store, index and search data. This data could have been collected during normal operation to form a knowledge base. "Near misses" could also be analyzed to build a library of patterns to look for and help prevent future events from turning catastrophic.


This is what we do: we help our customers work with all types of data from all sources, with common management across multiple platforms, applications and locations. Our customers can quickly and easily turn raw field, business and external data into reliable, actionable intelligence.


At the risk of stating the obvious, in environments like Macondo I believe we need to automate data gathering and real-time analysis to detect potentially dangerous situations, essentially taking the human out of the loop for the routine analysis. Investing in systems that can do this is a small cost to pay for the added safety. It's really more of an insurance, a safeguard against being hit by the extreme costs that follow an accident.


Now, one of the reasons for me to take another look at Macondo is the crossover of technologies between high performance computing and Big Data. These kinds of Big Data problems, of which Macondo is one example, are becoming a necessary part of doing business safely and efficiently. For more on this crossover of techniques, take a look at our presentation from SEG 2013: “HPC and Big Data, Relatives or Strangers?”