In our series “Managing Pipeline Threats – The Way Forward,” we explore the approach necessary to achieve a holistic maintenance mindset when it comes to the integrity management of oil and gas assets. In two previous articles, our experts Michael Smith, Roland Palmer-Jones and Michael Beller reviewed threats, the data needed to assess condition, the associated inspections and how this data is turned into useful information on pipeline condition. This third and final piece looks at the future of the process of turning information into practical decisions with a “predictive maintenance mindset.”

Pipelines are a key element in global energy transportation and constitute the safest, most environmentally friendly way to transport large quantities of oil and gas. They are found in all segments of the oil and gas infrastructure in the up-, mid- and downstream sectors and will retain their important role in the future.

Figure 1 – Segments in oil and gas infrastructure

High-pressure pipelines are also important in other industries, including water, mining and chemicals. Within the energy industry, their use may be extended – for instance to the future transportation of hydrogen.

Apart from being extremely useful from a logistics perspective, pipelines are highly valuable assets, both in a strategic and in an economic sense, since they are needed to keep homes warm (or cool), industry running and transport moving. Therefore, they need to be protected. Their safe and economical operation must be ensured at all times.

This is the focus of an effective maintenance process, which includes the need to identify potential threats, assess their impact on mechanical integrity and derive useful information regarding actions to be taken.

In this three-part series, we explore the approach necessary to achieve a holistic maintenance mindset.

In Part 1, we addressed diagnostics, the discipline of identifying and collecting the data needed to define a given status.

In Part 2, we saw how this data can be used to provide information on the present and likely future mechanical integrity of a pipeline.

Now, in Part 3, we will investigate how this condition information can assist in the development of a “predictive maintenance mindset,” address some of the new requirements raised by the Mega Rule for gas pipelines in the US and review some of those requirements from a global perspective.

In this series of articles, we will focus on the actual line pipe while not considering other important parts of the pipeline infrastructure, such as pumps, compressors, valves, etc.

Working with the data we have (ideally, everything for all locations at all times) and turning it into useful information that helps us make decisions moves us from watching pipelines fail to actively managing the threats.


Good decision-making requires robust, reliable and accurate information – useful information derived through data. In the first article, we discussed the collection of data required for pipeline integrity assessment purposes and used the analogy of the three-legged stool, as shown in Figure 2 below.

Figure 2 – The integrity “stool” using a data collection perspective

We can already collect data on the presence and size of defects such as dents, cracks and corrosion metal loss in pipelines using a variety of in-line inspection (ILI) technologies – not to mention the increasing capability to collect data on the materials and on loads applied or stresses experienced. These multiple data sources can be used in increasingly accurate methods to assess the severity of anomalies, and they can be compared over time in order to identify changes and degradation. The overall condition can be represented in a digital twin, providing valuable information for integrity management decisions. But how does this lead us to a “predictive maintenance mindset,” and what is the relevance for the US pipeline Mega Rule?


In simple terms, predictive maintenance is about conducting maintenance activities such as inspections and repairs when they are needed – not before, and of course not after. This can be visualized by imagining steady degradation to a state where action is needed and considering the impact of uncertainty, as shown in Figure 3. The greater the uncertainty, the earlier action needs to be taken, for example to prevent failure. If uncertainty is very high, predictive maintenance is of limited use: in the vast majority of cases, action will be taken far earlier than needed, which is not only a waste of time and resources, but also hazardous for the people taking the action and potentially damaging to the environment.

Therefore, when uncertainty is high, the pragmatic approach often becomes some form of time-based maintenance (for example, inspection every five years) based on qualitative experience and what works in most cases, perhaps moderated by a consideration of risk, with more frequent inspections of high-risk assets and less frequent inspections where risk is low. A major problem is that, while in many cases this time-based approach will be overly conservative, sometimes it will be non-conservative: the inspection will not be completed in time and a failure will occur. So, if we can reduce the uncertainty in our predictions, then we can start to get real value from predictive maintenance.
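
This trade-off can be sketched numerically. The following minimal Monte Carlo illustration (all depths, rates and thresholds are hypothetical, not values from any real pipeline) shows how uncertainty in a corrosion growth rate pulls the "act by" date forward, even when the expected remaining life is unchanged:

```python
import numpy as np

# Toy illustration of the Figure 3 logic: the more uncertain the degradation
# rate, the earlier action must be taken. All numbers are hypothetical.
rng = np.random.default_rng(42)

current_depth = 2.0    # mm of wall loss measured today
critical_depth = 6.0   # mm at which action is required

act_by = {}
for sigma in (0.02, 0.10):  # low vs. high uncertainty in growth rate (mm/yr)
    rates = np.clip(rng.normal(0.2, sigma, 100_000), 1e-6, None)
    life_yr = (critical_depth - current_depth) / rates
    # Act early enough that only 5% of plausible futures fail first
    act_by[sigma] = np.percentile(life_yr, 5)
    print(f"rate sd {sigma} mm/yr: median life {np.median(life_yr):.1f} yr, "
          f"act within {act_by[sigma]:.1f} yr")
```

In this toy case, a fivefold increase in rate uncertainty moves the conservative action date from roughly 17 years to about 11 years, even though the median remaining life stays near 20 years: the cost of uncertainty is paid in early, possibly unnecessary, intervention.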

Figure 3 – Time to failure


How can we reduce uncertainty? Well, in the previous articles we discussed the benefits of having more detailed information on defects, materials, loads and degradation rates (from highly repeatable inspections). These will undoubtedly reduce the uncertainty regarding current condition and past degradation. There is still the question of predicting the future, and as stock market pundits are obliged to tell us, past performance is no guarantee of future returns.

However, we can learn from experience, and we can do this in a data-driven way. We can benchmark condition, predict the present and predict the future.


How does my pipeline compare to others around the world?

This is the question at the heart of global benchmarking, a technique used to compare the condition of individual pipelines against the global population and provide a unique perspective on asset management performance.

Unsurprisingly, this demands a huge quantity of data (a sample size sufficient to represent the entire population, in fact), and that is where ROSEN’s Integrity Data Warehouse (IDW) comes into play. The IDW is a large and growing repository of ILI findings and corresponding pipeline information, which to date houses data from almost 10,000 pipelines around the world. It is the ideal substrate for meaningful benchmarking studies.


Using the ubiquitous threat of corrosion as an example, we can calculate and explore metrics such as anomaly count (the average number of corrosion anomalies per kilometer of pipeline), relative corroded area (the total surface area of corrosion relative to the pipeline surface area) and probability of exceedance (the probability that any anomaly exceeds its critical depth). Based on these metrics, pipelines can be plotted anonymously within a condition space, giving rise to a notion of “good” and “bad” pipelines [1]. With appropriate metrics, the same technique can be used for any other measurable pipeline threat, including cracks, dents and bending strain [2].
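
Simplified empirical versions of these metrics are straightforward to compute from an ILI anomaly list. The sketch below uses invented numbers and a crude exceedance fraction; a full probability-of-exceedance calculation would also propagate ILI sizing tolerance through each anomaly's critical-depth assessment:

```python
import numpy as np

# Hypothetical ILI results for one pipeline (illustrative values only)
pipeline_length_km = 42.0
pipe_surface_area_m2 = 42_000 * np.pi * 0.508   # 20" (0.508 m) OD line

# Per-anomaly peak depths (fraction of wall thickness) and corroded areas
depth_fraction = np.array([0.12, 0.25, 0.41, 0.08, 0.85, 0.30])
area_m2 = np.array([0.004, 0.010, 0.006, 0.002, 0.015, 0.008])
critical_fraction = 0.80   # assumed critical depth for every anomaly

anomaly_count = depth_fraction.size / pipeline_length_km        # anomalies/km
relative_corroded_area = area_m2.sum() / pipe_surface_area_m2   # dimensionless
exceedance = float(np.mean(depth_fraction >= critical_fraction))

print(f"anomaly count:          {anomaly_count:.3f} per km")
print(f"relative corroded area: {relative_corroded_area:.2e}")
print(f"fraction over critical: {exceedance:.3f}")
```

Each pipeline then becomes a single point in the condition space, and the same three numbers computed for thousands of other pipelines define where "good" and "bad" live.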

Figure 4 – Condition space populated with internal corrosion metrics from over 5,000 pipelines

This application of descriptive analytics is almost embarrassingly simple – and yet incredibly useful. Why?
For one, the technique gives individual pipeline operators an objective view of asset condition, highlighting those pipelines within a network that truly require attention. This allows for more accurate and unbiased prioritization of investigations and assessments. Moreover, it is a highly effective means of justifying plans to regulators or other stakeholders. If your pipeline is one of the best in the world, that is a convincing endorsement of your integrity management strategy. And who could refuse an expanded integrity management budget for a pipeline that is more severely corroded than 99% of the global population?


While the domain of predictive analytics is often concerned with predicting future events, it is also concerned with predicting present states that have not yet been observed.

A good example in the pipeline world is an asset that cannot be inspected using ILI technology – a reality for around 40% of the world’s pipelines. Historically, these pipelines have been managed with direct assessment techniques, which involve traditional modelling or susceptibility analyses followed by direct examination. Although this can be effective at times, it is a costly process with no guarantee of success. We therefore tend to know relatively little about the true condition of uninspected pipelines, particularly when they are at the bottom of the ocean or buried underground.

With advances in data science, however, we have better options. By learning from the conditions of similar pipelines that have been inspected in the past (i.e. pipelines from the IDW), we can begin to understand the different variables that predict pipeline threats, enabling us to develop models that predict the condition of uninspected pipelines.

Figure 5 – “Virtual” pipeline inspection

These “virtual” inspections are powered by supervised machine learning algorithms, with early models exhibiting remarkable performance. One study showed that in a pipeline affected by internal CO2 corrosion, a Bayesian network improved prediction accuracy by a factor of almost 10 compared to traditional modelling techniques [3]. Another showed that a gradient boosted decision tree could reliably predict the external corrosion condition of over 90% of pipelines based on basic design and construction information alone [4].
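
The shape of such a model is easy to demonstrate. The sketch below trains a gradient boosted tree on a fully synthetic training set; the feature names, coding scheme and risk relationship are fabricated for illustration and are not the features or model from the cited studies, where the training data would come from inspected pipelines:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a "virtual inspection" training set: basic design
# and construction attributes vs. a binary external corrosion condition.
rng = np.random.default_rng(0)
n = 2_000
age_yr = rng.uniform(0, 60, n)
coating = rng.integers(0, 3, n)   # fictitious coating-type codes
cp_ok = rng.integers(0, 2, n)     # cathodic protection effective? 0/1
wet_soil = rng.integers(0, 2, n)  # corrosive soil environment? 0/1

# Fabricated ground truth: old, tape-wrapped, unprotected lines in wet
# soil tend to be in poor condition (label 1)
risk = 0.03 * age_yr + (coating == 2) + 1.2 * (1 - cp_ok) + 0.6 * wet_soil
y = (risk + rng.normal(0, 0.5, n) > 1.8).astype(int)

X = np.column_stack([age_yr, coating, cp_ok, wet_soil])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

The value of the real models lies in the training data: the more inspected pipelines the database contains, the better the predictions for the uninspected ones.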

Further development is now focused on the incorporation of new predictor variables into these models, such as local environmental conditions, detailed operational data, and even socioeconomic factors – all of which can influence the complex phenomena of internal and external corrosion.


Once we are confident about the current state of a pipeline (either through ILI or a “virtual” inspection), the next-most important consideration is its future state. Threats such as corrosion and cracking are time dependent, and a pipeline that is in a fit state right now may be on a trajectory to failure.

As an example of such a “trajectory,” one can readily imagine points floating through the condition space in Figure 4. Roughly speaking, points move upwards as new anomalies appear; they move to the right as the existing anomalies grow deeper. The question is: how quickly are the points floating towards the top right of the plot (where the bad pipelines live)? Where will my pipeline be within the next year, or five years, or ten?

Predictive analytics techniques can help us out again here, allowing us to simulate the deterioration of pipelines and map out their likely condition over time.

Figure 6 shows the result of an internal corrosion growth simulation for an offshore crude oil pipeline.
The blue points within the condition space show the known beginning and end points for the pipeline based on two successive ILI campaigns (approximately eight years apart), while the contours and surface plot show the predicted condition at the time of the second ILI. Importantly, however, the predictive model has never seen the second set of ILI data; the simulation is instead completed by bootstrapping from a distribution of characteristic deterioration rates derived from other crude oil pipelines around the world.

The fact that the true endpoint is seen to lie at a local maximum in probability density is a testament to the efficacy of predictive analytics and an excellent demonstration of the power of data [5].
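
The bootstrapping step itself is conceptually simple. The sketch below (hypothetical depths and rates, not the simulation behind Figure 6) resamples growth rates from a pool observed on comparable pipelines and advances each anomaly over the horizon of interest:

```python
import numpy as np

# Growth rates (mm/yr) observed on comparable pipelines - hypothetical pool
observed_rates = np.array([0.05, 0.08, 0.12, 0.06, 0.20, 0.10, 0.15, 0.07])
depths_now = np.array([1.2, 2.5, 0.8, 3.1, 1.9])  # mm, current anomaly depths
horizon_yr = 8
n_sims = 10_000

rng = np.random.default_rng(1)

# Each simulation resamples one rate per anomaly and advances 8 years
rates = rng.choice(observed_rates, size=(n_sims, depths_now.size))
depths_future = depths_now + rates * horizon_yr

# The distribution of the deepest anomaly drives the maintenance decision
max_depth = depths_future.max(axis=1)
print(f"median deepest anomaly: {np.median(max_depth):.2f} mm")
print(f"95th percentile:        {np.percentile(max_depth, 95):.2f} mm")
```

Comparing the simulated distribution against a critical depth then yields exactly the kind of probabilistic "act by" date that predictive maintenance needs.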

Figure 6 – Tracking the deterioration of a pipeline


These methods, founded on big data and data science, can significantly reduce the uncertainty associated with understanding specific pipelines, particularly when combined with increasingly granular information on materials and loads and the advanced assessments possible using digital twins and highly repeatable inspection systems. The result will be accurate predictions of time to failure, enabling genuine benefits in terms of both safety and cost from the adoption of predictive maintenance. This type of predictive maintenance approach, one that considers all threats fully and accounts for all the relevant data (traceable, verifiable and complete), is what lies behind the recently enacted US pipeline regulations known as the Mega Rule. Not only does it make commercial sense to invest in high-quality and extensive data to improve predictions, saving both maintenance and failure costs, but it is now also the law!


[1] Palmer-Jones, R.; Smith, M. S.; Capewell, M.; Pesinis, K. and Santana, E. (2019). The good, the bad, and the ugly – categorizing pipelines using big data techniques. Pipeline Pigging & Integrity Management (PPIM) Conference, February 2019, Houston, Texas, United States of America.

[2] Smith, M.; Capewell, B.; Kerrigan, B.; Pesinis, K. and Santana, E. (2019). Application of descriptive analytics for benchmarking of pipelines with crack features. Paper no. IBP1198_19, Rio Pipeline Conference 2019, Rio de Janeiro, Brazil.

[3] Smith, M. S.; Barton, L.; Pesinis, K. and Laing, I. (2019). Intelligent Corrosion Prediction using Bayesian Networks. NACE CORROSION Conference and Exhibition, April 2019, Nashville, Tennessee, United States of America.

[4] De Leon, C. and Smith, M. S. (2020). Machine Learning to Support Risk and Integrity Management. Pipeline Pigging & Integrity Management (PPIM) Conference, February 2020, Houston, Texas, United States of America.

[5] Unpublished PTC