Author: Edmund Bennett

Predictive Analytics – Overcoming Barriers to Adoption in the Energy Industry

Edmund Bennett explores the role of predictive services in the complex ecosystem of pipeline integrity management and answers the question: Is the industry ready to adopt such measures?

Supporting operators with their real problems

“In the land of the blind, the one-eyed man is king.” This old proverb speaks to a simple truth: even a small edge in knowledge can make a big difference – especially when tackling complex problems like pipeline integrity. The challenge is constantly shifting between various threats, missing data, and evolving operational demands. Add to that changing risk profiles and rising expectations driven by tech advancements, and it’s clear: staying ahead requires proactive, forward-thinking strategies.

There is no silver bullet – only the solutions we grind out through hard work.

Emerging technologies like advanced inspection tools and survey techniques are crucial, especially with new demands from products like hydrogen, which introduce fresh integrity concerns. The industry must rise to meet these risks with confidence.

Integrity management needs to be flexible – using data where it exists, adapting where it doesn’t, and bridging the gap between risk- and rule-based approaches.

Predictive solutions must be accurate, transparent, and decision-ready – ideally working across piggable, unpiggable, and everything in between. But can they reach that level? And if they do, will that be enough to shift them from innovation to industry standard?

The prediction engine – transparent definitions

Prediction is and always has been a normal part of integrity management. From the “wet finger in the air” of the integrity engineer to conservative, cold, rules-based decision-making, the courses we set are the results of our attempts to navigate the murky waters of integrity management. Predictions are always made: “This approach will get us there,” or “This will suffice.” All that has changed in recent years is the techniques applied in generating these predictions and their level of abstraction away from the problem at hand. Sadly, with this greater complexity and abstraction come additional challenges in their interpretation and validation. Modern AI must undergo rigorous validation through multiple stages, including (but certainly not limited to):

  • Input data validation – does the data we want to use relate to the problem we want to solve?
  • Model output validation – are the model results “good enough,” and do they mean what we think they mean?

Additionally, the quality of the results must be communicated clearly, in a manner that can be easily understood – it is simply not fair to demand that a busy operator learn the language of data science in order to walk away with something useful; or walk away they shall. That is why, within ROSEN, we report the performance of the models behind our predictive services for unpiggable pipelines in a clear and consistent manner.
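As a purely illustrative sketch of what such operator-facing reporting could look like – the metric choices, names, and wording below are hypothetical, not a description of ROSEN's actual implementation – a binary “does this joint exceed a metal-loss threshold?” prediction can be summarised in plain statements rather than data-science jargon:

```python
# Minimal, hypothetical sketch: translating binary classification results into
# operator-facing performance statements. A "positive" is a pipe joint predicted
# to exceed a metal-loss threshold; labels come from a verification inspection.

from dataclasses import dataclass

@dataclass
class PerformanceSummary:
    precision: float   # of joints flagged, fraction truly exceeding the threshold
    recall: float      # of joints truly exceeding the threshold, fraction flagged

def summarise(predicted: list[bool], observed: list[bool]) -> PerformanceSummary:
    tp = sum(p and o for p, o in zip(predicted, observed))
    fp = sum(p and not o for p, o in zip(predicted, observed))
    fn = sum(not p and o for p, o in zip(predicted, observed))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return PerformanceSummary(precision, recall)

def plain_language(s: PerformanceSummary) -> str:
    # The point is the wording: performance expressed in the operator's terms.
    return (
        f"Of the joints flagged for attention, {s.precision:.0%} were confirmed in "
        f"the field; {s.recall:.0%} of all confirmed problem joints were flagged."
    )
```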

Taking small but consistent steps

How do we respond to the availability of new data relevant to predicting pipeline integrity? As we are in the game for the long term, a clear process for integrating new inspections or new variables must be in place by default.

The nature of inspection data is that it often evolves over time – either through changes in inspection technology or variations in data quality. Predictive models and integrity frameworks must accommodate this reality. A well-structured system allows for the seamless inclusion of new inspection runs, revalidating the pipeline’s predicted performance against new, higher-fidelity inputs.

Likewise, geospatial data sources such as terrain movement, land use changes, or encroachment records are frequently updated. Integrating this intelligence allows for more dynamic prediction of geohazard-related threats. This does not require a wholesale reinvention of models but rather a modular system in which new insights are used to adjust risk scores or refocus field investigations.
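To illustrate the modular idea – a minimal sketch with hypothetical segment names, thresholds, and uplift factors, not a specific ROSEN model – a geohazard update might adjust existing per-segment risk scores without retraining anything:

```python
# Illustrative sketch: new ground-movement intelligence adjusts an existing
# per-segment risk score instead of forcing a wholesale model rebuild.

BASE_RISK = {"SEG-001": 0.12, "SEG-002": 0.35}  # scores from the existing model

def apply_geohazard_update(base_risk: dict[str, float],
                           ground_movement_mm: dict[str, float],
                           threshold_mm: float = 25.0,
                           uplift: float = 1.5) -> dict[str, float]:
    """Scale the risk score of segments where recent terrain movement exceeds
    a threshold; leave every other segment untouched."""
    adjusted = {}
    for seg, risk in base_risk.items():
        movement = ground_movement_mm.get(seg, 0.0)
        adjusted[seg] = min(1.0, risk * uplift) if movement > threshold_mm else risk
    return adjusted

print(apply_geohazard_update(BASE_RISK, {"SEG-002": 40.0}))
# SEG-002 is uplifted and becomes a candidate for a refocused field investigation.
```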

Flexibility is also key, with the ability to incorporate any specific additional data that an operator might bring, such as cathodic protection, pipeline survey, or operational data. Such data is both a blessing and a curse: while hugely valuable, it forces us to exclude inspection data for which the same information is not present. Nevertheless, we anticipate that small, focused models trained on more directly relevant data will lead to higher performance overall. This viewpoint is corroborated by the fact that it is possible to produce dramatically better predictions on pipelines that are partially inspected.

Ultimately, the message here is one of pragmatism. It is not about creating the perfect model and then walking away; it is about remaining open to iteration, improvement, and learning. Evolution through consistent small steps will always outperform the pursuit of a single perfect solution.

Helpful services, actionable intelligence

The key to successful prediction at scale is flexibility. As discussed, different people want different things, for different reasons. For example, an integrity engineer may seek early indicators of localized metal loss, while a risk manager is more concerned with the system-wide likelihood of failure.

Predictive services must be adaptable to these varied end uses. For predictions using inspection data as an input, the choice of relevant parameters must account not only for engineering theory but also for the decision-making context in which the results will be applied. Predictions should be:

  • Context-aware – Results must speak the language of the person making the decision, whether in terms of risk, cost, or operational planning.
  • Layered and modular – Depending on the question being asked, different levels of confidence, granularity, or scope should be accessible.
  • Action-oriented – Outputs should not simply offer a prediction but should suggest what to do next. This might be a recommended inspection, a repair priority, or a rerouting consideration.

This is what we mean when we talk about “actionable intelligence”: it is not just about knowing something could happen. It is about understanding what you should do about it.
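One way to picture this – purely as an illustrative sketch, with hypothetical field names rather than a defined industry or ROSEN schema – is a prediction record that carries its context and recommended next step alongside the number itself:

```python
# Hypothetical shape for "actionable intelligence": the prediction plus the
# audience, confidence, and the recommended next action it should drive.

from dataclasses import dataclass
from typing import Literal

@dataclass
class ActionablePrediction:
    segment_id: str
    threat: str                                  # e.g. "external corrosion"
    likelihood: float                            # probability over the assessment interval
    confidence: Literal["low", "medium", "high"]
    audience: Literal["integrity_engineer", "risk_manager"]
    recommended_action: str                      # what to do next
    rationale: str                               # why the action follows from the prediction

example = ActionablePrediction(
    segment_id="SEG-014",
    threat="external corrosion",
    likelihood=0.18,
    confidence="medium",
    audience="integrity_engineer",
    recommended_action="Prioritise a verification dig at the two deepest predicted features.",
    rationale="Predicted depths approach the repair criterion before the next planned inspection.",
)
```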

Learning to embrace risk

Meanwhile, operators also have some work to do, particularly those whose approaches are driven by responses to regulatory requirements. Risk and uncertainty are inherent across all aspects of pipeline integrity management, from uncertainty in material properties and survey measurements to inspection tool signal evaluation. Inspections are, of course, point-in-time measurements of stochastic processes with complex onset criteria and varied consequences. The fundamental paradigm underpinning integrity management is and always has been risk. To keep complexity tractable, as an industry we apply pragmatic rules that manage this risk efficiently; however, under the hood, it is the interpretation of historical failures, our quantitative understanding of pipeline mechanics, and our numerical validation of measurements that fuel the fire of our understanding and yield the strategies we employ.

It is with this perspective that I urge the industry to prepare itself to accept the fundamentally risk-based nature of integrity management. By this, I mean that operators should:

  • Identify and collect data that traditional engineering wisdom dictates should be relevant to the integrity of a given pipeline. The data required will typically vary according to the threat in question. In many cases, this will align with operators’ own plans for data capture.
  • Ensure that adequate records of failures (trailing indicators) and other relevant measurements (leading indicators) are maintained within the organization.
  • Ensure that adequate pipeline records are maintained, including repairs and reroutes. Inspection data from multiple vendors should be transformed into a common format, and a link should be established to the original inspection data for provenance.
  • Proactively engage in collaborative ventures, sharing the valuable pipeline data that will contribute to a safer and more efficient industry overall.

These relatively straightforward actions will help position operators to respond well to changes across the industry brought about, for example, by updates to regulatory requirements.
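As an illustration of the common-format point above – a minimal sketch with hypothetical field and column names, not a prescribed schema – a vendor-neutral feature record can carry a provenance reference back to the original inspection deliverable:

```python
# Hypothetical vendor-neutral feature record with a provenance link back to the
# original deliverable. Real alignment would follow the operator's data model.

from dataclasses import dataclass

@dataclass
class FeatureRecord:
    pipeline_id: str
    log_distance_m: float        # distance along the pipeline
    clock_position_deg: float    # circumferential position
    feature_type: str            # e.g. "metal loss", "dent"
    depth_pct_wt: float          # depth as a percentage of wall thickness
    source_vendor: str           # who performed the inspection
    source_run_id: str           # identifier of the original run
    source_record_ref: str       # pointer to the row/file in the vendor deliverable

def from_vendor_a(row: dict) -> FeatureRecord:
    """Map one vendor's (hypothetical) column names onto the common record,
    keeping a reference to the original row so provenance can always be traced."""
    return FeatureRecord(
        pipeline_id=row["line"],
        log_distance_m=float(row["odometer_m"]),
        clock_position_deg=float(row["oclock"]) * 30.0,  # o'clock hours to degrees
        feature_type=row["anomaly_class"],
        depth_pct_wt=float(row["depth_percent"]),
        source_vendor="Vendor A",
        source_run_id=row["run_id"],
        source_record_ref=row["feature_id"],
    )
```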

Feedback cycles and field verification

Predictions alone are never enough. They must be validated, refined, and challenged. Without feedback, even the most sophisticated models risk drifting away from relevance. Integrity management, therefore, must include robust mechanisms for feedback and field verification.

This includes:

  • Post-activity verification – Comparing predicted versus observed conditions during dig programs or remediation work. Was the model correct? If not, why not?
  • Back-analysis of missed predictions – When failures or anomalies occur without prior warning, a thorough analysis must follow. Were signals missed? Was input data flawed? Did the model rely on incorrect assumptions?
  • Operator feedback loops – Field engineers and technicians are often the first to spot when something does not add up. Their input should be actively solicited and integrated into system design and model refinement.

This process of feedback is not just a quality control mechanism; it is part of the scientific cycle: prediction, observation, and correction. A living, learning system will always outperform a static one.
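As a simple illustration of post-activity verification – hypothetical names and tolerances, not a prescribed procedure – predicted and field-measured depths from a dig programme can be compared and out-of-tolerance features flagged for back-analysis:

```python
# Illustrative sketch: compare predicted and field-measured metal-loss depths
# from a dig programme and flag features where the prediction missed by more
# than a stated tolerance, prompting back-analysis and model refinement.

def verify_digs(predicted_depth_pct: dict[str, float],
                measured_depth_pct: dict[str, float],
                tolerance_pct: float = 10.0) -> list[str]:
    """Return the features whose predicted depth differs from the field
    measurement by more than the tolerance (in % of wall thickness)."""
    out_of_tolerance = []
    for feature_id, predicted in predicted_depth_pct.items():
        measured = measured_depth_pct.get(feature_id)
        if measured is None:
            continue  # not excavated or not measured in this campaign
        if abs(predicted - measured) > tolerance_pct:
            out_of_tolerance.append(feature_id)
    return out_of_tolerance

flagged = verify_digs({"F-101": 35.0, "F-102": 22.0}, {"F-101": 52.0, "F-102": 25.0})
print(flagged)  # ['F-101'] -> feed into back-analysis: Was the model correct? If not, why not?
```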

A mandate for pipeline integrity management

Where processes exist to support predictive services, their value is immediately enhanced by formal recognition within the organizational framework. Without a mandate, whether regulatory, commercial, or cultural, predictive analytics can remain sidelined, treated as an optional extra rather than a core business asset.

To avoid this fate, organizations must do more than invest in models or technologies. They must create the conditions for these tools to be used effectively. This includes:

  • Clear accountability – Define who is responsible for acting on predictive insights and empower them to make decisions based on forward-looking evidence.
  • Institutional commitment – Predictive integrity management must be written into integrity procedures, risk frameworks, and performance reporting.
  • Strategic alignment – Ensure that predictive analytics aligns with wider goals, such as safety, sustainability, efficiency, and asset longevity.

The value of prediction only becomes real when it changes people’s behavior. That change will only happen at scale when the organization commits to the predictive approach not just as a tool but as a mindset.


Edmund Bennett 

Principal Data Scientist, ROSEN Group
