
Understanding asset criticality—and where it fits within a reliability and asset management program—is itself critical. Asset criticality is foundational to uncovering risk, aligning decisions with organizational objectives, and ensuring that limited resources are applied where consequences truly matter.
Yet despite its importance, asset criticality is often misunderstood. In practice, we see criticality confused with condition, cost, urgency, or individual preference. These misunderstandings don’t just create inefficiencies—they obscure real risk and create a false sense of confidence.
At its core, asset criticality refers to the relative importance of an asset or system to the mission of an organization, considering the consequences of failure in context. It is not a measure of how badly an asset is performing, how expensive it is, or how much attention someone believes it deserves.
Over years of facilitating criticality studies, we have seen a consistent set of traps emerge: patterns that keep organizations from seeing risk clearly. Avoiding these traps is essential if asset criticality is to deliver its intended value.
"We already know what's critical" is the most common objection to conducting a formal asset criticality analysis. Operations personnel, often rightly, believe they already know what matters. With years of experience running a facility, that confidence is understandable, and in practice it is often mostly correct.
The issue is not experience. The issue is how that understanding is formed, validated, and preserved.
In many cases, criticality is based on outdated mission assumptions, informal rules of thumb, or numbers assigned to “fill in the box.” Without structured discussion, it is easy for criticality rankings to drift away from the current objectives of the organization. This is how situations arise where a non-essential support item receives the same priority as a core process asset.
When criticality is based on informal consensus or individual opinion, several risks emerge:
· Knowledge is lost when people leave, retire, or are unavailable
· Assumptions go untested as systems evolve
· Blind spots persist because “everyone knows” what matters
In practice, organizations almost always uncover assets that are far more critical than expected—and sometimes assets that were assumed to be critical turn out not to be. An asset criticality analysis exists to integrate, preserve, and validate institutional knowledge, turning personal insight into organizational understanding that survives people, time, and change.
Relying on assumptions about what is critical is not risk management—it is risk concealment.
Organizations often assume that having completed a HAZOP, FMEA, or FMECA eliminates the need for an asset criticality analysis. While these studies are valuable, they serve different purposes and answer different questions.
A Failure Modes and Effects Analysis starts at the asset level and focuses on how assets fail and how those failures affect performance. It does not determine how important an asset is to the mission of the organization. Conducting FMEAs without prior criticality often leads to significant effort being spent on assets that contribute little to overall risk.
Similarly, a HAZOP focuses on process safety and design intent. It can identify hazardous conditions or incorrect configurations, but it does not establish which assets are most consequential to organizational objectives.
Asset criticality analysis starts with the big picture. It evaluates systems and assets based on the consequences of failure relative to mission, safety, environment, service delivery, and compliance. Criticality provides the context that directs where deeper reliability analysis is justified—and where it is not.
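As a rough illustration of the consequence-based ranking described above, the sketch below scores an asset by its worst-case consequence across categories. The category names, the 1-5 scale, the use of a maximum rather than a sum, and the example assets are all assumptions for illustration, not a prescribed method.

```python
# Illustrative sketch only: a minimal consequence-based criticality score.
# Categories mirror those named in the text; weights and scale are invented.
CATEGORIES = ["mission", "safety", "environment", "service", "compliance"]

def criticality_score(consequences: dict) -> int:
    """Rank an asset or system by its worst-case consequence of failure.

    `consequences` maps each category to a 1-5 consequence rating.
    Taking the maximum (rather than a sum) keeps a severe consequence in
    any single category from being diluted by low ratings elsewhere.
    """
    return max(consequences.get(cat, 1) for cat in CATEGORIES)

# Hypothetical examples: a core process asset vs. a support item.
cooling_pump = {"mission": 4, "safety": 2, "environment": 3, "service": 4, "compliance": 1}
office_hvac = {"mission": 1, "safety": 1, "environment": 1, "service": 2, "compliance": 1}

assert criticality_score(cooling_pump) > criticality_score(office_hvac)
```

A real study would weight categories against organizational risk tolerance; the point here is only that ranking flows from consequence, not from cost or condition.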
Without that context, organizations routinely waste resources while leaving meaningful risk unaddressed.
An asset criticality analysis can certainly be done poorly—inefficiently, expensively, and with limited value. But when designed correctly, it can be completed far more efficiently than many organizations expect.
One key is starting at the system level, not the individual asset level. In most operations, the ratio of assets to systems is roughly ten to one. By identifying which systems are truly critical to the mission, the analysis can then be taken down to the asset level only where it matters. Systems with little consequence can be addressed later.
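The system-first approach above can be sketched as a simple triage: rate systems, then expand only the critical ones down to their assets. The data, the threshold, and the system names are hypothetical.

```python
# Illustrative sketch: triage at the system level first, then expand only
# critical systems down to the asset level. All data here is invented.
systems = {
    "cooling_water": {"criticality": 5, "assets": ["pump_A", "pump_B", "heat_exchanger"]},
    "compressed_air": {"criticality": 4, "assets": ["compressor_1", "dryer"]},
    "landscape_irrigation": {"criticality": 1, "assets": ["irrigation_pump", "timer"]},
}

THRESHOLD = 3  # only systems at or above this rating get asset-level analysis

assets_to_analyze = [
    asset
    for info in systems.values()
    if info["criticality"] >= THRESHOLD
    for asset in info["assets"]
]
# Low-consequence systems (here, landscape_irrigation) are deferred,
# shrinking the asset-level workload.
print(assets_to_analyze)
```

With a ten-to-one ratio of assets to systems, screening out even a few low-consequence systems removes a large share of the asset-level effort.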
Another key is using appropriate tools. While it is technically possible to perform criticality analysis using spreadsheets or paper, modern tools like Risk & Criticality Analyzer allow organizations to capture relationships, consequences, and mission parameters efficiently and consistently.
Finally, participation matters. The right mix of operations, maintenance, reliability, and data expertise—guided by an experienced facilitator—dramatically improves both efficiency and quality.
Delaying or avoiding a criticality analysis does not save effort. It simply ensures that other reliability activities proceed without a clear understanding of risk.
Cost is often mistaken for consequence.
Expensive assets naturally attract attention, and rightly so—they should be well cared for. But cost alone does not determine criticality. An expensive asset with redundancy or operational workarounds may have limited impact on mission if it fails.
Conversely, inexpensive components can be disproportionately critical. A small, low-cost item can disable an entire system. A simple utility, seal system, or control component may quietly determine whether a major asset can operate at all.
Criticality is shaped by mission, context, and system interaction—not price. The same asset can have very different criticality rankings depending on:
· How the system is designed
· What redundancy exists
· The operational and regulatory context
· The consequences of loss of function
When organizations equate cost with importance, they often over-invest in the wrong places while overlooking quiet single points of failure.
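To make the context-dependence concrete: the sketch below shows the same asset receiving different effective criticality depending on whether redundancy exists. The adjustment rule and ratings are invented for illustration.

```python
# Illustrative sketch: identical assets, different criticality by context.
# The redundancy adjustment (-2 on a 1-5 scale) is an arbitrary assumption.
def contextual_criticality(base_consequence: int, redundant: bool) -> int:
    """An installed spare or operational workaround lowers the effective
    consequence of failure, even though the asset itself is unchanged."""
    return max(1, base_consequence - 2) if redundant else base_consequence

# The same expensive pump in two different installations:
standalone = contextual_criticality(base_consequence=5, redundant=False)
spared = contextual_criticality(base_consequence=5, redundant=True)

assert standalone > spared
```

Price never appears in the calculation; only consequence and system context do.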
This trap stems from everyday language. We routinely describe things as being in “critical condition,” meaning they are performing poorly. In asset management, this creates confusion.
Condition describes how likely an asset is to fail. Criticality describes what happens if it does.
An asset in poor condition is not necessarily critical, and a highly critical asset may be in excellent condition. While condition influences failure probability, it does not change the consequences of failure.
Asset criticality analysis is not a condition assessment. Condition management and criticality analysis are separate disciplines that become powerful when used together—allowing organizations to focus condition-based work where consequences justify the effort.
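The complementary relationship can be expressed as the familiar risk product: condition informs likelihood, criticality informs consequence, and multiplying the two prioritizes work. The 1-5 scales and example assets below are assumptions, not a standard.

```python
# Illustrative sketch: condition and criticality answer different questions,
# but together they yield a simple risk priority. Scales (1-5) are invented.
def risk_priority(condition_likelihood: int, criticality_consequence: int) -> int:
    """Classic risk product: likelihood of failure x consequence of failure."""
    return condition_likelihood * criticality_consequence

# A badly worn but fully redundant fan vs. a healthy single point of failure:
worn_redundant_fan = risk_priority(condition_likelihood=5, criticality_consequence=1)
healthy_critical_seal = risk_priority(condition_likelihood=2, criticality_consequence=5)

assert healthy_critical_seal > worn_redundant_fan
```

Neither number alone would rank these correctly; the worn fan looks urgent by condition, while the seal system carries the greater risk.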
Every organization has respected experts whose experience carries weight. That experience is valuable—but when criticality becomes defined by individuals rather than structured analysis, priorities can become distorted.
Personal preferences, past frustrations, and pet projects can subtly influence rankings. Over time, this leads to:
· Misallocation of limited resources
· Erosion of trust in prioritization decisions
· Capital and maintenance decisions that don’t align with real risk
An asset criticality analysis does not eliminate expert judgment—it channels it. A structured, team-based approach ensures that experience informs decisions without allowing personal advocacy to override consequence-based reasoning.
A common misconception is that critical assets require aggressive preventive maintenance simply because they are critical.
High criticality does not mean “fix it, whether it’s broken or not.” It means pay attention.
In many cases, the most appropriate response to high criticality is enhanced monitoring, improved operating discipline, or ensuring that critical spares are available. Invasive maintenance on assets that are performing well introduces risk through infant mortality and reassembly errors.
Criticality determines the level of attention an asset deserves—not whether intervention is required. Intervention should be driven by failure definitions and evidence of degradation, not by importance alone.
This misconception again arises from confusing condition with criticality.
Asset condition can change rapidly due to wear, misalignment, corrosion, or lubrication issues. Asset criticality is driven by design, system interactions, redundancy, and mission objectives: factors that are relatively stable and change only through deliberate decisions.
Asset criticality does not need to be revisited weekly or monthly. Treating it as dynamic noise undermines its value as a stable foundation for planning and decision-making.
While asset criticality does not change day to day, it is not permanent.
Changes in mission objectives, regulatory requirements, operating context, system configuration, or risk tolerance can all affect criticality rankings. Periodic review ensures that rankings remain aligned with reality.
Revisiting criticality is far more efficient than the initial analysis and is essential for maintaining risk visibility over time. “Set it and forget it” allows risk to re-emerge unnoticed.
Understanding asset criticality makes risk visible—before asset failure does it for you.
When organizations fall into these traps, they do not eliminate risk; they simply lose sight of it. A well-executed asset criticality analysis aligns understanding across disciplines, preserves institutional knowledge, and ensures that effort is focused where consequences truly matter.
That is the real value of understanding asset criticality.
MentorAPM’s Risk & Criticality Analyzer helps organizations systematically uncover hidden risk, align asset decisions with mission objectives, and apply resources where consequences truly matter. Learn how a structured, consequence-based approach to asset criticality can strengthen reliability and asset management outcomes across the lifecycle.