Automation, Work Management, and Use Cases: When Health Scores Create Work Orders

Who this is for: Maintenance planners, reliability engineers, and asset managers who want to connect health scores to actual work -- not as a theoretical exercise, but as a daily operational practice.

Estimated read time: 15 minutes

🔥 The Score That Changed a $6 Million Decision

A utility company was reviewing their annual transformer replacement budget. The traditional approach: rank transformers by age, review condition reports from the last inspection cycle, add input from district engineers, and submit a capital request. Same process for 15 years.

This year, they had Maximo Health running on their distribution transformer fleet. The health-and-risk-ranked list told a different story than the age-ranked list.

Three transformers that were high on the age-based list scored 72-80 on health -- in solid "Watch" territory but far from critical. They were old but well-maintained, operating in mild conditions, with clean oil test histories.

Two transformers that were NOT on the age-based list scored 28 and 34 -- deep in "Critical" territory. They were younger units, but installed in high-load, high-temperature environments with deteriorating oil quality and increasing fault gas concentrations. The predictive models estimated 12-18 months of remaining useful life.

The utility shifted $2.4 million from the three "old but healthy" transformers to the two "young but failing" ones. The first of the two critical transformers experienced an internal fault 11 months later -- caught during the planned replacement window, not as an emergency.

"If we had followed the age-based list, we would have replaced the wrong transformers and had a catastrophic failure on the ones we skipped."

That is what closing the loop looks like. Health scores that change capital decisions. Risk data that prevents failures. Numbers that create the right work orders.

🛠️ The Three Core Patterns

Every connection between Health and work management falls into one of three patterns.

Pattern 1: Risk-Based Backlog Prioritization

The problem: You have 200 open corrective work orders across 8 sites. Your planners process them roughly in the order they arrived. A work order on a non-critical backup pump gets the same scheduling priority as one on a bottleneck compressor.

The Health solution: Annotate every work order with the associated asset's health score, criticality, and risk rating. Sort the backlog by risk.

BEFORE: BACKLOG SORTED BY DATE

  WO#     Asset     Date Created    Status
  ─────────────────────────────────────────
  WO-101  P-4430    Feb 1           WAPPR
  WO-102  AHU-B3    Feb 2           WAPPR
  WO-103  P-4407    Feb 3           WAPPR   ← Critical pump, buried
  WO-104  CONV-12   Feb 4           WAPPR
  WO-105  AHU-B7    Feb 5           WAPPR

AFTER: BACKLOG SORTED BY RISK

  WO#     Asset     Risk     Health  Criticality
  ─────────────────────────────────────────────────
  WO-103  P-4407    CRIT     35      Very High    ← Now at the top
  WO-104  CONV-12   HIGH     48      High
  WO-101  P-4430    MOD      63      Medium
  WO-105  AHU-B7    LOW      75      Low
  WO-102  AHU-B3    LOW      82      Low

Implementation:

  • Add health and risk fields to work order list views
  • Train planners to sort by risk during weekly scheduling
  • In backlog review meetings, start with the high-risk segment
  • Track: percentage of weekly scheduled work on high-risk assets
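The risk sort itself is simple. A minimal sketch in Python, using illustrative field names (these are not actual Maximo object attributes) and the same five work orders from the tables above:

```python
from dataclasses import dataclass

# Hypothetical work order record; field names are illustrative,
# not actual Maximo object attributes.
@dataclass
class WorkOrder:
    wonum: str
    asset: str
    health: int        # 0-100, lower is worse
    criticality: str   # "Very High", "High", "Medium", "Low"

# Lower rank = more critical, so it sorts first
CRIT_RANK = {"Very High": 0, "High": 1, "Medium": 2, "Low": 3}

def sort_backlog_by_risk(backlog):
    """Highest risk first: worst criticality, then lowest health score."""
    return sorted(backlog, key=lambda wo: (CRIT_RANK[wo.criticality], wo.health))

backlog = [
    WorkOrder("WO-101", "P-4430", 63, "Medium"),
    WorkOrder("WO-102", "AHU-B3", 82, "Low"),
    WorkOrder("WO-103", "P-4407", 35, "Very High"),
    WorkOrder("WO-104", "CONV-12", 48, "High"),
    WorkOrder("WO-105", "AHU-B7", 75, "Low"),
]

for wo in sort_backlog_by_risk(backlog):
    print(wo.wonum, wo.asset)   # WO-103 (the buried critical pump) comes first
```

The point is not the code -- it is that the sort key combines criticality and health, so a critical asset in poor health can never be buried under routine work again.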

Pattern 2: Health-Triggered Work

The problem: An asset deteriorates between scheduled PMs. Nobody notices until the next inspection -- or until it fails.

The Health solution: Define rules that trigger new work when health conditions change.

Three automation levels:

LEVEL 1: MANUAL (Recommended starting point)
  ─────────────────────────────────────────────
  Planner reviews Health dashboard weekly
  Identifies assets that dropped to POOR or CRITICAL
  Manually creates inspection or diagnostic WOs
  Documents health score as justification

LEVEL 2: SEMI-AUTOMATED
  ─────────────────────────────────────────────
  System generates "recommended work" list based on rules:
    - Health dropped >15 points in 30 days
    - Asset entered CRITICAL band
    - High-risk asset with no planned work
  Planner reviews list and approves/rejects recommendations
  Approved items become work orders

LEVEL 3: FULLY AUTOMATED
  ─────────────────────────────────────────────
  Predefined rules create work orders automatically:
    - Health < 40 AND Criticality >= High → Create inspection WO
    - Health dropped >20 points in 7 days → Create diagnostic WO
    - RUL < 90 days AND Criticality = Very High → Create capital assessment
  Planner is notified but work order already exists
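The Level 3 rules above can be expressed as a small evaluation function. This is a sketch of the rule logic only, not a Maximo integration -- the asset dictionary and the returned work-request tuples are illustrative stand-ins:

```python
# Sketch of Level 3 rule evaluation; thresholds mirror the rules listed above.
# The asset dict and (type, assetnum) request tuples are illustrative,
# not an actual Maximo API.
CRIT_ORDER = ["Low", "Medium", "High", "Very High"]

def evaluate_rules(asset):
    """Return work-order requests triggered by an asset's current state.

    asset: dict with keys assetnum, health, health_7d_ago, criticality, rul_days.
    """
    requests = []
    # Rule 1: Health < 40 AND Criticality >= High -> inspection WO
    if asset["health"] < 40 and CRIT_ORDER.index(asset["criticality"]) >= CRIT_ORDER.index("High"):
        requests.append(("INSPECTION", asset["assetnum"]))
    # Rule 2: Health dropped > 20 points in 7 days -> diagnostic WO
    if asset["health_7d_ago"] - asset["health"] > 20:
        requests.append(("DIAGNOSTIC", asset["assetnum"]))
    # Rule 3: RUL < 90 days AND Criticality = Very High -> capital assessment
    if asset["rul_days"] < 90 and asset["criticality"] == "Very High":
        requests.append(("CAPITAL_ASSESSMENT", asset["assetnum"]))
    return requests

asset = {"assetnum": "P-4407", "health": 35, "health_7d_ago": 58,
         "criticality": "Very High", "rul_days": 120}
# Triggers the inspection rule (35 < 40, Very High) and the diagnostic
# rule (dropped 23 points in 7 days); RUL of 120 days does not trigger rule 3.
print(evaluate_rules(asset))
```

Note that one asset can trip multiple rules at once -- a real implementation would need deduplication logic so a deteriorating asset does not accumulate redundant open work orders week after week.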

Key insight: Most organizations should start at Level 1 and earn their way to Level 3. Automated work creation requires high confidence in scoring models, clean criticality data, and organizational trust that the system makes good decisions. Jumping to Level 3 before that trust exists creates noise that destroys credibility.

Pattern 3: PM Frequency Optimization

The problem: Every asset in a class gets the same PM frequency regardless of condition. You are over-maintaining healthy assets and under-maintaining deteriorating ones.

The Health solution: Use health trends to inform PM frequency adjustments.

PM FREQUENCY OPTIMIZATION

  Health Trend              Recommendation         Approval Required
  ───────────────────────────────────────────────────────────────────
  Stable HEALTHY (>80)      Extend interval 25%    Reliability engineer
  for 12+ months

  Stable WATCH (60-80)      Maintain current       No change needed
  no deterioration trend    interval

  Deteriorating toward      Shorten interval 25%   Reliability engineer +
  POOR (<60)                or add scope           Maintenance manager

  CRITICAL (<40)            Immediate intervention  Engineering review
                            beyond PM scope         required

Critical governance: PM frequency changes must go through a controlled process. Health data informs the decision; it does not make the decision. A reliability engineer should review and approve every frequency change.
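The decision table above can be sketched as a recommendation function. Consistent with the governance point, it returns a recommendation and a rationale for the reliability engineer to review -- it never changes the interval itself. Thresholds and band names follow the table; the function signature is illustrative:

```python
# Sketch of the PM-frequency decision table above. Returns a recommendation
# only; per the governance rule, a reliability engineer approves any change.
def recommend_pm_interval(current_days, health, stable_months, deteriorating):
    """Return (recommended_days, rationale).

    Health bands follow the table: >80 healthy, 60-80 watch,
    <60 poor, <40 critical.
    """
    if health < 40:
        return current_days, "CRITICAL: immediate intervention, engineering review"
    if deteriorating and health < 60:
        return round(current_days * 0.75), "Deteriorating toward POOR: shorten 25%"
    if health > 80 and stable_months >= 12 and not deteriorating:
        return round(current_days * 1.25), "Stable HEALTHY 12+ months: extend 25%"
    return current_days, "Stable WATCH: maintain current interval"

print(recommend_pm_interval(90, 85, 14, False))  # extension recommended
print(recommend_pm_interval(90, 55, 3, True))    # shortening recommended
```

Keeping the rationale string attached to every recommendation matters: it is what the approver sees, and it is what gets audited later when someone asks why an interval changed.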

🤖 Automation Maturity Model

Match your automation level to your organizational readiness:

AUTOMATION MATURITY

  STAGE 1: VISIBILITY (Months 1-3)
  ───────────────────────────────────
  Health scores visible on dashboards
  Planners review manually during meetings
  No automated work creation
  Focus: build trust in the scores

  STAGE 2: RECOMMENDATION (Months 3-6)
  ───────────────────────────────────
  System generates recommended work lists
  Planners approve or reject recommendations
  Health context added to work order views
  Focus: validate that recommendations are useful

  STAGE 3: ASSISTED AUTOMATION (Months 6-12)
  ───────────────────────────────────
  Rules create draft work orders for review
  PM frequency adjustments proposed by system
  Capital candidates auto-flagged for planning
  Focus: measure recommendation accuracy

  STAGE 4: AUTONOMOUS OPERATION (Month 12+)
  ───────────────────────────────────
  High-confidence rules create approved work
  PM frequencies adjust within defined bounds
  Exception-based oversight (humans review outliers)
  Focus: continuous model improvement
"Why can't we just go to Stage 4 immediately?"

Because Stage 4 requires every health score, every criticality rating, and every rule to be validated against real outcomes. That takes time. Organizations that skip to Stage 4 create a flood of auto-generated work orders, overwhelm their planners, and end up turning the automation off entirely.

Earn it. Stage by stage.

🏭 End-to-End Use Cases

Use Case 1: Pump Fleet in Manufacturing

The Setup

  • 120 centrifugal pumps across two manufacturing sites
  • Health models based on vibration, failure history, inspection ratings, and runtime
  • Criticality assessed based on production impact and safety

The Journey

MONTH 1: VISIBILITY
  - Health scores published for all 120 pumps
  - 14 pumps in CRITICAL, 23 in POOR
  - Reliability engineer validates: "These scores make sense"

MONTH 2: PRIORITIZATION
  - Weekly planning meeting starts with high-risk pump list
  - 8 of 14 CRITICAL pumps get diagnostic inspections
  - Findings: 5 confirmed bearing degradation, 2 seal issues, 1 alignment

MONTH 3: INTERVENTION
  - Corrective work completed on all 8 diagnosed pumps
  - Health scores improve: 6 pumps move from CRITICAL to WATCH
  - Planners now routinely sort backlog by risk

MONTH 6: OPTIMIZATION
  - PM frequencies adjusted for 15 pumps based on health trends
  - 8 stable/healthy pumps: interval extended from 90 to 120 days
  - 7 deteriorating pumps: interval shortened to 60 days
  - Net PM work orders: reduced by 12% while risk decreased by 28%

MONTH 12: RESULTS
  - Unplanned pump failures: down 41%
  - PM compliance on critical pumps: up from 78% to 94%
  - Maintenance cost per pump: down 15%
  - High-risk pump count: 14 → 4

Use Case 2: Transformer Fleet at a Utility

The Setup

  • 850 distribution transformers across the service territory
  • Health models using oil test results, loading history, age, and predictive RUL
  • Criticality based on grid reliability impact and customer count

The Journey

MONTH 1: PORTFOLIO VIEW
  - Health scores on all 850 transformers
  - 67 in CRITICAL or POOR band
  - Capital planning team sees ranked replacement list

MONTH 3: CAPITAL ALIGNMENT
  - Top 20 replacement candidates identified by health + risk
  - 8 of 20 were NOT on the previous age-based list
  - Capital request revised: $6.2M redirected based on health data

MONTH 6: PREDICTIVE ENRICHMENT
  - Maximo Predict RUL estimates integrated into health model
  - 12 additional transformers flagged as high-risk based on trajectory
  - Inspection program created for these 12

MONTH 12: RESULTS
  - Zero transformer failures on units in the capital replacement program
  - 2 failures on units NOT in the program (both low-criticality)
  - Regulatory reporting now includes health and risk metrics
  - Capital approval process standardized on Health data

Use Case 3: HVAC Systems in Campus Facilities

The Setup

  • 200 air handling units across a corporate campus (15 buildings)
  • Health models using runtime, failure history, comfort complaints, energy metrics
  • Criticality based on building function (data center > lab > office)

The Journey

MONTH 1: TARGETED VISIBILITY
  - Health scores focused on 40 AHUs serving critical spaces
  - 6 AHUs in data center building flagged as POOR health
  - Facilities manager sees risk concentrated in one building

MONTH 2: SEASONAL PREPARATION
  - Summer readiness campaign prioritized by health scores
  - POOR-health AHUs get preventive repair before peak cooling season
  - Scope expanded to include 15 additional units in labs

MONTH 6: RESULTS
  - Summer comfort complaints: down 62% vs. previous year
  - Data center cooling incidents: zero (vs. 3 previous year)
  - Energy cost per AHU: down 8% (healthier units run more efficiently)
  - Replacement candidates identified for next fiscal year budget

📈 Measuring What Matters

You cannot justify Health without proving it changes outcomes. Track these metrics:

Health and Risk Metrics

  • High-risk asset count and trend -- the headline number
  • Health band migration -- how many assets moved to better (or worse) bands
  • Average health score -- by site, class, and portfolio

Work and Reliability Metrics

  • Work targeting ratio -- percentage of work on high-risk assets (higher is better)
  • Unplanned failure rate -- for assets under Health management (lower is better)
  • PM optimization -- net change in PM work orders (fewer with same or better outcomes)
  • MTBF trend -- for health-managed asset classes
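The work targeting ratio is worth making concrete, since it is the metric that shows Pattern 1 is actually working. A minimal sketch, assuming scheduled work is available as (risk band, hours) pairs -- the data shape is illustrative:

```python
# Sketch of the "work targeting ratio": share of scheduled work hours
# landing on high-risk assets. The (risk_band, hours) data shape is
# illustrative, not a Maximo export format.
def work_targeting_ratio(scheduled_work):
    """scheduled_work: list of (risk_band, hours) tuples.

    Returns the fraction of hours on HIGH or CRIT risk assets.
    """
    total = sum(hours for _, hours in scheduled_work)
    high_risk = sum(hours for band, hours in scheduled_work
                    if band in ("HIGH", "CRIT"))
    return high_risk / total if total else 0.0

week = [("CRIT", 16), ("HIGH", 8), ("MOD", 24), ("LOW", 32)]
print(f"{work_targeting_ratio(week):.0%}")  # 24 of 80 hours
```

Trend this week over week: if the ratio is flat while high-risk assets pile up, planners are seeing the scores but not acting on them.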

Financial Metrics

  • Capital decision quality -- assets replaced based on health data vs. age/gut feeling
  • Avoided failure cost -- estimated cost of failures prevented by health-driven intervention
  • Maintenance cost per asset -- trending down for health-managed fleets

Key insight: The most powerful ROI story is a specific failure that was prevented. "Transformer TF-447 was heading for catastrophic failure. Health flagged it 6 months early. We replaced it for $120K instead of responding to an emergency that would have cost $1.8M." That story is worth more than any dashboard.

🎯 The 7 Commandments of Health-Driven Work

  1. Thou shalt close the loop. Health scores that do not create work orders are decoration. Connect scores to actions.
  2. Thou shalt start manual. Let planners decide what to do with health data before automating decisions.
  3. Thou shalt earn automation. Each stage of automation requires proven trust in scores and rules.
  4. Thou shalt not automate garbage. If your health scores are unreliable, automated work creation amplifies the unreliability.
  5. Thou shalt govern PM changes. Health data informs PM frequency adjustments. Reliability engineers approve them.
  6. Thou shalt measure outcomes. Track failure rates, risk trends, and cost impacts. Prove that Health-driven work is better.
  7. Thou shalt tell the prevention story. Every avoided failure is a story worth telling. Collect them. Share them. They justify the program.

Next in the series: Part 8 -- Best Practices, Governance, and Patterns covers how to scale from a pilot to an enterprise program and keep it running for years.

About TheMaximoGuys: We help Maximo developers and teams build, configure, and optimize IBM Maximo Application Suite. Our content comes from real implementations, not marketing slides. If it is in our blog, we have done it in production.