Best Practices, Governance, and Patterns: Making Maximo Health Stick
Who this is for: Asset management leaders, program managers, and reliability directors who need to take Health from a successful pilot to a permanent, enterprise-wide capability. This is not about technology. This is about organizational survival.
Estimated read time: 18 minutes
The Program That Died at 18 Months
A global manufacturer had one of the best Maximo Health pilots we have ever seen. Fifty critical pumps at their flagship plant. Expert-validated scoring model. Dashboards embedded in weekly planning meetings. Health-driven work prioritization that reduced unplanned failures by 35% in six months.
The pilot was a textbook success.
Then the program owner got promoted. The reliability engineer who built the models moved to a different division. The new maintenance manager had different priorities. The IT team that supported the dashboards was reallocated to an ERP migration.
Within 18 months:
- Nobody updated the criticality scores when a new production line was added
- Three inspection forms were changed without updating the corresponding health indicators
- The dashboard showed health trends that stopped making sense because the underlying data mappings were broken
- Planners stopped opening the dashboard because the scores did not match reality
By month 24, the Health application was still running. The scores were still calculating. But nobody was looking. Nobody was acting. The program was technically alive and operationally dead.
"We did not turn Health off. We just stopped caring."
This is the most common way Health programs die. Not with a bang. Not with a decision to shut it down. With a slow drift into irrelevance as the people, processes, and governance that made it work gradually disappeared.
This blog is about preventing that.
The Three Phases of Health Maturity
Every successful Health program we have seen follows the same maturity arc:
PHASE 1: PROVE (Months 0-6)
  Scope: 30-50 assets, one site, one asset class
  Goal: Validate scoring models against expert judgment
  Metric: Are scores trusted? Are they used in planning?
  Risk: Scope creep, data denial, configuration paralysis
  Exit criteria: Scores validated, embedded in one process
PHASE 2: EXPAND (Months 6-18)
  Scope: Additional asset classes, sites, or systems
  Goal: Demonstrate value at broader scale
  Metric: Risk reduction, failure prevention, cost impact
  Risk: Model inconsistency across sites, loss of focus
  Exit criteria: Multiple sites/classes operational
PHASE 3: SUSTAIN (Month 18+)
  Scope: Enterprise-wide, all critical asset classes
  Goal: Health is permanent infrastructure, not a project
  Metric: Long-term reliability and cost trends
  Risk: Governance decay, model drift, personnel turnover
  Exit criteria: There is no exit. This is the new normal.

The fatal mistake is treating Phase 1 as the end. A successful pilot is not a successful program. It is a successful experiment. The hard work is Phases 2 and 3.
The Rollout Playbook
Three strategies for expanding beyond the pilot. Most organizations blend them.
Strategy A: Phased Expansion
Expand the pilot scope incrementally -- same site, more asset classes; or same asset class, more sites.
PHASED EXPANSION EXAMPLE
Pilot: 50 pumps at Site A
Phase 1: + 80 compressors at Site A
Phase 2: + 50 pumps at Site B (reuse pump model)
Phase 3: + 120 transformers at Sites A and B (new model)
Phase 4: + Remaining critical assets at both sites

Advantages:
- Controlled risk and effort
- Each phase builds on lessons from the previous one
- Champions emerge organically from early-adopting teams
Best for: Organizations with mixed data quality across sites or classes where some areas are much readier than others.
Strategy B: Asset-Class-Focused
Roll out one asset class across all (or many) sites simultaneously.
ASSET-CLASS EXPANSION EXAMPLE
Pilot: 50 pumps at Site A
Phase 1: All critical pumps across 5 sites (300 assets)
Phase 2: All transformers across 5 sites (200 assets)
Phase 3: All HVAC in critical buildings (150 assets)

Advantages:
- Standardized models across sites enable benchmarking
- Reliability teams can compare performance across locations
- Model governance is simpler (one model per class)
Best for: Organizations with consistent data quality and strong central reliability or asset management teams.
Strategy C: Site-Focused
Roll out all asset classes at one site before moving to the next.
SITE-FOCUSED EXPANSION EXAMPLE
Pilot: 50 pumps at Site A
Phase 1: All critical assets at Site A (200 assets, 4 classes)
Phase 2: All critical assets at Site B (reuse models)
Phase 3: All critical assets at Site C

Advantages:
- Deep local engagement and buy-in
- Site teams become self-sufficient
- Holistic view of site risk
Best for: Organizations with strong site-level management and autonomy.
Key insight: The strategy you choose matters less than the discipline of executing it. Pick one, commit to it, and resist the urge to change strategies mid-rollout.
Governance That Works (Without Killing Momentum)
Governance is not bureaucracy. It is the immune system that keeps your Health program healthy. Without it, models drift, data degrades, and trust erodes. With too much of it, nothing moves.
The Governance Roles
Role | Responsibility | Time Commitment
Program Owner | Strategic alignment, executive reporting, resource allocation | 2-4 hours/week
Asset Health Lead | Model design, indicator management, threshold tuning | 8-16 hours/week
Site Champions | Local adoption, data quality monitoring, user support | 4-8 hours/week per site
Data Stewards | Master data quality, coding standards, inspection templates | 4-8 hours/week
IT/Platform Support | Application health, integration maintenance, access management | 2-4 hours/week
Key insight: The Asset Health Lead role is the single most critical position. If this role is vacant or assigned to someone with conflicting priorities, the program will degrade within 6-12 months. Budget for it. Protect it.
Model Governance
What to govern:
MODEL GOVERNANCE CHECKLIST
CATALOG:
[ ] All health models documented (indicators, weightings, thresholds)
[ ] Criticality models documented (criteria, scoring rules, examples)
[ ] Model versions tracked with change history
CHANGE CONTROL:
[ ] All model changes go through review and approval
[ ] Impact assessment required before changing weightings or thresholds
[ ] Stakeholders notified of changes before they take effect
REVIEW CYCLE:
[ ] Annual model review (more frequent in first year)
[ ] Post-incident review when health scores missed a failure
[ ] Data quality review alongside model review
APPROVAL:
[ ] New indicators approved by Asset Health Lead
[ ] Threshold changes approved by reliability engineering
[ ] Criticality changes approved by cross-functional committee

Data Governance
DATA GOVERNANCE ESSENTIALS
OWNERSHIP:
[ ] Each data source has a named owner
[ ] Data quality targets defined (e.g., >80% failure code completeness)
[ ] Escalation path for data quality issues
MONITORING:
[ ] Data quality metrics tracked alongside health metrics
[ ] Stale data flagged (meters not updated, inspections overdue)
[ ] Coding standard compliance reviewed quarterly
IMPROVEMENT:
[ ] Data quality improvement actions tracked and resourced
[ ] Training provided for technicians and inspectors
[ ] Positive reinforcement for good data practices

The Pitfalls That Kill Programs
We have seen dozens of Health implementations. The ones that fail share common patterns. Learn from their mistakes.
Pitfall 1: The Complexity Trap
What happens: The team tries to include every possible indicator, every nuance, every edge case in the first version. The model has 15 indicators per component. Configuration takes months. Scores are impossible to explain.
Why it kills: Complexity breeds distrust. When a planner asks "why did this pump score 42?" and the answer requires a 20-minute explanation of 15 weighted indicators, the planner stops asking. And stops looking.
The fix: Start with 3-5 indicators per component. Maximum. Add complexity when you have evidence it improves decisions. Not before.
"Our model has 23 indicators. It is the most comprehensive in the industry."
Does the planner trust the score? No? Then your 23 indicators are producing a number that nobody uses.
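The "start with 3-5 indicators" fix can be made concrete. Here is a minimal sketch of a component score as a weighted average of three 0-100 indicator scores; the indicator names and weightings are illustrative assumptions, not a Maximo Health configuration. The point is that each input can be explained in one sentence.

```python
# A 3-indicator health score: a weighted average of 0-100 scores.
# Indicator names and weights are illustrative, not a product config.

def health_score(indicators, weights):
    """Weighted average of 0-100 indicator scores."""
    total_weight = sum(weights[name] for name in indicators)
    return sum(indicators[name] * weights[name] for name in indicators) / total_weight

pump = {"vibration": 70, "last_inspection": 85, "age_vs_expected_life": 40}
weights = {"vibration": 0.5, "last_inspection": 0.3, "age_vs_expected_life": 0.2}

score = health_score(pump, weights)
print(f"Health score: {score:.1f}")  # explainable in three sentences, not twenty minutes
```

With three inputs, "why did this pump score 68?" has an answer a planner can repeat.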
Pitfall 2: The Data Ostrich
What happens: The team knows the data has problems but proceeds as if it does not. Failure codes are inconsistent. Inspection ratings are inflated. Meter readings are stale. But the health scores look reasonable on the surface.
Why it kills: The first time a "Healthy" asset fails, someone investigates and discovers the data problems. Trust in the entire system collapses. "If the data is bad, why should I believe any score?"
The fix: Lead with data quality honesty. Show which data is good and which needs improvement. Track data quality metrics on the same dashboard as health metrics. Make it visible.
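The "make it visible" fix amounts to computing data quality metrics the same way you compute health scores. A minimal sketch, using assumed field names rather than the actual Maximo data model, might flag stale meters and measure failure-code completeness:

```python
# Sketch: surface data quality next to health scores.
# Field names and thresholds are illustrative assumptions.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=30)  # assumed staleness threshold

def stale_meters(readings, today):
    """Return asset IDs whose latest meter reading is older than the threshold."""
    return [asset for asset, last_read in readings.items()
            if today - last_read > STALE_AFTER]

def failure_code_completeness(work_orders):
    """Share of closed work orders that carry a failure code."""
    closed = [wo for wo in work_orders if wo["status"] == "CLOSED"]
    coded = [wo for wo in closed if wo.get("failure_code")]
    return len(coded) / len(closed) if closed else 1.0

today = date(2024, 6, 1)
readings = {"PUMP-101": date(2024, 5, 20), "PUMP-102": date(2024, 3, 1)}
print(stale_meters(readings, today))  # PUMP-102 has not been read in ~3 months
```

Numbers like these, shown on the same dashboard as health scores, turn "the data is bad" from a rumor into a tracked metric.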
Pitfall 3: The Dashboard Museum
What happens: Beautiful dashboards are created. They are demoed to executives. They receive praise. They are never opened during an actual planning meeting. Health scores are not connected to work order prioritization, PM adjustments, or capital planning.
Why it kills: Dashboards without decisions are decoration. Within 3-6 months, the dashboards are outdated because nobody noticed they stopped reflecting reality.
The fix: Define the decisions first. "We will use the high-risk asset list to prioritize the weekly work backlog." Then build the dashboard to support that decision. If there is no decision, there should be no dashboard.
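The decision named above, prioritizing the weekly backlog by risk, is simple to express. A sketch, assuming risk is criticality times (100 minus health) and using illustrative field names:

```python
# Sketch: rank the weekly work backlog by asset risk.
# The risk formula (criticality x health gap) and fields are illustrative.

def risk(item):
    return item["criticality"] * (100 - item["health"])

backlog = [
    {"wo": "WO-1001", "asset": "PUMP-101", "criticality": 9, "health": 42},
    {"wo": "WO-1002", "asset": "HVAC-07",  "criticality": 3, "health": 55},
    {"wo": "WO-1003", "asset": "XFMR-12",  "criticality": 8, "health": 78},
]

for item in sorted(backlog, key=risk, reverse=True):
    print(item["wo"], item["asset"], risk(item))
```

The dashboard then exists to show this ranked list, because that is the list the planning meeting acts on.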
Pitfall 4: The Change Management Vacuum
What happens: The tool is deployed. Training consists of a recorded webinar. Planners are told to "start using Health" without being shown how it fits into their existing workflow. Resistance builds silently.
Why it kills: People do not resist new tools. They resist changes to their workflow that they do not understand. Without targeted change management, Health is perceived as extra work rather than better work.
The fix: Train role by role. Show each role exactly how Health improves their specific workflow. Share early wins. Listen to concerns. Adjust.
Pitfall 5: The Orphan Program
What happens: No named owner. No governance structure. No review cadence. The people who built the pilot move on. Nobody maintains the models. Nobody validates the scores. Nobody tracks outcomes.
Why it kills: Every system degrades without maintenance. Health models are no exception. Without governance, model drift begins within 6 months. By 18 months, scores are unreliable. By 24 months, the application is running but nobody trusts it.
The fix: Name an owner. Build governance into the operating model. Budget for ongoing maintenance. This is not a project; it is a capability. Capabilities need permanent support.
Patterns from Programs That Lasted
Pattern 1: Reliability-Led with Data Focus
Profile: The reliability or asset management team leads the initiative. They invest early in data standards and failure coding. They collaborate closely with operations and maintenance.
Why it works: Reliability engineers understand both the scoring models and the engineering reality. They catch model errors early and build credibility with the maintenance teams who need to act on scores.
Key success factor: The reliability team has enough authority to influence data quality standards and maintenance planning priorities.
Pattern 2: Health as the Bridge Between OT and IT
Profile: The program is jointly sponsored by operations and IT. Monitoring and predictive analytics teams integrate signals into health models. Health dashboards serve as a common language across traditionally siloed teams.
Why it works: Breaking down the OT/IT barrier has been a challenge for decades. Health provides a shared framework -- both sides can see and discuss asset condition in the same terms.
Key success factor: Executive sponsorship that spans both operations and IT leadership.
Pattern 3: Capital Planning Driver
Profile: The initial business case is evidence-based capital decisions. Health and risk scores replace age-based and subjective replacement lists. Over time, the program expands into maintenance and reliability processes.
Why it works: Capital decisions are high-visibility, high-value. Proving that Health-based prioritization produces better capital outcomes generates executive support that funds broader expansion.
Key success factor: A capital planning cycle that is willing to incorporate new data sources.
Sustaining the Program
Getting to Phase 3 (Sustain) means building Health into the permanent operating model of your organization. Here is what that looks like:
Regular Reviews
Review | Frequency | Participants | Focus
Model performance | Quarterly | Health lead, reliability | Are scores accurate? Any misses?
Data quality | Monthly | Data stewards, site champions | Are quality targets being met?
Adoption metrics | Monthly | Program owner, site champions | Who is using Health? How often?
Outcome metrics | Quarterly | Program owner, finance, ops | Risk reduction, cost impact, ROI
Strategy alignment | Annual | Program owner, exec sponsor | Does Health still serve business objectives?
Community of Practice
Build a network of Health practitioners across sites:
- Monthly virtual meeting to share experiences
- Shared repository of model templates and configurations
- Recognition for sites or teams that demonstrate strong adoption
- Forum for questions, challenges, and lessons learned
Knowledge Management
Document everything as if the person who built it will leave tomorrow. Because they will.
- Model documentation: indicators, weightings, thresholds, rationale
- Configuration guides: step-by-step for common tasks
- Troubleshooting guides: common issues and resolutions
- Training materials: role-specific, updated when processes change
- Decision log: every significant model or threshold change with rationale
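A decision log does not need tooling; even a structured record per change is enough to preserve rationale through turnover. A minimal sketch, with fields that are an illustrative assumption about what is worth capturing:

```python
# Sketch: one structured entry per model or threshold change.
# Fields are illustrative; any format that preserves the rationale works.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelChange:
    changed_on: date
    model: str          # e.g. a model identifier like "centrifugal-pump-v3"
    change: str         # what was altered
    rationale: str      # why -- the part that is lost without a log
    approved_by: str

log = [
    ModelChange(date(2024, 5, 2), "centrifugal-pump-v3",
                "Raised vibration alarm threshold from 6 to 8 mm/s",
                "Two false 'At Risk' flags traced to sensor mounting, not bearing wear",
                "Asset Health Lead"),
]
print(asdict(log[0])["rationale"])
```

Two years later, "why is the threshold 8?" has a recorded answer instead of a shrug.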
The Maturity Assessment
Use this self-assessment to gauge where your Health program stands:
MATURITY ASSESSMENT (Score 1-5 for each)
SCORING MODELS
[ ] Models documented and version-controlled ___/5
[ ] Validated against expert judgment ___/5
[ ] Tailored per asset class ___/5
[ ] Regularly reviewed and updated ___/5
DATA QUALITY
[ ] Data quality metrics tracked ___/5
[ ] Improvement actions resourced ___/5
[ ] Standards enforced consistently ___/5
ADOPTION
[ ] Dashboards used in decision processes ___/5
[ ] Health data influences work prioritization ___/5
[ ] Health data influences capital planning ___/5
GOVERNANCE
[ ] Named program owner ___/5
[ ] Change control process active ___/5
[ ] Review cadence maintained ___/5
OUTCOMES
[ ] Risk reduction measured and reported ___/5
[ ] Cost impact quantified ___/5
[ ] Prevention stories documented ___/5
TOTAL: ___/75
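If you track the assessment in a script or spreadsheet, totalling the 15 item scores and mapping the result to its interpretation band (listed below) is a few lines; the example scores here are illustrative:

```python
# Sketch: total the 15 assessment items (each 1-5) and map to a band.
# Bands mirror the score interpretation in this post; scores are examples.

BANDS = [
    (60, "Mature, sustainable program"),
    (45, "Strong foundation, some gaps to address"),
    (30, "Functional but fragile -- governance and adoption need work"),
    (15, "At risk of decay -- prioritize governance and ownership"),
    (0,  "Pilot stage or program in distress"),
]

def interpret(scores):
    total = sum(scores)  # 15 items, max 75
    for floor, verdict in BANDS:
        if total >= floor:
            return total, verdict

total, verdict = interpret([4, 5, 3, 4, 3, 3, 2, 4, 4, 3, 5, 4, 3, 3, 2])
print(total, verdict)
```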
Score Interpretation:
60-75: Mature, sustainable program
45-59: Strong foundation, some gaps to address
30-44: Functional but fragile -- governance and adoption need work
15-29: At risk of decay -- prioritize governance and ownership
0-14: Pilot stage or program in distress

The 8 Commandments of Sustainable Health Programs
- Thou shalt name an owner. Not a committee. Not a shared responsibility. One person accountable for the program.
- Thou shalt govern thy models. Change control, review cycles, documentation. Models drift without governance.
- Thou shalt invest in data quality permanently. Data quality is not a one-time project. It is a continuous practice.
- Thou shalt embed Health in decisions. If Health is not part of a meeting where work gets planned, it is not part of operations.
- Thou shalt manage change for humans. Train role by role. Listen to resistance. Share wins.
- Thou shalt measure outcomes, not just scores. Health scores are inputs. Fewer failures, lower costs, and better capital decisions are outcomes.
- Thou shalt document as if the builder will leave. Because they will. The documentation is the program's immune memory.
- Thou shalt not declare victory after the pilot. The pilot is the experiment. The program is the commitment. They are not the same.
Where This Is Heading
Asset health management is not a static discipline. Over the next 3-5 years, expect:
- AI-augmented scoring models that learn from outcomes and self-tune weightings
- Natural language interfaces where planners ask "which pumps should I worry about this week?" and get ranked, contextualized answers
- Cross-enterprise benchmarking where organizations compare health maturity against industry peers
- Integrated digital twins where health scores are visualized in spatial context alongside live sensor data
- Autonomous maintenance orchestration where Health, Monitor, and Predict jointly schedule and dispatch work without human intervention for routine cases
These capabilities are coming. The organizations that build strong Health foundations today will adopt them first.
Series Wrap-Up
Over eight parts, we have covered the full lifecycle of IBM Maximo Health:
- Introduction -- What Health does and why it matters
- Data Model -- How data becomes scores
- Getting Started -- From zero to first health score
- Configuration -- Building models that mean something
- Integration -- Connecting Monitor and Predict
- Dashboards -- Making health visible to decision-makers
- Automation -- Connecting scores to work and capital
- Governance -- Making it stick
If you take one thing from this series, take this: Health scores are not the goal. Better decisions are the goal. Health scores are the tool.
The tool is only as good as the data behind it, the governance around it, and the decisions in front of it. Get all three right, and you will transform how your organization manages assets.
Get only the tool right, and you will have a very expensive thermometer.
We know which one we would choose.
Back to the series index: MAS HEALTH Series Index
About TheMaximoGuys: We help Maximo developers and teams build, configure, and optimize IBM Maximo Application Suite. Our content comes from real implementations, not marketing slides. If it is in our blog, we have done it in production.



