New Responsibilities in MAS: What SaaS and On-Prem Admins Actually Do Now
Who this is for: Maximo administrators transitioning to MAS (either SaaS or on-premises), platform engineers supporting MAS deployments, and IT managers who need to understand how to staff and skill their MAS operations teams.
Estimated read time: 22 minutes
The New Job Description Nobody Wrote
In our experience, the most common question from admins preparing for MAS is not "What do I need to learn?" It's "What does my job actually look like day to day?"
IBM's documentation explains each MAS component. Red Hat's documentation explains OpenShift. But nobody wrote the guide that says: "Here is what the MAS administrator does on a Tuesday afternoon when something breaks." This is that guide.
Regardless of whether your organization runs MAS as a SaaS subscription or as an on-premises deployment on your own OpenShift cluster, the admin role now centers on six core areas. Some will feel familiar, just with new tooling. Others will feel completely new.
1. Understanding MAS Microservices: The Suite You Operate
The Architecture You Need to Know
MAS is not one application. It is a suite of applications deployed as independent services, each with its own pods, databases, and scaling characteristics:
Application — Function — Admin Relevance
Manage — Core work management, assets, inventory, procurement — Highest — this is where most user activity occurs
Monitor — IoT data ingestion and dashboards — Moderate — data pipeline health, device connectivity
Health — Asset health scoring — Low to moderate — model configuration, scoring accuracy
Predict — Predictive failure analytics — Moderate — model training data, prediction accuracy
Assist — AI-powered technician guidance — Low — content management, AI model accuracy
Visual Inspection — Image-based quality inspection — Low — edge device management, model updates
IoT — Device connectivity and data ingestion — Moderate — connection health, data throughput
Optimizer — Schedule optimization — Low to moderate — optimization run health
Why this matters for admins: Each application can fail independently. A problem in Monitor does not necessarily affect Manage. Understanding these boundaries means faster incident triage — you can quickly determine which service is affected and focus your troubleshooting there.
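Because each application fails independently, the fastest first move in triage is a sweep across every MAS namespace for pods that are not fully ready. Below is a minimal sketch of that sweep; the `mas-` namespace pattern follows the standard MAS naming convention, and the cluster commands only run when the oc CLI is present.

```shell
#!/bin/bash
# triage.sh -- scan every MAS namespace and flag pods that are not fully ready.

flag_unhealthy() {
  # Reads `oc get pods` output on stdin; prints rows whose READY column is
  # incomplete (e.g. 0/1) or whose STATUS is not Running/Completed.
  awk 'NR > 1 {
    split($2, r, "/")
    if (r[1] != r[2] || ($3 != "Running" && $3 != "Completed")) print
  }'
}

# Only contact the cluster when the oc CLI is actually available
if command -v oc >/dev/null 2>&1; then
  for ns in $(oc get namespaces -o name | grep "mas-" | cut -d/ -f2); do
    echo "== ${ns} =="
    oc get pods -n "${ns}" | flag_unhealthy
  done
fi
```

Any namespace that prints nothing under its header is healthy at the pod level, which lets you rule out entire applications in seconds.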
Real Scenario: Troubleshooting a Slow Manage Application
In the legacy world, "Maximo is slow" meant investigating one application server. In MAS, "Manage is slow" could mean:
- The Manage pods are under-resourced (CPU/memory pressure)
- The database backing Manage is experiencing lock contention
- An integration is flooding the Manage API
- A CRON task inside Manage is consuming resources
- The OpenShift node hosting Manage pods is resource-constrained
The diagnostic approach is similar to legacy — eliminate variables systematically — but the tools are different.
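To make "eliminate variables systematically" concrete, here is one starting-point check per hypothesis from the list above. This is a sketch, not a complete runbook: the instance ID (inst1) and the deployment names for the MEA and cron bundles are illustrative and will differ in your environment.

```shell
#!/bin/bash
# Map each "Manage is slow" hypothesis to a first diagnostic command.
# Names like mas-inst1-manage and deployment/manage-mea are placeholders.

check_cmd() {
  case "$1" in
    pods)        echo "oc adm top pods -n mas-inst1-manage" ;;            # CPU/memory pressure
    database)    echo "review lock and wait metrics with your DBA tooling" ;;
    integration) echo "oc logs deployment/manage-mea -n mas-inst1-manage" ;;
    cron)        echo "oc logs deployment/manage-cron -n mas-inst1-manage" ;;
    node)        echo "oc adm top nodes" ;;                               # node resource pressure
  esac
}

# Print the checklist in triage order
for h in pods database integration cron node; do
  printf '%-12s -> %s\n' "$h" "$(check_cmd "$h")"
done
```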
2. Kubernetes and Container Fundamentals: Your New Literacy
What You Need to Know (and What You Don't)
We've seen admins panic when they hear "you need to learn Kubernetes." Let's be precise about what MAS admins actually need.
You need to understand:
- Pods — the basic unit of deployment (think of it as a running instance of a service)
- Services — how pods are discovered and accessed within the cluster
- Deployments/StatefulSets — how Kubernetes ensures the right number of pods are running
- Namespaces — how MAS isolates its components within the cluster
- Resource limits — CPU and memory boundaries for each pod
- Health checks — liveness and readiness probes that determine if a pod is healthy
- ConfigMaps and Secrets — how configuration and credentials are stored
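Since ConfigMaps and Secrets come up constantly in MAS troubleshooting, a quick sketch of reading them helps. The object names below (manage-config, manage-db-credentials) and the namespace are hypothetical examples; substitute objects from your own cluster. One point worth internalizing: Secret values are base64-encoded, not encrypted.

```shell
#!/bin/bash
# Reading configuration and credentials from the cluster.
# Object and namespace names here are illustrative placeholders.

# Secrets are only base64-encoded -- decoding a value is a single step:
decode_secret_value() { printf '%s' "$1" | base64 -d; }

if command -v oc >/dev/null 2>&1; then
  # All keys in a ConfigMap
  oc get configmap manage-config -n mas-inst1-manage -o jsonpath='{.data}'
  # A single Secret key, decoded
  oc get secret manage-db-credentials -n mas-inst1-manage \
    -o jsonpath='{.data.password}' | base64 -d
fi
```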
You don't need to know (initially):
- How to build Kubernetes manifests from scratch
- Networking policy details (CNI, Calico, OVN)
- Storage class internals
- Custom Resource Definitions (beyond MAS-specific ones)
- Cluster installation and lifecycle management
Essential Commands for MAS Admins
Here are the oc (OpenShift CLI) commands you'll use most frequently. These are the equivalent of your old WebSphere admin console and tail -f SystemOut.log:
# Check the overall health of MAS pods in the core namespace
oc get pods -n mas-{instance}-core
# Example output:
# NAME READY STATUS RESTARTS AGE
# admin-dashboard-5d9f8b7c6-x2k9p 1/1 Running 0 3d
# api-gateway-7c4d6f8e9-m3n7q 1/1 Running 0 3d
# coreidp-6b8f9c7d5-p4r2s 2/2 Running 0 3d
# Check Manage application pods specifically
oc get pods -n mas-{instance}-manage
# Look for: STATUS=Running, READY shows all containers ready, RESTARTS low
# View logs for a specific pod (like reading SystemOut.log)
oc logs -f pod/manage-maxinst-7b9d6f-k2m4p -n mas-{instance}-manage
# View logs for a specific container within a multi-container pod
oc logs -f pod/manage-server-bundle-5d8c7b-x9p3q -c serverBundle -n mas-{instance}-manage
# Check pod resource usage (like monitoring JVM heap)
oc adm top pods -n mas-{instance}-manage
# Describe a pod to see events, conditions, and resource limits
oc describe pod/manage-server-bundle-5d8c7b-x9p3q -n mas-{instance}-manage
# Check recent events in a namespace (great for diagnosing startup failures)
oc get events -n mas-{instance}-manage --sort-by='.lastTimestamp'
Key insight: The oc command is your new admin console. Just as you once navigated WebSphere's tree of servers, applications, and resources, you now navigate namespaces, pods, and containers. The concept is identical — the syntax is different.
Reading Pod Health: What the Numbers Mean
When you run oc get pods, the output tells a story:
NAME READY STATUS RESTARTS AGE
manage-server-bundle-5d8c7b 1/1 Running 0 5d
manage-server-bundle-9f3e2a 1/1 Running 0 5d
manage-crontask-7b4d6f 1/1 Running 2 5d
manage-mea-api-3c8e9f 0/1 CrashLoopBackOff 15 1h
Here is how to read this:
- 1/1 Running, 0 restarts — Healthy. All of the pod's containers are ready and running, no recent issues.
- 1/1 Running, 2 restarts — Mostly healthy but has restarted twice. Check logs around restart times.
- 0/1 CrashLoopBackOff, 15 restarts — Problem. This pod keeps starting, crashing, and restarting. Investigate immediately.
# Investigate the crashing pod
oc logs pod/manage-mea-api-3c8e9f -n mas-{instance}-manage --previous
# The --previous flag shows logs from the LAST run before the crash
# Check the pod's events for scheduling or resource issues
oc describe pod/manage-mea-api-3c8e9f -n mas-{instance}-manage | grep -A 20 "Events:"
3. Operators: The Automation Engine You Supervise
What Operators Replace
In the legacy world, the admin personally performed:
- Installation (run installer, configure database, build EAR, deploy)
- Upgrades (download fix pack, apply scripts, rebuild EAR, redeploy)
- Configuration changes (edit properties, restart servers)
- Health monitoring (check logs, verify services, test connectivity)
- Scaling (add cluster nodes, configure load balancer)
In MAS, operators do all of this automatically based on a desired-state specification that you define. The admin's role shifts from performer to supervisor.
The MAS Operator Hierarchy
MAS uses a layered operator model. The top-level IBM MAS Operator orchestrates application-specific operators, each of which independently manages the lifecycle of its own application.
Application Operators (managed by IBM MAS Operator):
Operator — Manages — Namespace
Manage Operator — Core EAM (work orders, assets, inventory) — mas-{id}-manage
Monitor Operator — IoT dashboards and anomaly detection — mas-{id}-monitor
Health Operator — Asset health scoring — mas-{id}-health
Predict Operator — Predictive failure analytics — mas-{id}-predict
Assist Operator — AI-powered technician guidance — mas-{id}-assist
IoT Operator — Device connectivity and data ingestion — mas-{id}-iot
Visual Inspection Operator — Image-based quality inspection — mas-{id}-visualinspection
Dependency Operators (required infrastructure):
Operator — Purpose — Typical Namespace
MongoDB Community Operator — MAS core configuration database — mongoce
Strimzi / AMQ Streams Operator — Kafka for IoT event streaming — amq-streams
cert-manager — Automated TLS certificate lifecycle — cert-manager
Service Binding Operator — Connects applications to backing services — openshift-operators
IBM Suite License Service (SLS) — AppPoints license tracking — ibm-sls
Each operator watches its custom resources and reconciles the actual state to match the desired state. If a Manage pod crashes, the Manage operator detects this and creates a replacement. If you change a configuration value in the MAS custom resource, the operator rolls out the change across all affected pods.
Monitoring Operator Health
# Check that all MAS operators are running
oc get csv -n mas-{instance}-core
# CSV = ClusterServiceVersion; this shows installed operators and their status
# Look for: Phase = Succeeded
# Check the MAS Suite custom resource status
oc get suite -n mas-{instance}-core -o yaml
# Look for the status section — it shows the health of each component
# Check specific application activation
oc get manageapp -n mas-{instance}-manage -o yaml
# The status section shows deployment health, database connectivity, etc.
# View operator logs for troubleshooting
oc logs deployment/ibm-mas-operator -n mas-{instance}-core
Real Scenario: Operator-Managed Upgrade
When IBM releases an update for Manage, here is what the admin's involvement looks like in MAS on-prem:
- IBM publishes a new operator version in the catalog
- Admin reviews release notes and known issues
- Admin approves the update (or configures automatic approval)
- Operator updates itself to the new version
- Operator reconciles — detects the new desired state, begins rolling update
- Pods are replaced one by one (rolling strategy, no full outage)
- Admin monitors the rollout, watching for errors
- Admin validates — functional testing after the update completes
Compare this to the legacy EAR deployment ritual (stop cluster, deploy EAR, restart, pray). The admin's role has shifted from executor to reviewer and validator.
Key insight: Operators don't eliminate the admin's judgment — they eliminate the admin's manual labor. You still decide when to update, what to test, and how to validate. The operator handles the mechanics of making it happen.
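On OpenShift, the "approve the update" step (step 3 above) goes through an Operator Lifecycle Manager InstallPlan. A hedged sketch of the workflow follows; the InstallPlan name install-abc123 is illustrative, so look up the real one with the first command before patching anything.

```shell
#!/bin/bash
# Manual approval of an operator update via OLM InstallPlans.
# The plan name install-abc123 is a placeholder for whatever step 1 returns.

APPROVE_PATCH='{"spec":{"approved":true}}'

if command -v oc >/dev/null 2>&1; then
  # 1. List InstallPlans; pending ones show APPROVED=false under Manual approval
  oc get installplan -n mas-inst1-manage
  # 2. Approve a specific plan after reviewing the release notes
  oc patch installplan install-abc123 -n mas-inst1-manage \
    --type merge -p "${APPROVE_PATCH}"
  # 3. Watch the operator roll pods over one by one
  oc get pods -n mas-inst1-manage --watch
fi
```

Setting the subscription's approval strategy to Manual is what keeps the "you decide when to update" judgment in the admin's hands.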
4. API-First Troubleshooting: Your New Diagnostic Toolkit
The Shift from Logs-First to API-First
In the legacy world, troubleshooting started with log files. You'd SSH into the server, tail the SystemOut.log, and search for exceptions. That approach doesn't scale in a distributed microservices environment where logs come from dozens of pods across multiple services.
MAS admin troubleshooting follows a layered approach:
- Dashboard first — MAS Suite Administration dashboard shows system health
- API health checks — verify service endpoints are responding
- Pod status — use oc commands to check container health
- Centralized logs — search aggregated logs, not individual files
- Distributed traces — follow a request across services (when available)
Health Check Patterns
# Check MAS API gateway health
curl -s -o /dev/null -w "%{http_code}" \
https://api.mas-{instance}.{domain}/api/v1/health
# Check Manage application health endpoint
curl -s https://manage.mas-{instance}.{domain}/maximo/api/health \
-H "Authorization: Bearer ${TOKEN}" | jq .
# Check Integration Service health
curl -s https://api.mas-{instance}.{domain}/api/integration/v1/health \
-H "Authorization: Bearer ${TOKEN}" | jq .
Working with the Maximo REST API
The REST API is now your primary interface for data operations, replacing much of what you previously did through direct SQL:
# Query work orders via API (replaces direct SQL queries)
curl -s "https://manage.mas-{instance}.{domain}/maximo/oslc/os/mxwo?\
oslc.where=status=%22WAPPR%22\
&oslc.select=wonum,description,status,statusdate\
&oslc.pageSize=10" \
-H "Authorization: Bearer ${TOKEN}" \
-H "Content-Type: application/json" | jq .
# Check CRON task status via API
curl -s "https://manage.mas-{instance}.{domain}/maximo/oslc/os/mxcrontask?\
oslc.where=crontaskname=%22PMWOGEN%22\
&oslc.select=crontaskname,description,active" \
-H "Authorization: Bearer ${TOKEN}" | jq .
Synthetic Monitoring
In our experience, the most proactive MAS admins set up synthetic monitoring — automated tests that periodically verify key workflows are functional:
# Example: synthetic health check script (runs every 5 minutes)
#!/bin/bash
# synthetic-check.sh
MAS_URL="https://manage.mas-{instance}.{domain}"
TOKEN="${MAS_API_TOKEN}"
# Check 1: API responds
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
"${MAS_URL}/maximo/oslc/os/mxwo?oslc.pageSize=1" \
-H "Authorization: Bearer ${TOKEN}")
if [ "$HTTP_CODE" != "200" ]; then
echo "ALERT: Manage API returned ${HTTP_CODE}" | \
mail -s "MAS Health Alert" admin-team@company.com
fi
# Check 2: Can create and retrieve a test record
# (More sophisticated checks follow similar patterns)
Key insight: API-first troubleshooting is not slower than SSH-and-grep. It's different. In our experience, admins who invest in building API-based monitoring scripts end up detecting issues faster than those who rely on manual log review. The key is building the toolkit incrementally, starting with the checks that matter most to your environment.
5. Zero Trust Security: OIDC, OAuth2, and Modern IAM
From LDAP to Identity Providers
The legacy security model was straightforward: WebSphere authenticated users against LDAP, and Maximo authorized them through security groups. In MAS, the security architecture follows Zero Trust principles:
User/Service → Identity Provider (Keycloak/External IdP)
→ OIDC Authentication
→ OAuth2 Token Issuance
→ MAS API Gateway (token validation)
→ Application-level authorization (security groups)Key Concepts for MAS Admins
OIDC (OpenID Connect) — The authentication protocol. When a user logs in to MAS, they are redirected to the identity provider (IdP), which authenticates them and returns an identity token. MAS never sees the user's password.
OAuth2 — The authorization framework. After authentication, the user receives an access token that grants permission to access specific resources. Tokens have expiration times and scopes.
Keycloak — The default identity provider included with MAS. It manages user identities, authentication flows, and token issuance. In organizations with existing enterprise IdPs (Azure AD, Okta, Ping), Keycloak can federate to those systems.
Common Admin Tasks in the New Security Model
Task — Legacy Approach — MAS Approach
Add a new user — Create in LDAP, sync to Maximo — Create in IdP, assign to MAS application entitlement
Reset password — LDAP admin tools — Identity provider self-service or admin portal
Configure SSO — WebSphere TAI or TAM — OIDC federation between MAS and enterprise IdP
Service-to-service auth — Shared credentials in properties files — OAuth2 client credentials flow
API authentication — Basic auth or LTPA token — Bearer token via OAuth2
Audit login activity — WebSphere security logs + LDAP audit — IdP audit logs + MAS access logs
Practical: Obtaining an API Token
# Obtain an OAuth2 access token for API access
# Client credentials flow (for service accounts and automation)
curl -s -X POST \
"https://auth.mas-{instance}.{domain}/realms/mas/protocol/openid-connect/token" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=client_credentials" \
-d "client_id=mas-api-client" \
-d "client_secret=${CLIENT_SECRET}" \
| jq -r '.access_token'
# The returned token is a JWT — you can decode it to inspect claims
# (Never do this with production tokens in insecure environments)
echo "${TOKEN}" | cut -d'.' -f2 | base64 -d 2>/dev/null | jq .
SSO Integration Patterns
For organizations connecting MAS to an enterprise identity provider:
# Simplified representation of MAS OIDC configuration
# (Actual configuration is through MAS Suite Administration UI)
oidc:
provider: "azure-ad"
discoveryEndpoint: "https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration"
clientId: "mas-application-client-id"
clientSecret: "stored-in-kubernetes-secret"
scopes: ["openid", "profile", "email"]
userClaim: "preferred_username"
groupClaim: "groups"
groupMapping:
"AD-MAS-Admins": "MAXADMIN"
"AD-MAS-Users": "MAXUSER"
"AD-MAS-ReadOnly": "MAXRONLY"
Key insight: The move to OIDC/OAuth2 isn't just a technology change — it fundamentally improves security posture. Tokens expire automatically (unlike LDAP sessions that could persist indefinitely). Service-to-service communication uses scoped credentials instead of shared passwords. And the identity provider offers a centralized audit trail that LDAP never could.
6. AppPoints Licensing: Monitoring What You Consume
The Licensing Model Shift
In Maximo 7.x, licensing was typically based on named users — you purchased X authorized user licenses and Y concurrent user licenses. Tracking compliance meant counting active user records.
MAS introduces AppPoints — a token-based licensing model where:
- Each MAS application consumes a defined number of AppPoints per user type
- Different user types (Premium, Limited, Base) consume different amounts
- AppPoints are shared across the entire MAS suite
- Consumption is dynamic and needs active monitoring
AppPoints Allocation Example
Application — Premium User — Limited User — Base User
Manage — 15 points — 10 points — 5 points
Monitor — 5 points — 3 points — N/A
Health — 5 points — 3 points — N/A
Predict — 10 points — 5 points — N/A
Assist — 15 points — N/A — N/A
Note: Actual AppPoints values may vary by contract. Verify with your IBM agreement.
Monitoring AppPoints Consumption
# Check IBM License Service status (on-prem)
oc get pods -n ibm-common-services | grep license
# Access the License Service Reporter
# The URL is typically:
# https://license-service-reporter.{cluster-domain}/
# Query license consumption via API
curl -s "https://license-service.{cluster-domain}/api/v1/report" \
-H "Authorization: Bearer ${TOKEN}" | jq '.products[] | select(.name | contains("Maximo"))'
Practical Optimization Strategies
In our experience, organizations often over-provision AppPoints initially and then optimize over time:
- Audit user types regularly — Are "Premium" users actually using premium features, or could they be downgraded to "Limited"?
- Deactivate inactive users — Users who haven't logged in for 90+ days are consuming AppPoints unnecessarily
- Monitor application adoption — If you're licensed for Predict but nobody is using it, those AppPoints could be reallocated
- Plan for growth — AppPoints consumption increases as you onboard new users and activate new applications
# Example: Script to identify inactive users for AppPoints optimization
# (Run this against the Manage API)
curl -s "https://manage.mas-{instance}.{domain}/maximo/oslc/os/mxperson?\
oslc.where=status=%22ACTIVE%22\
&oslc.select=personid,displayname,loginid,lastlogindate\
&oslc.orderBy=-lastlogindate\
&oslc.pageSize=100" \
-H "Authorization: Bearer ${TOKEN}" | \
jq '.member[] | select(.lastlogindate != null) |
select((.lastlogindate | split("T")[0] | strptime("%Y-%m-%d") | mktime) <
(now - (90 * 86400))) |
{personid, displayname, lastlogindate}'
The Shared Responsibility Model: Who Owns What
This is the most important conceptual shift for MAS administrators. In the legacy world, you owned everything from the OS up. In MAS, responsibilities are shared between you, IBM, and (in on-prem deployments) your platform team.
MAS SaaS Responsibility Matrix
Layer — Who Manages — Examples
Infrastructure — IBM — Compute, storage, network, data centers
Platform — IBM — OpenShift, Kubernetes, operators
Application Runtime — IBM — MAS containers, databases, middleware
Application Configuration — You — Security groups, org/site settings, CRON tasks
Users and Access — You + IBM — You manage users and roles; IBM manages the IdP platform
Integrations — You — Integration configuration, API credentials, data mapping
Data — You — Data quality, migration, governance
Testing — You — Functional testing, UAT, upgrade validation
Governance — You — Compliance, audit, change management
MAS On-Premises Responsibility Matrix
Layer — Who Manages — Examples
Infrastructure — You / Cloud Provider — VMs, bare metal, storage, network
Platform — You — OpenShift installation, upgrades, capacity
Operators — IBM (delivery) / You (operations) — IBM delivers operators; you manage their lifecycle
Application Runtime — Operators (automated) — Pod deployment, scaling, updates
Application Configuration — You — Same as SaaS
Users and Access — You — Full control of IdP and MAS user management
Integrations — You — Same as SaaS
Data — You — Full database access and management
Backups — You — Database and persistent volume backups
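Because backups are entirely yours on-prem, it helps to inventory the persistent state a backup plan must cover. The sketch below only lists cluster-side volumes; the Manage database (DB2/Oracle) and the MongoDB configuration store still need their own database-native backup tooling. The namespace patterns come from the tables in this guide.

```shell
#!/bin/bash
# Inventory the persistent state an on-prem MAS backup plan must cover.
# Databases are backed up with their native tooling; this lists the PVCs.

if command -v oc >/dev/null 2>&1; then
  # Every PVC in MAS-related namespaces, with size and storage class
  oc get pvc --all-namespaces | grep -E "mas-|mongoce|ibm-sls"
fi

# Namespace prefixes whose persistent state belongs on the backup checklist
BACKUP_SCOPE="mas- mongoce ibm-sls"
for n in $BACKUP_SCOPE; do echo "verify backup coverage for: ${n}*"; done
```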
Real Scenario: Incident Escalation
When something breaks, the shared responsibility model determines your escalation path:
Scenario: Users report that Manage is returning 500 errors intermittently.
SaaS Admin Response:
- Check MAS Suite Administration dashboard for reported issues
- Verify the issue isn't configuration-related (recent changes?)
- Check IBM Cloud status page for platform incidents
- Open IBM Support case with timestamps, error details, and user impact
- IBM investigates pod health, platform issues, and application errors
- You coordinate user communication and workarounds
On-Prem Admin Response:
- Check pod health: oc get pods -n mas-{instance}-manage
- Review pod logs: oc logs -f pod/{manage-pod} -n mas-{instance}-manage
- Check node resources: oc adm top nodes
- Review recent events: oc get events -n mas-{instance}-manage
- If the issue is in operator-managed components, open an IBM Support case
- If the issue is in your platform layer (node resources, storage, networking), resolve internally
Mapping Old to New: The Responsibility Translation Table
For admins coming from the legacy world, here is how each old responsibility maps to its new equivalent:
Legacy Responsibility — New Equivalent (On-Prem) — New Equivalent (SaaS)
WebSphere JVM tuning — Pod resource limits in operator CR — N/A (IBM manages)
EAR deployment — Operator-managed container deployment — N/A (IBM manages)
SystemOut.log analysis — oc logs and centralized logging — Request logs from IBM Support
Database connection pools — Operator-managed DB configuration — N/A (IBM manages)
LDAP configuration — OIDC/Keycloak configuration — SSO/IdP federation setup
SSL certificate renewal — Cert-manager automatic rotation — N/A (IBM manages)
Fix pack application — Operator catalog updates — IBM-managed continuous updates
CRON task management — Same UI, different monitoring tools — Same UI, same management
MIF queue monitoring — API + centralized logs — API + IBM dashboards
Backup and recovery — Platform-level backup tools — IBM-managed (SLA-backed)
User provisioning — IdP + MAS Suite Admin — IdP + MAS Suite Admin
Security groups — Same application-level UI — Same application-level UI
Performance monitoring — Prometheus/Grafana dashboards — MAS dashboards + IBM monitoring
Capacity planning — OpenShift resource monitoring — AppPoints monitoring + IBM guidance
The MAS Microservices Architecture: Layer by Layer
Understanding how MAS components relate to each other helps with troubleshooting and capacity planning. The architecture is organized into four distinct layers, from the user-facing edge down to the infrastructure.
Layer 1: User Access (External)
All user traffic enters through the OpenShift Router (Ingress Controller), which terminates TLS and routes requests to the appropriate MAS application based on the URL hostname.
- Browser users access MAS applications via HTTPS routes
- Mobile users (Maximo Mobile) connect through the same API gateway
- Integration partners call REST APIs through authenticated endpoints
- Identity Provider (Keycloak) handles all authentication before requests reach the applications
Layer 2: MAS Application Services
Each MAS application runs as an independent set of pods in its own namespace:
Application — Namespace — Database — Key Pods
Suite Core — mas-{id}-core — MongoDB — API gateway, admin dashboard, core IdP
Manage — mas-{id}-manage — DB2 or Oracle — ServerBundle pods (UI, cron, API, MEA, report)
Monitor — mas-{id}-monitor — PostgreSQL — Dashboard, analytics, pipeline pods
Health — mas-{id}-health — PostgreSQL — Scoring engine, data processor pods
Predict — mas-{id}-predict — PostgreSQL — ML model service, training pods
Assist — mas-{id}-assist — PostgreSQL — Content server, AI service pods
Visual Inspection — mas-{id}-visualinspection — PostgreSQL — Inference server, model manager pods
Key insight: A failure in one application namespace does not necessarily affect others. If Monitor pods crash, Manage continues to work. This isolation is a major improvement over the monolithic 7.6 architecture where a single JVM failure took down everything.
Layer 3: Shared Services (Infrastructure Dependencies)
These components are shared across all MAS applications and are critical infrastructure:
Service — Purpose — Impact if Down
MongoDB — MAS core configuration, user sessions — All MAS applications affected
Kafka (AMQ Streams) — Event streaming for IoT and integrations — Monitor, IoT data pipelines stop
Keycloak — Authentication and SSO — Nobody can log in
cert-manager — TLS certificate automation — Certificate expiration leads to service outages
IBM SLS — License tracking — License compliance reporting fails
PostgreSQL — Application databases for Monitor, Health, Predict — Affected applications stop
DB2 or Oracle — Manage application database — Manage (core EAM) stops
Layer 4: OpenShift Platform
The foundation layer provides compute, storage, and networking:
- Control plane nodes -- Run the Kubernetes API server, etcd, scheduler, and controller manager
- Worker nodes -- Run all MAS application pods and shared services
- Infrastructure nodes (optional) -- Run the router, monitoring stack, and logging
- Storage -- Persistent volumes backed by NFS, block storage, or cloud-native CSI drivers
- Networking -- Software-defined networking (OVN-Kubernetes or OpenShift SDN) with network policies
Each layer in this architecture represents a set of pods that the admin may need to troubleshoot. The shared services layer is managed by operators but still requires awareness -- if MongoDB is unhealthy, MAS core functions will be affected even if the Manage pods are running fine.
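The impact table above can double as a triage lookup, and MongoDB, the dependency with the widest blast radius, deserves a dedicated health check. The sketch below assumes the MongoDB Community Operator and the mongoce namespace; your installation may use mas-mongo-ce instead.

```shell
#!/bin/bash
# Quick lookup of which MAS applications to suspect when a shared service is
# unhealthy (mirrors the impact table above), plus a MongoDB health check.

impact_of() {
  case "$1" in
    mongodb)  echo "all MAS applications" ;;
    kafka)    echo "Monitor and IoT data pipelines" ;;
    keycloak) echo "all logins" ;;
    *)        echo "unknown shared service" ;;
  esac
}

if command -v oc >/dev/null 2>&1; then
  # Namespace may be mongoce or mas-mongo-ce depending on your installation
  oc get pods -n mongoce
  # The operator-managed MongoDB CR reports replica-set health in its status
  oc get mongodbcommunity -n mongoce -o wide
fi
```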
Deep Dive: The MAS Deployment Hierarchy — Clusters, Nodes, Pods, and Containers
This section goes beneath the application layer to explain exactly how MAS maps onto OpenShift infrastructure. Understanding this hierarchy is essential for capacity planning, troubleshooting, and communicating with your platform team.
The Full Hierarchy: From Cluster to Container
MAS follows the standard Kubernetes deployment model with MAS-specific patterns layered on top. Here is the complete hierarchy from the broadest scope down to the smallest unit:
Level — What It Is — MAS Example
Cluster — A complete OpenShift installation with its own control plane, networking, and storage — Your production OpenShift cluster running MAS
Node — A physical or virtual machine within the cluster — A 16 vCPU / 64 GB worker node running MAS pods
Namespace — A logical boundary within a cluster that isolates resources — mas-inst1-core, mas-inst1-manage, mongoce
Deployment / StatefulSet — A controller that manages pod replicas — inst1-masdev-ui (manages 3 UI pods)
Pod — The smallest deployable unit — one or more containers sharing network and storage — inst1-masdev-ui-abc12 (one UI replica)
Container — A single running process inside a pod — liberty-server (the WebSphere Liberty JVM running Maximo code)
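The hierarchy in the table above maps directly onto a top-down sequence of oc commands. The node and pod names below are taken from the examples in this section and are placeholders; in practice you copy them from the previous command's output.

```shell
#!/bin/bash
# Walking the MAS hierarchy top-down with oc. Names like worker-1 and
# inst1-masdev-ui-abc12 are placeholders from the examples in this guide.

# The levels, broadest to narrowest, matching the table above
HIERARCHY="cluster node namespace deployment pod container"
echo "$HIERARCHY"

if command -v oc >/dev/null 2>&1; then
  oc get nodes                                     # cluster: every node and its role
  oc describe node worker-1 | grep -A 8 "Allocated resources"  # node: capacity pressure
  oc get pods -n mas-inst1-manage -o wide          # namespace: pods and their host nodes
  oc get deployment,statefulset -n mas-inst1-manage  # controllers managing those pods
  oc get pod inst1-masdev-ui-abc12 -n mas-inst1-manage \
    -o jsonpath='{.spec.containers[*].name}'       # pod: the containers inside it
fi
```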
Node Types in a MAS Cluster
A production MAS cluster uses four distinct node roles, each with specific responsibilities:
Node Type — Typical Count — Sizing — What Runs Here
Control Plane (Masters) — 3 — 8 vCPU / 32 GB each — etcd, Kubernetes API server, scheduler, controller manager. No MAS workloads.
Worker Nodes — 6-10+ — 16 vCPU / 64 GB each — All MAS application pods, MongoDB, Kafka, SLS, operators
Infrastructure Nodes — 3 — 8-16 vCPU / 32-64 GB each — OpenShift Router (HAProxy), internal registry, Prometheus, Grafana, logging
ODF Storage Nodes — 3 — 16 vCPU / 64 GB each — OpenShift Data Foundation (Ceph), providing persistent storage for all pods
Key insight: Each worker node reserves approximately 1 CPU core for internal OpenShift services (kubelet, kube-proxy, CRI-O runtime). A 16-core node has roughly 15 cores available for MAS workloads. Plan capacity accordingly.
A production deployment running Core + Manage + Monitor typically requires 15-19 nodes and runs 800+ pods across all namespaces.
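The roughly one-core-per-node overhead above lends itself to back-of-envelope capacity math. This is a planning rule of thumb, not a scheduler guarantee; actual allocatable capacity comes from `oc describe node`.

```shell
#!/bin/bash
# Back-of-envelope capacity math using the ~1 core/node overhead noted above.

usable_cores_per_node() { echo $(( $1 - 1 )); }          # total cores minus overhead
usable_cores_total()    { echo $(( ($1 - 1) * $2 )); }   # per-node usable x node count

# Example: eight 16-core workers
usable_cores_per_node 16    # -> 15
usable_cores_total 16 8     # -> 120
```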
Namespace Organization: How MAS Isolates Components
MAS uses a standardized namespace convention: mas-{instanceId}-{component}. Each MAS instance creates its own set of namespaces, completely isolated from other instances.
Per-instance namespaces (created for each MAS installation):
Namespace — What Lives Here
mas-{id}-core — MAS core platform: API gateway, admin dashboard, CoreIDP, entity managers
mas-{id}-manage — Maximo Manage: ServerBundle pods (UI, cron, MEA, report), build pods, maxinst admin
mas-{id}-pipelines — Tekton pipeline runs for MAS CLI installation and configuration operations
mas-{id}-iot — IoT application pods (if deployed)
mas-{id}-monitor — Monitor dashboards and analytics pods (if deployed)
mas-{id}-health — Health scoring engine pods (if deployed)
mas-{id}-predict — Predict ML model serving pods (if deployed)
mas-{id}-visualinspection — Visual Inspection inference and training pods (if deployed)
mas-{id}-optimizer — Optimizer scheduling pods (if deployed)
mas-{id}-assist — Assist AI service pods (if deployed)
Shared cluster-wide namespaces (used by all MAS instances):
Namespace — Purpose
openshift-marketplace — IBM Maximo Operator Catalog (CatalogSource)
ibm-common-services — IBM Common Services, IAM operator, namespace scope operator
ibm-sls — Suite License Service — AppPoints tracking
mongoce or mas-mongo-ce — MongoDB Community Edition — MAS configuration database
ibm-cpd — Cloud Pak for Data (if using Predict or Health with Watson)
# List all namespaces related to your MAS instance
oc get namespaces | grep "mas-inst1\|mongoce\|ibm-sls\|ibm-common"
# Count pods in a specific MAS namespace
oc get pods -n mas-inst1-core --no-headers | wc -l
Inside the Core Namespace: Entity Manager Architecture
The mas-{id}-core namespace contains a distinctive architectural pattern -- the entity manager model. Rather than one monolithic operator managing everything, MAS decomposes configuration management into approximately 15 small, focused controller pods, each responsible for a single integration concern.
Entity Manager Pod — What It Watches — What It Does
entitymgr-coreidp — CoreIDPCfg CR — Manages core identity provider (coreidp, coreidp-login pods)
entitymgr-ws — Workspace CR — Controls workspace creation and lifecycle
entitymgr-bascfg — BASCfg (DRO) CR — Manages usage reporting (milestonesapi, adoptionusageapi, apppoints)
entitymgr-jdbccfg — JDBCCfg CR — Validates database connections
entitymgr-kafkacfg — KafkaCfg CR — Manages Kafka connections for IoT/Monitor
entitymgr-mongocfg — MongoCfg CR — Manages MongoDB connections
entitymgr-slscfg — SLSCfg CR — Manages Suite License Service connection
entitymgr-idpcfg — IDPCfg CR — Handles SAML/LDAP identity provider integration
entitymgr-smtpcfg — SMTPCfg CR — Manages email notification configuration
entitymgr-scimcfg — SCIMCfg CR — Manages LDAP user sync via SCIM protocol
entitymgr-objectstorage — ObjectStorageCfg CR — Manages S3/NFS attachment storage connections
entitymgr-pushnotificationcfg — PushNotificationCfg CR — Manages push notification services
entitymgr-watsonstudiocfg — WatsonStudioCfg CR — Manages Watson Studio / CP4D integration
Each entity manager follows the Kubernetes controller pattern: it watches a specific Custom Resource (CR), compares the desired state to the actual state, and reconciles any differences. When you configure a JDBC connection in the MAS admin UI, you are actually modifying a JDBCCfg custom resource. The entitymgr-jdbccfg pod detects the change, validates the database connection, and updates the CR status.
# See all entity manager pods in the core namespace
oc get pods -n mas-inst1-core | grep entitymgr
# Check the status of a specific configuration
oc get jdbccfg -n mas-inst1-core -o yaml | grep -A5 "status:"
Inside the Manage Namespace: ServerBundle Architecture
The mas-{id}-manage namespace contains the workloads that run Maximo Manage — the core EAM application. MAS uses a concept called ServerBundles to organize Manage workloads into logical groups.
What is a ServerBundle? A ServerBundle is a logical abstraction that maps to one or more Kubernetes pods. Each bundle type runs a specific aspect of the Maximo Manage application inside a WebSphere Liberty container.
Bundle Type — Purpose — What Runs Inside — When to Scale
all — Combined workload (default for dev/test) — Full Maximo EAR with UI, cron, MEA, and reports — Simple deployments with low user counts
ui — User interface serving — Maximo UI code — what end users interact with — Scale based on concurrent user count (50-75 users per pod)
cron — Scheduled background tasks — Escalations, scheduled reports, PM generation, data cleanup — Usually 1 replica; scale for heavy cron workloads
mea — Maximo Enterprise Adapter — SOAP/REST integration endpoints for external systems — Scale based on integration message volume
report — BIRT Report Only Server — Report execution engine, isolated from UI — Scale based on concurrent report execution needs
standalonejms — JMS messaging — Liberty JMS server for Integration Framework queues — Usually 1 (StatefulSet); requires PVC for persistence
Why split bundles? In production, splitting bundles provides workload isolation:
- A long-running report does not starve UI users of CPU
- High-volume MEA integrations do not slow down the interactive UI
- Cron tasks (escalations, PM generation) run independently
- Each bundle type can be scaled independently based on its specific load pattern
ServerBundle to Pod mapping:
# View all Manage pods and their bundle types
oc get pods -n mas-inst1-manage -o wide
# Example output for a split-bundle production deployment:
# NAME READY STATUS NODE
# inst1-masdev-ui-abc12 2/2 Running worker-1
# inst1-masdev-ui-def34 2/2 Running worker-2
# inst1-masdev-ui-ghi56 2/2 Running worker-3
# inst1-masdev-cron-xyz78 2/2 Running worker-2
# inst1-masdev-mea-jkl90 2/2 Running worker-1
# inst1-masdev-report-mno12 2/2 Running worker-3
Capacity planning: IBM guidance is 50-75 concurrent users per UI ServerBundle pod, equivalent to a JVM with 2 CPU cores. To handle 300 concurrent users, plan for 4-6 UI pods.
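As a quick sanity check, the 50-75 users-per-pod guidance can be turned into a small planning calculation. This is a sketch of the arithmetic only, not an official IBM sizing tool:

```python
import math

def ui_pod_range(concurrent_users, users_per_pod_low=50, users_per_pod_high=75):
    """Estimate the UI ServerBundle replica range for a given user load,
    using the 50-75 concurrent users per pod rule of thumb."""
    high = math.ceil(concurrent_users / users_per_pod_low)   # conservative end
    low = math.ceil(concurrent_users / users_per_pod_high)   # optimistic end
    return low, high

# 300 concurrent users -> plan for 4-6 UI pods
print(ui_pod_range(300))  # (4, 6)
```

Start at the conservative end if your workload includes heavy interactive screens (Work Order Tracking with many attached documents, for example), then tune down based on observed CPU per pod.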
Container Architecture: What Runs Inside Each Pod
Most MAS pods follow a consistent container model: one primary container performing the application function, with optional sidecar containers for supporting tasks.
Typical MAS Manage ServerBundle pod (2 containers):
Container — Role — Image Base — Ports — Purpose
liberty-server — Primary — WebSphere Liberty + OpenJ9 JVM — 9080 (HTTP), 9443 (HTTPS) — Runs the Maximo application EAR
monitoring-sidecar — Sidecar — Lightweight metrics exporter — /metrics endpoint — Scrapes Liberty MicroProfile Metrics, exposes Prometheus endpoint
Resource defaults for Manage ServerBundle pods:
Resource — Request (Guaranteed Minimum) — Limit (Maximum Allowed)
CPU — 200m (0.2 cores) — 6 cores
Memory — 1 Gi — 10 Gi
Typical MAS operator pod (2 containers):
Container — Role — Purpose
manager — Primary — The operator reconciliation loop — watches CRs and manages lifecycle
kube-rbac-proxy — Sidecar — Standard OLM RBAC proxy for authenticated metrics
Special-purpose pods in the Manage namespace:
Pod — Type — Purpose
manage-maxinst — Administrative — Database configuration (updatedb, configdb, integrity checker). Not user-facing.
jmsserver-0 — StatefulSet — Liberty JMS messaging engine. One per workspace. Requires PVC for persistent message storage.
*-build-config-*-build — Build — OpenShift BuildConfig pods that compile customization archives into container images. Appear during builds, then complete.
The Manage Build Pipeline: From Code to Container
Before ServerBundle pods can run Maximo Manage, the operator executes a multi-stage build pipeline using OpenShift BuildConfigs:
Build Stage — What Happens — Output
1. ManageBuild CR created — Operator detects build request — Build pipeline starts
2. Pull base image — Downloads WebSphere Liberty from IBM Container Registry (icr.io) — Base image cached locally
3. Download customizations — Fetches customization archive from configured HTTP/FTP endpoint — Custom Java classes, XML, DB scripts
4. Layer customizations — Applies customizations onto the base image per bundle type — One image per bundle: ui, cron, mea, report
5. Push to internal registry — Built images stored in OpenShift internal registry — Images available for deployment
6. ManageDeployment CR — Operator triggers pod rollout using new images — ServerBundle pods start with updated code
# Check build status
oc get builds -n mas-inst1-manage
# View build logs
oc logs build/ui-build-config-1-build -n mas-inst1-manage
The Operator Hierarchy: How the MAS Stack Is Managed
All MAS operators are delivered through the IBM Maximo Operator Catalog and managed by the Operator Lifecycle Manager (OLM). The catalog is a curated, tested snapshot of compatible operator versions.
Top-level operator catalog (installed in openshift-marketplace):
Operator Category — Key Operators — What They Manage
MAS Application — ibm-mas (Core), ibm-mas-manage, ibm-mas-iot, ibm-mas-monitor, ibm-mas-health, ibm-mas-predict, ibm-mas-visualinspection, ibm-mas-optimizer, ibm-mas-assist, ibm-mas-aibroker — MAS applications and their lifecycle
Data Services — mongodb-operator-app, db2u-operator, cloud-native-postgresql, strimzi-kafka-operator — Databases and messaging infrastructure
Platform Services — cert-manager-operator, ibm-sls, ibm-truststore-mgr, common-service-operator — Certificates, licensing, trust, and shared services
The reconciliation pattern:
When you make a configuration change (for example, adding a new JDBC connection), here is how it flows through the operator hierarchy:
- Admin updates configuration via MAS Admin UI (or directly edits the CR)
- The Suite CR change is detected by the ibm-mas-operator
- The operator validates the configuration and creates/updates a child CR (e.g., JDBCCfg)
- The corresponding entity manager (entitymgr-jdbccfg) detects the child CR change
- The entity manager validates the JDBC connection to the database
- On success, the entity manager updates the CR status to Ready
- The Suite CR aggregates all child statuses — when all are Ready, the application activates
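Every step in that flow is the same reconcile pattern applied at a different level. A minimal sketch of one reconciliation pass — hypothetical function names; the real entity managers are Ansible/Go operators, not Python — looks like this:

```python
# Hypothetical stand-ins for "check the real world" and "fix the real world".
def lookup_actual_state(cr):
    return cr.get("observed", {})

def apply_changes(desired):
    pass  # in a real controller: create secrets, test connections, etc.

def reconcile(cr):
    """One reconciliation pass: compare desired state (spec) with actual
    state, converge, then update the CR status. This is the loop every
    entitymgr-* pod runs against its watched CR."""
    desired = cr["spec"]
    actual = lookup_actual_state(cr)
    if actual != desired:
        apply_changes(desired)
    cr.setdefault("status", {})["conditions"] = [
        {"type": "Ready", "status": "True"}
    ]
    return cr

cr = {"spec": {"url": "jdbc:db2://dbhost:50000/maxdb"}}
print(reconcile(cr)["status"]["conditions"][0]["type"])  # Ready
```

This is why "check the CR status conditions" is the first troubleshooting move in MAS: the status block is where the controller reports the outcome of its last pass.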
# Check operator subscription and approval mode
oc get subscription -n mas-inst1-core -o custom-columns=NAME:.metadata.name,CHANNEL:.spec.channel,APPROVAL:.spec.installPlanApproval
# Check Custom Resource Definitions installed by MAS
oc get crd | grep mas.ibm.com
Resource Sizing Reference: Planning Your Cluster
Use this table when planning infrastructure or discussing capacity with your platform team:
MAS Application — CPU Request — CPU Limit — Memory Request — Memory Limit
Core (platform) — 1.5 cores — 19 cores — 6.3 GB — 32.5 GB
Manage (per workspace, base) — 2.9 cores — 11.1 cores — 4 GB — 17 GB
Monitor — 5.4 cores — 32.4 cores — 12.8 GB — 55.5 GB
Health — 2.9 cores — 15.6 cores — 7.1 GB — 30.8 GB
Predict — 3.1 cores — 12.5 cores — 6.1 GB — 24.5 GB
IoT — 19.7 cores — 214.7 cores — 57.1 GB — 269 GB
Additional infrastructure overhead:
Component — CPU — Memory — Notes
ODF/OCS Storage (3 nodes) — 14 cores — 32 GB — 16 vCPU / 64 GB per node, SSD required
CP4D + DB2 Warehouse — 31.6 cores — 235 GB — Required for Predict/Health with Watson
Each additional Manage UI pod — 1 core (request) / 6 cores (limit) — 2 GB (request) / 10 GB (limit) — Scale out for more concurrent users
Sizing rules of thumb:
- 4 GB memory per CPU core for most MAS workloads
- 15-25 GB disk per CPU core to prevent pod evictions
- 50-75 concurrent users per UI ServerBundle pod
- Minimum 3 worker nodes for high availability
- Add 30-50% over calculator minimums for on-prem installations
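Applied together, those rules of thumb make a rough back-of-the-envelope estimator. This is a sketch of the arithmetic only (using midpoints of the stated ranges); use IBM's official sizing calculator for real planning:

```python
import math

def rough_sizing(cpu_cores, onprem=True):
    """Apply the rules of thumb above to a CPU-core estimate:
    4 GB memory per core, ~20 GB disk per core (midpoint of 15-25),
    plus a 40% buffer (midpoint of 30-50%) for on-prem installs."""
    memory_gb = cpu_cores * 4
    disk_gb = cpu_cores * 20
    buffer = 1.4 if onprem else 1.0
    return {
        "cpu_cores": math.ceil(cpu_cores * buffer),
        "memory_gb": math.ceil(memory_gb * buffer),
        "disk_gb": math.ceil(disk_gb * buffer),
    }

# Example: a workload estimated at 10 cores, deployed on-prem
print(rough_sizing(10))
```

The output is a floor, not a target: it excludes the ODF and CP4D overhead from the table above, which must be added separately.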
Building Your New Toolkit: A Practical Starting Point
Essential Tools for MAS Admins
Tool — Purpose — Legacy Equivalent
oc CLI — Cluster and pod management — WAS admin console
Postman / curl — API testing and troubleshooting — Direct SQL queries
Grafana — Performance dashboards — Tivoli Performance Viewer
Prometheus — Metrics collection and alerting — Custom monitoring scripts
EFK/ELK Stack — Centralized log analysis — tail -f SystemOut.log
jq — JSON processing on the command line — N/A (new tool)
MAS Suite Admin UI — Application and user management — Maximo admin mode
Your First Week Checklist
If you are transitioning to MAS admin responsibilities, here is a practical starting plan:
- [ ] Install the oc CLI and authenticate to your cluster (or request access)
- [ ] Run oc get pods across all MAS namespaces — understand what's deployed
- [ ] Read the logs of one healthy pod and one recently restarted pod
- [ ] Access the MAS Suite Administration dashboard and explore
- [ ] Obtain an API token and run a basic REST API query against Manage
- [ ] Review the MAS operator custom resources (oc get suite, oc get manageapp)
- [ ] Set up Postman with MAS API endpoints and your authentication token
- [ ] Identify who on your team manages the OpenShift platform (your escalation contact)
- [ ] Review your AppPoints allocation and current consumption
- [ ] Document the shared responsibility boundaries for your specific deployment
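For the API-token checklist item, a first query against the Manage REST API can be as small as the sketch below. The hostname is a placeholder, and while the apikey header and MXAPIASSET object structure are the common MAS conventions, verify both against your own instance:

```python
from urllib.request import Request

def build_asset_query(host, api_key, site="BEDFORD", page_size=5):
    """Build a GET against the MXAPIASSET object structure.
    host, site, and api_key are placeholders; substitute your own."""
    url = (
        f"https://{host}/maximo/api/os/mxapiasset"
        f"?lean=1&oslc.pageSize={page_size}"
        f'&oslc.where=siteid="{site}"'
    )
    # MAS Manage accepts an API key in the 'apikey' request header
    return Request(url, headers={"apikey": api_key, "Accept": "application/json"})

req = build_asset_query("manage.mas.example.com", "YOUR-API-KEY")
print(req.full_url)
# To execute it for real: urllib.request.urlopen(req).read()
```

Once this works from the command line, load the same URL and headers into Postman — that collection becomes the seed of your troubleshooting toolkit.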
Common Misconceptions We Hear
We've worked with dozens of admin teams transitioning to MAS. These misconceptions come up repeatedly:
"MAS admins don't need technical skills anymore."
False. The skills are different, not simpler. Understanding distributed systems, API behavior, and container orchestration requires significant technical depth.
"Everything is automated by operators, so there's nothing for admins to do."
False. Operators automate the mechanical tasks. Admins still make decisions about configuration, monitor for anomalies, troubleshoot complex issues, manage users and security, and coordinate upgrades.
"SaaS admins have no technical responsibilities."
False. SaaS admins manage integrations, security configurations, user access, data governance, and upgrade validation. The technical scope is different, but it's still technical. We'll cover this in detail in Part 3.
"I need to become a Kubernetes expert before I can work with MAS."
False. You need Kubernetes literacy — understanding the concepts and basic commands. Deep expertise helps but isn't required for the admin role. You can learn incrementally.
Key Takeaways
- Six core areas define the new MAS admin role: microservices awareness, Kubernetes literacy, operator supervision, API-first troubleshooting, Zero Trust security management, and AppPoints licensing
- Kubernetes literacy is practical, not theoretical — learn the oc commands you'll use daily, not the full Kubernetes certification curriculum
- Operators are your most important new partner — they handle the mechanical work that used to consume most of your day, freeing you to focus on higher-value activities
- The shared responsibility model is the foundation — knowing who owns what determines how you troubleshoot, escalate, and plan
- API-first troubleshooting replaces SSH-and-grep — build a toolkit of health checks, monitoring scripts, and Postman collections
- AppPoints require active management — this is a new ongoing responsibility that has no legacy equivalent
References
- IBM MAS Administration Guide
- Red Hat OpenShift CLI Reference
- Kubernetes Concepts — Pods
- Kubernetes Operators Pattern
- OpenID Connect Specification
- IBM License Service Documentation
- MAS AppPoints Licensing Guide
Series Navigation:
Previous: Part 1 — The Legacy Maximo Administrator Role: A Love Letter to the 7.x Era
Next: Part 3 — How the SysAdmin Role Changes in MAS SaaS: From Server Room to Strategy Room
View the full MAS ADMIN series index →
Part 2 of the "MAS ADMIN" series | Published by TheMaximoGuys
The new MAS admin role isn't smaller — it's different. In Part 3, we'll focus specifically on the SaaS admin experience: what you lose, what you gain, and why this isn't a downgrade but a career evolution.


