What We Are Learning

Practitioner observations from active engagements. No case study theater. These are things we have observed, tested, and are currently applying.

Data, AI & Analytics

CMDB First: Why We Require a Data Sprint Before Any Now Assist Deployment

The core issue is that Now Assist relies on the CMDB for context. When that context is wrong or stale, the AI gives answers that are technically confident and factually incorrect. Users lose trust in the tool within the first two weeks, and that trust is very hard to rebuild.

A structured CMDB sprint typically runs four to six weeks. We focus on the highest-traffic CI types first, establish a governance process so the data stays clean, and only then introduce the AI layer. The 90-day satisfaction numbers are consistently better when we sequence it this way.
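The "highest-traffic CI types first" triage can be sketched as a simple staleness audit. This is a minimal illustration, not our delivery tooling: the record shape and the field names (sys_class_name, sys_updated_on, borrowed from common CMDB conventions) and the 90-day threshold are assumptions for the example.

```python
from datetime import datetime, timedelta

# Assumption: a CI has not been touched in 90 days counts as stale.
STALE_AFTER = timedelta(days=90)

def stale_ratio(cis, now):
    """Return the fraction of stale CIs per CI class.

    cis: list of dicts with hypothetical keys sys_class_name (CI type)
    and sys_updated_on (datetime of last update).
    """
    totals, stale = {}, {}
    for ci in cis:
        cls = ci["sys_class_name"]
        totals[cls] = totals.get(cls, 0) + 1
        if now - ci["sys_updated_on"] > STALE_AFTER:
            stale[cls] = stale.get(cls, 0) + 1
    return {cls: stale.get(cls, 0) / n for cls, n in totals.items()}

# Illustrative data only.
cis = [
    {"sys_class_name": "cmdb_ci_server", "sys_updated_on": datetime(2024, 1, 5)},
    {"sys_class_name": "cmdb_ci_server", "sys_updated_on": datetime(2025, 6, 1)},
    {"sys_class_name": "cmdb_ci_appl", "sys_updated_on": datetime(2023, 11, 2)},
]
ratios = stale_ratio(cis, now=datetime(2025, 7, 1))
```

Classes with the highest stale ratio and the highest ticket traffic are where the sprint starts; everything else waits until governance is in place.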

Veterans in Tech

What 30 Veteran Hires Have Taught Us About ServiceNow Delivery

Retention among our veteran hires is higher than the industry average, and client satisfaction scores for veteran-staffed engagements are above our firm average. Here is why we think that is happening.

Military culture trains people to own outcomes, not tasks. When something goes wrong on a deployment, a veteran on the team does not wait for a status meeting to raise it. They escalate immediately, come with a proposed fix, and stay on the problem until it is resolved. That behavior pattern is exactly what clients need when a go-live is at risk.

The certifications matter. But the operational mindset is what separates our veteran-staffed engagements. We track this now and actively staff more veterans on engagements where delivery accountability is the primary risk.

Utility Reliability

Why AI for Utilities Requires Solving OT/IT Integration First

Predictive maintenance AI fails when EAM, GIS, and OT systems are disconnected. The problem is almost never the AI model. It is the data schema mismatch between operational and enterprise systems that no vendor documents clearly.

Infor EAM and Esri GIS use different identifiers for the same physical asset. OT systems report events in formats that do not map cleanly to enterprise data models. When a utility tries to feed this data to a predictive AI without resolving these conflicts first, the model trains on noise and produces unreliable outputs.
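The identifier mismatch resolves to a cross-walk table: one row per physical asset, keyed by a canonical ID that both systems can be joined through. The sketch below shows the idea only; the identifiers, field names, and record shapes are invented for illustration and do not reflect Infor EAM or Esri GIS schemas.

```python
# Hypothetical records for the same physical transformer, as each
# system might describe it under its own identifier scheme.
eam_assets = {"XFMR-0042": {"install_year": 2009}}
gis_features = {"F_881234": {"lat": 41.88, "lon": -87.63}}

# The cross-walk is the real deliverable of the integration work:
# a canonical ID mapped to each system's native identifier.
crosswalk = {"ASSET-001": {"eam_id": "XFMR-0042", "gis_id": "F_881234"}}

def unified_record(canonical_id):
    """Join EAM and GIS attributes for one physical asset."""
    ids = crosswalk[canonical_id]
    return {
        "id": canonical_id,
        **eam_assets[ids["eam_id"]],
        **gis_features[ids["gis_id"]],
    }

record = unified_record("ASSET-001")
```

Only after records like this exist for the asset classes the model cares about does OT event data get mapped onto the same canonical IDs and fed to the model.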

We now approach utility AI engagements by starting with the integration layer before any model work. This adds time to the front end but eliminates the pattern where AI tools are deployed, perform poorly, and are quietly abandoned.

Recognized a Pattern That Applies to Your Program?

If something in here sounds familiar, we are worth a conversation. We will tell you what we think and whether we can help.