Running $48M in Consulting Taught Me One Thing About Analytics Governance

Charles Huang

When you're accountable for more than 150 concurrent BI projects — as I was for much of my tenure at Oracle — you stop thinking about individual engagements and start thinking in distributions.

You see the full range. The Fortune 100 implementation that looks perfect at go-live and comes back to you in crisis mode 14 months later. The mid-market manufacturer that, by any reasonable measure, has half the resources and a third of the technical sophistication, but whose analytics environment is still running clean three years after you deployed it. You start asking why. And eventually the answer becomes clear enough that you can predict it before the engagement even ends.

It was never the platform. It was rarely the quality of the build. It was almost always governance — specifically, whether the organization understood governance as something they were rather than something they had.

The Document Nobody Reads

Here is the version of governance I saw most often at the consulting firms I ran, and at the vendors I worked alongside: a project team produces a governance document during the final weeks of an engagement. It is thorough. It covers access control, change management, naming conventions, escalation procedures. Someone presents it to the business sponsor. The sponsor thanks the team and files it somewhere on a SharePoint drive.

Six months later, nobody can find it. Twelve months later, nobody remembers it existed.

That document was never governance. It was governance theater — the appearance of a structured operating model, produced to satisfy a project closure checklist, with no mechanism for it to actually change how the organization operated. The team felt good about it. The client felt covered. And then the environment ran on instinct and admin rights until something broke badly enough to require intervention.

I watched this cycle repeat across Oracle's portfolio for years. And I watched it repeat, with some variation, across Qlik's $48M services business. The format changed. The slide decks got better. The frameworks got branded. But the fundamental dynamic — consulting firm produces governance artifact, client files it, environment drifts — was disturbingly consistent.

What Scale Reveals

Running 150-plus concurrent projects at Oracle gave me a vantage point that's genuinely difficult to replicate. I could see patterns across industries and maturity levels simultaneously, track outcomes over years, and compare the clients who called back for help against the clients who never needed to.

The clients who called back for rescue work were not, in general, less sophisticated. They were not less well-funded. They were not running worse technology or less experienced internal teams. What they were missing was an operating model that had been internalized before the consulting team left.

The clients who didn't need rescue — the ones whose environments I could revisit two years later and still recognize — had built something different. Not a document. An operating rhythm. Recurring review cadences where someone actually looked at usage data, reconciled metric definitions, and made decisions about what to retire. Named ownership, not committee ownership. Change control that was lightweight enough to be followed consistently, rather than comprehensive enough to be ignored entirely. Someone whose standing job included asking whether the number in that dashboard still meant what it meant when it was built.

None of it was sophisticated. All of it was disciplined. And the discipline wasn't imposed by the consulting engagement — it existed because someone in the organization had decided analytics was an operational function, not a project output, and had structured accountabilities accordingly.

The Incentive Problem Nobody Acknowledges

I want to be direct about something that consulting firms, including the ones I ran, are rarely willing to say publicly: the professional services model is structurally misaligned with solving the governance problem.

Consulting firms make money on engagements. Engagements are scoped, delivered, and closed. A client organization that internalizes strong governance practices needs fewer engagements — or at least different kinds of engagements, at longer intervals, with less urgency. A client organization that doesn't govern well returns consistently, with problems that look urgent and justify new SOWs.

I am not suggesting this dynamic is cynical or intentional. Most of the people delivering those engagements genuinely want their clients to succeed. But the business model rewards delivery, not capability building. There is no line item for "the client should eventually not need us for this." There is no metric that tracks whether the operating model we recommended actually took hold six months after closeout.

At Qlik, where I ran a business with $77M in annual bookings, the pressure to maintain and grow revenue created a constant tension against the kind of honest advisory that might reduce a client's dependence on the practice. You can navigate that tension with integrity. But it's always a tension, and most organizations don't navigate it particularly well.

The uncomfortable consequence is that most analytics consulting creates dependency, not capability. It solves the immediate problem in a way that tends to reproduce the conditions for the next immediate problem. Governance is the exact thing that would break that cycle — and it is the exact thing that is most consistently underdelivered.

What Governance Actually Looks Like

I have seen governance frameworks that ran to 40 pages and were never followed. I have seen organizations with no governance documentation at all whose environments were impeccably maintained. The document is not the thing. The operating rhythm is the thing.

Effective governance, in practice, is a small set of disciplines that run continuously:

Someone owns the environment. Not a team. Not a committee. A person, with a named mandate, who is accountable for the health of the analytics portfolio as a standing responsibility. This person has the authority to retire content, enforce naming standards, require documentation before production promotion, and say no to a department head who wants to bypass change control.

There is a defined path to production. Every piece of content that goes into a production environment goes through a review. The review does not need to be elaborate — it needs to exist. Who approved this? Is the metric definition documented? Are there downstream dependencies that need to be assessed? Without that gate, production is governed by whoever has admin rights and a deadline.
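To make the gate concrete: it can be as small as an executable checklist. Here is a minimal sketch, assuming nothing about any particular platform; the request fields below are hypothetical stand-ins for whatever your review actually covers.

```python
# Minimal sketch of a promotion gate as an executable checklist.
# The fields are hypothetical; the point is that the three review
# questions above get answered before anything reaches production.
from dataclasses import dataclass


@dataclass
class PromotionRequest:
    content_id: str
    approved_by: str | None       # who reviewed and approved this?
    metric_docs_linked: bool      # is the metric definition documented?
    dependencies_assessed: bool   # were downstream consumers checked?


def promotion_blockers(req: PromotionRequest) -> list[str]:
    """Return the reasons a request is blocked; an empty list means promote."""
    blockers = []
    if not req.approved_by:
        blockers.append("no named approver")
    if not req.metric_docs_linked:
        blockers.append("metric definition not documented")
    if not req.dependencies_assessed:
        blockers.append("downstream dependencies not assessed")
    return blockers
```

Whether the check lives in a script, a ticketing workflow, or a deployment pipeline matters far less than the fact that it runs every time.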

Usage data is reviewed on a cadence. Analytics platforms generate rich operational telemetry: what's being opened, by whom, how often, and what hasn't been touched in months. Most organizations have this data and never look at it systematically. Looking at it quarterly — even informally — is how you catch drift before it becomes sprawl, and how you make the decommissioning decision before obsolete content has had three years to accumulate.
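Here is roughly what that quarterly look can be: a short script against whatever usage export your platform provides. This is a sketch under assumptions; the CSV filename, the column names, and the 180-day threshold are all illustrative, not any vendor's actual schema.

```python
# Minimal sketch of a quarterly usage review. Assumes an access-log
# export with dashboard_id, user_id, and last_opened columns; the
# schema and the six-month threshold are illustrative assumptions.
from datetime import datetime, timedelta

import pandas as pd

STALE_AFTER = timedelta(days=180)

usage = pd.read_csv("access_log.csv", parse_dates=["last_opened"])

# Collapse the log to one row per dashboard: most recent open, audience size.
summary = (
    usage.groupby("dashboard_id")
    .agg(
        last_opened=("last_opened", "max"),
        distinct_viewers=("user_id", "nunique"),
    )
    .reset_index()
)

cutoff = datetime.now() - STALE_AFTER
stale = summary[summary["last_opened"] < cutoff]

print(f"{len(stale)} dashboards untouched in six months; decommission candidates:")
print(stale.sort_values("last_opened").to_string(index=False))
```

Twenty minutes a quarter with output like this is most of the discipline.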

Metric definitions are controlled. When a KPI definition needs to change, there is a process for changing it. Not because process is inherently valuable, but because an uncontrolled change to a metric definition is a silent data quality incident waiting to happen. Finance calculates margin one way. Operations calculates it another way. Both dashboards look correct. Nobody knows which one to use for the board presentation. That situation has an organizational cost, and it is preventable.
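One way to make that control concrete is to treat each definition as a versioned artifact that dashboards reference rather than redefine. The sketch below is hypothetical; the fields and the example entry are illustrative, and the same structure works equally well in YAML or a semantic layer.

```python
# Minimal sketch of metric definitions as a controlled, versioned artifact.
# The field names and the example entry are illustrative; the point is that
# a metric has exactly one definition, a named owner, and a change trail.
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricDefinition:
    name: str
    formula: str                      # the documented calculation
    owner: str                        # a person, not a committee
    version: int
    change_log: tuple[str, ...] = ()  # one entry per approved change


METRICS = {
    "gross_margin": MetricDefinition(
        name="gross_margin",
        formula="(revenue - cogs) / revenue",
        owner="finance.controller",   # hypothetical named owner
        version=2,
        change_log=("v2: COGS expanded to include freight, owner-approved",),
    ),
}


def definition_of(metric: str) -> MetricDefinition:
    """Dashboards resolve metrics here instead of redefining them locally."""
    return METRICS[metric]
```

When Finance and Operations both resolve gross_margin through the same registry, the two-dashboards-two-answers problem above cannot arise silently.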

None of this requires a governance center of excellence. None of it requires a vendor-certified framework or a consulting engagement to implement. It requires a decision — made once, at sufficient organizational altitude — that analytics will be treated as an operational function with standing accountabilities, not a project that completes and transitions to maintenance mode.

Why I Joined Evil Genius

After Qlik, I founded Active Analytics and ran enterprise engagements directly. The work reinforced what I'd observed from the operations side: the clients who benefited most from external expertise were the ones who used it to build internal capability, not to substitute for it. And the question I kept coming back to was whether there was a model that was actually structured to deliver that outcome — rather than one that delivered it incidentally, when the engagement team was unusually disciplined about it.

What drew me to Evil Genius was the architecture of the thing. An operation built around tools that accelerate the diagnostic and mechanical work — so that senior engagement time goes toward judgment, not labor. A methodology that starts with what the environment actually contains before making recommendations about what to build. A practice that, by design, doesn't expand scope to fill a budget.

That's a different structural incentive than the one I operated under at Oracle and Qlik. And in my experience, structural incentives matter more than good intentions.

Governance is not the most exciting conversation in analytics. It doesn't come with a product demo or a proof-of-concept or a milestone presentation. But it is the conversation that determines whether any of the other work holds. The organizations that get durable returns on their analytics investments are the ones that treat the operating model as seriously as they treat the technology selection.

I've seen what happens when they don't. I'd rather help organizations see it before they learn it the hard way.

If you're building an analytics program and want governance built into the foundation from the start, let's talk.
