Contact Centers Overestimate Their KM Maturity

There is a conversation I have had, in one form or another, with leaders across healthcare, financial services, utilities, and BPO for the last several years. It goes something like this. 

“We have a knowledge base. We are rolling out an AI assistant. We are ready.” 

And then I ask a few questions.  

  • How many of your critical articles have a named owner and a review date?  
  • When a policy changes, how long before every channel reflects it: the chatbot, the IVR, the agent desktop, the web portal?  
  • Can you show me who changed what, when, and why? 

The conversation usually gets quieter. 

The gap between where organizations believe they sit on the knowledge management maturity curve and where they actually sit is one of the most persistent and under-discussed risks in contact center transformation today.  

And in 2026, with AI deployments accelerating across regulated sectors, that gap has stopped being a background concern. It is a live operational liability.  

Why the self-assessment tends to be optimistic 

It is not difficult to understand why leaders overestimate their KM maturity. 

Most regulated contact centers have something: 

  • There is a knowledge base module in the CCaaS platform.  
  • There is a SharePoint intranet.  
  • There is a collection of PDFs that most people know exist, even if nobody is quite sure which version is current.  
  • There may even be a recent AI pilot that produced encouraging demo results. 

That feels like progress. And relative to nothing, it is. But it is not the same as readiness.  

Specifically, it is not the same as the kind of readiness that allows you to deploy AI confidently in an environment where a wrong answer can trigger a complaint, a regulatory finding, or lasting damage to a customer relationship.  

The truth is that having a knowledge base and having governed knowledge are two entirely different things.  

One is a place where documents live. The other is a system where the right answer is always owned, always current, and always traceable regardless of which channel or tool surfaces it.  

A plain-language map of the four stages 

The maturity model we use to diagnose this does not require a consultant or a complex scoring process. It asks one fundamental question at each of four stages: how much can you trust what your people and AI are working with?  

The Four Stages of Knowledge Maturity flow chart

Stage 1 — Scattered 

Knowledge lives in emails, shared drives, and people’s heads. Agents survive by asking around, checking with a colleague, or copying from old tickets. There is no single source of truth because there is no single source at all. Just a dispersed collection of individual judgements and institutional memory that has never been written down.  

If you deploy AI at this stage, it will guess. And it will guess with the calm confidence of a system that has no way to know it is guessing.  

Stage 2 — Collected 

A knowledge base exists. Content has been gathered into one system, and there is at least a nominal process for adding to it. This is the stage most organizations are at when they believe they are ready for AI.  

The problem is that Stage 2 knowledge is patchy and out of date in places. Governance exists on paper more than in practice. Not everyone trusts the system, which is why agents still ask around, just less openly than at Stage 1. At this stage, AI is still high risk, because it can sound confident while using stale or unapproved content. The surface appearance of reliability masks a much messier reality underneath.  

Stage 3 — Controlled 

Core knowledge lives in one place, with clear owners, review dates, and a full change history. When a policy changes, there is a known path for updating it and a way to verify that the update has flowed through to agents and customers across every channel.  

This is the minimum viable foundation for safe AI deployment in a regulated contact center. Not perfection, but the non-negotiables. Before Stage 3 is reached, deploying AI is a gamble. Once it is reached, AI becomes something you can begin to trust.  

Stage 4 — AI-Ready 

Knowledge is not just stored; it is actively tested, measured, and improved. The organization treats its knowledge base the way it treats any other operational control: with analytics, feedback loops, and regular audits that surface gaps before customers or regulators do.  

This is the stage where AI can scale across channels with confidence. Not because everything is perfect, but because the governance layer is strong enough to catch and correct problems before they propagate at machine speed.  

The line that changes everything 

AI and Knowledge Governance: The Risks and Rewards

In the work we have been doing on this maturity model, one observation keeps proving to be the most important framing device in any leadership conversation: 

AI does not make weak knowledge safer. It makes weak knowledge scale faster.  

That single sentence reframes the whole discussion. Because the question most leaders are asking, “Are we ready for AI?”, is really two questions.  

  • The first is whether the technology works.  
  • The second is whether the knowledge layer it will be working with can be trusted.  

The technology question is usually the easier one to answer.  

When an AI assistant is pointed at a Stage 2 knowledge estate – patchy, partially outdated, with gaps filled by tribal knowledge – it does not compensate for those weaknesses. It amplifies them. 

It delivers outdated guidance with exactly the same assured tone it uses for fully up-to-date answers. It answers questions that have no approved answer by constructing something plausible from whatever it finds. It scales your uncertainty to every customer, every channel, every interaction.  

Five questions to ask about your own environment – honestly 

Before concluding that your organization is at Stage 3 or beyond, it is worth sitting with five direct questions. These are the same questions we use at the start of every KM maturity assessment.  

  1. Do agents still rely on tribal knowledge, shared drives, or asking around? 

Not “sometimes in edge cases” but as a routine part of resolving interactions. If the answer is yes, the knowledge base is not yet the single source of truth. It is one option among several, and AI will be working alongside all the others.  

  2. When a policy changes, how quickly is the live knowledge updated everywhere it matters?

This means not just the main knowledge base, but the chatbot training set, the IVR scripts, the web self-service content, and the agent desktop guidance. If different channels are updated on different timelines by different teams, version drift is already happening.  

  3. Does every critical article have an owner and a review date?

Not “most articles” and not “I think so.” Every critical article. In a regulated environment, the word “most” is not sufficient when an auditor asks who approved the guidance that was given to a member, a patient, or a policyholder.  

  4. Can you see who changed what, when, and why?

This is the audit trail question. A SharePoint version history is not the same as a compliance-grade governance trail in a guided knowledge system. The difference matters when a regulator asks you to reconstruct the sequence of events around a disputed interaction.  

  5. Is AI restricted to approved, in-date, customer-safe knowledge only?

“It seems to work in testing” is not an answer that holds up in a regulatory review. The question is whether the guardrails are architectural, built into how the AI accesses content, or aspirational, relying on the assumption that the corpus is clean enough.  

If any of these questions produced a pause, an uncomfortable qualification, or a mental note to check with a colleague, that is useful data. It tells you more about your actual maturity stage than any technology audit will.  

Five Questions to Assess Your Knowledge Maturity

What this means for your AI investment 

The goal here is not to slow AI deployment.  

It is to prevent the pattern that is increasingly visible across regulated sectors: organizations that invest significantly in AI tooling, run successful pilots, go live at scale, and then spend the next twelve months managing complaints, inconsistencies, and regulatory questions that trace back to the quality of the knowledge the AI was working with.  

The organizations that will get the most durable value from AI in their contact centers are the ones that treat knowledge governance as a prerequisite, not an afterthought.  

Not because it is a governance formality, but because AI that operates inside a disciplined, governed knowledge layer delivers faster, more accurate, more defensible answers at scale, consistently. With an audit trail that protects the organization as well as the customer.  

That is not a theoretical aspiration. It is an operational design choice that is available right now, at every stage of the maturity curve if you know where you are starting from. 

Find out where you sit — and what to do about it 

Our webinar is specifically designed for leaders in regulated contact centers who want a clear, honest answer to the question:  

Is your knowledge management ready for AI-powered customer contact? 

We will walk through the four maturity stages in plain language, share the honest diagnostic questions to ask about your own environment, define precisely what “AI-ready” requires, and give you a prioritized view of what to tackle first to close the gap. 

This is not a theoretical framework. It is a practical diagnostic for leaders who need to move digital transformation forward without moving recklessly.

Watch our webinar

For a deeper dive into the strategic context, including cognitive overload, training compression, and the ROI evidence, you can also download the “Knowledge Under Pressure” whitepaper. 

It sets out why knowledge governance, guided guidance, and AI-human collaboration are now central to the future of regulated contact centers, not peripheral concerns.

Download the whitepaper

About the Author

Martin Hill-Wilson is a long-standing member of the CX and Customer Contact community and an experienced business and thought leader in customer strategy, design, and practice.

Over his career, he has held senior roles across consulting, BPO, and systems integration before establishing himself as an independent advisor, consultant, facilitator, and conversation host.
