What "Threshold Capability" Actually Means

The language around AI readiness in regulated industries has a problem. 

It is either too vague to act on: 

  • “build a strong data foundation,”  
  • “ensure governance is in place,”  
  • “adopt a responsible AI framework” 

or so technically specific that it belongs in an architecture document rather than a leadership conversation. 

What is missing is a plain language answer to a very practical question:  

What is the minimum we need to have in place for deploying AI in our contact center to be a responsible decision rather than a reckless one? 

The answer has a name. In the KM maturity model we use with regulated contact centers, we call it threshold capability.  

Why “AI-ready” has been defined the wrong way 

Most discussions of AI readiness focus on technology: the right model, the right infrastructure, the right integration architecture. These things matter, but they are not where AI risk in regulated customer contact actually originates. 

The primary risk driver in AI-assisted contact centers is not the AI itself. It is the quality, ownership, and governance of the knowledge the AI is working with.  

An AI assistant is, at its core, a very fast, very confident retrieval and generation system. It will answer questions based on whatever it has access to. If that content is owned, current, reviewed, and consistent, the AI will deliver fast, accurate, governed guidance.  

If that content is patchy, partially outdated, inconsistently authored, or drawn from a mix of approved and unapproved sources, the AI will deliver exactly that, with exactly the same confident tone in both cases.  

This is the mechanism behind what the industry calls “hallucination” in regulated environments. It is not usually the model inventing answers from nothing. More often, it is the model constructing plausible answers from content that seemed reasonable but was not actually current, approved, or authoritative.  

The AI is not lying. It genuinely cannot tell the difference.  

That is why, when leaders and procurement teams search for “the most secure contact center software” or “the most compliant knowledge management software,” the answer has to begin with the knowledge layer, not the AI layer.  

Security and compliance in this context are engineering choices that show up in your governance model, audit trails, and AI guardrails, and they can be tested in an audit, not just claimed in a slide deck.  

Defining threshold capability precisely 

Threshold capability is Stage 3 of the KM maturity model: Controlled.  

It is not perfection. It is not the sophisticated, analytics-driven, feedback-loop-rich environment of Stage 4. It is the minimum viable foundation from which AI deployment stops being a gamble, and starts being a governed, defensible operational choice. 

At Stage 3, five things are true:  

  1. Core knowledge lives in one place.
    Not in a combination of SharePoint, a CCaaS knowledge module, team drives, and individual cheat sheets. In one system, with one version of the truth, accessible to every agent, bot, and channel that needs it.  
  2. Every critical article has a named owner and a review date.
    Not “most articles, broadly speaking.” Every article that could be surfaced to a customer in a regulated interaction. Ownership is not a bureaucratic nicety. It is what makes accountability possible when something goes wrong. 
  3. A full change history exists.
    Who changed what, when, and why. Not to satisfy an internal audit process, but because in a regulated environment, the ability to reconstruct the content state at the time of any given customer interaction is the difference between a defensible position and an exposed one.  
  4. A clear change propagation process is in place.
    When a policy, procedure, benefit rule, or product specification changes, there is a defined path from SME approval to every channel it affects and a way to verify that the update has propagated completely. Not “someone usually picks it up,” but a governed, traceable process.  
  5. AI is constrained to the controlled corpus only.
    The AI cannot roam freely across unvetted legacy content, indexed shared drives, or whatever happens to be accessible in the background. It is architecturally restricted to approved, in-date, customer-safe knowledge. This is not a configuration preference; it is a non-negotiable design requirement for regulated environments.  

Until all five of these conditions are reliably true, deploying AI at scale in a regulated contact center is a risk management decision, not a digital transformation decision. And it is one that boards, risk committees, and regulators are increasingly equipped to scrutinize.  
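The fifth requirement, constraining the AI to the controlled corpus, is ultimately an architectural gate, and the logic is simple enough to sketch. The following is a minimal illustration in Python, not an implementation from any specific KM product: all record fields, article IDs, and function names are hypothetical. The point is that an article must pass every threshold test (approved status, a named owner, an in-date review) before it is even eligible for retrieval.

```python
from datetime import date

# Hypothetical article records; field names are illustrative only.
ARTICLES = [
    {"id": "KB-101", "status": "approved", "owner": "claims_sme", "review_due": date(2026, 1, 1)},
    {"id": "KB-102", "status": "draft",    "owner": "claims_sme", "review_due": date(2026, 1, 1)},
    {"id": "KB-103", "status": "approved", "owner": None,         "review_due": date(2026, 1, 1)},
    {"id": "KB-104", "status": "approved", "owner": "ops_lead",   "review_due": date(2024, 6, 1)},
]

def controlled_corpus(articles, today):
    """Return only the articles the AI is permitted to retrieve from:
    approved, with a named owner, and not past their review date."""
    return [
        a for a in articles
        if a["status"] == "approved"
        and a["owner"] is not None
        and a["review_due"] >= today
    ]

allowed = controlled_corpus(ARTICLES, today=date(2025, 6, 1))
print([a["id"] for a in allowed])  # only KB-101 passes all three gates
```

The design choice worth noting is that the filter sits in front of retrieval, not behind it: a draft article, an orphaned article, or an overdue one never reaches the model at all, rather than being flagged after the fact.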

What the risk chain looks like when threshold is not met 

The webinar slide that tends to land hardest with leadership teams shows the downstream consequence chain from an immature knowledge base. It runs as follows:  

  1. Immature knowledge base – legacy documents in shared drives and inboxes, patchy, outdated, and inconsistent. 
  2. AI answers sound confident – even when the knowledge is wrong. The system presents stale or unapproved content with exactly the same tone as accurate, current guidance. 
  3. Complaints and rework – misquotes and conflicting answers drive escalations, repeat contacts, and operational cost. 
  4. Regulatory and legal exposure – incorrect AI guidance triggers findings from regulators, auditors, and legal teams. In healthcare, this can affect patient outcomes and clinical compliance. In financial services, it can constitute a breach of conduct-of-business obligations. 
  5. Trust and brand erosion – customers lose confidence in digital answers and, by extension, in the organization’s ability to manage their case, claim, or account reliably.  

This chain does not start with a technology failure. It starts with a knowledge governance failure. One that AI deployment then accelerates and scales.  

The organizations that avoid it are not the ones with the most sophisticated AI. They are the ones that reached threshold capability before switching AI on. 

Why threshold capability is becoming a regulatory expectation, not just best practice 

For leaders who need to make the case internally for prioritizing KM maturity before expanding AI deployment, there is increasingly strong external support. 

Both the EU AI Act and the NIST AI Risk Management Framework emphasize that AI risk management must be embedded in organizational processes, not treated as a layer that sits on top of existing systems.  

This means the requirement to demonstrate that AI operates within documented, auditable, human-overseen boundaries is moving from voluntary guidance toward regulatory expectation for organizations deploying AI in high-stakes customer interactions.  

In practical terms, this means an auditor or regulator reviewing AI deployment in a regulated contact center is increasingly likely to ask the same five threshold questions: 

  • What knowledge is the AI drawing from, and how do you know it is current and approved? 
  • Who owns that content, and what is the review cycle? 
  • Can you produce an audit trail of what guidance was given, in what context, and based on which version of content? 
  • How do policy changes propagate to the AI’s knowledge base, and how quickly? 
  • What prevents the AI from answering questions that fall outside governed, approved content?  

These are not novel questions for compliance teams. They are the same questions that have been applied to human agents for years. AI deployment does not reduce the obligation to answer them; it raises the stakes, because AI operates at a scale and speed that makes the consequences of a wrong answer much larger.  
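The third audit question, producing a trail of what guidance was given and based on which version of content, depends on capturing that linkage at the moment each answer is generated. Here is a minimal sketch of such a record in Python; every field name and value is hypothetical, and a real system would also capture channel, agent, and model details.

```python
import json
from datetime import datetime, timezone

def log_ai_answer(interaction_id, question, answer, article_id, article_version):
    """Build an append-only audit record tying an AI answer to the exact
    content version it was drawn from. Field names are illustrative."""
    record = {
        "interaction_id": interaction_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "source_article": article_id,
        "article_version": article_version,
    }
    return json.dumps(record)

# Example: interaction INT-1 was answered from version 7 of article KB-101.
entry = log_ai_answer("INT-1", "Is X covered?", "Yes, under policy Y.", "KB-101", 7)
```

With records like this, reconstructing the content state behind any given customer interaction becomes a lookup rather than a forensic exercise, which is exactly the defensible position the full-change-history requirement describes.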

The threshold diagnostic: where does your organization sit? 

Rather than a scoring system, the most useful way to assess threshold capability is to apply the five questions directly to your current environment. For each one, the test is not whether the answer sounds reasonable; it is whether you can demonstrate it. 

| Threshold requirement | The test | A weak answer looks like… |
| --- | --- | --- |
| Knowledge in one place | Can every agent, bot, and channel access the same content? | “Mostly, but agents sometimes use their own notes too” |
| Every article has an owner and review date | Can you pull a random critical article and show both? | “We have owners for the main ones” |
| Full change history | Can you show who changed a specific article six months ago, and why? | “It’s in version history somewhere” |
| Clear change propagation process | Show me the last policy change. How did it reach every channel? | “We send an email to team leaders” |
| AI constrained to approved corpus only | What prevents the AI from accessing legacy or unapproved content? | “We haven’t had any problems so far” |

If any row in that table produces an uncomfortable answer, the organization has not yet reached threshold capability. That does not mean AI should be abandoned. It means the sequencing needs to be corrected.  

What to do if you are below threshold 

The most important thing to resist is the temptation to expand AI use cases while threshold capability is still being built. Expanding AI’s reach before the knowledge foundation is secure does not accelerate transformation. It amplifies the risk of exactly the complaints, regulatory exposures, and trust erosions outlined above.  

The practical starting point is to identify the single biggest gap in the five threshold requirements and treat it as an infrastructure priority, not a content project. Common first priorities include:  

  • Ownership and review cycles for the most critical 50 articles.  
  • A defined change propagation process for the top ten policy domains.  
  • An architectural review of what content AI can currently access and whether all of it meets the threshold tests.  

None of this is glamorous. It will not feature in a board AI strategy presentation. But it is the work that separates organizations that get durable value from AI in regulated contact from those that get a few impressive demos followed by a difficult conversation with their regulator.  

Find out where you sit — and what to do about it 

If this framing is useful, the logical next step is to bring it into your own environment with a structured diagnostic. 

On 13th May, we are running a session, “Is your Knowledge Management ready for AI-powered customer contact?”, that will define threshold capability with precision, provide the honest diagnostic questions to apply to your own environment, and give leaders a prioritized view of what to tackle first to close the gap. 

This is designed for leaders who need to move their digital transformation forward without moving recklessly and who want a plain-language framework, not a theoretical one.  

 Register for webinar

For a deeper dive into the strategic context, including cognitive overload, training compression, and the ROI evidence, you can also download the “Knowledge Under Pressure” whitepaper. 

It sets out why knowledge governance, guided guidance, and AI-human collaboration are now central to the future of regulated contact centers, not peripheral concerns.

Download the whitepaper

About the Author

Martin Hill-Wilson is a long-standing member of the CX and Customer Contact community and an experienced business and thought leader in customer strategy, design, and practice.

Over his career, he has held senior roles across consulting, BPO, and systems integration before establishing himself as an independent advisor, consultant, facilitator, and conversation host.
