
Your AI doesn't have a data problem. It has a knowledge problem

Decidr
9 min read

TL;DR: Most enterprises try to solve institutional knowledge loss with better documentation. It doesn't work, and it's the reason AI stalls. What your AI actually needs isn't more documents. It's a map of how work moves, including the exceptions nobody ever wrote down.



There's a conversation that happens in most large organisations somewhere around month four of an AI initiative.

The pilot results are in. The numbers are underwhelming.

The consultants are saying the models need more data. The IT team is talking about integration. And somewhere in that room, a COO is thinking: we've spent a lot of money to automate things that weren't the actual problem.

This isn’t a technology story. The AI worked. The issue is that nobody could tell the AI what the work actually was.

The root cause is an AI knowledge management gap: most large organisations simply can’t tell AI what they know, because what they know has never been made explicit.

The gap between enthusiasm and execution

Research tracking 2,400+ enterprise AI initiatives found that more than 80% of the $684 billion invested in AI in 2025 failed to deliver its intended value, and 95% of generative AI pilots never reach production.

The advice that follows is usually the same: invest in better models, cleaner data, or more rigorous change management. These are not wrong. They’re also not the root cause.

The workflows AI is supposed to execute still live in the heads of the people who run them.

Our own research across Singapore SMEs and enterprises found that 82% of businesses face operational challenges because too few employees know key workflows. Only 43% of leaders feel confident that their workflows and processes are clearly captured and kept up to date.

The pattern holds in Australia too: 83% of SME decision makers believe AI will significantly impact their business within 12 months, but only 39% feel confident actually executing on it. And 76% haven't developed a clear AI strategy or roadmap at all.

That's the gap: AI tools are being adopted without the operational knowledge that would make them genuinely powerful.

Why documentation usually fails

Businesses are running AI on top of the same undocumented, fragmented, person-dependent knowledge base they've always had.

Documentation has a structural problem: it captures how work is supposed to happen, not how it actually happens.

It describes the clean path. Real work runs on exceptions. Who gets called when the system throws an error nobody planned for. Which approvals can be fast-tracked. What the escalation path actually is on a Tuesday when two senior people are travelling.

Sugarwork, a company that helps businesses map what their employees actually know, estimates that 80% of operational knowledge never gets written down at all. The operational wisdom of a large organisation is not in its process manuals. It's in its people — and specifically in the judgment calls those people make every day.

What knowledge loss actually costs

The $31.5 billion figure — Fortune 500 companies losing that annually from failing to share what they know — is striking, but it obscures something more personal.

The average large US business loses $47 million a year in productivity from inefficient knowledge sharing alone. That's the cost of people waiting for information, recreating work that already exists somewhere, and making decisions without the context to make them well.

When someone leaves, it compounds fast.

42% of institutional knowledge is unique to the individual — acquired specifically for their role, not shared with anyone else. It doesn't get handed over in an exit interview. It's gone.

And according to research published in Harvard Business Review, employees already spend 21% of their working week searching for information that exists somewhere in the organisation — they just can't find it.

In the pre-AI era, this was an efficiency problem. In the AI era, it's a capability problem.

An AI system can only act on what it's been given. If the knowledge it needs to run a process reliably exists only in one person's head, your AI is one resignation letter away from breaking.

What does AI actually need from your organisation?

To run workflows reliably, AI needs three things your documentation almost certainly doesn't capture:

  1. The real sequence of a process, including its branches and exceptions
  2. Clear ownership at every handoff
  3. A concrete definition of what a good outcome looks like

An agentic AI system — one that executes multi-step tasks autonomously rather than waiting to be prompted at each step — breaks down without all three.

It needs to know the actual sequence of a process, including the branches. Not the idealised flowchart. The real path, with the decision points, the decision rules and the exceptions that get handled informally a dozen times a week.

It needs to know who owns what, and when ownership transfers. Responsibility charts — the documents that map who owns what and who needs to sign off — are a start.

But the real version in most organisations is understood, rather than written, and it shifts depending on context, urgency, and who happens to be available.

It needs to know what good looks like. What does a completed task actually involve? What are the edge cases that require human review? Where does the process connect to another process, and what gets passed between them? This is the difference between an AI that runs a workflow and one that runs it correctly.

Most organisations have none of this documented. Not because they haven't tried, but because the tools they've used to try were built for the wrong job.

None of it can be extracted from a document library. It has to be surfaced directly from the people who do the work — and structured in a form that AI can actually use.

The fix isn't more tools. It's a different approach to the knowledge underneath them.

The map, not the manual

This is the distinction that separates organisations closing their AI gap from those still in it. The ones pulling ahead have stopped trying to document their way to AI readiness. They've started mapping how work moves instead.

Mapping is different from documenting. Where documentation asks people to describe a process, mapping captures what they actually do, including the decisions they make that they've never been asked to articulate before. What gets escalated versus resolved at the team level.

Where the informal approval shortcuts are. Which handoffs routinely cause delays, and why. The goal is to turn tacit knowledge — the expertise that lives in people's heads — into something structured that AI can actually act on.

That’s what AI-ready institutional knowledge looks like. It’s structured, actionable and connected to the rules and systems that govern how decisions actually get made: what we call decision systems. It is not a manual. It is a model.

And once it exists, orchestration, the coordinated flow of tasks and decisions across your organisation, becomes something AI can actually do reliably.

Does your AI know what your organisation knows?

Three questions are worth sitting with before your next AI initiative.

  • Can your AI access the actual logic of your workflows, not just the documents that describe them?
  • Do the people who run your most critical processes know how to make that knowledge explicit?
  • And if two of your most experienced operators left tomorrow, would your AI perform worse next week?

The organisations that answer those questions early, and act on them, don't just improve their own AI performance. They build a structural advantage that compounds, because the map of how work moves becomes an asset their competitors can't replicate by buying better models.

If the answer to any of those questions is uncertain, the documentation approach has already shown you its limits.

Talk to the Decidr team about mapping your organisation's operational knowledge.

You can read more about why this pattern keeps repeating in our breakdown of the knowledge gap behind AI pilot failure.
