Viren Mohindra · 5 min read

Why Decisions Disappear

The AI revolution isn't creating a memory problem — it's exposing one that's been compounding for decades. And most of the industry is building the wrong fix.


In 1894, The Times of London predicted that within fifty years, every street in the city would be buried under nine feet of horse manure. New York had 100,000 horses producing 2.5 million pounds of manure daily. In 1898, the first international urban planning conference convened to solve the crisis. It was scheduled for ten days. Delegates abandoned it after three. Nobody could see a solution.

By 1912, cars had made the question irrelevant. But they didn't just solve the manure problem — they created emissions, traffic fatalities, noise pollution, and eventually a climate crisis. Every transformative solution carries its own pathology. The question is never whether new problems emerge, but how quickly you build the infrastructure to handle them.

Something similar is happening with AI. LLMs solved a massive set of problems around writing and generating code. But they also introduced a failure mode almost nobody is talking about: every AI interaction happens in a temporary workspace that forgets everything once the session ends. We solved the generation problem. The memory problem is the exhaust.

The problem that was always there

Here's the conventional take: AI tools need better memory. Bigger context windows. More RAG. This misses the point the same way a faster horse-drawn carriage misses the point.

The problem isn't that AI tools don't have memory. It's that companies don't have memory. AI just cranked up the clock speed on a failure mode that's been compounding for decades.

Key takeaway

Gallup estimates voluntary turnover costs U.S. businesses $1 trillion per year. A 2020 Deloitte study found 75% of organizations say knowledge preservation is critical — only 9% are ready to do it. That's one of the largest gaps between importance and readiness in the entire survey.

After the Apollo program ended in 1972, NASA's Saturn V engineers retired and their institutional knowledge retired with them. Despite having complete blueprints, NASA could not rebuild the Saturn V. The documentation covered what was built. It didn't cover why — the undocumented workarounds, the failed approaches, the reasoning that existed only in engineers' heads. The blueprints were a corpse. The knowledge that made them alive was gone.

Organizations learned to tolerate this because the loss was slow enough to feel normal. AI changed the tempo. At the current U.S. turnover rate of 26.3%, a team of ten turns over every four years. AI tools turn over every session.

And it's not just the AI that's forgetting. The speed of AI-assisted work is eroding the organic process by which humans used to build context. Code arrives fully formed, gets reviewed at a glance, and merges before anyone has internalized what it does. The contextual mind map that used to form through the friction of writing code is being skipped entirely.

What search can't find

The reasonable response is: index everything. Glean, Guru, Notion AI, Confluence — they all do the same thing: horizontal search over static documents. They answer the question what did someone write down?

But decisions aren't documents. A decision has structure — who decided, what alternatives existed, what the reasoning was, whether the context has changed. Horizontal search flattens all of that into "here are some matching results." You can't search your way to understanding when the tool treats every signal with the same depth — which is to say, none.
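To make the contrast concrete: the structure a decision carries could be modeled explicitly rather than flattened into prose. This is an illustrative sketch, not any product's actual schema — every field name here is a hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """A decision as structured data, not a flat document.

    Hypothetical fields mirroring the structure described above:
    who decided, what alternatives existed, what the reasoning was,
    and whether the context has since changed.
    """
    what: str
    who: str
    reasoning: str
    alternatives: list[str] = field(default_factory=list)
    superseded: bool = False  # has the surrounding context changed?

# Example: the kind of record a new analyst could actually query,
# instead of reconstructing it over months.
record = DecisionRecord(
    what="Passed on the Acme deal",
    who="deal team",
    reasoning="Unit economics broke down under churn assumptions",
    alternatives=["smaller position", "revisit next quarter"],
)
```

A keyword search over meeting notes might surface the word "Acme"; a structured record lets you ask *why* the team passed and whether that reasoning still holds.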

This isn't just an engineering problem. A friend in finance pointed it out: a new analyst spends months reconstructing why the team passed on deals that looked good on paper. The CRM that's supposed to be the system of record? Garbage in, garbage out — every rep logs deals differently, no standard structure for why. A PM green-lights a feature that was already tried and killed a year ago. An ops team selects a vendor that was previously fired for the exact same issue.

Every team in every company loses knowledge the same way. Decisions happen, context lives in transient media, people leave, and the reasoning disappears.

Better shovels

In 1898, the delegates were asking the right question — how do we deal with all this waste? — within the wrong paradigm. They were optimizing the horse. The answer wasn't a better shovel. It was a different engine.

The infrastructure that solves organizational memory won't look like a search engine. It'll look more like a nervous system — something that listens to the signals an organization already generates, synthesizes them into memory that strengthens or decays based on what actually matters, and surfaces context before someone needs to ask for it.
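One way to sketch "strengthens or decays" — purely illustrative, and not a claim about how any real system implements it — is an exponentially decaying relevance score that gets reinforced each time a memory is referenced. The half-life parameter below is an invented placeholder:

```python
def relevance(base: float, days_since_reference: float,
              half_life_days: float = 30.0) -> float:
    """Exponential decay: a memory's score halves every
    half_life_days it goes without being referenced."""
    return base * 0.5 ** (days_since_reference / half_life_days)

def reinforce(base: float, boost: float = 1.0) -> float:
    """Each new reference strengthens the memory's base score,
    resetting the decay clock elsewhere in the system."""
    return base + boost

# A memory untouched for two half-lives keeps a quarter of its score.
score = relevance(4.0, days_since_reference=60.0)
```

The point of the sketch is the shape, not the numbers: context that keeps getting used stays loud, and context nobody touches fades instead of cluttering every search result equally.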

The horse is dying. The manure is piling up. And most of the industry is still building better shovels.


I'm building this at mnem.dev — organizational memory that compounds instead of starting from zero.

Previously: The Artisanal Engineer · Why Every AI Coding Tool Is a Perpetual New Hire