Ochre & Co.


Organization & systems
April 2026

Organization context before models

The single most undervalued asset a business can own right now is what we call organization context — a clear, written account of who decides what, where work goes after this desk, and what "done" looks like.

AI runs on context. When the business is fuzzy — fuzzy about who owns which decision, fuzzy about what the policy is versus what one senior person happens to do, fuzzy about where work goes after it leaves this desk, in what shape, to whom — the model produces more of whatever fuzziness was fed in. Now the fuzziness is automated.

The fix is not a better prompt. The fix is a clearer business, and then a model on top of that.


A sharp tool on a fuzzy process just fails faster

Here is the pattern we see most often.

A team handles, let's say, vendor onboarding. Four people each do it slightly differently. There is no written standard. The rule, if you pull it out of them, is "whatever Janet has been doing since 2019." It mostly works, because Janet is good.

A vendor sells them an AI tool for vendor intake. It is real software — if you gave it a clearly defined process, it would execute that process at high speed.

Six months later, the four people are still doing onboarding four different ways. The AI is trying to help, but the input is inconsistent, the "approved vendor" criteria live in someone's head, and the tool keeps producing confident recommendations that contradict each other. The project gets put on the back burner. Someone decides AI "is not ready yet."

That project did not fail because AI was not ready. It failed because vendor intake was not ready. The AI made the existing fuzziness move ten times faster — which, to be fair, is useful. But everyone read it as a technology failure.

The takeaway is boring and correct: AI is a force multiplier on the process underneath it. If the process is good, AI makes it better. If the process is fuzzy, AI makes it faster and louder. There is no version of this story where skipping the process work produces a good ending.


What "getting clear" actually looks like

Three layers. Most businesses have pieces of each — scattered across an old deck, a wiki, and a few people's heads. "Getting clear" means writing them down in a way a newcomer, or a model, could read and actually use.

Layer 1 — What this business is

This layer changes slowly. It should fit in a few pages of plain writing. Most companies have it spread across a pitch deck from two years ago, a founder's memory, and three different mission statements. That is not a layer. That is a rumor.

Layer 2 — How each team actually works

For each function — sales, operations, support, finance, delivery — the same five questions:

  1. What comes into this team, from where, and in what shape?
  2. Who owns which decision?
  3. What is the actual rule — the written policy, not what one senior person happens to do?
  4. What does "done" look like?
  5. Where does the work go next, in what shape, to whom?

If you can answer these for a team cleanly, you have given a human or a model everything they need to be useful there. If you cannot, no amount of tooling will make up for it.

Layer 3 — The operator's playbook

The tactile layer. Checklists for recurring work. Templates for recurring messages. Decision rules ("if X, escalate; if Y, proceed"). The tools each role uses and what each tool is for.

Not an enterprise binder nobody opens. A short, living document written by the people who do the work, read by the people about to do it.
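Decision rules of the "if X, escalate; if Y, proceed" kind are exactly concrete enough to sketch in code. A minimal illustration — the vendor-intake scenario, with a hypothetical spend threshold and security-review check invented for the example — to show that a rule written at Layer 3 is unambiguous enough for a newcomer, a model, or a script to execute:

```python
# Hypothetical vendor-intake decision rule, written the way Layer 3 asks:
# explicit conditions, explicit outcomes, no "ask Janet".

def route_vendor(annual_spend: float, has_signed_security_review: bool) -> str:
    """Return the next step for a vendor intake request."""
    if not has_signed_security_review:
        return "escalate: security review missing"
    if annual_spend > 50_000:
        return "escalate: requires finance approval"
    return "proceed: auto-approve"

print(route_vendor(12_000, True))   # proceed: auto-approve
print(route_vendor(80_000, True))   # escalate: requires finance approval
```

The point is not the code — it is that four people running this rule get the same answer, which is the property the prose version of the playbook has to deliver too.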


The order that works

Most successful AI adoption goes in this order:

  1. Get clear. Layers 1, 2, 3. Write them. Argue them. Use them.
  2. Pick one real process where speed or quality would matter. Narrow enough to measure.
  3. Shape the context for that process — what the model should see, in what order, with what rules.
  4. Put a model behind it. Which model matters less than you think; it mostly falls out of the first three steps.
  5. Keep going. Your team owns it. Add the next process. Update the context when reality changes.
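Step 3 — shaping the context — can be made concrete. A minimal sketch, with invented section names and placeholder text, of the idea that the model should see the layers in a fixed order before it ever sees the live request:

```python
# Hypothetical context assembly for one process. The three sections map to
# Layers 1-3; their order is fixed, and the live request always comes last.

LAYERS = {
    "business": "What this business is: ...",    # Layer 1
    "process": "How vendor intake works: ...",   # Layer 2
    "rules": "If spend > $50k, escalate; ...",   # Layer 3
}

def build_context(request: str) -> str:
    """Stack the layers in a fixed order, then append the live request."""
    parts = [LAYERS["business"], LAYERS["process"], LAYERS["rules"], request]
    return "\n\n".join(parts)

prompt = build_context("New vendor: Acme Logistics, est. spend $30k/yr.")
```

The design choice worth noticing: the context is a function of the written layers, so when reality changes you update the document once and every request after that inherits the fix.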

Every step skipped is debt. Teams that pretend they can skip step 1 because "we will just use ChatGPT" end up rebuilding step 1 under duress eighteen months later, usually during a leadership change. It costs more the second time.


What this does for you

When organization context is clean, three things happen at once:

  1. New people become useful faster, because the answers live in a document instead of in apprenticeship.
  2. A model becomes useful at all, because it finally has context worth multiplying.
  3. The business itself gets sharper, because writing the layers down forces the arguments that were being postponed.

None of that is free. Doing this work is harder than running another pilot. But it is the work that compounds. Pilots that skip it do not.


How we help

We do not write your organization context for you. We could sell it as a product, but it would not work.

The reason is simple: the value of this asset is that your team believes it and uses it. A beautiful, AI-generated playbook that nobody in your room argued over will be read once and ignored. We have seen teams hit that failure over and over.

What we do instead is equip your team to build it. We bring the shape, the questions, and the discipline. They bring the business. We argue productively about what is actually true. At the end, the artifact is theirs — written in their voice, defended by them in meetings, updated by them when reality changes. We do not own it. You do. That is the only version that survives us leaving.

If you have watched a tool project stall on something that was not really a tool problem — or if you are staring at an AI plan that cannot quite explain what decision it is improving — this is the capability we equip your team to build. Two sentences on what you are trying to do is enough to start a real conversation.


Going deeper

If you want the technical version of the same idea — context at the prompt level rather than the organization level — read What "context" really costs you.


If something in here maps to a problem you are sitting on

Two sentences on what you are trying to do is enough to start. We reply personally—no sequences, no SDR handoff.
