Prompts, context, tools — the three levers
When something is wrong with an AI feature, owners and teams reach for one of three controls without naming which one. The output is bad — so we change the prompt. The output is bad — so we feed it more information. The output is bad — so we connect it to another system.
Each of these is a real lever. Each of them does something different, costs different amounts, and fails in different ways. The three are not interchangeable. Reaching for the wrong lever is one of the most common mistakes we see, and it wastes a remarkable amount of time and money.
This piece names the three levers, explains what each one does, and shows how to know which one you should actually be reaching for.
Lever one: the prompt
The prompt is the instruction you give the model. The most familiar lever, because everyone who has used a chatbot has touched it.
A prompt is what you type into the chat box. In a built AI system, the prompt is also a set of standing instructions — written by whoever set up the system — that frames every conversation. "You are a quote-writing assistant. The customer business is a landscape contractor. Always include line items for labor, materials, and equipment." That kind of thing.
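To make "standing instructions" concrete, here is a minimal sketch of how a built system carries them: a system message that travels with every request. The role/content shape is the common chat-message format; `build_messages` and the example wording are illustrative, not any particular vendor's API.

```python
# A sketch of standing instructions in a built system. The role/content
# message shape is the common chat format; nothing here is a specific
# vendor's API.

SYSTEM_PROMPT = (
    "You are a quote-writing assistant. The customer business is a "
    "landscape contractor. Always include line items for labor, "
    "materials, and equipment."
)

def build_messages(user_message: str) -> list[dict]:
    # The system message rides along with every request, so the model
    # is re-framed on every turn, not just the first one.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

print(build_messages("Quote a 200 sq ft backyard patio."))
```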
The prompt is the cheapest lever to change. You edit some text, you run the system again, you see if the output got better. There is almost no engineering work involved. Most "prompt engineering" tutorials online are about this lever specifically.
The prompt is also the weakest lever. It tells the model how to behave with what it already has. It does not give the model new information, and it does not let the model do anything new. A prompt change can shift tone, structure, or focus. A prompt change cannot make the model know something it does not already know, and cannot make it take an action the system around it has not been wired to take.
When the prompt is the right lever:
- The output is in the wrong tone, format, or level of detail
- The model is technically correct but missing the operator's preferences
- The instructions in the system are vague and the model is filling in the blanks
- A small adjustment to behavior would produce the output you want
When the prompt is the wrong lever (and people reach for it anyway):
- The model is missing facts about your business that it cannot guess from the prompt
- The model needs to do something the system has not been built to let it do
- The model is hallucinating, and you are trying to fix it by saying "do not hallucinate"
That last one is the most common failure. Owners type "be accurate, do not make things up" into a prompt and expect that to fix wrong output. It does not. The model has no internal way to follow that instruction, because it has no internal way to know when it is making things up. Telling an LLM to be accurate is like telling a calculator to be careful. Wrong category of fix.
Lever two: context
Context is the information you put in front of the model so it can do its job.
If the prompt is "what to do," context is "what to do it with." A quoting assistant with no context can produce a generic quote. A quoting assistant with the customer's history, the relevant cost catalog, and yesterday's conversation with that customer can produce a specific, useful quote.
Context is a more powerful lever than the prompt, and a more expensive one. Setting up good context means deciding what information the model needs, getting that information into a form the model can read, and arranging for it to flow into the conversation at the right moments.
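What "arranging for it to flow into the conversation" can look like, as a minimal sketch. The three fetch helpers are stubs standing in for a CRM, a price list, and a message store; the shape is the point, not the names.

```python
# A sketch of context assembly for the quoting assistant. The fetch
# helpers are hypothetical stubs; in a real system each would hit a
# real store (CRM, cost data, message history).

def fetch_customer_history(customer_id: str) -> str:
    return "2023: patio install, $8,400. 2024: drainage repair, $2,100."

def fetch_cost_catalog(trade: str) -> str:
    return "Paver, per sq ft: $14. Labor, per hour: $65. Skid steer, per day: $300."

def fetch_recent_conversation(customer_id: str) -> str:
    return "Customer asked yesterday about extending the patio by 80 sq ft."

def assemble_context(customer_id: str) -> str:
    # The model can only use what ends up in this block of text.
    return (
        "Customer history:\n" + fetch_customer_history(customer_id) + "\n\n"
        "Cost catalog (current):\n" + fetch_cost_catalog("landscaping") + "\n\n"
        "Recent conversation:\n" + fetch_recent_conversation(customer_id)
    )

print(assemble_context("cust-123"))
```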
Most of what an "AI engineer" actually does is context engineering, even though the name has not caught on the way "prompt engineering" has. The model is fixed. The prompt is short and stable. The variation in output quality between a good system and a bad one comes mostly from how well the context is set up.
When context is the right lever:
- The model is technically capable but lacks specific information about your business
- The output would be correct if it knew the customer's history, the cost catalog, the past project, the policy
- You find yourself wishing the model "remembered" something — which usually means the something was not in the context to begin with
- The output drifts when conversations get long, because old facts age out of the model's context window
When context is the wrong lever (or where it fails):
- The model needs to do something — send an email, update a record, schedule a meeting. Context alone cannot let it act. (That is lever three.)
- The information is too large to put in front of the model at once. (That is when retrieval comes in — fetching only the relevant slice — but that is still context engineering. A sketch follows this list.)
- The information is wrong. Bad context produces bad output. The lever multiplies the quality of what you put in.
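Here is the retrieval sketch promised above: score everything, keep the top few, and put only those in front of the model. This one scores by naive keyword overlap; real systems usually score with embeddings, but the shape is the same.

```python
# Naive retrieval sketch: score catalog entries by keyword overlap with
# the request, keep the top few, and put only those in the context.
# The catalog lines are made up for illustration.
import re

CATALOG = [
    "Paver, concrete, per sq ft: $14",
    "Mulch, hardwood, per cu yd: $45",
    "Labor, crew, per hour: $65",
    "Skid steer rental, per day: $300",
    "Sod, fescue, per pallet: $220",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, entries: list[str], k: int = 3) -> list[str]:
    q = tokens(query)
    scored = sorted(entries, key=lambda e: len(q & tokens(e)), reverse=True)
    return scored[:k]

# Only the three most relevant lines reach the model, not the whole catalog.
print(retrieve("quote a paver patio, need labor and a skid steer", CATALOG))
```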
There is a deeper essay specifically on what context costs and how to think about it, available here. For this piece, the only thing to internalize is: context is the lever where most of the leverage actually lives.
Lever three: tools
Tools are the connections that let the model take action in the world, or fetch information it does not currently have.
A tool is a function the model can call. "Look up this customer in the CRM." "Send this email." "Schedule this meeting." "Search the cost catalog for items matching this description." Each one is a piece of code that the model can ask to be run, with arguments it provides, and whose result comes back into the conversation.
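A minimal sketch of that loop, deliberately vendor-neutral: the model names a tool and supplies arguments, a piece of code runs, and the result goes back into the conversation. Both tools and the hard-coded model request are hypothetical.

```python
# Tool dispatch sketch. In a real system the model emits the tool name
# and arguments; here the request is hard-coded so the sketch runs on
# its own. Both tools are hypothetical stand-ins.

def lookup_customer(customer_id: str) -> str:
    return f"Customer {customer_id}: landscape contractor, net-30 terms."

def search_catalog(description: str) -> str:
    return f"Catalog matches for '{description}': paver $14/sq ft."

TOOLS = {
    "lookup_customer": lookup_customer,
    "search_catalog": search_catalog,
}

def run_tool_call(name: str, arguments: dict) -> str:
    # The model asks; the system decides whether and how to run it.
    if name not in TOOLS:
        return f"Error: no tool named {name!r}"
    return TOOLS[name](**arguments)

# Stand-in for a model turn that requested a tool call.
model_request = {"name": "search_catalog", "arguments": {"description": "pavers"}}
result = run_tool_call(**model_request)
# `result` is appended to the conversation, and the model continues with it.
print(result)
```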
Tools are the lever that makes an AI system do things rather than just produce text. Without tools, an LLM is a chat partner — useful for drafting, summarizing, classifying, but unable to act. With tools, an LLM can be wired into the actual operation of a business: the CRM, the calendar, the document store, the inbox, the project tracker.
Tools are also the most expensive lever and the most consequential one to get wrong. A bad prompt produces output you don't like. A bad tool sends an email to a customer with a wrong number, or updates a record incorrectly, or schedules a meeting that should not have been scheduled. The blast radius is real.
When tools are the right lever:
- The model needs to take an action that touches another system
- The model needs information that lives in a system it cannot otherwise reach (the live database, the calendar, the file store)
- The output should not just be text — it should be a state change in the business
When tools are the wrong lever (or where they need a guard):
- The action is consequential and should never be taken without human review. In that case, the tool is "draft the email and surface it for approval," not "send the email."
- The model is not reliable enough at the underlying judgment to be trusted with the action. Wire the action behind a confirmation, not behind a direct call.
- The owner has not yet thought through what the model is allowed to do versus what it is not. Tools deployed without that thinking are how AI systems end up taking actions nobody on the team decided to allow.
The discipline with tools is least privilege — the model gets exactly the tools it needs to do the job, no more. Everything else stays behind a human's hand.
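Both disciplines, the approval guard from the list above and least privilege, fit in a few lines. A sketch with hypothetical tool names: consequential actions only ever produce a draft for human review, and the registry handed to the model contains exactly the tools the job needs.

```python
# Guarded tools sketch. Tool names are hypothetical. Two patterns:
# consequential actions are wrapped so they only produce a draft for
# human approval, and the registry given to the model contains exactly
# the tools this job needs (least privilege), nothing else.

PENDING_APPROVAL: list[dict] = []

def require_approval(action_name: str, payload: dict) -> str:
    # Instead of acting, queue a draft for a human to review.
    PENDING_APPROVAL.append({"action": action_name, "payload": payload})
    return f"Draft of {action_name} queued for human approval."

def draft_email(to: str, body: str) -> str:
    return require_approval("send_email", {"to": to, "body": body})

def search_catalog(description: str) -> str:
    return f"Catalog matches for '{description}': labor $65/hr."

# Least privilege: the quoting assistant gets read access and a
# draft-only email tool. It never sees send_email or update_record.
QUOTING_ASSISTANT_TOOLS = {
    "search_catalog": search_catalog,
    "draft_email": draft_email,
}

print(draft_email("customer@example.com", "Here is your patio quote..."))
print(PENDING_APPROVAL)
```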
How owners conflate the levers
The most common failure is reaching for the wrong lever — usually the prompt — to fix a problem that needs a different lever.
The model gives a quote that uses last year's prices. We tried to fix it with the prompt. The prompt cannot fix it. The model does not have access to this year's prices. That is a context problem.
The model is supposed to update the customer record after the call but doesn't. We tried to fix it with the prompt. The prompt cannot fix it. The model does not have a tool to update the customer record. That is a tools problem.
The model occasionally produces invented details that look real. We tried to fix it with the prompt. The prompt cannot fix it. The structural fact of the model is that it predicts text that fits, and "the text that fits" is not always "the truth." That is a verification and review problem — a system design problem — not a prompt problem.
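What "verification and review around the model" can look like at its simplest: check the claims in the output against a source of truth before anything leaves the system, and route failures to a human. A sketch with made-up prices and a hypothetical catalog:

```python
# Verification sketch: pull the dollar figures out of a drafted quote
# and check each one against the current price list before the draft
# is surfaced. Prices and catalog are made up for illustration.
import re

CURRENT_PRICES = {"14.00", "65.00", "300.00"}

def verify_quote(draft: str) -> tuple[bool, list[str]]:
    cited = re.findall(r"\$(\d+(?:\.\d{2})?)", draft)
    unknown = [p for p in cited if f"{float(p):.2f}" not in CURRENT_PRICES]
    return (len(unknown) == 0, unknown)

draft = "Pavers at $14.00/sq ft, labor at $72.00/hr."  # $72 is invented
ok, unknown = verify_quote(draft)
if not ok:
    # The draft goes to a human, not to the customer.
    print(f"Hold for review: unverified prices {unknown}")
```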
We have lost count of the AI projects whose teams spent months tuning prompts when the actual issue was missing context or missing tools. Prompt iteration feels productive — you can change something quickly and see the output change quickly — and the productivity is mostly an illusion when the prompt was not the right lever.
A cleaner habit: when something is wrong, name what kind of wrong. Tone or format wrong → prompt. Missing or stale information → context. Cannot act, cannot reach a system → tools. Producing confidently wrong outputs → that is structural, not lever-fixable, and the answer is verification and human review around the model, not a fourth lever.
The order to reach for them
When you are building, the right order runs from the structural levers to the cheap polish.
Start with context. Get the right information in front of the model. This is where most of the work is, and where most of the leverage is.
Then add tools, deliberately. Decide which actions the model should be able to take and which should remain the operator's. Wire the tools. Test them. Add guards.
Then tune the prompt. Once context and tools are in place, the prompt is the polish — the instructions that align the model's behavior with the operator's preferences. Doing prompt work first, before the context and tools are right, is fitting trim before you have walls.
For owners using AI rather than building it, the same order is useful for diagnosing problems. Wrong information? Context. Cannot act? Tools. Wrong tone or focus? Prompt. Confidently wrong? Verification, not a lever fix.
Three levers. Different costs, different powers, different failure modes. Knowing which one you are reaching for is most of the skill.
For the underlying mechanic, read What an LLM actually is (and isn't). For where context sits in the bigger picture, read What "context" really costs you. For the broader build, read An AI system, walked through like a building.
If something in here maps to a problem you are sitting on
Two sentences on what you are trying to do is enough to start. We reply personally—no sequences, no SDR handoff.