
The following is Part 3 of 3 from Addy Osmani's original post "Context Engineering: Bringing Engineering Discipline to Prompts." Part 1 can be found here and Part 2 here.
Context engineering is crucial, but it is only one component of a larger stack needed to build full-fledged LLM applications, alongside concerns like control flow, model orchestration, tool integration, and guardrails.
In Andrej Karpathy's words, context engineering is "one small piece of an emerging thick layer of non-trivial software" that powers real LLM apps. So while we've focused on how to craft good context, it's important to see where that fits in the overall architecture.
A production-grade LLM system typically has to address many concerns beyond just prompting. For example:
- Problem decomposition and control flow: Instead of treating a user query as one monolithic prompt, robust systems often break the problem down into subtasks or multistep workflows. For instance, an AI agent might first be prompted to outline a plan, then in subsequent steps be prompted to execute each step. Designing this flow (which prompts to call in what order; how to decide branching or looping) is a classic programming task, except the "functions" are LLM calls with context. Context engineering fits in here by making sure each step's prompt has the information it needs, but the decision to have steps at all is a higher-level design choice. This is why you see frameworks where you essentially write a script that coordinates multiple LLM calls and tool uses.
- Model selection and routing: You might use different AI models for different jobs. Perhaps a lightweight model for simple tasks or preliminary answers, and a heavyweight model for final solutions. Or a code-specialized model for coding tasks versus a general model for conversational tasks. The system needs logic to route requests to the appropriate model. Each model may have different context length limits or formatting requirements, which the context engineering must account for (e.g., truncating context more aggressively for a smaller model). This aspect is more engineering than prompting: think of it as matching the tool to the job.
- Tool integrations and external actions: If your AI can perform actions (like calling an API, running database queries, opening a web page, or executing code), your software needs to manage those capabilities. That includes providing the AI with a list of available tools and instructions on their usage, as well as actually executing the tool calls and capturing the results. As we discussed, the results then become new context for further model calls. Architecturally, this means your app often has a loop: prompt model → if model output indicates a tool to use → execute tool → incorporate result → prompt model again. Designing that loop reliably is a challenge.
- User interaction and UX flows: Many LLM applications involve the user in the loop. For example, a coding assistant might propose changes and then ask the user to confirm applying them. Or a writing assistant might offer multiple draft options for the user to pick from. These UX choices affect context too. If the user says "Option 2 looks good but shorten it," you need to carry that feedback into the next prompt (e.g., "The user chose draft 2 and asked to shorten it."). Designing a smooth human-AI interaction flow is part of the app, though not directly about prompts. Still, context engineering supports it by ensuring each turn's prompt accurately reflects the state of the interaction (like remembering which option was chosen or what the user edited manually).
- Guardrails and safety: In production, you have to consider misuse and errors. This might include content filters (to prevent toxic or sensitive outputs), authentication and permission checks for tools (so the AI doesn't, say, delete a database just because it was in the instructions), and validation of outputs. Some setups use a second model or a set of rules to double-check the first model's output. For example, after the main model generates an answer, you might run another check: "Does this answer contain any sensitive information? If so, redact it." These checks can themselves be implemented as prompts or as code. Either way, they often add extra instructions into the context (a system message like "If the user asks for disallowed content, refuse" is part of many deployed prompts). So the context may always include some safety boilerplate. Balancing that (ensuring the model follows policy without compromising helpfulness) is yet another piece of the puzzle.
- Evaluation and monitoring: Suffice to say, you need to constantly monitor how the AI is performing. Logging every request and response (with user consent and privacy in mind) allows you to analyze failures and outliers. You might also incorporate real-time evals: scoring the model's answers on certain criteria and, if the score is low, automatically having the model try again or routing to a human fallback. While evaluation isn't part of generating a single prompt's content, it feeds back into improving prompts and context strategies over time. Essentially, you treat the prompt and context assembly as something that can be debugged and optimized using data from production.
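The plan-then-execute control flow described in the first bullet can be sketched in a few lines. In the sketch below, `call_llm` is a hypothetical stub standing in for a real model API, so only the shape of the flow is meant literally:

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model API here.
    if prompt.startswith("Plan:"):
        return "1. Parse input\n2. Summarize"
    return f"done: {prompt}"

def run_task(user_query: str) -> list[str]:
    """Ask the model for a plan, then execute each step in its own call."""
    plan = call_llm(f"Plan: outline the steps to handle: {user_query}")
    steps = [line.split(". ", 1)[1] for line in plan.splitlines()]
    results = []
    for step in steps:
        # Each step gets its own prompt, carrying only the context it needs.
        results.append(call_llm(f"Execute step '{step}' for query: {user_query}"))
    return results

results = run_task("explain this log file")
```

The "script that coordinates multiple LLM calls" is exactly this `run_task` function: ordinary control flow whose "functions" happen to be model calls.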
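Model routing, from the second bullet, often comes down to a lookup plus model-specific context trimming. The model names and context limits below are illustrative placeholders, not real API values:

```python
# Illustrative registry: each entry pairs a model with its context budget.
MODELS = {
    "code": {"name": "code-model-large", "context_chars": 12000},
    "chat": {"name": "chat-model-small", "context_chars": 4000},
}

def route(task_type: str, context: str) -> dict:
    """Pick a model for the task and trim context to fit its limit."""
    model = MODELS.get(task_type, MODELS["chat"])
    # Truncate more aggressively for the smaller model, keeping the most
    # recent (trailing) portion of the context.
    trimmed = context[-model["context_chars"]:]
    return {"model": model["name"], "context": trimmed}

request = route("chat", "x" * 10000)
```

The routing decision and the context budget live in code, not in the prompt, which is why this aspect is "more engineering than prompting."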
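The tool-use loop (prompt model → execute tool → incorporate result → prompt again) might look like the following sketch. The model and the single `search` tool are stubs, and the `TOOL:` prefix is a stand-in for whatever structured tool-call format a real model emits:

```python
def fake_model(prompt: str) -> str:
    # Stub model: requests a tool on the first turn, answers on the second.
    if "RESULT:" not in prompt:
        return "TOOL:search:context engineering"
    return "FINAL: summary based on tool result"

def search(query: str) -> str:
    # Hypothetical tool; a real one would hit an API or database.
    return f"top hit for '{query}'"

TOOLS = {"search": search}

def agent_loop(user_query: str, max_turns: int = 5) -> str:
    prompt = user_query
    for _ in range(max_turns):
        output = fake_model(prompt)
        if output.startswith("TOOL:"):
            _, tool_name, arg = output.split(":", 2)
            result = TOOLS[tool_name](arg)
            # The tool output becomes new context for the next model call.
            prompt = f"{user_query}\nRESULT: {result}"
        else:
            return output
    return "gave up after max_turns"

answer = agent_loop("what is context engineering?")
```

The `max_turns` cap is part of designing the loop "reliably": without it, a model that keeps requesting tools would spin forever.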
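A post-generation guardrail pass, as in the safety bullet, can be as simple as a rule-based redaction step run on the draft answer before it is returned. This is a deliberately minimal sketch; in production the check might itself be a second model call:

```python
import re

# Illustrative pattern: strings shaped like US Social Security numbers.
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def guard(answer: str) -> str:
    """Redact sensitive-looking substrings from a model answer."""
    return SENSITIVE.sub("[REDACTED]", answer)

safe = guard("The user's SSN is 123-45-6789.")
```

Because this runs as code rather than as a prompt, it cannot be talked out of its policy by the model or the user.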
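The real-time eval gate from the last bullet (score the answer, retry if it falls short, then escalate to a human) might be wired up like this; the scorer here is a stand-in heuristic, where a real one could be a grading model or a battery of checks:

```python
def score(answer: str) -> float:
    # Stub scorer: rewards answers that contain a justification.
    return 0.9 if "because" in answer else 0.2

def answer_with_eval(generate, threshold: float = 0.5) -> dict:
    """Generate, score, retry once, then flag for human fallback."""
    for attempt in range(2):
        answer = generate()
        if score(answer) >= threshold:
            return {"answer": answer, "attempts": attempt + 1, "escalate": False}
    return {"answer": None, "attempts": 2, "escalate": True}

# Simulate a model that fails once, then produces a justified answer.
attempts_log = iter(["it just is", "it works because of retrieval"])
result = answer_with_eval(lambda: next(attempts_log))
```

Logging `attempts` alongside the answer gives you exactly the production data the bullet describes: which prompts needed retries, and which escalated.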
We're really talking about a new kind of application architecture. It's one where the core logic involves managing information (context) and adapting it through a series of AI interactions, rather than just running deterministic functions. Karpathy listed elements like control flows, model dispatch, memory management, tool use, verification steps, and so on, on top of context filling. Together, they form what he jokingly calls "an emerging thick layer" for AI apps: thick because it's doing a lot! When we build these systems, we're essentially writing metaprograms: programs that choreograph another "program" (the AI's output) to solve a task.
For us software engineers, this is both exciting and challenging. It's exciting because it opens up capabilities we didn't have before, like building an assistant that can handle natural language, code, and external actions seamlessly. It's challenging because many of the techniques are new and still in flux. We have to think about things like prompt versioning, AI reliability, and ethical output filtering, which weren't standard parts of app development before. Within all of this, context engineering lies at the heart of the system: If you can't get the right information into the model at the right time, nothing else will save your app. But as we've seen, even perfect context alone isn't enough; you need all the supporting structure around it.
The takeaway is that we're moving from prompt design to system design. Context engineering is a core part of that system design, but it lives alongside many other components.
Conclusion
Key takeaway: By mastering the assembly of full context (and coupling it with robust testing), we can increase the chances of getting the best possible output from AI models.
For experienced engineers, much of this paradigm is familiar at its core (it's about good software practices) but applied in a new domain. Think about it:
- We always knew garbage in, garbage out. Now that principle manifests as "bad context in, bad answer out." So we put more work into ensuring quality input (context) rather than hoping the model will figure it out.
- We value modularity and abstraction in code. Now we're effectively abstracting tasks to a high level (describe the task, give examples, let AI implement) and building modular pipelines of AI + tools. We're orchestrating components (some deterministic, some AI) rather than writing all the logic ourselves.
- We practice testing and iteration in traditional development. Now we're applying the same rigor to AI behaviors, writing evals and refining prompts as one would refine code after profiling.
In embracing context engineering, you're essentially saying, "I, the developer, am responsible for what the AI does." It's not a mysterious oracle; it's a component I need to configure and drive with the right data and rules.
This mindset shift is empowering. It means we don't have to treat the AI as unpredictable magic; we can tame it with solid engineering techniques (plus a bit of creative prompt artistry).
Practically, how can you adopt this context-centric approach in your work?
- Invest in data and knowledge pipelines. A big part of context engineering is having the data to inject. So build that vector search index of your documentation, or set up that database query your agent can use. Treat knowledge sources as core features in development. For example, if your AI assistant is for coding, make sure it can pull in code from the repo or reference the style guide. Much of the value you'll get from an AI comes from the external knowledge you supply to it.
- Develop prompt templates and libraries. Rather than ad hoc prompts, start creating structured templates for your needs. You might have a template for "answer with citation" or "generate code diff given error." These become like functions you reuse. Keep them in version control. Document their expected behavior. This is how you build up a toolkit of proven context setups. Over time, your team can share and iterate on these, just as they would on shared code libraries.
- Use tools and frameworks that give you control. Avoid "just give us a prompt, we do the rest" solutions if you need reliability. Opt for frameworks that let you peek under the hood and tweak things, whether that's a lower-level library like LangChain or a custom orchestration layer you build. The more visibility and control you have over context assembly, the easier debugging will be when something goes wrong.
- Monitor and instrument everything. In production, log the inputs and outputs (within privacy limits) so you can analyze them later. Use observability tools (like LangSmith, etc.) to trace how context was constructed for each request. When an output is bad, trace back and see what the model saw: Was something missing? Was something formatted poorly? This will guide your fixes. Essentially, treat your AI system as a somewhat unpredictable service that you need to monitor like any other, with dashboards for prompt usage, success rates, and so on.
- Keep the user in the loop. Context engineering isn't just about machine-to-machine information; it's ultimately about solving a user's problem. Often, the user can provide context if asked in the right way. Think about UX designs where the AI asks clarifying questions or where the user can supply extra details to refine the context (like attaching a file, or selecting which section of the codebase is relevant). The term "AI-assisted" goes both ways: AI assists the user, but the user can assist the AI by supplying context. A well-designed system facilitates both. For example, if an AI answer is wrong, let the user correct it and feed that correction back into the context for next time.
- Train your team (and yourself). Make context engineering a shared discipline. In code reviews, start reviewing prompts and context logic too. ("Is this retrieval grabbing the right docs? Is this prompt section clear and unambiguous?") If you're a tech lead, encourage team members to surface issues with AI outputs and brainstorm how tweaking context might fix them. Knowledge sharing is key because the field is new; a clever prompt trick or formatting insight one person discovers can likely benefit others. I've personally learned a ton just from reading other people's prompt examples and postmortems of AI failures.
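For the knowledge-pipeline point above, here is a toy retrieval index. Word-overlap scoring stands in for real vector similarity, but the shape of the pipeline (index your documents, retrieve the best match for a query, inject it as context) is the same:

```python
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

class DocIndex:
    """Tiny in-memory index; a real one would use embeddings + a vector DB."""

    def __init__(self) -> None:
        self.docs: list[str] = []

    def add(self, doc: str) -> None:
        self.docs.append(doc)

    def retrieve(self, query: str) -> str:
        # Rank documents by shared words with the query (similarity stand-in).
        q = tokenize(query)
        return max(self.docs, key=lambda d: len(q & tokenize(d)))

index = DocIndex()
index.add("style guide: use snake_case for python functions")
index.add("deployment notes: rollouts happen on tuesdays")
context = index.retrieve("what case style for python function names?")
```

Swapping the scoring function for embedding similarity upgrades this sketch without changing the pipeline's interface, which is the point of treating knowledge sources as a core feature.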
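Prompt templates, as suggested above, can be treated literally like reusable functions: kept in one place, version-controlled, and failing loudly when a required field is missing. The template text below is an illustrative example, not a recommended wording:

```python
TEMPLATES = {
    "answer_with_citation": (
        "Answer the question using only the sources below.\n"
        "Cite the source ID for every claim.\n\n"
        "Sources:\n{sources}\n\nQuestion: {question}"
    ),
}

def render(name: str, **fields) -> str:
    """Fill a named template; raise a clear error if a field is missing."""
    try:
        return TEMPLATES[name].format(**fields)
    except KeyError as missing:
        raise ValueError(f"template {name!r} missing field {missing}")

prompt = render(
    "answer_with_citation",
    sources="[1] Release notes v2.3",
    question="What changed in v2.3?",
)
```

The loud failure matters: a silently empty `{sources}` slot produces a plausible-looking prompt that quietly invites hallucinated citations.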
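On the monitoring point, even a minimal structured log of what context each request actually saw makes bad outputs traceable. A sketch, with an in-memory list standing in for a real log sink:

```python
import json
import time

LOG: list[dict] = []  # Stand-in for a real logging/observability backend.

def log_request(request_id: str, context_parts: dict, output: str) -> None:
    """Record, piece by piece, exactly what the model was shown."""
    LOG.append({
        "id": request_id,
        "ts": time.time(),
        "context": context_parts,
        "output": output,
    })

log_request(
    "req-001",
    {"system": "You are a helpful assistant.", "retrieved_docs": ["doc A"]},
    "Here is the answer...",
)
# JSON round trip confirms the record is serializable for a real log sink.
record = json.loads(json.dumps(LOG[0]))
```

Keeping the context as named parts (system message, retrieved docs, history) rather than one flat string is what lets you later answer "was something missing?" per component.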
As we move forward, I expect context engineering to become second nature, much like writing an API call or a SQL query is today. It will be part of the standard repertoire of software development. Already, many of us don't think twice about doing a quick vector similarity search to grab context for a question; it's just part of the flow. In a few years, "Have you set up the context properly?" will be as common a code review question as "Have you handled that API response properly?"
In embracing this new paradigm, we don't abandon the old engineering principles; we reapply them in new ways. If you've spent years honing your software craft, that experience is extremely valuable now: It's what allows you to design sensible flows, spot edge cases, and ensure correctness. AI hasn't made those skills obsolete; it's amplified their importance in guiding AI. The role of the software engineer is not diminishing. It's evolving. We're becoming directors and editors of AI, not just writers of code. And context engineering is the process by which we direct the AI effectively.
Start thinking in terms of what information you provide to the model, not just what question you ask. Experiment with it, iterate on it, and share your findings. By doing so, you'll not only get better results from today's AI but also be preparing yourself for the even more powerful AI systems on the horizon. Those who understand how to feed the AI will always have the advantage.
Happy context-coding!
I'm excited to share that I've written a new AI-assisted engineering book with O'Reilly. If you've enjoyed my writing here, you may be interested in checking it out.
AI tools are quickly moving beyond chat UX to sophisticated agent interactions. Our upcoming AI Codecon event, Coding for the Agentic World, will highlight how developers are already using agents to build innovative and effective AI-powered experiences. We hope you'll join us on September 9 to explore the tools, workflows, and architectures defining the next era of programming. It's free to attend. Register now to save your seat.
