What is Context Engineering? The new Vibe Coding

Andrej Karpathy coins a new term to succeed Prompt Engineering

Andrej Karpathy, the OG AI scientist, is back. After coining “vibe coding” — that phase when everyone just played around with AI tools without really knowing what they were doing — he’s now given the world a better, sharper term: Context Engineering.

It’s not a new idea, exactly. It’s just something that finally has a name. And once you understand what it means, you’ll realize: this is where the real work in AI happens. Not in the prompt. Not in the fancy model. But in everything you feed into the model before it opens its mouth.

So, What is Context Engineering?

Let’s break it down.

You already know what prompt engineering is. You write something like:

“Translate this email into Spanish”
or
“Summarize this article in bullet points”

That’s a prompt. It’s a command. It’s you telling the model what to do.

But a prompt like that leaves a lot unsaid:

  • What if the model doesn’t have enough background to actually do what you asked?
  • What if it doesn’t know what kind of tone you want?
  • Or that the article is part of a legal case, not just a random news piece?
  • Or that the email uses company-specific lingo that isn’t obvious from the prompt alone?

That’s where context engineering comes in.

It’s the job of setting the stage for the model. It’s preparing the whole situation — the facts, the tone, the intent, the previous messages, the examples, the history, even other tools the model can use — so that when the model finally reads your prompt, it actually has the right mental setup to do the job well.

In short:

Prompting is what you ask.
Context is what the model knows when you ask it.
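The difference shows up clearly in code. Here’s a minimal sketch, assuming an OpenAI-style list of role/content messages; `build_context` is a hypothetical helper written for illustration, not part of any library:

```python
# A bare prompt: the model gets only the command, no background.
bare_prompt = [
    {"role": "user", "content": "Translate this email into Spanish"}
]

def build_context(system, history, facts, user_prompt):
    """Assemble what the model knows before it sees the prompt.

    Each piece of background becomes a message placed ahead of the
    user's actual request. (Illustrative helper, not a real API.)
    """
    messages = [{"role": "system", "content": system}]
    messages += history                # earlier turns, raw or summarized
    for fact in facts:                 # retrieved docs, preferences, glossary
        messages.append({"role": "system", "content": f"Context: {fact}"})
    messages.append({"role": "user", "content": user_prompt})
    return messages

engineered = build_context(
    system="You translate business email into formal Latin-American Spanish.",
    history=[{"role": "user", "content": "Keep the product name 'AcmeFlow' untranslated."}],
    facts=["'EOD' in this company means 6pm local time, not midnight."],
    user_prompt="Translate this email into Spanish",
)
print(len(bare_prompt), len(engineered))  # → 1 4
```

Same final question in both cases; the engineered version just makes sure the model already knows the tone, the glossary, and the user’s earlier instructions when it reads it.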

Why is Context Engineering Different?

Because it’s systems thinking, not just wordsmithing. Prompt engineering is a local skill. You tweak the words until the model behaves. Context engineering is global. You think about the full system:

  • What information does the model need at this point?
  • What should it remember from before?
  • What should be left out to avoid confusion?
  • What other steps will happen after this?

It’s closer to designing software than writing prompts. You’re not just trying to get a single response — you’re building a system that works across many interactions, across time, across inputs. You’re teaching the model how to think, not just what to say.

Why It Matters Now

The term “ChatGPT wrapper” gets thrown around a lot — sometimes as a joke, sometimes as an insult. But that phrase misses the point. The apps that work best today don’t just throw a prompt at the model and hope for the best. They carefully construct the entire context window — that limited chunk of memory the model sees at each step — with surgical care.

That window might include:

  • Instructions on how to respond
  • Examples of similar tasks
  • Summaries of earlier conversations
  • Output from a database or web search
  • Notes about the user’s preferences
  • State from previous steps
  • A checklist or format the response must follow

Too little, and the model flails. Too much, and it gets overwhelmed or slow. Context engineering is the delicate craft of feeding the model just the right information — no more, no less.

It’s like packing a bag for a hike. Take too little and you’re lost. Take too much and you can’t move.
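That packing problem can be made concrete. A rough sketch of budget-aware context selection, using a crude 4-characters-per-token estimate (a real system would use the model’s actual tokenizer); `fit_to_budget` and the priority numbers are illustrative:

```python
def fit_to_budget(pieces, max_tokens=1000):
    """Keep the highest-priority context pieces that fit the window.

    `pieces` is a list of (priority, text) tuples; lower number means
    more important. Illustrative sketch, not a production tokenizer.
    """
    def est_tokens(text):
        return len(text) // 4 + 1  # crude estimate: ~4 chars per token

    kept, used = [], 0
    for priority, text in sorted(pieces, key=lambda p: p[0]):
        cost = est_tokens(text)
        if used + cost <= max_tokens:
            kept.append(text)
            used += cost
    return kept

pieces = [
    (0, "Instructions: respond as a concise support agent."),
    (1, "User preference: plain language, no jargon."),
    (2, "Summary of last session: user asked about billing."),
    (3, "Full transcript of last session..." + "x" * 8000),  # too big to pack
]
kept = fit_to_budget(pieces, max_tokens=200)
```

Here the full transcript gets dropped and its short summary survives: exactly the “take what you need, leave what weighs you down” trade-off.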

A Context Engineering Example

Imagine you’re building a tool that helps users review legal contracts. The user uploads a 10-page PDF and types:

“Is this contract fair for a freelance designer?”

A simple prompt-based system might just feed that PDF into the model with the user’s question, like:

“You are a legal expert. Read this contract and answer: Is it fair for a freelance designer?”

That might sort of work. But also:

  • What’s “fair”?
  • Does the model know what kind of freelance work is involved?
  • Does it know what the user cares about?
  • Does it know if this is a first-time client or a regular?
  • Is the model supposed to flag legal risks? Suggest edits? Summarize?

The model will guess. And guessing leads to bad results.

Context Engineering to the Rescue

Before the model sees anything, you carefully prepare the context window — the slice of memory the model has access to during that single call.

Here’s what you include:

System Instructions: “You are an AI contract reviewer specialized in freelance design contracts. You analyze legal documents and give practical advice in clear, simple language.”

User Profile: “User is a freelance UX designer. This is their first time working with this client. They care about short payment terms and maintaining IP rights.”

Previous Conversation: “User asked about protecting their design assets and how to negotiate payment terms in the past.”

Document Summary (not the full PDF):

  • Payment terms: Net 60
  • IP clause: All rights transferred to client
  • Termination clause: No termination allowed for the first 90 days
    (This summary was generated and refined in a prior step.)

Examples of Fair vs. Unfair Clauses:

  • Example: “Net 60” is often unfavorable; “Net 15–30” is typical for freelance work.
  • Example: “Transfer of all IP” is common but negotiable — flag if it lacks attribution.

Final Task Prompt:

  • “Based on the above, evaluate whether the contract is fair for the user. Be specific. If possible, suggest how to negotiate the risky clauses.”
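Stitched together, those sections become one long, carefully ordered input. A sketch of that assembly, assuming plain labeled text sections (the labels mirror the list above; the exact format is an illustrative choice):

```python
# Assemble the context window: instructions first, the task last,
# so the model reads the question with everything else in mind.
context_window = "\n\n".join([
    "SYSTEM: You are an AI contract reviewer specialized in freelance "
    "design contracts. You analyze legal documents and give practical "
    "advice in clear, simple language.",
    "USER PROFILE: Freelance UX designer; first time with this client; "
    "cares about short payment terms and maintaining IP rights.",
    "PRIOR CONVERSATION: User previously asked about protecting design "
    "assets and negotiating payment terms.",
    "DOCUMENT SUMMARY:\n"
    "- Payment terms: Net 60\n"
    "- IP clause: All rights transferred to client\n"
    "- Termination: none allowed for the first 90 days",
    "EXAMPLES:\n"
    "- 'Net 60' is often unfavorable; 'Net 15-30' is typical for freelancers.\n"
    "- 'Transfer of all IP' is common but negotiable; flag missing attribution.",
    "TASK: Based on the above, evaluate whether the contract is fair for "
    "the user. Be specific. Suggest how to negotiate risky clauses.",
])
```

Only this assembled string (not the raw 10-page PDF) goes to the model in that single call.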

That’s context engineering.

It’s invisible to the user. But it’s the reason the tool feels smart, helpful, and human — instead of generic, robotic, or flat-out wrong.

Without this level of context prep, you’re not building an AI product. You’re just sending text to an API and hoping for the best.

Also, a few of you might think this is just the system prompt. It’s not.

System Prompt ≠ Context Engineering

They’re related, but not interchangeable. Here’s the core difference:

System Prompt = One piece of the puzzle

The system prompt is like the intro paragraph to a model’s worldview. It tells the model how to behave in general:

“You are a helpful assistant.”
or
“You are a sarcastic movie critic from the 90s.”

It sets tone, voice, and basic boundaries. You give it once (at the start of the session or API call), and it influences everything that follows — kind of like telling a human: “From now on, speak like you’re in a courtroom.”

But that’s just the stage lights.

Context Engineering = The entire play

Context engineering is everything that goes into the model’s limited memory before the prompt lands. That includes:

  • The system prompt
  • The user profile
  • Prior conversation history
  • Examples
  • Summaries
  • External tool output
  • Metadata
  • Constraints, templates, rules, preferences
  • Even what you don’t show the model

It’s like being a director, scriptwriter, set designer, and stage manager all at once. You’re not just saying “Act professional.” You’re deciding which scene the actor walks into, who’s on stage with them, what happened in the last scene, and what props they can use.

Real-World Analogy:

A simple example to understand the difference:

  • System Prompt: “You’re a chef.”
  • Prompt: “Make me something delicious.”
  • Context Engineering: “You’re a French-trained chef cooking for a vegan allergic to nuts who hates mushrooms, has only 20 minutes, and just ran a marathon. Here’s what they ate yesterday. Here’s what ingredients are in the fridge.”

Not Just a Prompt, But a Plan

Karpathy put it clearly:

“You prompt an LLM to tell you why the sky is blue. But apps build contexts (meticulously) for LLMs to solve their custom tasks.”

This is the shift.

As AI apps get more complex, the real job isn’t just crafting clever prompts. It’s designing the whole system of context: what the model sees, how much it remembers, what tools it has access to, what guardrails shape its behavior, and how each step connects to the next.

If AI models are the engines, context is the fuel and the road. And building that road is what context engineering is all about.

The Future Runs on Context

Context engineering isn’t some optional skill. It’s the core of how powerful AI systems will work from here on. It’s not flashy. It’s not always visible. But it’s the quiet, behind-the-scenes discipline that turns AI from a toy into a tool — from a chatbot into a teammate.

The prompt is just the tip of the iceberg. Context is everything underneath that makes it possible. And now it finally has a name.


What is Context Engineering? The new Vibe Coding was originally published in Data Science in Your Pocket on Medium, where people are continuing the conversation by highlighting and responding to this story.
