
The other night, I was trying to teach my eleven-month-old a very simple “game.” (Okay, technically she was just mashing blocks together, but let me pretend it was a game.)

I kept saying: “Put the blue block on top of the red one.” She looked at me, drooled, and proceeded to throw the red block across the room.

I tried again. This time slower. Then with hand gestures. Then, with exaggerated cheerleading. Nothing worked. Until I realized the real problem wasn’t my instruction.

It was the setup. The blocks were too far apart. The red one was on the floor. She couldn’t even see them at the same time. Once I put the blocks together, she figured it out instantly.

It wasn’t about the phrasing. It was about the context. And that, in a nutshell, is AI in 2025.

The Internet Thinks AI = Clever Prompts

If you spend five minutes on LinkedIn or X, you will see endless posts with “Top 100 ChatGPT Prompts That Will Change Your Life.”

A few of them work. Most don’t. Prompts are like magic spells: say the wrong word, and suddenly your “AI doctor” prescribes chamomile tea for a broken leg.

We have made prompts the star of the show. But here’s the truth: prompts are overrated. The real superpower is not what you type. It’s the environment you build around the model.

The myth of the perfect prompt

I will be blunt: there is no perfect prompt.

Sure, prompt engineering has its place. You can assign roles (“You are a Python security auditor”), give examples, set constraints, and even force chain-of-thought reasoning.

These tricks work… sometimes. But prompts are brittle.

Change a single word, and suddenly the model spirals into nonsense. Try the same prompt tomorrow, and the output may differ because the model’s randomness kicks in. Share the prompt with a teammate, and they’ll use it differently, with different results.

I have lived this firsthand. Early on, I spent hours trying to craft the “perfect prompt” for an app that predicts blood-sugar levels based on what you eat. Eventually, I realized I was solving the wrong problem.

Without access to real user data (past sugar levels, dietary preferences, exercise routines, etc.), the AI was basically guessing patterns based on static knowledge. No amount of prompt tweaking could fix that.

That’s why prompt engineering alone feels like playing the lottery. Sometimes you win, most of the time you don’t.

What Context Engineering Actually Means

Think of it as assembling everything the model needs to generate accurate answers:

#1 System prompts

These are the basic rules the model follows before it starts answering. They shape its “personality” and behavior, like telling it to be a teacher, a lawyer, or a travel guide.

This ensures the AI doesn’t just talk randomly but sticks to the role you have set.

Let’s say, for an AI travel concierge, the system prompt might say: “You are a travel expert helping people plan vacations, always responding in a friendly and professional tone.”
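In code, a system prompt is usually just the first message in the conversation. Here’s a minimal sketch using the common role/content message convention (the helper name and messages are illustrative, not any specific vendor’s API):

```python
# A minimal sketch: the system prompt is prepended to the conversation
# so it governs every reply the model produces.
def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Return a chat message list with the system prompt first."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    "You are a travel expert helping people plan vacations, "
    "always responding in a friendly and professional tone.",
    "Plan me a weekend in Paris.",
)
```

The key design point: the system prompt is set by you, not the user, so the role survives across every turn of the chat.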

#2 RAG (retrieval-augmented generation)

This is the process of giving the model access to the most relevant information/data it needs at the moment, without overwhelming it with everything.

Think of it as knowing the right chapter in a textbook to find the answer, instead of reading the whole book. This keeps answers accurate and to the point.

For example, when you ask a travel chatbot ‘Are flights to Paris cheaper this week?’, RAG pulls real-time pricing data from airline databases instead of the model just guessing based on old information.

#3 Custom instructions

These are like house rules you set for the AI. They ensure the AI’s answers match your style, your company’s policies, or your workflow.

It’s how you make the AI “yours” rather than generic, like telling it to always sound professional but friendly in emails.

For example, the travel chatbot can be told to always recommend accessible activities for users who mention mobility needs, or always suggest hidden gem options for adventurous travelers.

#4 Structured data

This means giving the AI information in an organized way, such as a neatly labeled table or lists, instead of messy text.

The clearer the format, the easier it is for the model to follow along and produce reliable results. Imagine it like reading a clean recipe card instead of trying to cook from a crumpled page full of scribbles.

For example, hotel options are made available as a table listing price, rating, and amenities, so the AI can clearly compare and suggest the best-fit stays for the traveler.

#5 Memory

This is the AI’s ability to remember what happened before. Short-term memory keeps track of the last few lines of conversation so it doesn’t lose track mid-chat.

Long-term memory lets it recall details from earlier sessions, like remembering your food allergies or favorite topics. This makes interactions feel personal and consistent.

Example: If the traveler mentioned a love for French food in a past session, the AI travel chatbot will remember and prioritize French restaurants in Paris.

#6 Tools and APIs

These give the AI abilities beyond just writing words. With tools, it can check live data, connect to apps, or even complete tasks.

For example, the travel chatbot can check real-time Paris weather, book museum tickets, or send the trip itinerary to the traveler’s phone, all through integrated tools and APIs.

Without tools, the AI can only talk. With tools, it can actually get things done.

If prompt engineering is teaching the model to answer a question better, context engineering is ensuring the model has the textbooks, the notes, the calculator, and the exam rules in front of it.

For instance, imagine you are hungry and want to cook dinner.

Prompt engineering is like shouting a single instruction to the cook: “Make red sauce pasta, medium spicy!” You might get pasta, but the cook could guess the portion size, type of pasta, or even add ingredients you didn’t want. Sometimes it works, sometimes it doesn’t.

Context engineering is like handing the cook the full recipe, the ingredients you bought, your dietary notes, and even a picture of the dish you expect. Now the cook has everything they need to make the meal exactly the way you want it.

Practical tips for builders

So how do you actually apply context engineering as a PM, founder, or builder?

1. Stop over-optimizing prompts.

Don’t spend hours tweaking little words in a prompt. The main issue usually isn’t the wording. It’s that the model doesn’t have the right info to begin with. Instead of trying to be clever with phrasing, focus on giving it the right context.

2. Think in embeddings.

Embeddings are a way of turning data into math so the model can quickly search and find the most relevant pieces.

Imagine you have a giant library, and you want to find a specific book. You could walk every aisle, go through every book, and try to find the one you need. Or you could ask the librarian, who will tell you exactly where the book is.

Embeddings act like the librarian, and make searching easier and quicker. This makes answers faster and more accurate.

3. Evaluate, don’t just test.

Testing once is not enough. Evaluation means checking the AI across many situations: is it factually correct, does it follow rules, and is it helpful?

Think of it like quality-testing a car before release. You don’t just drive it once on a highway and call it safe. Instead, you test it across different conditions:

  • In rain, snow, and fog (does it handle edge cases?)
  • With full load, empty, and unbalanced weight (performance under stress)
  • At different speeds on various terrains (consistency across scenarios)
  • With sudden obstacles and emergency braking (rule adherence under pressure)

Only after systematic testing across all these conditions can you confidently say the car is reliable.

Similarly, evaluating an AI system means:

  • Factual accuracy: Testing across diverse topics—does it give correct information consistently?
  • Rule adherence: Does it follow constraints in normal requests and tricky edge cases?
  • Helpfulness: Does it actually solve user problems across different use cases?
  • Robustness: How does it handle conflicting instructions, ambiguous prompts, or unusual requests?

A single test pass doesn’t prove reliability. Comprehensive evaluation across varied scenarios does.

4. Design with metadata.

Metadata is extra information that helps guide the model. For example, telling the AI who the user is (a student, a doctor, a manager), or what format you want the answer in (a table, a short note, a long explanation).

It’s like putting clear labels on boxes so you know exactly what’s inside. Directions like these make the results more consistent.

5. Integrate the right tools.

On its own, the AI can only talk. But when connected with tools like databases, calculators, or APIs, it can actually do things.

Think of it like giving a worker not just instructions, but also the right equipment: a hammer, a drill, a measuring tape. Tools let the AI act, not just guess.

6. Focus on UX flow.

User experience (UX) flow is about how people interact with the system.

If the design makes it easy to capture context, like forms that ask the right questions or remembering clicks from last time, the AI can give better answers. It’s like a good waiter who remembers your favorite drink and uses that to improve service over time.

Why this matters for PMs

If you are a PM in 2025, you need to know this: the companies that treat prompts as the core product will fall behind. Prompts don’t scale. Context does.

A great AI product doesn’t rely on clever incantations. It relies on robust systems that make the model smarter than it actually is.

The winners will be the teams who:

  • Connect their models to real, dynamic knowledge.
  • Enforce policies and constraints through context.
  • Build memory and state into their systems.
  • Orchestrate tools around the model, not just chat into it.

That’s how you future-proof your AI work.

Stop playing the prompt lottery

The next time you see a viral “ChatGPT Prompt Pack,” remember that prompts are like hacks. They work until they don’t. Instead, focus on context.

Build systems where the model always has the right information at the right time. Stop prompting. Start engineering.

Your turn now. If you are building with AI, where are you still relying on prompts? And what context could you be engineering instead?

Share this with a PM friend who’s still chasing the perfect prompt.

That’s it for today
—Sid

How I can help you:

  1. Fundamentals of Product Management - learn the fundamentals that will set you apart from the crowd and accelerate your PM career.
  2. Improve your communication: get access to 20 templates that will improve your written communication as a product manager by at least 10x.
