Part 1: Prompt Engineering: The Art of Delegating to Generative AI
Ever wondered why some people get spot-on, helpful responses from AI, while others end up with vague or confusing answers? It’s not just luck—it’s the power of good prompting. The way you ask the AI to do something matters—a lot.
This is part one of a three-part series where we’ll explore what prompt engineering is and the different techniques you can use to guide AI effectively. Part two will cover the constituents of a prompt. In part three, we’ll dive into how to create great prompts—with practical tips and examples to help you get the most out of AI.
What is prompt engineering?
So, what exactly is prompt engineering? It’s the art of telling an AI model what you want it to do. But why is it called an art—and yes, also a science? Because the way you frame your request can completely change the quality, accuracy, and usefulness of the response.
To understand why prompting is such a big deal, you first need to know how these AI models actually work.
The popular chat companions we’ve all seen, like ChatGPT, Gemini, and Hey Pi, have powerful AI models running in the background. For example, ChatGPT uses GPT-4, Gemini runs on Gemini 1.5, and Hey Pi uses the Inflection 2.5 model. These models are trained on massive amounts of text data, making them incredibly good at predicting the next word in a sentence—just like how your phone suggests the next word when you’re typing a message.
But unlike your phone’s basic autocomplete, these models are far more advanced. They can generate entire responses, write essays, solve problems, and even have flowing conversations. And the way you prompt them—the words you use, the details you include—makes all the difference.
If you’ve interacted with any of these chat companions, you may have been surprised by their remarkably human-like responses. That’s possible because of the large amounts of data they have been trained on (you could say almost everything that is publicly available on the internet), which teaches them to interpret and produce human language. By ingesting billions of sentences, LLMs learn sentence structure, grammar, and semantics, as well as idioms, colloquialisms, conversational norms, and contextual associations (e.g., "apple" could mean the fruit or the tech company, depending on the sentence).
We humans are naturally good at picking up on context when we talk to each other. It’s something we do without even thinking. With just a few words or a familiar phrase, we instantly know what the other person means. Our brains are great at building connections and storing memories, which helps us quickly understand the context of a conversation.
For example, if a friend says, "Same time tomorrow?" you don’t need any extra details—you know they’re talking about the coffee catch-up you just had. Your brain automatically fills in the missing pieces based on your shared experience. This ability to pick up on context makes our conversations smooth and meaningful, even when we’re not spelling everything out.
Why are good prompts so important?
Machines, unlike humans, don’t have this natural skill of picking up on context unless it is stated. Large Language Models (LLMs), like the ones behind ChatGPT, depend entirely on the prompt you give them to know what’s going on. Unlike us, LLMs don’t have personal experiences or true memories. Instead, they rely entirely on the prompt and the conversation history to build context. Some AI systems (like chatbots with memory capabilities) can retain bits of information during a conversation or even across sessions. However, this "memory" is artificial and limited: it is based on stored data rather than true experiential recall.
In longer conversations, LLMs can lose track of earlier details once the context window is exceeded, unlike humans, who remember them more reliably. That’s why clear and detailed prompts matter. A good prompt works like a quick reminder, helping the model catch on to what you’re talking about—even if it has 'forgotten' parts of the earlier conversation. The clearer your prompt, the better the model can follow along and give useful responses.
So how does a prompt help a model build the right context?
When a model learns a language, it doesn’t actually understand the meaning of words the way we do. Instead, it learns the connections between them based on how often they appear together. For example, if you say ‘coffee,’ the model knows that words like ‘mug,’ ‘caffeine,’ or ‘morning’ are more likely to follow than something random like ‘giraffe.’ It makes these predictions because it has seen these word associations countless times during training.
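The intuition behind these word associations can be sketched with a toy co-occurrence count. The tiny corpus below is made up, and real models learn far richer statistics than raw counts, but the principle is the same: words seen alongside ‘coffee’ score higher than unrelated ones like ‘giraffe’.

```python
from collections import Counter

# A made-up miniature corpus; real models ingest billions of sentences.
corpus = [
    "coffee in a mug every morning",
    "coffee and caffeine in the morning",
    "the giraffe ate leaves",
]

# Count how often each word appears in the same sentence as "coffee".
cooccur = Counter()
for sentence in corpus:
    words = sentence.split()
    if "coffee" not in words:
        continue
    for w in words:
        if w != "coffee":
            cooccur[w] += 1

print(cooccur["morning"])  # seen with "coffee" twice
print(cooccur["giraffe"])  # never seen with "coffee"
```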
When you give the model a prompt, you’re steering it in a specific direction. The words you use act like breadcrumbs, guiding the model toward the right context. It then generates its response by picking the most likely next words based on everything you’ve said so far. As the conversation continues, it keeps building on this context, using each response as a reference point for the next one.
Imagine you use the word ‘bat’ in a conversation. Without any context, the model doesn’t know if you’re talking about the flying mammal or the cricket bat. But if your prompt says, ‘He swung the bat and hit a six,’ the model instantly knows that you’re referring to cricket. On the other hand, if you say, ‘The bat flew out of the cave,’ it knows you’re talking about the animal.
This is why being clear and specific with your prompt makes a big difference. For example, if you simply say, ‘Tell me about bats,’ the model might give you a mix of information about both the animal and the sports equipment. But if you specify, ‘Tell me about bats used in cricket,’ you’re guiding the model to stick to the right context.
So, by carefully choosing your words, you’re helping the model build and stick to the right context, making the conversation feel smooth and meaningful.
But context alone isn’t enough—you also need to be clear about what you want.
Think of it like delegating a task. When you ask someone to do something, you don’t just say, ‘Hey, do this,’ and walk away. You’re usually specific. You explain what you want, share a few details, and sometimes even give examples to make sure they get it right.
With humans, this works pretty well because we naturally pick up on context and fill in the blanks. If you tell a colleague, ‘Create a report like last month’s,’ they’ll probably know exactly what you mean—even without detailed instructions. That’s because they can lean on their memory, experience, and common sense.
But AI doesn’t have that luxury. It only knows what you tell it, nothing more. If you’re vague or unclear, it can’t read between the lines. It will just do its best with the limited information it has—which might not be what you had in mind.
That’s why prompting an AI is a bit like delegating to someone who has no prior knowledge of the task. You have to be clear, direct, and sometimes extra detailed to get the result you want.
Now, just like there are different ways to give instructions to people, there are also different ways to prompt an AI.
Prompt engineering techniques
Prompting isn’t just about typing a question or giving a command—it’s a blend of art and science. The science lies in understanding how the model processes information, while the art comes from the creativity and precision you use to frame your request. Well-crafted prompts can guide the model’s thinking, improve accuracy, and even spark creative responses.
So, how do you do it? Let’s explore the different ways you can prompt an AI—from straightforward instructions to more advanced techniques that bring out its full potential.
Zero-shot prompting
Zero-shot prompting is when you ask the model to perform a task without giving it any examples or prior context. You’re relying entirely on the model’s pre-existing knowledge to figure out what you want. It’s like asking someone to cook a dish they’ve never made before, hoping they’ve read enough recipes to get it right.
Example:
You type: “Explain how photosynthesis works.”
The model responds with a basic explanation based on what it already knows—no examples or extra context needed.
Few-shot prompting
Few-shot prompting is when you give the model a few examples before asking it to generate a response. This helps the model understand the format, tone, or style you’re aiming for. It’s like showing someone a few sample cake designs before asking them to bake one.
Example:
You say: “Here are two polite email replies. Now, write one for this situation.”
Example replies:
“Thank you for your inquiry. We will get back to you shortly.”
“We appreciate your feedback. Our team is reviewing it.”
Then, you add:
“Now, write a reply for a customer requesting a refund.”
The model uses the previous examples as a reference and creates a polite, professional reply in the same style.
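The assembly step can be sketched in code. This is a minimal illustration of how a few-shot prompt might be built as plain text before being sent to a model; `build_few_shot_prompt` is a hypothetical helper name, not part of any particular API.

```python
# The sample replies from the example above.
EXAMPLES = [
    "Thank you for your inquiry. We will get back to you shortly.",
    "We appreciate your feedback. Our team is reviewing it.",
]

def build_few_shot_prompt(task: str, examples: list[str]) -> str:
    """Prepend sample outputs so the model can infer tone and format."""
    lines = ["Here are some polite email replies:"]
    lines += [f"- {ex}" for ex in examples]
    lines.append(f"Now, write a reply for this situation: {task}")
    return "\n".join(lines)

prompt = build_few_shot_prompt("a customer requesting a refund", EXAMPLES)
print(prompt)
```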
Chain-of-thought (CoT) prompting
Chain-of-thought prompting encourages the model to think through the problem step by step before giving the final answer. It’s like asking someone to explain how they solved a puzzle rather than just telling you the solution. This approach improves reasoning and accuracy, especially for complex tasks.
Example:
You ask: “What is 15% of 240? Explain the steps.”
Model response:
Instead of just giving the answer (“36”), the model walks you through it:
“First, find 10% of 240, which is 24. Then, find half of that, which is 12. Add them together: 24 + 12 = 36.”
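The arithmetic in this walkthrough can be checked directly. The sketch below mirrors the model's reasoning steps in code; the "Let's think step by step" cue is a commonly used phrasing for eliciting chain-of-thought, though results vary by model.

```python
def fifteen_percent(value: float) -> float:
    """Mirror the model's step-by-step reasoning: 15% = 10% + 5%."""
    ten_percent = value * 0.10        # step 1: 10% of 240 is 24
    five_percent = ten_percent / 2    # step 2: half of that is 12
    return ten_percent + five_percent # step 3: 24 + 12 = 36

# A typical chain-of-thought cue appended to the question.
cot_prompt = "What is 15% of 240? Let's think step by step."

print(fifteen_percent(240))  # 36.0
```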
Meta prompting
Meta prompting is when you ask the model how to prompt it properly. It’s like saying, “Hey, what’s the best way to ask you this question?” This technique helps you write clearer, more effective prompts by letting the model guide you.
Example:
You type: “How should I prompt you to get a detailed summary of AI trends?”
The model might respond with:
“Ask for a breakdown by industry, key players, and future predictions. You can also specify a timeframe, like ‘from 2020 to 2024.’”
Self-consistency prompting
Self-consistency prompting is when you ask the model to generate multiple answers to the same question and then pick the most consistent or reasonable one. It’s like asking three different people for directions and going with the most common route they suggest.
Example:
You ask: “Why do plants wilt?”
The model generates three possible explanations:
Lack of water.
Too much sun.
Poor soil quality.
It then evaluates them and concludes:
“Lack of water seems the most likely based on common symptoms.”
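In its usual formulation, self-consistency samples the model several times (at a nonzero temperature) and keeps the answer that appears most often. A minimal sketch of that voting step, with hard-coded sample answers standing in for real model outputs:

```python
from collections import Counter

# Stand-ins for answers sampled from the model at nonzero temperature.
sampled_answers = [
    "Lack of water.",
    "Too much sun.",
    "Lack of water.",
]

def most_consistent(answers: list[str]) -> str:
    """Pick the answer the model produced most often."""
    return Counter(answers).most_common(1)[0][0]

print(most_consistent(sampled_answers))  # "Lack of water."
```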
Prompt chaining
Prompt chaining is when you break down a complex task into smaller prompts, with each one building on the previous result. It’s like giving step-by-step instructions instead of dumping everything at once.
Example:
You want the model to create a business plan, so you do it in steps:
“Suggest three business ideas in the food industry.”
“For each idea, describe its target market.”
“Now, write a pitch for the most promising idea.”
By chaining prompts, you get a more detailed and refined output.
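This chain can be sketched in code, with each step's output spliced into the next prompt. `ask_model` here is a placeholder that just echoes its input; in practice it would call whatever LLM API you use.

```python
def ask_model(prompt: str) -> str:
    # Placeholder: echoes the prompt instead of calling a real model.
    return f"<model reply to: {prompt}>"

# Each step feeds the previous result into the next prompt.
step1 = ask_model("Suggest three business ideas in the food industry.")
step2 = ask_model(f"For each of these ideas, describe its target market:\n{step1}")
step3 = ask_model(f"Write a pitch for the most promising idea below:\n{step2}")
```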
Tree of Thoughts (ToT)
Tree of Thoughts is when the model explores multiple trains of thought before making a final decision. It branches out into different reasoning paths and evaluates them, making it great for creative or complex tasks.
Example:
You ask:
“Suggest three ways to reduce office waste. List pros and cons for each. Then, recommend the best one.”
The model generates three solutions:
“Reduce paper usage: Eco-friendly but requires digital training.”
“Promote recycling: Good for sustainability but needs monitoring.”
“Switch to reusable supplies: Effective but costly initially.”
It then picks the most practical one based on the pros and cons.
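The evaluation step can be sketched as branching into candidates, scoring each, and keeping the best. The scores below are invented purely for illustration:

```python
# Invented scores for each branch of reasoning.
branches = {
    "Reduce paper usage": {"pros": 2, "cons": 1},
    "Promote recycling": {"pros": 2, "cons": 1},
    "Switch to reusable supplies": {"pros": 3, "cons": 1},
}

# Keep the branch with the best pros-minus-cons balance.
best = max(branches, key=lambda b: branches[b]["pros"] - branches[b]["cons"])
print(best)
```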
RAG (Retrieval-Augmented Generation)
RAG is like Googling something before answering. The model retrieves external information (from a database, the web, or another data source) and then uses it to generate a more accurate and up-to-date response.
Example:
You ask: “What are the latest rules for online payments in India?”
The model first fetches the most recent regulations and then uses that information to generate the response. This makes the answer more reliable and current.
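A toy sketch of the retrieve-then-generate pattern: a keyword-overlap search over an in-memory list stands in for a real retriever (a search index or vector store), and the function simply assembles the augmented prompt rather than calling a model.

```python
# A made-up in-memory "knowledge base" for illustration only.
DOCS = [
    "RBI guidelines require two-factor authentication for online payments.",
    "Photosynthesis converts sunlight into chemical energy.",
]

def retrieve(query: str, docs: list[str]) -> list[str]:
    """Keyword overlap as a crude stand-in for semantic search."""
    words = set(query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def answer_with_context(query: str) -> str:
    """Assemble the retrieved snippets into an augmented prompt."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = answer_with_context("What are the rules for online payments?")
print(prompt)
```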
Reflexion prompting
Reflexion prompting is when you ask the model to review and improve its own answer. It’s like asking someone to read their essay again and refine it. This leads to more thoughtful and detailed responses.
Example:
You type: “Explain how gravity works.”
After the initial answer, you follow up with:
“Now, review your answer and add more real-world examples.”
The model goes back, identifies gaps, and expands the explanation with examples like how gravity causes tides or keeps satellites in orbit.
ReAct (Reasoning + Acting)
ReAct is a two-step process where the model first reasons through the problem, then takes action (like retrieving information or performing calculations) before giving the final answer. It combines thinking and doing.
Example:
You ask: “Find the current population of Japan. Then, calculate its population density.”
The model first retrieves the population from external data, then uses that figure in the calculation. This makes it effective for multi-step or fact-based tasks.
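A toy version of the loop might interleave 'Thought' and 'Observation' steps like this. The lookup table stands in for a real tool call, and the figures in it are placeholders rather than verified data:

```python
# Placeholder "tool" data standing in for an external lookup.
FACTS = {"population_japan": 125_000_000, "area_japan_km2": 377_975}

def react_density() -> list[str]:
    """Interleave reasoning with actions, recording a trace."""
    trace = []
    trace.append("Thought: I need Japan's population and area.")
    pop = FACTS["population_japan"]   # Action: look up population
    trace.append(f"Observation: population = {pop}")
    area = FACTS["area_japan_km2"]    # Action: look up area
    trace.append(f"Observation: area = {area} km^2")
    density = pop / area              # Action: compute density
    trace.append(f"Answer: about {density:.0f} people per km^2")
    return trace

trace = react_density()
print("\n".join(trace))
```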
Now, while we’ve looked at these prompting techniques individually, the real magic often happens when you combine them. Just like when you delegate tasks to a team, you don’t always rely on a single approach—you mix and match strategies based on the situation.
For example, you might start by giving a few examples (few-shot prompting) to show the model what you expect, then guide it through a step-by-step reasoning process (chain-of-thought) to ensure it gets the details right. Or you could chain multiple prompts together to maintain context across a longer conversation, using RAG to pull in fresh, reliable information when needed.
The best part about prompting is that there’s no single right way to do it. You can mix and match different techniques, helping you guide the AI more effectively and get clearer, more accurate answers.
So, that’s part one of our journey into prompting! We’ve explored how AI models understand context and the different ways you can guide them with clever prompts—whether it’s through clear instructions, step-by-step reasoning, or even chaining prompts together.
But knowing the techniques is only half the game. In part two, we will look at what a prompt consists of and in part three, we’ll dive into the real fun part—learning how to craft great prompts. You’ll see how tiny tweaks in wording can completely change the AI’s response, and we’ll share some practical tips and tricks to help you get exactly what you want from your prompts.
So, stay tuned; it’s about to get even more interesting!
By Rajashree Rajadhyax