What Are Zero Shot and Few Shot Prompting?

There are countless ways to interact with generative AI LLMs. Three of the most popular are zero shot prompting, few shot prompting, and chain of thought prompting.

Each prompting technique is subtly different but can produce wildly different results. Additionally, each method requires a different level of work.

No matter how you use LLMs like ChatGPT or Gemini, or SLMs like Microsoft Phi, you will want to get the best possible answers. Learning the different prompting techniques will help you maximize your AI experiences. These concepts are also fundamental when using one of these models through an API or other automated workflow.

Read on to learn how to turbocharge your AI experiences with the proper prompting techniques.


A Primer on Some AI Terminology

Answering this question requires a quick background on a couple of AI concepts.

The first concept you need to know is “grounding.” You can ground an AI’s responses by giving it some basic facts and structure related to the task you want the language model to achieve. You might “ground” your AI’s responses by giving it a role (“you are an experienced programmer”), or you might give it a basic set of facts (“here’s the interview schedule”).

Grounding Prompt Template

You are an experienced [field of expertise] with over 10 years of experience in the industry. You have extensive knowledge of [relevant knowledge]. You are tasked with [your request].
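
If you are building prompts in code rather than typing them by hand, that template can be parameterized. Here is a minimal sketch in Python; the helper name and the sample field values are made up for illustration.

```python
# A minimal sketch of filling in the grounding template above.
# The helper name and field values are hypothetical; substitute your own.
def build_grounding_prompt(field: str, knowledge: str, request: str) -> str:
    return (
        f"You are an experienced {field} with over 10 years of experience "
        f"in the industry. You have extensive knowledge of {knowledge}. "
        f"You are tasked with {request}."
    )

prompt = build_grounding_prompt(
    field="software engineer",
    knowledge="REST APIs and authentication",
    request="writing a method to call a REST API using a key",
)
print(prompt)
```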

A “shot” is related to grounding: it gives the AI an example of an input paired with a successful output. AI professionals use the term “shot” synonymously with “example.”

Think about learning to drive. In that case, a “shot” or example might be a text description of how to drive in a video game. It’s not precisely what you want to learn (driving in real life), but it has enough general concepts to give you an idea. There are steering wheels, gas pedals, brakes, and more.

You can think of this concept in terms of musical instruments. A shot for an AI system might show it how to play the trumpet. Users could then ask it to take that learned knowledge and try playing the clarinet.

With these two key terms in mind, it’s time to get on with zero shot and few shot prompting!

What is Zero Shot Prompting?

Zero-shot prompting is when you do not provide the LLM with any examples of what you want it to do. Instead, you rely on its existing knowledge and training to generate a response.

At first glance, you might wonder why anyone would use this technique. Wouldn’t it always be better to provide the LLM with some guidance?

Not always.

Sometimes, you want the LLM to be as creative as possible. You want to avoid binding the LLM with previous examples or having your thoughts interfere with its internal logic. 

Zero shot prompting achieves that for you. It leaves the LLM free to respond in its own, unconstrained way.

Zero Shot Prompting Technique

The zero shot prompting technique is quite simple. Don’t include examples!

Here are four examples of zero shot prompts:

1. Can you give me five titles for my next blog post about Parakeets?
2. You're a professional software engineer. Can you tell me the difference between TypeScript and React?
3. Please write me a method to call a REST API using a key.
4. Please write me an email thanking a client for their continued business.

Notice that only the second prompt contains grounding (a role).

These prompts do not provide examples or indicate any expected response. The response you get is just the LLM talking without your guidance.
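
If you are sending a zero shot prompt through an API rather than a chat window, the call is just as bare: a single user message with no examples. Below is a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name is an arbitrary choice, and any chat-capable model or provider would work the same way.

```python
# A minimal zero shot call, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary choice; use any chat-capable model you have access to
    messages=[
        # No examples and no expected format -- just the bare request.
        {"role": "user", "content": "Can you give me five titles for my next blog post about parakeets?"},
    ],
)
print(response.choices[0].message.content)
```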

Limitations of Zero Shot Prompting

The limitations of this technique are perhaps self-evident. 

Zero shot prompting means you may not get your desired response. Since you have not provided the LLM with examples, the email you get thanking the client might not sound like you. The five blog post titles might not be anything that excites you.

It’s like hiring an employee and telling them to “go fix the website.” They’ll fix it, but it might look different from how you want it to look.

If you are building any automated system, these prompts have another obvious problem: non-uniformity of output. A zero shot prompt means you’ll receive output from the LLM in the format it wants, not the format you want.

Lastly, this technique almost always leads to the widest disparity in quality. Depending on the prompt and the model’s knowledge, the blog post titles you get back might be exciting or dull.

It all depends on which pathways happen to activate in the neural net, which results in inconsistent outputs that might not be ideal for your use case or application.

What is Few Shot Prompting?

Few shot prompting is a technique that grounds the LLM by including a few “shots,” or examples, along with your request.

The best way to explore few shot prompting is to consider the LLM as an employee. Suppose you were onboarding a new employee, and you asked them to come up with ideas for car names.

You would probably not just leave the assignment at that. 

Instead, you’d likely say, “I love the name Lamborghini Huracan; it sounds powerful.” Or, you could point out that a Mustang sounds fast or a Cybertruck sounds futuristic.

You’d probably point out a few examples that you like to help ground your new employee and give them a hint as to what you’re expecting from them.

In this example, you’ve used few shot prompting with your employee. You’ve given them a task and a few “shots” to help them understand what you want.

Few Shot Prompting Technique

This few shot prompting technique is straightforward to implement. To do so, provide some examples with your request.

To see the difference, consider this car-name request to Gemini, first written as a plain zero shot prompt.

[Screenshot: a basic zero shot prompt to Gemini asking for car model names]

Above, we can see that Gemini comes up with some decent names. But suppose I’m the boss, and I like Huracan, Mustang, and Cybertruck. Names like “Zephyr” or “Equinox” do not follow those same power and futurism themes.

Suppose we modify the request to Gemini to include those examples as “shots.”

[Screenshot: a few shot prompt to Gemini for car model names]

It is now generating names closer to the target. Tempest, Bolt, Apex, and Raze, in particular, feel very powerful and futuristic.

With some context about my preferences, the LLM now aims to create names I will actually like. Few shot prompting has constrained the model a bit with a handful of relevant examples. The number of examples is small, but even without advanced prompt engineering, it’s enough to get the job done!
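
If you were making the same request through an API instead of the Gemini web interface, the shots simply become part of the prompt text. Here is a rough sketch reusing the client from the earlier zero shot example; the wording of the examples is illustrative, not a required format.

```python
# A few shot version of the car-name request, reusing the `client` from the
# earlier zero shot sketch. The three liked names and the reasons are the shots.
few_shot_prompt = """Suggest five new car model names.

Here are examples of names I like and why:
- Huracan: sounds powerful
- Mustang: sounds fast
- Cybertruck: sounds futuristic

Follow the same power and futurism themes. Return only a numbered list of names."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary choice
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```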

Few Shot Prompting Tips

The biggest thing to get right with few shot prompting is how many examples to give. Give too few, and you’re back to zero shot prompting. Or, worse, the LLM will try to match those one or two examples too closely.

Here are the top three tips for few shot prompting:

  1. Give 2-3 Examples: The key word in “few shot prompting” is “few.” Refrain from constraining the model unnecessarily with too many examples; research suggests that additional examples beyond the first few offer diminishing returns.

  2. Give Your Prompt Structure: Explicitly call out the examples you are giving the model so it can add them to its context when generating a response. Separate your examples with proper spacing and a delimiter (for example, three dashes, ---, to end each example), as shown in the sketch after this list.

  3. Consider Your Example Order: LLMs, particularly older ones, can sometimes give more weight to the text they parse last. Therefore, consider placing your best example last within your few shot prompt.
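
Putting tips 2 and 3 into practice, here is a small sketch of a delimiter-structured few shot prompt. The input/output pairs are invented purely for illustration; the point is the spacing, the --- delimiter, and the best example sitting last.

```python
# A sketch of a delimiter-structured few shot prompt (tips 2 and 3 above).
# The example pairs are made up; "---" marks the end of each example.
examples = [
    ("Summarize: The meeting moved to Friday.", "Meeting rescheduled to Friday."),
    ("Summarize: Sales rose 8% in Q2 on strong demand.", "Q2 sales up 8% on strong demand."),  # best example last
]

prompt_parts = []
for example_input, example_output in examples:
    prompt_parts.append(f"Input: {example_input}\nOutput: {example_output}\n---")
prompt_parts.append("Input: Summarize: The release slipped a week due to a failed test.\nOutput:")

few_shot_prompt = "\n".join(prompt_parts)
print(few_shot_prompt)
```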

Few shot prompting should not feel burdensome. When creating few shot prompts, think about it as giving some quick context to the language model to help it successfully achieve more complex tasks – and don’t write it a book!

What is Chain of Thought Prompting?

Chain of thought prompting is a technique for getting language models to produce the correct answer on tasks where complex reasoning is necessary.

This concept first appeared in a paper in 2022 entitled “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.”

The idea behind this prompting technique is simple:

Suppose I ask you how to multiply 12 by 13.

That’s a zero shot prompt. Unless you have this memorized, you’d probably guess 150 or 160.

But what if I were to say to you that you could multiply 12 by 13 using the following process:

12 x 13 equals 10 x 13 + 2 x 13. 13 x 10 is 130, as you add the zero at the end. 2 x 13 is 26, so the answer is 156.

Using this same logic, multiply 11 by 15.

Your response might be that 10 x 15 is 150, and then we need to add 15 for 165.

This example is the essence of chain of thought prompting.

You can use prompt engineering to help the LLM by showing the steps necessary to arrive at the correct answer. This technique is usually the best way for large language models to produce the proper output in complex tasks.

Chain of Thought Prompting Technique

Implementing chain of thought prompting can be as simple as appending “think about this step by step and write out each step” to the end of your prompt.

Remember elementary school when your teacher would make you “show your work” on math tests? Yep, that same concept works for a language model.

Consider this example with GPT-3.5. Please note that more advanced models perform this kind of step-by-step reasoning by default. However, in older models or models without that logical reasoning built in, chain of thought prompting can help the model arrive at the correct answer.

Consider the following riddle: “Adam has two sisters. Each sister has two brothers. How many brothers does Adam have?”

The answer is one: each sister’s two brothers are Adam and one other brother, so Adam has one brother.

Here is GPT-3.5’s output.

[Screenshot: a zero shot prompt, with no chain of thought, asking GPT-3.5 the riddle]

We can see above that the model cannot work through the riddle and arrives at the incorrect answer.

What happens if we insert the most basic chain of thought prompting – asking the LLM to reason this out step by step?

[Screenshot: a zero shot chain of thought prompt that gets GPT-3.5 to solve the riddle properly]

Success! The model now arrives at the correct answer because we asked it to break the problem down into steps, and it reasons through each step properly.

Technically, the above technique is “zero shot chain of thought prompting” because we did not provide any examples but requested the LLM to follow a chain of thought to arrive at its conclusion. As you may guess, you can combine few shot prompting with chain of thought to arrive at a more robust solution.
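
In code, zero shot chain of thought is just the original question with the step-by-step instruction appended. Here is a minimal sketch using the riddle above and the same assumed OpenAI client and model as in the earlier examples.

```python
# A zero shot chain of thought sketch: the riddle with the step-by-step
# instruction appended, reusing the `client` from the earlier sketches.
riddle = "Adam has two sisters. Each sister has two brothers. How many brothers does Adam have?"
cot_prompt = riddle + " Think about this step by step and write out each step before giving your final answer."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary choice
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```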

Chain of Thought Prompting Tips

Chain of thought prompting is a powerful technique that helps you extract the most from your language models.

Here are three tips to make your chain of thought prompts as robust as possible.

  1. Start Small: Merely prompting the AI to think through its actions step-by-step is a simple chain of thought prompt that sometimes works well (see the example above). Try that, and if it doesn’t work, move on to more complex chain of thought sequences.

  2. Articulate Examples: If you need to provide an example chain of thought for the LLM, make sure it is clear and articulate. It should show all steps.

  3. Use a Few Examples: Don’t pile on examples when executing this type of prompt. Stick to the same 2-3 as with few shot prompting to give the language model an idea of what you want.

If your language model struggles with reasoning tasks, this technique is a great way to help it overcome those struggles and give you the correct answer.
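
To combine few shot prompting with chain of thought (tips 2 and 3 above), include at least one worked example that spells out its reasoning before posing the new question. Here is a sketch built around the multiplication walkthrough from earlier; the Q/A format and model choice are assumptions, not requirements.

```python
# A few shot chain of thought sketch: one worked example (the 12 x 13
# decomposition from earlier) followed by a new question, reusing `client`.
few_shot_cot_prompt = """Q: What is 12 x 13?
A: 12 x 13 equals 10 x 13 + 2 x 13. 10 x 13 is 130. 2 x 13 is 26. 130 + 26 = 156. The answer is 156.
---
Q: What is 11 x 15?
A:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary choice
    messages=[{"role": "user", "content": few_shot_cot_prompt}],
)
print(response.choices[0].message.content)
```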

Key Differences Between Few Shot, Zero Shot and Chain of Thought Prompting

For reference, here’s a quick rundown of each technique, including the critical differences between the methods above and where you’d want to use each technique.

| | Few Shot Prompting | Zero Shot Prompting | Chain of Thought |
| --- | --- | --- | --- |
| Definition | Providing the model with a few examples (shots) of the task to guide its response. | Asking the model to perform a task without providing any examples. | Encouraging the model to generate intermediate reasoning steps before the final response. |
| Usage | Common in tasks where examples can clearly illustrate the expected response pattern. | Used when no relevant examples are available or desired for guidance. | Utilized for tasks requiring complex reasoning and multi-step solutions. |
| Needs Examples? | Yes, typically 2-5 examples of the task. | No examples provided. | Examples may or may not be provided, but the emphasis is on intermediate steps. |
| Training Dependency | Relies on the model’s ability to generalize from the given examples to similar tasks. | Relies on the model’s pre-trained knowledge and ability to understand the task from the prompt. | Depends on the model’s capability to perform logical reasoning and multi-step processes. |
| Capability | Suitable for moderately complex tasks where patterns are learnable from a few examples. | Ideal for simple tasks or when the model has been extensively pre-trained on similar tasks. | Best for complex tasks requiring step-by-step reasoning or problem-solving. |
| Strengths | Provides clear guidance; improves accuracy by reducing ambiguity in the task. | Simplicity and faster setup, as no examples need to be prepared. | Enhances performance on tasks requiring logical reasoning; can explain the reasoning process. |
| Weaknesses | Requires careful selection of representative examples; can be labor-intensive. | Can lead to lower accuracy if the task is ambiguous or not well understood by the model. | May increase computation time due to the intermediate steps; more complex to implement. |
| Task Examples | Text classification, translation, and summarization with provided examples. | Simple Q&A and straightforward text generation tasks. | Mathematical problem-solving, logical reasoning tasks, and complex decision making. |

Zero Shot Prompting

The key difference with zero-shot prompting is that you do not need to provide any example data. You are leveraging the full power of the LLM’s creativity and giving it free rein to produce the output it deems best.

Advantages:

  • Less work to engineer the prompt

  • Maximum creativity from LLM

Use zero shot prompting when you have a vague task and no specific structure for what you want to see. You’ll often use zero shot prompts for idea generation, sentiment analysis, language translation, and getting answers to questions.

Few Shot Prompting

Few shot prompting’s key difference is that you provide example data, which helps “train” the LLM in real-time on what you’d like to see as output. You’re taking advantage of the LLM’s creativity and training while providing it with some examples and guardrails.

Advantages:

  • Ensures the output you receive is something that you want

  • Constrains the LLM output, helping to guard against hallucinations

Use few shot prompting when you want the LLM to achieve a task similar to tasks before it. You’ll use this prompt when building automated agents or asking for diagnoses. Real-world examples include customer support automation, interpreting medical diagnoses, and code generation (that may have to follow a specific format).

Chain of Thought Prompting

Lastly, the key difference of chain of thought prompting is that it helps the LLM reason through the steps to arrive at a correct answer. You can combine this technique with either zero shot or few shot prompting.

Advantages:

  • Helps the LLM arrive at the right answer

  • Helps debug issues since you can see what the model is thinking when generating the answer

Typically, you’d use chain of thought prompting in two cases. The first is when you want confidence that the model has answered correctly. You can manually verify each step to see if it has gotten anything wrong. The second common use case is in educational applications where displaying both the answer and process is important.

Final Thoughts

Prompting techniques substantially impact the quality of output that the LLM generates. Including a few examples can be the difference between output that you or your customers love and output that misses the mark.

Zero shot prompting is often an acceptable option when interacting with an AI agent, like ChatGPT, and you can ask follow-up questions. It’s usually fine to ask for some ideas for a blog post because you’ll be able to ask more questions later.

For further learning on the topic, consider subscribing to an AI newsletter (particularly “Patent Drop” for everything technical in AI and tech) or investing in some books on AI.

For less flexible scenarios like when an AI agent is part of a workflow, you’ll want to ground your AI with few shot prompting so it has an idea of what to provide. If you don’t, your agent might not be as helpful as you’d like it to be.

Experiment with your particular language model to find the right amount of guidance it needs to achieve the results you want. Most users will find that few shot prompting gives them the results they want while still letting the AI do the bulk of the work.
