
Prompt Engineering 101: How to Write Powerful Prompts


Often overlooked, prompt engineering is a game-changer when building production-ready enterprise-grade generative AI applications. Greg Brockman, President at OpenAI, emphasized the significance of prompts with a simple yet powerful tweet: “Prompting is the art of communicating eloquently to an AI.” And he’s spot on!

Why is prompt engineering so crucial, especially when developing private GenAI applications? The answer lies in the capabilities and challenges of models like GPT-4. Imagine GPT-4 as a vast reservoir of knowledge, rumored to be a mixture-of-experts system built on more than a trillion parameters. This state-of-the-art large language model (LLM) holds more information than any collection of human minds could. However, this immense knowledge base is a double-edged sword. Faced with a query, GPT-4 can delve into many potential answers, consulting various expert models for the most accurate response. This process, while thorough, can lead to an overload of information.

Here’s where the power of prompt engineering comes into play. It’s like guiding a searchlight in the vast ocean of GPT-4’s knowledge. We can direct GPT-4 to a ‘focus zone’ by crafting focused and precise prompts. This ensures that the responses are accurate and relevant to our specific query. It’s like having a conversational GPS for navigating the complex paths of generative AI.

In short, prompt engineering is not just a tool; it’s a vital skill for anyone venturing into the field of generative AI applications. It’s the key to unlocking the true potential of models like GPT-4, especially when creating applications that require a tailored and specific response. As we explore the SDKs for building GenAI applications, remember that a well-engineered prompt is your best ally.

Mastering Prompt Engineering Techniques

The world of prompt engineering has evolved rapidly since GPT-3 debuted in 2020. Among the many techniques that have emerged, some stand out for their effectiveness and popularity. This post delves into these prominent techniques, examining their applications in various real-world scenarios, especially in an enterprise setting.

Zero-Shot Prompting: This is the most straightforward technique. Here, you pose a single question to GPT and receive an immediate answer. No pre-context or example is provided; it’s a direct and uncomplicated approach. This is how most of us interact with ChatGPT – the commercial version.

Prompt
“Which is the easiest programming language to learn?”

Answer
“Python: Known for its readability, Python has a straightforward syntax that’s similar to English. It’s widely used for web development, data analysis, artificial intelligence, and scientific computing.”
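In code, zero-shot prompting is simply the bare question in a single user message, with no examples or prior context. A minimal sketch using the chat-message format most LLM APIs accept (the actual API call is omitted; the helper name is illustrative):

```python
def zero_shot_messages(question: str) -> list:
    """A zero-shot prompt: just the question itself, with no examples
    or prior context, in the common chat-message format."""
    return [{"role": "user", "content": question}]

msgs = zero_shot_messages("Which is the easiest programming language to learn?")
```

The resulting list can be passed directly as the `messages` argument of a chat-style completion endpoint.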

Few-Shot Prompting: Large language models exhibit impressive abilities in zero-shot learning, yet they often struggle with more intricate tasks under the zero-shot framework. To enhance performance in such cases, few-shot prompting is employed as a strategy for in-context learning. This involves including examples within the prompt, acting as a guide to refining the model’s output. These examples effectively set the stage for the model, conditioning it to produce more accurate responses for subsequent prompts.

Prompt

“Can you use the examples below to classify the hiring candidate?

Candidate 1: Good in programming, but a poor team player. Decision: Not Selected

Candidate 2: Average in programming, but a great communicator. Decision: Not Selected

Candidate 3: Good in programming and a great team player. Decision: ?”

Answer

Based on the criteria illustrated in your examples, the decision for Candidate 3, who is both good in programming and a great team player, would likely be:

Decision: Selected
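This structure is easy to generate programmatically: each labeled example conditions the model, and the final case is left unlabeled for it to complete. A small sketch (the `few_shot_prompt` helper is illustrative, not a library function):

```python
def few_shot_prompt(examples, new_case):
    """Assemble a few-shot classification prompt: each labeled example
    conditions the model, and the final unlabeled case is left for it
    to complete."""
    blocks = [f"Candidate: {desc}\nDecision: {label}" for desc, label in examples]
    blocks.append(f"Candidate: {new_case}\nDecision:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("Good in programming, but a poor team player.", "Not Selected"),
     ("Average in programming, but a great communicator.", "Not Selected")],
    "Good in programming and a great team player.",
)
```

Ending the prompt with a dangling `Decision:` nudges the model to answer in exactly the same label format as the examples.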

Five-Shot Prompting: An extension of few-shot prompting, but with five examples. Google notably used this approach to demonstrate Gemini Ultra’s edge over GPT-4. The comparison sparked debate, though: Google pitted Gemini Ultra’s CoT@32 (chain of thought with 32 samples) results against GPT-4’s five-shot results, and even then Gemini Ultra came out only slightly ahead. That speaks to the power of five-shot prompting.

Prompt

Task: Rewrite the following sentences in a more formal tone.
Original: Hey, can u meet me at 7?
Rewritten: Hello, are you able to meet me at 7 o’clock?
Original: btw, i finished the assignment.
Rewritten: By the way, I have completed the assignment.
Original: lol, that’s too funny 😀
Rewritten: That is quite amusing. 😊
Original: idk what to do about this
Rewritten: I am uncertain as to how to proceed with this matter.
Original: omg, did u see that?!
Rewritten: Oh my goodness, did you witness that event?
Original: This is funny, Lol!
Rewritten: ?

Answer

Rewritten: This is amusing, indeed. Laughing out loud!

Chain of Thought (COT) Prompting: Chain of Thought (COT) Prompting is extremely powerful. It guides AI models like GPT-4 to approach problems through a step-by-step reasoning process, much like a human would. This method transforms complex queries into simpler, logical steps, making the AI’s responses more accurate, detailed, and contextually relevant. By mimicking human-like problem-solving, COT Prompting significantly enhances the AI’s ability to understand and address nuanced or multifaceted questions.

In a “chain of thought” approach, the model:

  1. Breaks down the problem: It dissects it into smaller, more manageable components.
  2. Processes each component: The model then methodically processes each component, considering different aspects and possible outcomes.
  3. Explains reasoning: Crucially, the model articulates the reasoning behind each step. This transparency makes it easier to understand how the model arrived at its conclusion.
  4. Arrives at a conclusion: Finally, the model combines the insights from each step to arrive at a well-reasoned conclusion or solution.
Prompt

Solve the following problem using a chain of thought approach:

“If a farmer has 15 apples and gives away 8, then buys another 12, how many apples does the farmer have in the end?”

1. The farmer starts with 15 apples.
2. The farmer gives away 8 apples. So, we subtract 8 from 15, which is 15 – 8 = 7 apples.
3. Now, the farmer has 7 apples.
4. Then, the farmer buys 12 more apples. To find out how many apples the farmer has now, we add 12 to the 7 apples the farmer already has.
5. So, 7 apples + 12 apples = 19 apples.
6. Therefore, after all these transactions, the farmer has 19 apples.

Question: “A car dealer has 30 cars at the start of the week. If he sells 10 cars on Wednesday, 8 cars on Thursday, and purchases 13 cars on Friday, how many cars will he have by the end of the week?”

Answer

To solve this problem using a chain of thought approach, let’s break it down step by step:

1. Initial Number of Cars: The car dealer starts the week with 30 cars.
2. Cars Sold on Wednesday: On Wednesday, the dealer sells 10 cars. This means we subtract 10 from the initial number. So, 30 – 10 = 20 cars remaining after Wednesday.
3. Cars Sold on Thursday: On Thursday, 8 more cars are sold. We subtract these from the remaining number. So, 20 – 8 = 12 cars remaining after Thursday.
4. Cars Purchased on Friday: The dealer purchased 13 more cars on Friday. We add these to the current number. So, 12 + 13 = 25 cars.

Therefore, by the end of the week, the car dealer will have 25 cars.
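Mechanically, a CoT prompt is just a step-by-step instruction plus a worked example prepended to the new question. A minimal sketch (the helper name and preamble wording are illustrative):

```python
COT_PREAMBLE = "Solve the following problem using a chain of thought approach:"

def cot_prompt(worked_example: str, question: str) -> str:
    """Prepend the step-by-step instruction and one worked example so the
    model imitates an explicit reasoning chain before answering."""
    return f"{COT_PREAMBLE}\n\n{worked_example}\n\nQuestion: {question}"

# Sanity-check the arithmetic the car-dealer example expects: 30 - 10 - 8 + 13
assert 30 - 10 - 8 + 13 == 25
```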

ReAct (Reason and Act): ReAct is a framework that prompts Large Language Models (LLMs) to generate reasoning traces and task-specific actions in an interleaved manner.

This process of generating reasoning paths enables the model to develop, monitor, and revise its action plans and adapt to unforeseen situations. The action phase allows the model to connect with external sources, such as knowledge databases or different environments, to collect additional data.

This ReAct system enhances the ability of LLMs to interact with outside tools, acquiring extra information that contributes to more accurate and factual answers.

Prompt

Using the ReAct framework, determine the release year of the Tesla Model S.

Thought 1: I need to find out when the Tesla Model S was first released to the public. The best source for this information would be the manufacturer’s history or a reliable automotive database.

Act 1: Search[Tesla Model S release year]

Obs 1: The search results indicate that the Tesla Model S was first released in 2012.

Thought 2: Now that I have the initial release year, it would be prudent to check for any significant updates or new generations of the Model S that could be considered as a re-release or update.

Act 2: Search[Tesla Model S generation updates]

Obs 2: Further search results show that while there have been updates to the Tesla Model S, the model name has remained the same, and there have been no new generations labeled with a different year.

Thought 3: With the information gathered, it’s clear that the Tesla Model S was first released in 2012, and despite updates, no new generation has been named differently.

Act 3: Conclude[Tesla Model S first release year]

Question

Using the ReAct framework, determine which material is best for insulating a house.

Answer

Thought 1: To identify the optimal insulation material for a house, I need to consider the properties that define good insulation, such as thermal resistance, cost, environmental impact, and durability.

Act 1: Search[best insulation materials for homes]

Obs 1: Initial search findings suggest popular insulation materials include fiberglass, cellulose, spray foam, and mineral wool, each with distinct advantages and R-values, which measure thermal resistance.

Thought 2: I need to compare the R-values of these materials to determine which provides the best thermal resistance. However, the effectiveness of insulation is not solely dependent on R-value.

Act 2: Search[compare R-values of insulation materials]

Obs 2: The comparison indicates that spray foam generally offers the highest R-value per inch among the materials listed. Fiberglass and mineral wool also provide good thermal resistance, while cellulose is noted for its eco-friendly nature.

Thought 3: Besides R-value, I should consider other factors like the material’s cost, environmental friendliness, ease of installation, and suitability for different climates and house designs.

Act 3: Search[insulation material cost-effectiveness and environmental impact]

Obs 3: The search reveals that spray foam provides excellent insulation but is more expensive. Cellulose is cost-effective and environmentally sustainable due to its recycled content. Fiberglass and mineral wool are affordable and widely used, but their environmental and health impacts are mixed.

Thought 4: Balancing thermal efficiency with environmental sustainability and cost, cellulose emerges as a strong candidate, especially for homeowners seeking an eco-friendly option. Spray foam could be preferable when maximum insulation per inch is required, and budget is less constrained.

Act 4: Conclude[Optimal insulation material choice]

Based on the gathered information and considering a balance of effectiveness, cost, and environmental impact, cellulose is recommended as the best material for insulating a house. However, the final decision should account for the specific needs and constraints of the project.
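The Thought → Act → Obs cycle above can be driven by a simple controller loop. A bare-bones sketch, where `model` and the `Search` tool are stand-ins for a real LLM call and a real search backend:

```python
def react_loop(model, tools, question, max_steps=5):
    """A minimal ReAct controller: the model proposes a thought and an
    action; tool actions (e.g. Search) return an observation that is
    appended to the transcript; a Conclude action ends the loop."""
    transcript = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        thought, action, arg = model(transcript)
        transcript += f"Thought {step}: {thought}\nAct {step}: {action}[{arg}]\n"
        if action == "Conclude":
            return arg, transcript
        observation = tools[action](arg)
        transcript += f"Obs {step}: {observation}\n"
    return None, transcript

# Scripted stand-in for a real LLM, so the loop can run offline.
script = iter([
    ("I should look up the release year.", "Search", "Tesla Model S release year"),
    ("The search says 2012; I can conclude.", "Conclude", "2012"),
])
answer, log = react_loop(
    lambda transcript: next(script),
    {"Search": lambda q: "The Tesla Model S was first released in 2012."},
    "When was the Tesla Model S released?",
)
```

In production the `model` callable would send the growing transcript back to the LLM at every step, which is what lets the model revise its plan based on each observation.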

Combining Techniques

The real magic happens when you blend these approaches. Pairing few-shot with CoT or ReAct elevates the effectiveness of your prompts. This hybrid approach guides GPT not just in how to think but also in producing outputs that mirror your provided examples. These combinations are becoming the gold standard among prompt engineering techniques, especially potent for those who’ve mastered the art of prompting.

You will never go wrong with CoT+FS (Chain of Thought + Few Shot)

Take, for instance, the Chain of Thought combined with Few-Shot. This technique is widely acclaimed by experts in the field, offering a balance of guided reasoning and example-based learning, leading to more accurate and contextually relevant outputs.

Few-Shot Example 1:

Question: How do I calculate the income tax for an annual salary of $50,000?
Chain of Thought:
1. Identify the tax bracket the salary falls into.
2. Determine the base tax amount for that bracket.
3. Calculate additional tax for the income exceeding the bracket’s lower threshold.
4. Add the base tax and additional tax for the total tax amount.
Answer: For a $50,000 salary, [insert specific tax calculation based on current tax laws and brackets].

Few-Shot Example 2:
Question: What is the tax on a salary of $75,000 with a deductible of $5,000?
Chain of Thought:
1. Subtract the deductible from the total salary to get the taxable income.
2. Identify the tax bracket for the taxable income.
3. Compute the base tax for that bracket.
4. Calculate the additional tax on income over the bracket’s lower threshold.
5. Add the base tax and additional tax for the final tax amount.
Answer: For a $75,000 salary with a $5,000 deductible, [insert specific tax calculation].

Prompt

“How do I calculate income tax for a salary of $90,000 with a deductible of $10,000?”

Answer

Chain of Thought:
1. First, reduce the salary by the deductible to find the taxable income ($90,000 – $10,000 = $80,000).
2. Determine the appropriate tax bracket for $80,000.
3. Calculate the base tax for this bracket.
4. Determine the additional tax on the income exceeding the lower limit of the bracket.
5. Sum the base tax and the additional tax to get the total income tax.
Answer: For a $90,000 salary with a $10,000 deductible, [insert specific tax calculation based on the identified bracket and current tax laws].
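Assembling a CoT + few-shot prompt follows directly from the template above: each example carries its own reasoning chain, and the final question is left with an empty chain for the model to fill in. A sketch (the helper name and example content are illustrative):

```python
def cot_few_shot_prompt(examples, question):
    """Each example carries its own reasoning chain, so the model learns
    both the output format and the step-by-step habit at once.
    examples: list of (question, [reasoning steps], answer) tuples."""
    blocks = []
    for q, steps, answer in examples:
        chain = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        blocks.append(f"Question: {q}\nChain of Thought:\n{chain}\nAnswer: {answer}")
    blocks.append(f"Question: {question}\nChain of Thought:")
    return "\n\n".join(blocks)

prompt = cot_few_shot_prompt(
    [("Tax on a $50,000 salary?",
      ["Identify the tax bracket.", "Compute base tax.", "Add tax on the excess."],
      "[bracket-specific calculation]")],
    "Tax on a $90,000 salary with a $10,000 deductible?",
)
```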

Emerging Innovative Techniques

The field is constantly evolving with new techniques. For example, I developed an SSR technique (Split, Summarize, and Rerank). This powers the YouTube Summarizer app (https://www.lyzr.ai/demos/), showcasing its efficacy in distilling and prioritizing information. From my experience with various YouTube summarizers, the SSR-powered one stands out for its performance.
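The post doesn’t detail SSR’s internals, but the name suggests a three-stage pipeline along these lines; the character-based chunking and the `summarize`/`score` callables below are illustrative assumptions, standing in for LLM-backed calls:

```python
def ssr(text, summarize, score, chunk_size=1000):
    """Split, Summarize, and Rerank: split the source into chunks,
    summarize each chunk, then rerank the summaries so the most
    relevant come first."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    summaries = [summarize(chunk) for chunk in chunks]
    return sorted(summaries, key=score, reverse=True)
```

For a YouTube summarizer, `text` would be the transcript, `summarize` an LLM summarization call per chunk, and `score` an LLM-backed relevance judgment.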


With its growing capabilities, prompt engineering is proving to be more versatile than we initially realized. With LLMs like Anthropic’s Claude and GPT-4 offering increasingly larger context windows, the need for complex reranking algorithms and RAG techniques diminishes. You can extract precisely the information you need by honing your prompt engineering skills and choosing the right techniques.

In summary, while numerous prompting methods are out there, focusing on the top techniques like CoT, ReAct, and their combinations with one-shot or few-shot will give you a solid starting point to explore their potential. Remember, the key lies in understanding and effectively applying these techniques to harness the full power of generative AI.
