A Guide on Prompt Engineering for Beginners

Are you curious about the potential of generative AI to enhance your work, but skeptical about the hype and unsure how to get the most out of it? One of the most powerful yet accessible ways to harness AI is through practicing the digital skill of prompt engineering – the process of designing effective prompts to get the most out of AI language models.

At its core, prompt engineering is about learning how to communicate with AI to direct it to perform useful tasks for you and your organization. By providing the right instructions, context, inputs, and output formats, you can steer large language models to generate relevant, insightful content and analysis to support your mission.

 


Elements of Effective Prompts

While prompting AI models is more an art than a rigid formula, understanding the key components will help you craft prompts to elicit the responses you’re looking for:

  1. Instruction: Clearly specify the task you want the AI to perform, whether that’s summarizing a long report, brainstorming campaign ideas, or analyzing donor data. Be direct and specific in your instruction.
  2. Context: Provide any relevant background info the AI needs to give a good response. This could include details about your non-profit’s focus area, target audience, past campaigns, etc. More context often leads to more relevant outputs.
  3. Input Data: Include the specific piece of text, data, or question you want the AI to process or respond to. Make sure the input is well-formatted and error-free.
  4. Output Indicator: Specify the desired format or structure for the AI’s response. This could be something like “List the top 3 ideas” or “Write a 100-word intro paragraph.” Setting expectations for the output helps get a relevant result.

While not every prompt needs all four elements, thinking through each one helps ensure you’re giving the AI model enough to work with.
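If you build prompts programmatically, the four elements map naturally onto a simple template. Here is a minimal sketch in Python; the element text is illustrative filler, and the placeholder stands in for your real input data:

```python
# A minimal sketch of assembling the four prompt elements into one prompt.
# The element text below is illustrative; adapt each piece to your own task.

instruction = "Summarize the report below in plain language for our board."
context = (
    "We are a food-security non-profit. The board wants highlights, "
    "not technical detail."
)
input_data = "<paste the report text here>"  # placeholder for your real input
output_indicator = "Write a 100-word summary followed by 3 bullet-point takeaways."

# Blank lines keep each element visually distinct for the model.
prompt = "\n\n".join([instruction, context, input_data, output_indicator])
print(prompt)
```

Keeping the elements as separate pieces also makes them easy to swap out individually as you iterate.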

 


Quick Tips for Designing Effective Prompts

As you start experimenting with prompts, keep these best practices in mind:

  1. Start simple.
    Begin with basic, straightforward prompts and gradually add more complexity and context. Building up prompt complexity over time as you gauge results is better than starting with an overly complicated prompt from the get-go.
  2. Be specific and direct in your instructions.
    Clearly communicate the core task you want the AI to perform. Avoid imprecise or contradictory language. The more focused and detailed the prompt, the better the output.
  3. Focus on the key information needed for the task at hand.
    While more context is often better, too much irrelevant or unnecessary detail can be counterproductive. Aim to give the AI just enough background to effectively address the task.
  4. Provide clear output indicators and format guidelines.
    If you have a vision for what you want the end result to look like, spell it out for the AI. The clearer you are on your desired output, the more likely you’ll get a good result.
  5. Emphasize the “do’s”, not the “don’ts”.
    Avoid getting caught up in what you don’t want. Focus your prompt language on the specific things you’re looking for the AI to accomplish or produce.
  6. Iterate and experiment.
    Crafting the perfect prompt is a trial-and-error process. If the first attempt doesn’t quite hit the mark, make adjustments and try again. Tweak the instructions, add more context, clarify the output format, and see how the results improve. A small sketch of one such iteration follows this list.
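To make the iteration tip concrete, here is a rough before-and-after sketch of one revision pass; the campaign details are invented placeholders, not a recommended template:

```python
# Illustrative only: two versions of the same prompt, showing how one round
# of iteration layers in context and an output indicator.

prompt_v1 = "Write a thank-you email to our donors."

prompt_v2 = (
    "Write a thank-you email to donors who gave to our spring clean-water "
    "campaign. Mention that we exceeded our $50,000 goal and that the funds "
    "will drill two new wells. Keep it under 150 words and end with a link "
    "placeholder for our impact report."
)
```

The second version adds context (which campaign, what was achieved) and an output indicator (length, closing element), exactly the kind of adjustments a second pass tends to introduce.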

By mastering the art and science of prompt engineering, we can all start tapping into the power of AI to work more efficiently, creatively and effectively towards our mission.

But this is just the tip of the iceberg! 

Continue reading our in-depth guide to learn cutting-edge prompt engineering techniques, see real-world examples from non-profits, and access plug-and-play prompts to get started.

 


Specific Prompt Engineering Methods and Frameworks

Prompt Engineering for Brainstorming or Coming Up with Multiple Solutions to a Problem



Zero-Shot Prompt Engineering

When to Use: Zero-shot prompts are useful when you want a language model to perform a task it has not been explicitly trained on, without providing any task-specific examples. Zero-shot prompts leverage the broad knowledge and capabilities the model has acquired during pretraining on a large corpus. They are particularly effective for tasks that are similar to those the model may have encountered during pretraining, or that can be easily expressed through natural language instructions.

Example:

  • ✅Correct Application:
    • Prompt: “A non-profit organization is creating a chatbot to answer questions about their volunteer opportunities and donation process. Generate three example user queries the chatbot should be able to handle.”
    • Why Correct: This prompt provides clear instructions for the desired task (generating example user queries) and specifies the relevant context (a non-profit’s chatbot for volunteering and donations). It does not include any examples, relying solely on the model’s pretraining knowledge to generate appropriate queries. The task of coming up with plausible user queries is well-suited for zero-shot prompting, as it draws upon the model’s general understanding of how people seek information.
    • Outcome: The model generates several realistic user queries that a non-profit’s chatbot may need to handle, such as “How can I sign up to volunteer at an upcoming event?”, “What types of volunteer roles are currently available?”, and “Can I set up a recurring monthly donation?”. These outputs demonstrate the model’s ability to infer the types of questions users might ask in this scenario.

  • ❌Incorrect Application:
    • Prompt: “A non-profit organization wants to analyze trends in their volunteer sign-ups and donations over the past 5 years. What are the key metrics they should track?”
    • Why Incorrect: While this prompt presents a clear task related to a non-profit’s operations, it is not well-suited for zero-shot prompting. Analyzing multi-year trends and identifying key metrics requires domain knowledge about non-profit administration and data analysis that the model is unlikely to have learned from its pretraining data alone. Without more context or examples demonstrating the types of metrics to consider, the model may struggle to provide a relevant or comprehensive answer. This type of analytical task is better tackled through techniques like few-shot prompting or fine-tuning on a dataset of non-profit metrics.
    • Outcome: The model’s response may include some generic metrics like “number of volunteers” and “total donation amount”, but it likely fails to suggest more insightful metrics for trend analysis, such as volunteer retention rate, average donation size, or year-over-year growth percentages. The model’s lack of non-profit domain expertise limits its ability to provide nuanced recommendations in this zero-shot setting.
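In code, a zero-shot prompt is simply the instruction and context sent on their own, with no examples attached. Below is a minimal Python sketch; call_llm is a hypothetical stand-in for whichever model client you actually use:

```python
# Zero-shot sketch: instruction and context only, no worked examples.

def call_llm(prompt: str) -> str:
    # Placeholder so the script runs end-to-end; swap in a real API call.
    return f"[model response to: {prompt[:60]}...]"

prompt = (
    "A non-profit organization is creating a chatbot to answer questions "
    "about their volunteer opportunities and donation process. Generate "
    "three example user queries the chatbot should be able to handle."
)
print(call_llm(prompt))
```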



Few-Shot Prompt Engineering

When to Use: Few-shot prompting is effective when you have a small number of examples (usually between 1 and 10) of the desired task that you can provide to the model to guide its output. It’s useful when zero-shot prompting isn’t sufficient for the model to grasp the task, but you don’t have enough data to fine-tune the model. Few-shot prompts give the model a clearer pattern to follow than an instruction alone.

Example:

  • ✅Correct Application:
    • Prompt: “Here are some example questions volunteers might ask a non-profit chatbot:
      • Q: What volunteer opportunities do you have available this weekend?
      • A: This weekend we have a beach cleanup event on Saturday from 10am-1pm, and a food bank shift on Sunday afternoon from 1pm-4pm. Would you like to sign up for either of those?
      • Q: How can I make a donation to support your organization?
      • A: Thank you for your generosity! You can make a secure donation on our website at nonprofit.org/donate, or mail a check payable to “Nonprofit Name” to our office at 123 Main St, Anytown USA. All donations are tax-deductible.
      • Q:”
    • Why Correct: This prompt provides the model with two clear examples of the types of questions and answers that are expected, demonstrating the desired question-answer format, tone, and level of detail. By giving multiple examples, the model has more context to infer the task without explicit instruction. The open-ended “Q:” at the end invites the model to generate a new relevant question that follows the pattern in the examples.
    • Outcome: The model is likely to generate a coherent question that a volunteer might plausibly ask, such as “Q: What kind of training is provided for new volunteers?” This question fits the format and subject matter of the examples. The model has essentially learned the task of producing relevant questions from the few-shot examples.

  • ❌Incorrect Application:
    • Prompt: “The non-profit needs some questions for their chatbot to answer. Here are a few:
      • Q: When is the next volunteer event?
      • Q: What items are accepted for donation drives?”
    • Why Incorrect: While this prompt tries to use a few-shot approach by listing example questions, it does not provide corresponding answers. Without answer examples, the model does not have enough context to infer the full task of generating appropriate question-answer pairs. The examples are also quite brief and lack variety, so the model may struggle to generate questions that are meaningfully distinct from the given examples.
    • Outcome: Due to the limited examples and lack of answers, the model’s generated question might be a close paraphrase of the examples, like “Q: Where can I sign up to volunteer?” While still a relevant question, this does not demonstrate real understanding of the question-answering task. The model would likely fail to provide a coherent answer if prompted to do so, since it hasn’t seen any answer examples to learn from.
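A few-shot prompt like the correct application above can be assembled mechanically from complete question-and-answer pairs. Here is a short Python sketch, assuming the same two example pairs; note the deliberately open-ended “Q:” on the last line:

```python
# Few-shot sketch: build a prompt from complete Q/A example pairs and end
# with an open "Q:" so the model continues the pattern.

examples = [
    ("What volunteer opportunities do you have available this weekend?",
     "This weekend we have a beach cleanup on Saturday 10am-1pm and a food "
     "bank shift on Sunday 1pm-4pm. Would you like to sign up?"),
    ("How can I make a donation to support your organization?",
     "You can donate securely at nonprofit.org/donate or mail a check to "
     "our office. All donations are tax-deductible."),
]

lines = ["Here are some example questions volunteers might ask a non-profit chatbot:"]
for q, a in examples:
    lines.append(f"Q: {q}")
    lines.append(f"A: {a}")
lines.append("Q:")  # open-ended: invites the model to generate a new question

prompt = "\n".join(lines)
print(prompt)
```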



Self-Consistency Prompt Engineering

When to Use: Self-consistency prompting is useful when you want a language model to generate multiple diverse reasoning paths for the same problem and then settle on the answer they converge toward. It leverages the intuition that for complex reasoning tasks, there are often multiple valid ways to arrive at the correct answer. By prompting the model to sample different reasoning paths and then aggregating the final answers, self-consistency can improve accuracy and robustness compared to relying on a single chain of reasoning.

Example:

  • ✅Correct Application:
    • Prompt: “A non-profit organization wants to estimate the environmental impact of its annual fundraising gala. Generate multiple reasoning paths to calculate the total carbon footprint of the event, considering factors like attendee travel, catering, and energy usage. Provide a final estimate in tons of CO2 equivalent.”
    • Why Correct: This prompt leverages self-consistency effectively by directing the model to generate multiple independent reasoning paths considering various emission sources. Each reasoning path makes slightly different assumptions but follows a logical flow to arrive at a final CO2e estimate. Aggregating the final answers from the different paths provides a more robust overall estimate range. The diversity in the reasoning allows the model to explore the problem from multiple angles and reinforces confidence in the conclusions.
    • Outcome: The model generates 3 distinct reasoning paths that each calculate the total carbon footprint by making reasonable assumptions about attendee travel, catering, and energy usage. While the specific numbers differ across paths, they triangulate to a consistent estimate in the range of 19-42 tons CO2e. This self-consistency across reasoning paths despite variation in calculation specifics lends credibility to the final aggregated estimate.

  • ❌Incorrect Application:
    • Prompt: “A non-profit wants to estimate the carbon footprint of its annual fundraising gala in tons of CO2 equivalent. Provide a step-by-step calculation:”
    • Why Incorrect: While this prompt outlines the task at a high-level, it fails to leverage the key aspects of self-consistency prompting. Critically, it doesn’t direct the model to generate multiple diverse reasoning paths and aggregate the final answers. Without that explicit guidance to explore different assumptions and calculation specifics, the model is likely to produce a single narrowly scoped estimate. This loses out on the benefits of self-consistency in terms of exploring the problem space and reinforcing the final conclusion through multiple reasoning paths arriving at similar answers.
    • Outcome: The model generates a single reasoning path that produces a reasonable estimate, but a lone path offers none of the robustness or exploratory value of the self-consistency approach. The model never considers alternative assumptions whose estimates could reinforce, or call into question, the final conclusion.
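The aggregation step is where self-consistency pays off. The sketch below assumes you have already sampled several reasoning paths (in a real API call you would raise the temperature and request multiple completions); the sample texts are canned stand-ins, and the parsing rule is one simple choice among many:

```python
# Self-consistency sketch: take the final numeric answer from each sampled
# reasoning path and aggregate across paths.

import re
from statistics import median

samples = [
    "...travel 18t + catering 5t + venue energy 2t => total 25 tons CO2e",
    "...flights dominate: 30t + food 8t + power 4t => total 42 tons CO2e",
    "...mostly local travel: 12t + catering 5t + energy 2t => total 19 tons CO2e",
]

def final_answer(text: str) -> float:
    # Treat the last number before "tons" as that path's final estimate.
    matches = re.findall(r"(\d+(?:\.\d+)?)\s*tons", text)
    return float(matches[-1])

answers = [final_answer(s) for s in samples]
print(f"Estimates: {answers}; aggregated (median): {median(answers)} tons CO2e")
```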


Multi-Persona Prompt Engineering

When to Use: This framework is effective when different perspectives or roles need to be simulated to generate diverse responses or content. By instructing the AI to adopt various personas, you can explore multiple angles or solutions to a problem.

Example:

  • ✅Correct Application:
    • Prompt: “As a budget-conscious student, why would you choose our product? Now, as a luxury-seeking professional, why would you choose our product?”
    • Why Correct: The prompt clearly divides the task into two distinct personas, guiding the AI to consider the product’s appeal from varied customer viewpoints.
    • Outcome: AI crafts two tailored sales pitches, one highlighting affordability and the other luxury, demonstrating an understanding of diverse customer needs.

  • ❌Incorrect Application:
    • Prompt: “Why is our product the best choice?”
    • Why Incorrect: This prompt does not specify any persona for the AI to adopt, leading to a generalized and potentially less effective sales pitch that doesn’t address specific customer segments.
    • Outcome: AI generates a one-size-fits-all pitch that may fail to resonate with either targeted customer persona, missing the opportunity for tailored persuasion.
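Multi-persona prompting is easy to script as a loop over personas. A minimal sketch, with call_llm as a hypothetical placeholder for your model client:

```python
# Multi-persona sketch: ask the same question from each persona's viewpoint.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[pitch written as: {prompt.split(',')[0]}]"

personas = ["a budget-conscious student", "a luxury-seeking professional"]
for persona in personas:
    prompt = f"As {persona}, explain why you would choose our product."
    print(call_llm(prompt))
```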

 


Prompt Engineering for Analytical or Complex Problem Solving

Chain of Thought (CoT) Prompt Engineering

When to Use: Employ Chain-of-Thought (CoT) prompt engineering for solving complex problems that necessitate a multi-step reasoning process. It is particularly useful in scenarios where breaking down the problem into a series of logical steps can lead to a clearer, more comprehensive solution.

Example:

  • ✅Correct Application:
    • Prompt: “We have received a donation earmarked for educational programs. Let’s think step by step to allocate these funds efficiently: First, identify the educational programs with the highest immediate needs. Next, calculate the potential impact of the donation on these programs. Finally, consider how the allocation aligns with our long-term philanthropic goals.”
    • Why Correct: This prompt effectively employs CoT prompting by guiding the AI through a reasoned, step-by-step analysis of how to allocate a donation, demonstrating an understanding of strategic philanthropy.
    • Outcome: The AI produces a reasoned allocation plan that prioritizes educational programs based on immediate need, impact potential, and strategic alignment, enhancing the efficacy of the donation.

  • ❌Incorrect Application:
    • Prompt: “We need to increase our social media engagement. Let’s think step by step to create a social media strategy.”
    • Why Incorrect: At first glance, this prompt seems to use CoT by asking for a step-by-step strategy. However, it gives the AI no specific steps to reason through, making it too vague to elicit the detailed, reasoned output that CoT prompting is capable of.
    • Outcome: The AI’s response lacks the detailed reasoning process that true CoT prompting would elicit, resulting in a generic social media strategy that may not address specific organizational needs or goals.
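One way to avoid the vague “step by step” trap is to hold the reasoning steps in a list and splice them into the prompt. A sketch, with steps mirroring the correct application above:

```python
# CoT sketch: spell out the reasoning steps rather than only saying
# "step by step".

steps = [
    "identify the educational programs with the highest immediate needs",
    "calculate the potential impact of the donation on these programs",
    "consider how the allocation aligns with our long-term philanthropic goals",
]
connectives = ["First", "Next", "Finally"]

prompt = (
    "We have received a donation earmarked for educational programs. "
    "Let's think step by step to allocate these funds efficiently: "
    + " ".join(f"{c}, {s}." for c, s in zip(connectives, steps))
)
print(prompt)
```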



Chain of Thought Factored Decomposition Prompt Engineering

When to Use: This technique is particularly effective in addressing complex queries by breaking them down into simpler, more manageable components. It encourages sequential reasoning while dissecting the given task into subcomponents, making it ideal for fields such as non-profit sales, marketing, philanthropy, and customer service, where challenges can be multifaceted.

✅Correct Application:

  • Prompt: “A customer is unhappy with their recent purchase because it does not meet their needs as they had expected. First, identify the customer’s main concerns by asking targeted questions. Next, based on their concerns, suggest alternative products that more closely match their needs. Finally, outline the steps for returning the original product.”
  • Why Correct: This prompt correctly applies the Chain-of-Thought Factored Decomposition technique by sequentially breaking down the customer service process into understanding the problem, suggesting solutions, and facilitating a resolution. It ensures the AI provides a detailed, step-by-step guide that addresses the customer’s issue comprehensively.
  • Outcome: The AI generates a structured response that first seeks to understand the customer’s dissatisfaction, then recommends alternative products based on the customer’s needs, and finally, explains the return process, offering a complete solution to the customer’s problem.

❌Incorrect Application:

  • Prompt: “Create a detailed plan to increase donations over the next quarter by identifying potential donors and crafting personalized outreach messages.”
  • Why Incorrect: Although the prompt initially seems well-structured, it improperly applies the Chain-of-Thought Factored Decomposition technique by lumping together complex tasks without guiding the AI to address each component separately. It assumes the AI can automatically segment the task into manageable steps without explicit instructions to do so, leading to a lack of detailed, step-by-step reasoning in the response.
  • Outcome: The AI provides a broad overview that touches on identifying potential donors and crafting messages but lacks the detailed, sequential decomposition necessary for a comprehensive plan. The response might miss critical steps in the donor identification process, fail to tailor outreach strategies effectively, and overlook the importance of measuring and analyzing the impact of these efforts.
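Factored decomposition can also be run as separate model calls, one per sub-task, with earlier answers fed forward as context for the later ones. A rough sketch, with call_llm as a stand-in:

```python
# Factored-decomposition sketch: each sub-task gets its own prompt, and the
# previous answer is threaded into the next sub-task as context.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[answer to: {prompt[:50]}...]"

subtasks = [
    "Identify the customer's main concerns by asking targeted questions.",
    "Based on the concerns below, suggest alternative products that more "
    "closely match the customer's needs.\nConcerns: {prev}",
    "Outline the steps for returning the original product.",
]

prev = ""
for task in subtasks:
    answer = call_llm(task.format(prev=prev))
    prev = answer  # earlier output becomes context for the next sub-task
    print(answer)
```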



Tree of Thought Prompt Engineering

When to Use: Tree of Thoughts (ToT) Prompting is particularly effective for tasks that are too complex for a linear approach and require a methodical exploration of different possibilities, such as strategic planning, problem-solving, and creativity tasks. It is best utilized when a decision must be informed by considering various potential outcomes or when navigating through complex information to arrive at a solution.

Example:

  • ✅Correct Application:
    • Prompt: “Consider we are facing declining sales in product X. Let’s explore potential strategies using ToT: First, evaluate the impact of a marketing campaign aimed at highlighting product X’s unique features. Next, consider a pricing strategy adjustment. Finally, assess the introduction of a loyalty program. For each strategy, evaluate as ‘highly effective,’ ‘potentially effective,’ or ‘ineffective’ based on our target market’s preferences and our current market position.”
    • Why Correct: This prompt effectively employs ToT by guiding the Language Model (LM) to explore different strategic options as distinct branches within a thought tree. It encourages the LM to evaluate each option systematically, mirroring a strategic planning process that involves considering multiple potential actions and their outcomes.
    • Outcome: The LM provides a structured exploration of each strategy, assessing their potential effectiveness based on logical reasoning and available data, ultimately offering a multi-faceted view on how to tackle the sales decline issue.


  • ❌Incorrect Application:
    • Prompt: “Let’s increase sales for product X using ToT. Step 1: Consider marketing campaign effectiveness. Step 2: Think about pricing strategy adjustment. Step 3: Reflect on introducing a loyalty program.”
    • Why Incorrect: While this prompt attempts to adopt the ToT framework by breaking down the task into steps, it fails to direct the LM to systematically explore different possibilities within a tree structure. It merely lists steps without framing them as independent thought branches for evaluation or encouraging the exploration of potential outcomes and their implications. The prompt lacks the depth and interactive exploration characteristic of ToT, such as evaluating options as ‘highly effective,’ ‘potentially effective,’ or ‘ineffective,’ and considering the interplay between different strategies.
    • Outcome: The LM sequentially addresses each listed step without deep exploration or comparison, leading to a linear and shallow analysis that doesn’t fully utilize the potential of ToT for comprehensive problem-solving and strategic planning.
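A bare-bones ToT loop evaluates each branch independently and prunes before going deeper. The sketch below shows only one level of the tree and uses a canned verdict; a real implementation would expand the promising branches with further calls:

```python
# ToT sketch: evaluate each strategy as an independent branch, then keep
# only the branches worth expanding in the next round of the tree.

def call_llm(prompt: str) -> str:
    return "potentially effective"  # canned verdict for the demo

branches = [
    "a marketing campaign highlighting product X's unique features",
    "a pricing strategy adjustment",
    "introducing a loyalty program",
]

evaluations = {}
for branch in branches:
    prompt = (
        f"Evaluate this strategy for reversing declining sales of product X: "
        f"{branch}. Answer 'highly effective', 'potentially effective', or "
        f"'ineffective' given our target market and current position."
    )
    evaluations[branch] = call_llm(prompt)

promising = [b for b, v in evaluations.items() if v != "ineffective"]
print(promising)
```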

 


Prompt Engineering for Creating Structured Reports or Outlines

Skeleton of Thought (SoT) Prompt Engineering

When to Use: This approach is suitable when planning and structure are necessary before diving into the details. It’s perfect for creating a well-organized response or document where the overall structure is crucial to the coherence of the final output.

Example:

  • ✅Correct Application:
    • Prompt: “Outline the structure for a report on the impact of our recent clean water initiative, starting with an introduction of the project, followed by the methodology of our approach, the outcomes achieved, challenges faced, and concluding with the future steps.”
    • Why Correct: This prompt guides the AI to create a structured outline focusing on critical aspects of the report. It ensures a comprehensive and logically organized document that will be easy to follow and flesh out.
    • Outcome: The AI generates an organized outline that clearly segments the report into introduction, methodology, outcomes, challenges, and future steps, making it easier to elaborate on each section with detailed content.

  • ❌Incorrect Application:
    • Prompt: “Outline a strategy to increase donations next quarter by analyzing donor data and planning outreach.”
    • Why Incorrect: While attempting to use SoT by asking for an outline, this prompt fails by being too broad and not specifying the need for a structured step-by-step approach or including all necessary components like evaluating impact and making adjustments.
    • Outcome: AI produces a vague outline that lacks depth and misses critical steps, such as evaluating potential impact and adjustments, essential for effective strategic planning.
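SoT naturally splits into two passes: one call for the skeleton, then one call per section. A minimal sketch, again with call_llm as a hypothetical placeholder:

```python
# SoT sketch: first ask for the skeleton, then expand each section in its
# own follow-up call against the agreed outline.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[text for: {prompt[:50]}...]"

sections = ["introduction", "methodology", "outcomes achieved",
            "challenges faced", "future steps"]

skeleton_prompt = (
    "Outline the structure for a report on the impact of our recent clean "
    "water initiative, covering: " + ", ".join(sections) + "."
)
skeleton = call_llm(skeleton_prompt)

report = {s: call_llm(f"Using this outline:\n{skeleton}\n"
                      f"Write the '{s}' section of the report.")
          for s in sections}
print(report["introduction"])
```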



Show-Me vs. Tell-Me Prompt Engineering

When to Use: Choose this framework based on the desired outcome: use “show-me” for examples and demonstrations, and “tell-me” for explicit instructions or explanations. This approach is ideal for training material, customer support, or when illustrating concepts with examples is more effective than direct instructions.

Example:

  • ✅Correct Application:
    • Prompt: “Given the customer’s history of purchasing eco-friendly products, recommend a new eco-friendly product they haven’t tried yet.”
    • Why Correct: This is a “show-me” style prompt: rather than abstract instructions, it shows the AI concrete evidence of what the customer values (their purchase history), which the model can use to tailor a relevant, personalized recommendation.
    • Outcome: AI uses the customer’s purchase history to recommend a new, relevant product, enhancing personalization and consistency in customer service.

  • ❌Incorrect Application:
    • Prompt: “Recommend a product.”
    • Why Incorrect: This bare “tell-me” instruction gives the AI nothing to work from, neither examples nor context illustrating the customer’s preferences, so it leads to a generic recommendation.
    • Outcome: AI suggests a random product, missing the chance to connect with the customer’s known preferences.
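To see the contrast side by side, here are a “tell-me” and a “show-me” version of the same request; the purchase history is an invented example:

```python
# Illustrative "tell-me" vs "show-me" prompts for the same task.

tell_me = (
    "Recommend an eco-friendly product for this customer, aligned with "
    "their preferences."
)

show_me = (
    "This customer previously bought: a bamboo toothbrush, reusable produce "
    "bags, and a solar phone charger. Recommend one eco-friendly product "
    "they haven't tried yet, in the same spirit as these purchases."
)
```

The show-me version hands the model concrete demonstrations of the customer’s taste instead of asking it to infer preferences from a vague description.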

 


Prompt Engineering for Marketing or Customer Service Messages

Directional Stimulus Prompt Engineering

When to Use: Directional Stimulus Prompting (DSP) is particularly useful for tasks where guiding the language model (LM) toward a specific direction or output is crucial. This includes situations where incorporating specific information, terminology, or perspectives is necessary to meet the desired output criteria. It’s effective in tasks like content generation with specified constraints, targeted information extraction, and scenarios requiring adherence to a particular theme or inclusion of certain keywords.

Example:

  • ✅Correct Application:
    • Prompt: “Generate a summary for the recent fundraising event by a non-profit organization, ensuring to include the following keywords: ‘community engagement’, ‘fundraising goals met’, ‘volunteer participation’, and ‘future projects’. The summary should highlight the success of the event, the role of community and volunteers, and mention plans for the funds raised.”
    • Why Correct: This prompt successfully applies DSP by explicitly listing specific keywords and thematic elements that the LM needs to include in the generated content. This ensures the generated summary is aligned with the organization’s messaging and goals, focuses on the event’s success, and acknowledges the contribution of volunteers, meeting the specified requirements.
    • Outcome: The LM generates a concise summary that incorporates the given keywords, emphasizes the success of the fundraising event and the active involvement of the community and volunteers, and outlines how the raised funds will be utilized for future projects, aligning with the given directives.


  • ❌Incorrect Application:
    • Prompt: “Write about the recent non-profit fundraising event, including details about community, goals, volunteers, and future plans.”
    • Why Incorrect: Although this prompt aims to guide the LM towards a similar end goal as the correct application, it falls short by not specifying the need for certain keywords or themes explicitly. This lack of direction might lead to a summary that misses key points or fails to emphasize the event’s success and the specific areas of interest such as ‘community engagement’ or ‘fundraising goals met’.
    • Outcome: The LM’s response might cover the event in a general manner but lacks the targeted focus and inclusion of specified keywords, resulting in a summary that doesn’t fully meet the desired criteria or effectively communicate the event’s achievements and future implications.
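Because DSP sets explicit, checkable constraints, you can verify the output mechanically. This sketch builds the keyword-laden prompt and then confirms every required phrase appears in the draft; call_llm returns a canned summary for demonstration:

```python
# DSP sketch: list the required keywords in the prompt, then check that the
# draft actually contains them before accepting it.

def call_llm(prompt: str) -> str:
    # Canned response for the demo; swap in a real model call.
    return ("The event showed strong community engagement; fundraising goals "
            "met thanks to volunteer participation, funding future projects.")

keywords = ["community engagement", "fundraising goals met",
            "volunteer participation", "future projects"]

prompt = (
    "Generate a summary of our recent fundraising event. Include these "
    "keywords: " + "; ".join(f"'{k}'" for k in keywords) + ". Highlight the "
    "event's success, the role of community and volunteers, and plans for "
    "the funds raised."
)

draft = call_llm(prompt)
missing = [k for k in keywords if k.lower() not in draft.lower()]
print("missing keywords:" if missing else "all keywords present:", missing or "")
```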



Reflexion Prompt Engineering

When to Use: Reflexion Prompting is particularly effective for iterative learning tasks where immediate and direct feedback from the environment or task at hand can be translated into linguistic feedback for self-improvement. It’s highly beneficial for situations that involve sequential decision-making, coding challenges, or complex reasoning tasks where the learner (in this case, a large language model) can benefit from reflecting on past actions to inform future decisions. Reflexion is well suited for roles or tasks in non-profits that involve analytical thinking, problem-solving, or learning from past experiences to improve future outcomes. Examples include strategic planning, data analysis, grant writing, or any scenario where adaptive learning from feedback is crucial.

✅Correct Application:

  • Prompt: “Imagine we are analyzing the effectiveness of our latest fundraising campaign. Using Reflexion: Initially, consider our social media outreach strategy and its impact on donations. Reflect on feedback regarding audience engagement levels and the donation conversion rate. Next, evaluate our email marketing campaign by reflecting on open rates and the conversion from readership to donations. Finally, assess community outreach events by reflecting on participant feedback and the subsequent donations received. For each element, use Reflexion to verbalize how we might adjust our strategy based on what we’ve learned to improve future outcomes.”
  • Why Correct: This prompt uses Reflexion by instructing the Language Model (LM) to reflect on specific feedback for various fundraising strategies, integrating episodic memories of past outcomes to propose improved strategies. It mirrors an adaptive learning process akin to a non-profit organization analyzing and learning from the effectiveness of different fundraising tactics.
  • Outcome: The LM provides a thoughtful analysis of each strategy based on past performance feedback, offering insights into potential adjustments for future fundraising efforts, thus demonstrating an effective application of Reflexion in a non-profit context.

❌Incorrect Application:

  • Prompt: “Let’s improve our marketing. First, think about how our social media campaign performed without considering specific feedback. Then, consider our email blasts’ effectiveness in a general sense. Lastly, reflect on our community events’ success without focusing on detailed feedback.”
  • Why Incorrect: While this prompt attempts to use Reflexion by encouraging reflective thought on various strategies, it fails to leverage detailed, specific feedback from past actions for iterative learning. It lacks the critical component of Reflexion – the incorporation of episodic memory and specific feedback into the reflection process. The prompt doesn’t guide the LM to analyze past outcomes in detail or suggest future improvements based on specific learnings, which is crucial for effective Reflexion.
  • Outcome: The LM provides a generic analysis lacking in-depth reflection on specific feedback or detailed suggestions for improvement, thus missing the opportunity to apply Reflexion effectively in strategic planning and analysis within a non-profit setting.
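Reflexion is fundamentally a loop: attempt, gather concrete feedback, verbalize a lesson, and carry that lesson into the next attempt. The sketch below hard-codes the feedback signal for demonstration; in practice it would come from real metrics like open rates or donation conversions:

```python
# Reflexion sketch: attempt -> concrete feedback -> verbal reflection stored
# in memory -> retry with that memory included in the prompt.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[response given memory of {prompt.count('Reflection')} reflections]"

def get_feedback(attempt: str) -> str:
    # Stand-in for a real signal, e.g. open rates or donation conversion.
    return "email open rate fell 10% versus the last campaign"

memory: list[str] = []
task = "Draft next quarter's fundraising email strategy."

for _ in range(3):
    prompt = task + "".join(f"\nReflection {i+1}: {r}"
                            for i, r in enumerate(memory))
    attempt = call_llm(prompt)
    feedback = get_feedback(attempt)
    # The model verbalizes what to change based on the feedback.
    reflection = call_llm(f"Given this feedback: '{feedback}', state one "
                          f"specific change to make next time.")
    memory.append(reflection)

print(memory[-1])
```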