Great prompts are not written from scratch every time. Expert prompt engineers work from proven templates, adapting them to specific tasks. These 10 templates cover the vast majority of real-world use cases. Each includes the template structure, an explanation of when it works best, and a concrete example you can modify immediately.
1. Chain-of-Thought (CoT) Prompting
When to use: Math, logic, multi-step reasoning, any task where the model needs to "show its work" to arrive at a correct answer. Can improve accuracy on complex problems substantially (gains in the 40-70% range are commonly reported).
Template
[State the problem or question]
Think through this step-by-step:
1. First, identify the key information given.
2. Then, determine what approach to use.
3. Work through each step, showing your reasoning.
4. Verify your answer by checking it against the original constraints.
Provide your final answer after your reasoning.
Example
Prompt: A store offers a 20% discount on all items. If you also have a $15 coupon that applies after the discount, and the original price is $89.99, what is the final price? Think through this step-by-step.
Why it works: Without CoT, models often jump to wrong answers on multi-step math. The explicit step-by-step instruction forces the model to allocate tokens to intermediate reasoning, dramatically improving accuracy.
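The expected answer for the example above can be checked with a few lines of plain arithmetic, which is also a useful habit when spot-checking a model's chain-of-thought output:

```python
# Verify the worked example: 20% off $89.99, then a $15 coupon after the discount.
original_price = 89.99
discounted = original_price * (1 - 0.20)    # 71.992
final_price = round(discounted - 15.00, 2)  # coupon applies after the discount

print(final_price)  # 56.99
```

If the model's reasoning lands anywhere other than $56.99, one of its intermediate steps is wrong.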
2. Few-Shot Prompting
When to use: Classification, formatting, style matching, or any task where showing examples is clearer than writing instructions. Essential when you need a consistent output format.
Template
Here are examples of [task description]:
Input: [example input 1]
Output: [example output 1]
Input: [example input 2]
Output: [example output 2]
Input: [example input 3]
Output: [example output 3]
Now complete this:
Input: [actual input]
Output:
Example
Prompt: Classify customer feedback as POSITIVE, NEGATIVE, or NEUTRAL:
Input: "The delivery was super fast, love it!" / Output: POSITIVE
Input: "Product broke after 2 days. Terrible." / Output: NEGATIVE
Input: "It arrived on the expected date." / Output: NEUTRAL
Input: "Pretty good quality but shipping took forever."
Why it works: Examples implicitly communicate the classification criteria, tone sensitivity, and exact output format without lengthy instructions.
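Because the few-shot format is pure text, it is convenient to assemble it programmatically from a list of example pairs. A minimal sketch (the function name and structure are illustrative, not from any library):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = [f"Here are examples of {task}:", ""]
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}", f"Output: {example_output}", ""]
    lines += ["Now complete this:", f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "classifying customer feedback as POSITIVE, NEGATIVE, or NEUTRAL",
    [('"The delivery was super fast, love it!"', "POSITIVE"),
     ('"Product broke after 2 days. Terrible."', "NEGATIVE"),
     ('"It arrived on the expected date."', "NEUTRAL")],
    '"Pretty good quality but shipping took forever."',
)
print(prompt)
```

Keeping examples in a data structure rather than a hand-edited string makes it easy to add, remove, or A/B-test examples later.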
3. Role (Persona) Prompting
When to use: When you need the model to adopt a specific expertise, personality, or communication style. Especially effective for domain-specific tasks like legal, medical, or technical analysis.
Template
You are a [specific role] with [X years] of experience in [domain].
Your expertise includes:
- [Skill/knowledge area 1]
- [Skill/knowledge area 2]
- [Skill/knowledge area 3]
When responding:
- [Communication style guideline 1]
- [Communication style guideline 2]
- [Constraint or boundary]
You are currently helping [describe the user and their goal].
Example
System Prompt: You are a senior database architect with 15 years of experience in PostgreSQL and distributed systems. Your expertise includes query optimization, schema design, and migration strategies. When responding, always consider performance implications, suggest indexes where appropriate, and flag potential scaling bottlenecks. You are currently helping a startup engineer design their first production database schema.
Why it works: The role activates relevant knowledge patterns. The specificity (15 years, PostgreSQL, startup context) narrows the response space to exactly the kind of advice needed.
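In chat-style APIs, a persona prompt like this belongs in the system message. A minimal sketch of the widely used role-tagged message format; the actual client call varies by provider, so only the message structure is shown:

```python
SYSTEM_PROMPT = (
    "You are a senior database architect with 15 years of experience in "
    "PostgreSQL and distributed systems. Your expertise includes query "
    "optimization, schema design, and migration strategies. Always consider "
    "performance implications and flag potential scaling bottlenecks."
)

# Role-tagged messages, as accepted by most chat completion APIs.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "How should I index an events table queried "
                                "by user_id and a date range?"},
]
```

Putting the persona in the system message rather than the user turn keeps it in effect across the whole conversation.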
4. Structured Output (JSON)
When to use: When you need machine-parseable output (JSON, YAML, CSV) or when outputs will be consumed by downstream systems. Critical for building reliable pipelines.
Template
Analyze the following [input type] and respond ONLY with a valid JSON object matching this exact schema:
```json
{
  "field_1": "string - [description of what goes here]",
  "field_2": "number - [description]",
  "field_3": ["array", "of", "strings"],
  "field_4": {
    "nested_field": "string"
  }
}
```
Do not include any text outside the JSON object. Do not use markdown code fences in your response.
[Input to analyze]:
[actual input]
Example
Prompt: Analyze this product review and respond ONLY with valid JSON: {"sentiment": "positive|negative|mixed", "key_topics": ["array of topics mentioned"], "purchase_intent": true|false, "summary": "one sentence summary"}
Why it works: Providing the exact schema with field descriptions eliminates ambiguity. The "ONLY" and "Do not" constraints prevent the model from adding explanatory text that breaks parsing.
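Even with the "ONLY JSON" instruction, models occasionally wrap output in fences anyway, so production pipelines usually parse defensively. A minimal sketch (the helper name is illustrative; the field names mirror the example):

```python
import json

REQUIRED_KEYS = {"sentiment", "key_topics", "purchase_intent", "summary"}

def parse_model_json(raw: str) -> dict:
    """Strip stray markdown fences, parse JSON, and check required fields."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop an opening fence like ```json and a trailing ``` if present.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    data = json.loads(text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data
```

Failing loudly on missing fields is usually better than silently passing incomplete records downstream.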
5. Sequential Instructions
When to use: Complex multi-part tasks where you need each step completed in order. Great for data transformation, analysis workflows, and content-creation pipelines.
Template
Complete the following steps in order. Show the output of each step before proceeding to the next.
Step 1: [First action with specific instructions]
Step 2: [Second action, may reference Step 1 output]
Step 3: [Third action, builds on previous steps]
Step 4: [Final action - synthesis or formatting]
Input to process:
[your input data]
Example
Prompt: Step 1: Extract all company names from the following press release. Step 2: For each company, identify if they are described as a partner, competitor, or customer. Step 3: Create a relationship map showing connections between the companies. Step 4: Summarize the key business implications in 3 bullet points.
Why it works: Explicit step ordering prevents the model from skipping ahead or merging steps. Requiring visible output at each step enables verification and debugging.
6. Structured Comparison
When to use: Evaluating options, making decisions, product comparisons, technology selection. Forces a balanced, structured evaluation instead of a biased narrative.
Template
Compare [Option A] and [Option B] across the following dimensions:
For each dimension, provide:
- A rating (1-5) for each option
- A brief explanation (2-3 sentences) justifying the rating
- A clear winner for that dimension
Dimensions to evaluate:
1. [Dimension 1 - e.g., Performance]
2. [Dimension 2 - e.g., Cost]
3. [Dimension 3 - e.g., Ease of use]
4. [Dimension 4 - e.g., Scalability]
5. [Dimension 5 - e.g., Community/Support]
End with an overall recommendation based on [specific use case or priorities].
Example
Prompt: Compare PostgreSQL and MongoDB for a real-time analytics dashboard that processes 10,000 events per second, used by a team of 3 backend engineers. Evaluate on: query performance for time-series data, operational complexity, cost at scale, developer experience, and ecosystem maturity.
Why it works: The rating system forces quantitative assessment. The dimension-by-dimension structure prevents the model from cherry-picking arguments. The use-case context ensures relevant trade-offs are highlighted.
7. Information Extraction
When to use: Extracting specific data points from unstructured text -- emails, documents, web pages, transcripts. The backbone of document-processing pipelines.
Template
Extract the following information from the text below. If a field is not found in the text, use "NOT_FOUND" as the value. Do not infer or guess -- only extract what is explicitly stated.
Fields to extract:
- [Field 1]: [description and expected format]
- [Field 2]: [description and expected format]
- [Field 3]: [description and expected format]
- [Field 4]: [description and expected format]
Text:
"""
[paste text here]
"""
Example
Prompt: Extract from this email: sender_name (full name), company (organization name), requested_meeting_date (ISO 8601 format), urgency (low/medium/high based on language), and action_items (list of specific requests).
Why it works: The NOT_FOUND fallback prevents hallucination. Specifying expected formats (ISO 8601) ensures consistency. The "do not infer" instruction is critical for accuracy in extraction tasks.
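Extracted fields are easiest to trust when validated in code. A minimal sketch of post-extraction checks for the example's fields (function and field names are illustrative):

```python
from datetime import date

def validate_extraction(record: dict) -> list[str]:
    """Return a list of problems with an extracted record; empty means OK."""
    problems = []
    meeting = record.get("requested_meeting_date", "NOT_FOUND")
    if meeting != "NOT_FOUND":
        try:
            date.fromisoformat(meeting)  # enforce ISO 8601 (YYYY-MM-DD)
        except ValueError:
            problems.append(f"bad date: {meeting!r}")
    if record.get("urgency") not in {"low", "medium", "high", "NOT_FOUND"}:
        problems.append("urgency outside allowed values")
    return problems
```

Records that fail validation can be routed to a retry prompt or a human review queue instead of corrupting downstream data.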
8. Constrained Content Creation
When to use: Content creation, copywriting, creative writing. Constraints paradoxically increase creativity by narrowing the model's output space, and they prevent generic, bland outputs.
Template
Write a [content type] about [topic] with the following constraints:
Audience: [specific target audience]
Tone: [specific tone - e.g., conversational but authoritative]
Length: [specific word/sentence count]
Must include: [required elements - keywords, concepts, CTAs]
Must avoid: [things to exclude - jargon, cliches, etc.]
Structure: [specific format - e.g., hook, 3 points, conclusion]
Additional context: [relevant background the model needs]
Example
Prompt: Write a LinkedIn post about RAG implementation lessons. Audience: senior engineers. Tone: experienced practitioner sharing hard-won lessons, not promotional. Length: 200 words max. Must include: one specific metric or number. Must avoid: "game-changer," "revolutionize," "leverage." Structure: contrarian hook, 3 concrete lessons, question to drive engagement.
Why it works: The "must avoid" list eliminates AI-sounding cliches. Length constraints force conciseness. The structural requirement ensures a compelling format rather than a generic essay.
9. Code Generation
When to use: Generating, debugging, refactoring, or explaining code. Specifying constraints upfront dramatically improves code quality and reduces back-and-forth iteration.
Template
Write [language] code that [describes functionality].
Requirements:
- [Requirement 1 - e.g., use specific library/pattern]
- [Requirement 2 - e.g., handle these edge cases]
- [Requirement 3 - e.g., performance constraints]
Context:
- This code will be used in [describe the system/environment]
- It must integrate with [existing code/APIs]
- Error handling should [describe error strategy]
Include:
- Type annotations / docstrings
- Input validation
- Unit test examples for key functions
Do not use [deprecated APIs, specific patterns to avoid].
Example
Prompt: Write a Python async function that retries API calls with exponential backoff. Requirements: max 3 retries, configurable base delay, only retry on 429/500/502/503 status codes. Context: used in a FastAPI service processing 1000 requests/min. Include type hints and a pytest test. Do not use the deprecated asyncio.ensure_future.
Why it works: Specifying language, pattern, edge cases, and integration context upfront avoids the most common failure mode: generating code that works in isolation but fails in the actual system.
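A sketch of the retry logic the example asks for, with the HTTP call abstracted behind an async callable so the code stays library-agnostic (in a real service, `fetch` would wrap an httpx or aiohttp request):

```python
import asyncio
import random

RETRYABLE_STATUSES = {429, 500, 502, 503}

async def call_with_backoff(fetch, *, max_retries: int = 3, base_delay: float = 0.5):
    """Call `fetch` (an async callable returning (status, body)), retrying
    retryable statuses with exponential backoff plus a little jitter."""
    for attempt in range(max_retries + 1):
        status, body = await fetch()
        if status not in RETRYABLE_STATUSES or attempt == max_retries:
            return status, body
        # Delays grow as base_delay * 2**attempt: 0.5s, 1s, 2s by default.
        await asyncio.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Comparing a sketch like this against the model's output is a quick way to verify that the generated code actually honors the stated requirements.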
10. Meta-Prompting
When to use: When you are unsure how to prompt for a specific task, or when you want to create a reusable prompt template for a recurring workflow. This is the "teach a model to fish" template.
Template
I need to [describe your high-level goal].
My specific context:
- [Key detail 1 about your situation]
- [Key detail 2 about constraints]
- [Key detail 3 about desired output]
Generate an optimized prompt that I can use to accomplish this goal. The generated prompt should:
1. Include a clear role/persona if beneficial
2. Specify the exact output format needed
3. Include any necessary constraints or guardrails
4. Anticipate and handle edge cases
5. Be self-contained (someone else could use it without additional context)
Also explain WHY you structured the prompt that way, so I can learn the principles.
Example
Prompt: I need to analyze customer support tickets to identify product bugs vs. user errors vs. feature requests. My context: we get 500 tickets/day, they are in English but often contain technical jargon, and I need to route them to different teams. Generate an optimized prompt for this classification task.
Why it works: Meta-prompting leverages the model's knowledge of what makes good prompts. The "explain why" instruction turns it into a learning experience. The self-contained requirement ensures the generated prompt is reusable by the whole team.