Prompt Engineering
Learn what Prompt Engineering means in AI and machine learning, with examples and related concepts.
Definition
Prompt Engineering is the practice of designing and refining the text instructions (prompts) you give to an LLM to get better, more consistent results.
It’s the difference between asking “write me some code” and giving a structured request with context, examples, and constraints. A well-crafted prompt can make a $20/month model outperform a $200/month model that receives vague instructions.
Prompt engineering isn’t just about wording — it includes techniques like few-shot learning, chain-of-thought reasoning, role prompting, and structured output formatting.
How It Works
Effective prompts typically include several components:
┌─ System Prompt (role, behavior rules, constraints)
│
├─ Context (background info, documents, examples)
│
├─ Task (what you want the model to do)
│
├─ Format (how you want the output structured)
│
└─ Examples (few-shot: input/output pairs)
The model processes all of these together to shape its response.
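The components above can be assembled programmatically. Here is a minimal sketch of a prompt builder; all names (`build_prompt`, the ticket-classification task) are illustrative, not from any library:

```python
# Illustrative sketch: assemble the prompt components (context, task,
# format, few-shot examples) into one user prompt. Not a real library API.

def build_prompt(context: str, task: str, format_spec: str,
                 examples: list[tuple[str, str]]) -> str:
    """Join the components with blank lines, examples as input/output pairs."""
    parts = [context, task, f"Output format: {format_spec}"]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    return "\n\n".join(parts)

# The system prompt carries role and behavior rules, separately from the task.
system_prompt = "You are a concise technical assistant."

user_prompt = build_prompt(
    context="We classify support tickets for a SaaS product.",
    task="Classify the ticket below as 'bug', 'billing', or 'feature'.",
    format_spec="a single lowercase word",
    examples=[("The invoice charged me twice", "billing")],
)
print(user_prompt)
```

The point of separating the pieces is that you can swap context or examples per request while keeping the task and format stable.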
Why It Matters
- Cost savings — Better prompts reduce retries and token waste
- Consistency — Structured prompts produce predictable output formats
- Quality — The right technique (CoT, few-shot) unlocks capabilities the model has but won’t show with simple prompts
- Safety — System prompts define boundaries and prevent misuse
Example
Bad Prompt
Summarize this article.
Good Prompt
You are a tech journalist writing for a developer audience.
Summarize the following article in exactly 3 bullet points.
Each bullet should be one sentence, focusing on practical implications.
Use present tense.
Article:
{article_text}
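In code, a prompt like this usually lives as a reusable template with the `{article_text}` slot filled per request. A minimal sketch (the placeholder article text is just a stand-in):

```python
# The "good prompt" above as a reusable template. Illustrative sketch;
# the article snippet is a placeholder, not real content.

GOOD_PROMPT = """You are a tech journalist writing for a developer audience.
Summarize the following article in exactly 3 bullet points.
Each bullet should be one sentence, focusing on practical implications.
Use present tense.

Article:
{article_text}"""

prompt = GOOD_PROMPT.format(article_text="...article body here...")
print(prompt)
```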
Few-Shot Prompt (with examples)
from anthropic import Anthropic

client = Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=256,
    system="Extract structured data from product descriptions.",
    messages=[{
        "role": "user",
        "content": """Extract the product name, price, and category.

Example 1:
Input: "The new AirPods Pro 2 are available for $249 in the audio category"
Output: {"name": "AirPods Pro 2", "price": "$249", "category": "audio"}

Example 2:
Input: "Get the Galaxy S24 Ultra smartphone for just $1,299"
Output: {"name": "Galaxy S24 Ultra", "price": "$1,299", "category": "smartphone"}

Now extract from:
Input: "The RTX 5090 graphics card is priced at $1,999"
Output:"""
    }]
)

print(response.content[0].text)
# → {"name": "RTX 5090", "price": "$1,999", "category": "graphics card"}
Key Techniques
| Technique | When to Use | Effect |
|---|---|---|
| Few-shot | Formatting, classification | Shows the model exactly what you want |
| Chain-of-Thought | Math, logic, reasoning | Forces step-by-step thinking |
| Role prompting | Tone/expertise control | "You are a senior engineer…" |
| Structured output | Data extraction, APIs | Request JSON, XML, or markdown tables |
| Constraints | Limiting scope | "In 3 sentences or fewer" |
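Chain-of-thought from the table above can be as simple as appending a step-by-step instruction to the task. A minimal sketch (prompt text only, no API call; the question is made up for illustration):

```python
# Minimal chain-of-thought sketch: the same question with and without
# a step-by-step instruction. Prompt text only; no model is called.

question = (
    "A store sells pens at $3 each, with a 20% discount on orders of 10+. "
    "What do 12 pens cost?"
)

plain_prompt = question
cot_prompt = (
    question
    + "\nThink step by step: list each intermediate calculation, "
      "then give the final answer on its own line prefixed with 'Answer:'."
)
print(cot_prompt)
```

Asking for the final answer on a labeled line also makes the response easy to parse, combining chain-of-thought with structured output.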
Key Takeaways
- Prompt engineering is a skill, not a hack — it’s how you communicate effectively with LLMs
- Start with clear instructions; add examples if the output isn’t right
- Use chain-of-thought for anything involving reasoning or math
- System prompts set the baseline behavior; user prompts give the specific task
- Test prompts with edge cases, not just happy paths
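One way to put the last takeaway into practice is to validate model responses against edge cases before trusting them. A sketch, assuming the extraction prompt shown earlier; the sample responses below are hypothetical, not real model output:

```python
import json

# Sketch of edge-case checking for a structured-output prompt.
# The sample responses are hypothetical stand-ins for model output.

REQUIRED_KEYS = {"name", "price", "category"}

def valid_extraction(raw: str) -> bool:
    """True only if raw is a JSON object with exactly the expected keys."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and set(data) == REQUIRED_KEYS

# Happy path, then edge cases: prose preamble, missing fields, empty string.
cases = [
    '{"name": "RTX 5090", "price": "$1,999", "category": "graphics card"}',
    'Sure! Here is the JSON: {"name": "RTX 5090"}',
    '{"name": "RTX 5090", "price": "$1,999"}',
    '',
]
results = [valid_extraction(c) for c in cases]
print(results)  # → [True, False, False, False]
```

Failures like the prose preamble are exactly what prompt tweaks (e.g. "Respond with JSON only, no commentary") are meant to eliminate.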
Part of the DeepRaft Glossary — AI and ML terms explained for developers.