System Prompt
Learn what System Prompt means in AI and machine learning, with examples and related concepts.
Definition
A system prompt is a special instruction given to an LLM that sets its behavior, personality, and constraints for an entire conversation. It’s the “behind the scenes” configuration that runs before the user says anything.
When you use ChatGPT or Claude through the API, every request has three parts: the system prompt (how to behave), the user messages (what the human says), and the assistant messages (what the AI previously said). The system prompt is invisible to the end user but fundamentally shapes every response.
System prompts are how developers turn a general-purpose AI into a specialized tool. The same underlying model can be a code reviewer, a customer support agent, a creative writing coach, or a medical information assistant — the system prompt makes the difference.
How It Works
API Request Structure:
```
┌──────────────────────────────────────────────────┐
│ System Prompt (sets behavior for all turns)      │
│ "You are a senior Python developer. Review       │
│  code for bugs, security issues, and style.      │
│  Be direct and specific. Use code examples."     │
├──────────────────────────────────────────────────┤
│ Message 1 [user]: "Review this function..."      │
│ Message 2 [assistant]: "I found 3 issues..."     │
│ Message 3 [user]: "What about error handling?"   │
│ Message 4 [assistant]: (generated now)           │
└──────────────────────────────────────────────────┘
```
The system prompt influences every response in the conversation.
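In code, the structure above is just fields on the request. A minimal sketch of the payload (Anthropic-style, where the system prompt is a top-level field rather than a message; nothing is sent here):

```python
# Sketch of the request payload behind the diagram above.
# With Anthropic's Messages API the system prompt is a top-level field;
# user/assistant turns go in the messages list.
payload = {
    "model": "claude-sonnet-4-6",
    "system": "You are a senior Python developer. Review code for bugs, "
              "security issues, and style. Be direct and specific.",
    "messages": [
        {"role": "user", "content": "Review this function..."},
        {"role": "assistant", "content": "I found 3 issues..."},
        {"role": "user", "content": "What about error handling?"},
        # The next assistant message is what the model generates.
    ],
}

# The system prompt rides along on every request, so it shapes every turn.
roles = [m["role"] for m in payload["messages"]]
print(roles)
# → ['user', 'assistant', 'user']
```

Because the full message history is resent on every turn, the system prompt is applied each time, which is why it persists across the whole conversation.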
What System Prompts Typically Define
| Component | Purpose | Example |
|---|---|---|
| Role | Who the AI is | "You are a senior DevOps engineer" |
| Behavior | How to respond | "Be concise. Use bullet points." |
| Constraints | What NOT to do | "Never recommend deprecated packages" |
| Format | Output structure | "Always respond in JSON format" |
| Knowledge | Domain context | "The user is on the Pro plan ($29/mo)" |
| Tone | Communication style | "Friendly but professional" |
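In practice these components are often just concatenated sections of one string. A sketch of assembling a system prompt from labeled parts (`build_system_prompt` is a hypothetical helper, not part of any SDK):

```python
# Hypothetical helper: assemble a system prompt from the components
# in the table above (role, behavior, constraints, format, tone).
def build_system_prompt(role, behavior, constraints=(), fmt=None, tone=None):
    parts = [role, behavior]
    parts += [f"Never {c}" for c in constraints]
    if fmt:
        parts.append(f"Output format: {fmt}")
    if tone:
        parts.append(f"Tone: {tone}")
    return "\n".join(parts)

prompt = build_system_prompt(
    role="You are a senior DevOps engineer.",
    behavior="Be concise. Use bullet points.",
    constraints=["recommend deprecated packages"],
    fmt="Markdown",
    tone="Friendly but professional",
)
print(prompt)
```

Keeping the components as separate fields like this also makes it easy to vary one of them (say, tone) per deployment without rewriting the whole prompt.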
Why It Matters
- Product differentiation — The system prompt is what makes your AI app different from raw ChatGPT
- Consistency — Without a system prompt, the model defaults to generic behavior. With one, every response follows your rules.
- Safety — System prompts enforce guardrails: “Never provide medical diagnoses” or “Always recommend consulting a professional”
- Efficiency — A well-crafted system prompt eliminates the need to repeat instructions in every user message
Example
```python
from anthropic import Anthropic

client = Anthropic()

# Example 1: Code review assistant
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="""You are a senior code reviewer. Follow these rules:
- Focus on bugs, security vulnerabilities, and performance issues
- Rate severity: 🔴 critical, 🟡 warning, 🟢 suggestion
- Show the fix with a code diff
- Be direct — no pleasantries or padding""",
    messages=[{
        "role": "user",
        "content": """Review this Python function:
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = {user_id}"
    return db.execute(query)""",
    }],
)
# → 🔴 SQL Injection vulnerability. Use parameterized queries:
#    query = "SELECT * FROM users WHERE id = %s"
#    return db.execute(query, (user_id,))

# Example 2: Customer support bot with product knowledge
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="""You are a support agent for AcmeCloud (cloud hosting platform).

Product info:
- Starter plan: $9/mo, 1 GB RAM, 10 GB storage
- Pro plan: $29/mo, 4 GB RAM, 50 GB storage
- Enterprise: custom pricing, contact [email protected]

Rules:
- Never make up features that don't exist
- For billing issues, direct users to [email protected]
- If you're unsure, say so and offer to connect them with a human agent
- Keep responses under 3 sentences when possible""",
    messages=[{
        "role": "user",
        "content": "How much storage do I get on the Pro plan?",
    }],
)
# → "The Pro plan includes 50 GB of storage for $29/month."

# Example 3: JSON-only output
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=256,
    system="You are a data extraction API. Always respond with valid JSON only. No explanations.",
    messages=[{
        "role": "user",
        "content": "Extract entities: 'Tim Cook announced the new MacBook Pro at Apple Park on Tuesday.'",
    }],
)
# → {"people": ["Tim Cook"], "products": ["MacBook Pro"], "locations": ["Apple Park"], "dates": ["Tuesday"]}
```
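Even with a JSON-only system prompt, models occasionally wrap output in code fences or stray prose, so production code usually parses defensively. A sketch (the fence-stripping logic here is illustrative, not part of any SDK):

```python
import json

def parse_json_reply(text):
    """Best-effort parse of a model reply that should be pure JSON."""
    text = text.strip()
    # Strip markdown code fences the model may add despite instructions.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]
    # Fall back to the outermost {...} span if extra prose slipped in.
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end != -1:
        text = text[start:end + 1]
    return json.loads(text)

reply = '```json\n{"people": ["Tim Cook"], "products": ["MacBook Pro"]}\n```'
print(parse_json_reply(reply))
# → {'people': ['Tim Cook'], 'products': ['MacBook Pro']}
```

If the parse still fails, a common fallback is to retry the request or route the raw text to error handling rather than trust malformed output.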
System Prompt vs User Prompt
| | System Prompt | User Prompt |
|---|---|---|
| Who writes it | Developer | End user |
| Visibility | Hidden from user | Visible in chat |
| Persistence | Same across all messages | Changes each turn |
| Purpose | Configure behavior | Request information |
| Priority | Higher (generally) | Lower |
Note: system prompts are not a security boundary. Determined users can extract or override them through prompt injection. Don’t put secrets in system prompts.
Key Takeaways
- System prompts configure the AI’s role, behavior, constraints, and output format for an entire conversation
- They’re what turn a general-purpose model into a specialized product (code reviewer, support bot, etc.)
- A good system prompt is specific, concise, and includes both what to do and what not to do
- System prompts are not secure — never put API keys or secrets in them
- All major APIs (Claude, OpenAI, Gemini) support system prompts with similar priority semantics
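The last takeaway can be made concrete: the system prompt lives in a slightly different place in each vendor's request shape. A sketch using plain dicts (field names follow each vendor's public Python SDK docs at the time of writing and may change; nothing is sent here):

```python
# Where the system prompt lives in each vendor's request shape (sketch only).
system = "You are a helpful code reviewer."
user_msg = "Review this function..."

anthropic_request = {  # Anthropic Messages API: a top-level "system" field
    "system": system,
    "messages": [{"role": "user", "content": user_msg}],
}

openai_request = {  # OpenAI Chat Completions: a message with role "system"
    "messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": user_msg},
    ],
}

gemini_request = {  # Gemini: a "system_instruction" alongside the contents
    "system_instruction": system,
    "contents": [user_msg],
}
```

The shapes differ, but the semantics are the same: one persistent instruction that outranks the per-turn user messages.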
Part of the DeepRaft Glossary — AI and ML terms explained for developers.