System Prompt

Learn what System Prompt means in AI and machine learning, with examples and related concepts.

Definition

A system prompt is a special instruction given to an LLM that sets its behavior, personality, and constraints for an entire conversation. It's the "behind the scenes" configuration that runs before the user says anything.

When you use ChatGPT or Claude through the API, every request has three parts: the system prompt (how to behave), the user messages (what the human says), and the assistant messages (what the AI previously said). The system prompt is invisible to the end user but fundamentally shapes every response.

System prompts are how developers turn a general-purpose AI into a specialized tool. The same underlying model can be a code reviewer, a customer support agent, a creative writing coach, or a medical information assistant — the system prompt makes the difference.

How It Works

API Request Structure:

┌────────────────────────────────────────────────┐
│ System Prompt (sets behavior for all turns)    │
│ "You are a senior Python developer. Review     │
│  code for bugs, security issues, and style.    │
│  Be direct and specific. Use code examples."   │
├────────────────────────────────────────────────┤
│ Message 1 [user]: "Review this function..."    │
│ Message 2 [assistant]: "I found 3 issues..."   │
│ Message 3 [user]: "What about error handling?" │
│ Message 4 [assistant]: (generated now)         │
└────────────────────────────────────────────────┘

The system prompt influences every response in the conversation.
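The request structure above can be sketched as plain payloads, with no API call needed: the system prompt is sent unchanged with every request, while the messages list grows each turn. This is a minimal illustration; the field names mirror the message format shown in the examples later in this article.

```python
# The system prompt is a separate top-level field, constant across turns.
SYSTEM = "You are a senior Python developer. Be direct and specific."

def build_request(history, new_user_message):
    """Return the payload for the next API call: same system prompt,
    conversation history plus the new user message."""
    messages = history + [{"role": "user", "content": new_user_message}]
    return {"system": SYSTEM, "messages": messages}

# Turn 1: history is empty
req1 = build_request([], "Review this function...")

# Turn 2: append the assistant's reply, then ask a follow-up
history = req1["messages"] + [
    {"role": "assistant", "content": "I found 3 issues..."}
]
req2 = build_request(history, "What about error handling?")

# The system prompt is identical in both requests; only messages grew
assert req1["system"] == req2["system"]
assert [m["role"] for m in req2["messages"]] == ["user", "assistant", "user"]
```

Because the system prompt rides along with every request, changing it mid-conversation changes the model's behavior for all subsequent turns.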

What System Prompts Typically Define

Component     Purpose               Example
Role          Who the AI is         "You are a senior DevOps engineer"
Behavior      How to respond        "Be concise. Use bullet points."
Constraints   What NOT to do        "Never recommend deprecated packages"
Format        Output structure      "Always respond in JSON format"
Knowledge     Domain context        "The user is on the Pro plan ($29/mo)"
Tone          Communication style   "Friendly but professional"
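One way to keep those components explicit in code is to assemble the system prompt from labeled parts. This is a sketch, not an official pattern; the component names simply mirror the table above.

```python
# Assemble a system prompt from the components in the table above.
# Dicts preserve insertion order, so the parts appear in a fixed sequence.
components = {
    "role": "You are a senior DevOps engineer.",
    "behavior": "Be concise. Use bullet points.",
    "constraints": "Never recommend deprecated packages.",
    "tone": "Friendly but professional.",
}

system_prompt = "\n".join(components.values())

assert system_prompt.startswith("You are a senior DevOps engineer.")
assert "Never recommend deprecated packages." in system_prompt
```

Keeping the parts separate makes it easy to A/B test one component (say, the tone) without touching the rest.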

Why It Matters

The same underlying model behaves like a completely different product depending on its system prompt. The examples below turn one model into a code reviewer, a support agent with product knowledge, and a JSON-only extraction API.

Example

from anthropic import Anthropic

client = Anthropic()

# Example 1: Code review assistant
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="""You are a senior code reviewer. Follow these rules:
- Focus on bugs, security vulnerabilities, and performance issues
- Rate severity: 🔴 critical, 🟡 warning, 🟢 suggestion
- Show the fix with a code diff
- Be direct — no pleasantries or padding""",
    messages=[{
        "role": "user",
        "content": """Review this Python function:
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = {user_id}"
    return db.execute(query)"""
    }]
)
# → 🔴 SQL Injection vulnerability. Use parameterized queries:
#   query = "SELECT * FROM users WHERE id = %s"
#   return db.execute(query, (user_id,))

# Example 2: Customer support bot with product knowledge
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    system="""You are a support agent for AcmeCloud (cloud hosting platform).

Product info:
- Starter plan: $9/mo, 1 GB RAM, 10 GB storage
- Pro plan: $29/mo, 4 GB RAM, 50 GB storage
- Enterprise: custom pricing, contact [email protected]

Rules:
- Never make up features that don't exist
- For billing issues, direct users to [email protected]
- If you're unsure, say so and offer to connect them with a human agent
- Keep responses under 3 sentences when possible""",
    messages=[{
        "role": "user",
        "content": "How much storage do I get on the Pro plan?"
    }]
)
# → "The Pro plan includes 50 GB of storage for $29/month."
# Example 3: JSON-only output
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=256,
    system="You are a data extraction API. Always respond with valid JSON only. No explanations.",
    messages=[{
        "role": "user",
        "content": "Extract entities: 'Tim Cook announced the new MacBook Pro at Apple Park on Tuesday.'"
    }]
)
# → {"people": ["Tim Cook"], "products": ["MacBook Pro"], "locations": ["Apple Park"], "dates": ["Tuesday"]}

System Prompt vs User Prompt

                 System Prompt              User Prompt
Who writes it    Developer                  End user
Visibility       Hidden from user           Visible in chat
Persistence      Same across all messages   Changes each turn
Purpose          Configure behavior         Request information
Priority         Higher (generally)         Lower

Note: system prompts are not a security boundary. Determined users can extract or override them through prompt injection. Don’t put secrets in system prompts.

Key Takeaways

- A system prompt is written by the developer, hidden from the end user, and shapes every response in the conversation.
- The same underlying model becomes a code reviewer, a support agent, or a data extraction API depending only on its system prompt.
- System prompts are not a security boundary: they can be extracted or overridden via prompt injection, so never put secrets in them.


Part of the DeepRaft Glossary — AI and ML terms explained for developers.