Agentic Workflow
Learn what Agentic Workflow means in AI and machine learning, with examples and related concepts.
Definition
Agentic workflow is a design pattern where an AI model operates autonomously in a loop — planning, executing actions, observing results, and adapting — rather than just generating a single response to a single prompt.
In a standard AI interaction, you ask a question and get an answer. In an agentic workflow, the AI breaks a complex goal into steps, uses tools to take actions (search the web, run code, edit files), evaluates the results, and decides what to do next. It keeps going until the task is done or it gets stuck.
Claude Code is a prime example: when you ask it to “add authentication to this app,” it doesn’t just generate code — it reads your codebase, plans an approach, creates files, runs tests, fixes failures, and iterates until the feature works. This is fundamentally different from a one-shot chatbot response.
How It Works
Traditional LLM:

```
User prompt → Model → Single response → Done
```

Agentic Workflow:

```
User goal → Model → Plan
        ↓
Step 1: Search codebase → Observe results
        ↓
Step 2: Edit file → Observe results
        ↓
Step 3: Run tests → Tests fail
        ↓
Step 4: Read error → Fix code
        ↓
Step 5: Run tests → Tests pass
        ↓
Done ✓
```
The agent loop typically follows a pattern like ReAct (Reasoning + Acting):
```
THINK: "I need to find where authentication is handled"
ACT: search_codebase("auth middleware")
OBSERVE: Found auth.ts at line 42, uses JWT tokens

THINK: "I should add the new route guard before the existing middleware"
ACT: edit_file("auth.ts", add_route_guard)
OBSERVE: File updated successfully

THINK: "Let me verify this works"
ACT: run_tests("auth.test.ts")
OBSERVE: 2 tests pass, 1 fails — missing token refresh

THINK: "I need to handle token refresh in the new guard"
ACT: edit_file("auth.ts", add_token_refresh)
OBSERVE: File updated

THINK: "Run tests again"
ACT: run_tests("auth.test.ts")
OBSERVE: All 3 tests pass ✓

DONE: "Authentication route guard added and tested"
```
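The trace above can be sketched as a loop in Python. Everything here is stubbed for illustration: the scripted `agent_step` policy stands in for the model's THINK step, and the tool functions return canned results rather than touching a real codebase.

```python
# A minimal ReAct-style loop with stubbed tools, no API calls.
# A real agent would ask an LLM to choose the next action.

def search_codebase(query: str) -> str:
    return "Found auth.ts, uses JWT tokens"

def edit_file(path: str) -> str:
    return f"{path} updated"

test_runs = {"count": 0}

def run_tests(path: str) -> str:
    # Fails on the first run, passes on the second, mirroring the trace above.
    test_runs["count"] += 1
    return "1 test failing" if test_runs["count"] == 1 else "all tests pass"

TOOLS = {"search": search_codebase, "edit": edit_file, "test": run_tests}

def agent_step(observation: str) -> tuple[str, str]:
    """Scripted policy: map the last observation to the next action."""
    if observation == "start":
        return "search", "auth middleware"
    if "Found" in observation or "updated" in observation:
        return "test", "auth.test.ts"
    if "failing" in observation:
        return "edit", "auth.ts"
    return "done", ""

observation = "start"
trace = []
while True:
    action, arg = agent_step(observation)  # THINK: choose the next action
    if action == "done":
        break
    observation = TOOLS[action](arg)       # ACT: run the chosen tool
    trace.append((action, observation))    # OBSERVE: record the result
```

The loop terminates because the policy eventually maps an observation to `"done"`; a production agent would also enforce a maximum step count to guard against loops that never converge.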
Why It Matters
- Complex tasks — Agents handle multi-step tasks that no single prompt can solve (debugging, refactoring, research)
- Error recovery — Unlike one-shot generation, agents can detect failures and try different approaches
- Tool integration — Agents connect LLMs to real-world actions: file systems, APIs, databases, browsers
- Automation — Agentic workflows can automate entire development workflows, from issue to PR
The Autonomy Spectrum
Not all agent use cases require full autonomy:
| Level | Description | Example |
|---|---|---|
| Copilot | Suggests next step, human executes | GitHub Copilot autocomplete |
| Semi-autonomous | Executes steps, human approves each | Claude Code (default mode) |
| Autonomous | Plans and executes with minimal oversight | CI/CD pipeline agents |
| Multi-agent | Multiple agents collaborate on subtasks | CrewAI, research teams |
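The semi-autonomous level can be sketched as an approval gate: the agent proposes a tool call, and a human (or a policy) approves it before anything executes. The tool names and the `approve` callback below are illustrative, not from any specific framework.

```python
# Approval gate sketch: nothing runs until the approve callback says yes.
from typing import Callable

def gated_execute(
    name: str,
    args: dict,
    run_tool: Callable[[str, dict], str],
    approve: Callable[[str, dict], bool],
) -> str:
    if not approve(name, args):
        return f"SKIPPED: {name} was not approved"
    return run_tool(name, args)

# Example policy: auto-approve read-only tools, block everything else
# until a person signs off.
def policy(name: str, args: dict) -> bool:
    return name == "read_file"

def fake_tool(name: str, args: dict) -> str:
    return f"ran {name}"

print(gated_execute("read_file", {"path": "a.txt"}, fake_tool, policy))
print(gated_execute("run_command", {"command": "rm -rf /"}, fake_tool, policy))
```

Swapping the `policy` function for an interactive prompt gives the "human approves each step" mode; replacing it with `lambda name, args: True` gives the fully autonomous level.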
Example
```python
# Simple agentic loop using Claude's tool use
import subprocess

from anthropic import Anthropic

client = Anthropic()

tools = [
    {
        "name": "read_file",
        "description": "Read a file from the filesystem",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
    {
        "name": "run_command",
        "description": "Run a shell command",
        "input_schema": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
]

def execute_tool(name: str, tool_input: dict) -> str:
    if name == "read_file":
        with open(tool_input["path"]) as f:
            return f.read()
    elif name == "run_command":
        result = subprocess.run(
            tool_input["command"], shell=True, capture_output=True, text=True
        )
        return result.stdout + result.stderr
    return f"Unknown tool: {name}"

# The agent loop
messages = [{"role": "user", "content": "Find and fix any Python syntax errors in the src/ directory"}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=4096,
        tools=tools,
        messages=messages,
    )

    # If the model wants to use a tool, execute it and feed the result back
    if response.stop_reason == "tool_use":
        tool_block = next(b for b in response.content if b.type == "tool_use")
        result = execute_tool(tool_block.name, tool_block.input)
        messages.append({"role": "assistant", "content": response.content})
        messages.append({
            "role": "user",
            "content": [{"type": "tool_result", "tool_use_id": tool_block.id, "content": result}],
        })
    else:
        # Model is done — print its final response
        print(response.content[0].text)
        break
```
Agentic Tools Comparison
| Tool | Approach | Best For |
|---|---|---|
| Claude Code | Single agent + tool use | Software engineering tasks |
| AutoGPT | Autonomous goal-driven agent | General automation experiments |
| CrewAI | Multi-agent role-based teams | Complex workflows with specialization |
| LangGraph | Graph-based agent workflows | Custom agent architectures |
| OpenAI Assistants | Managed agent runtime | Hosted agent applications |
Key Takeaways
- Agentic workflows let AI operate in a loop: plan, act, observe, adapt — not just generate text
- They enable complex multi-step tasks like debugging, research, and code refactoring
- The ReAct pattern (Reasoning + Acting) is the most common agent architecture
- Tool use is what makes agents practical — connecting LLMs to real-world actions
- The trend is toward more autonomous agents, but human oversight remains important for high-stakes tasks
Part of the DeepRaft Glossary — AI and ML terms explained for developers.