
Tool Use

Learn what tool use (also called function calling) means in AI and machine learning, with examples and related concepts.

Definition

Tool use (also called function calling) is the ability of an LLM to invoke external functions or APIs to take actions in the real world — reading files, searching the web, querying databases, or executing code.

Without tool use, an LLM is just a text generator. It can describe how to check the weather, but it can’t actually check it. Tool use changes this: you define available tools (with names, descriptions, and parameter schemas), and the model decides when and how to call them to fulfill the user’s request.

This is the key building block of AI agents and agentic workflows. When Claude Code edits your files or runs your tests, it’s using tool use. When ChatGPT browses the web or generates images, that’s tool use. It bridges the gap between “knowing things” and “doing things.”

How It Works

1. DEFINE TOOLS — Tell the model what tools are available

   tools = [
     { name: "get_weather", params: { city: string } },
     { name: "search_web",  params: { query: string } },
     { name: "send_email",  params: { to: string, body: string } }
   ]

2. USER ASKS — "What's the weather in Tokyo?"

3. MODEL DECIDES — Instead of guessing, the model outputs a tool call:
   → { tool: "get_weather", params: { city: "Tokyo" } }

4. YOUR CODE EXECUTES — Call the actual weather API
   → { temperature: 22, condition: "cloudy" }

5. MODEL RESPONDS — Uses the real data to answer:
   → "It's 22°C and cloudy in Tokyo right now."

The model never executes tools itself — it outputs a structured request, your application executes it, and the result is fed back to the model. This keeps humans in control of what actually happens.
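The decide/execute/respond loop above can be sketched in a few lines. Here `get_weather` and the `tool_call` dict are hypothetical stand-ins for a real API and real model output:

```python
import json

# Hypothetical stub standing in for a real weather API (step 4)
def get_weather(city: str) -> dict:
    return {"temperature": 22, "condition": "cloudy"}

# Registry mapping tool names to the functions that implement them
TOOLS = {"get_weather": get_weather}

# Structured request as the model would emit it (step 3)
tool_call = {"tool": "get_weather", "params": {"city": "Tokyo"}}

# Your code dispatches the call; the JSON result is fed back to the
# model, which uses the real data in its reply (step 5)
result = TOOLS[tool_call["tool"]](**tool_call["params"])
print(json.dumps(result))
```

The model only produces the `tool_call` dict; everything after that line runs in your application, which is exactly where the control boundary sits.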

Parallel Tool Use

Modern models can call multiple tools simultaneously:

User: "Compare the weather in Tokyo and New York"

Model outputs TWO tool calls at once:
  → get_weather({ city: "Tokyo" })
  → get_weather({ city: "New York" })

Both execute in parallel → results fed back → model synthesizes
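The fan-out can be sketched with a thread pool. The `get_weather` stub and its canned data are hypothetical, standing in for a real API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub standing in for a real weather API
def get_weather(city: str) -> dict:
    data = {"Tokyo": 22, "New York": 15}
    return {"city": city, "temperature": data[city]}

# Two tool calls emitted by the model in a single turn
calls = [{"city": "Tokyo"}, {"city": "New York"}]

# Execute both concurrently; the collected results are then fed
# back to the model in one message for it to synthesize
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda params: get_weather(**params), calls))

print(results)
```

Threads suit I/O-bound tool calls like HTTP requests; for async codebases, `asyncio.gather` plays the same role.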

Why It Matters

Tool use turns a text generator into a system that can act. It underlies agents like Claude Code, and because the model only ever emits structured requests that your application executes, you stay in control of what actually happens.

Example

# Claude tool use — complete working example
from anthropic import Anthropic
import json

client = Anthropic()

# Define tools the model can use
tools = [
    {
        "name": "get_stock_price",
        "description": "Get the current stock price for a given ticker symbol",
        "input_schema": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol (e.g., AAPL, GOOGL)"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "calculate",
        "description": "Evaluate a mathematical expression",
        "input_schema": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Math expression to evaluate (e.g., '100 * 1.15')"
                }
            },
            "required": ["expression"]
        }
    }
]

# Simulate tool execution
def run_tool(name: str, tool_input: dict) -> str:
    if name == "get_stock_price":
        # In production, call a real API
        prices = {"AAPL": 198.50, "GOOGL": 175.20, "MSFT": 420.00}
        price = prices.get(tool_input["ticker"], "Unknown")
        return json.dumps({"ticker": tool_input["ticker"], "price": price})
    elif name == "calculate":
        return str(eval(tool_input["expression"]))  # unsafe; simplified for demo
    return json.dumps({"error": f"unknown tool: {name}"})

# Send the request and resolve tool calls in a loop
messages = [{
    "role": "user",
    "content": "What would 100 shares of Apple stock be worth right now?"
}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-6", max_tokens=1024, tools=tools, messages=messages
    )

    if response.stop_reason == "tool_use":
        # Execute each tool call
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                result = run_tool(block.name, block.input)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result
                })

        messages.append({"role": "assistant", "content": response.content})
        messages.append({"role": "user", "content": tool_results})
    else:
        print(response.content[0].text)
        break
# → "100 shares of Apple (AAPL) at $198.50 would be worth $19,850.00."
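The demo's calculate tool uses eval, which will execute arbitrary Python. A safer sketch walks the expression's AST and allows only arithmetic; the name `safe_eval` and the operator whitelist are my own choices, not part of any SDK:

```python
import ast
import operator

# Whitelist of allowed binary arithmetic operators
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a pure-arithmetic expression without eval()."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))
```

Anything outside numbers, `+ - * /`, and unary minus raises `ValueError`, so a model-supplied expression like `__import__('os')...` is rejected instead of executed.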

Tool Use Across Providers

Provider            | Feature Name           | Key Difference
Anthropic (Claude)  | Tool Use               | Structured tool_use blocks, parallel calls
OpenAI (GPT)        | Function Calling       | function_call in response, parallel supported
Google (Gemini)     | Function Calling       | Similar to OpenAI, grounding with Google Search
MCP                 | Model Context Protocol | Open standard for tool interoperability
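The underlying JSON Schema is shared across providers; only the wrapper differs. A sketch of one weather tool in Anthropic's and OpenAI's definition formats (the tool itself is a hypothetical example):

```python
# Shared JSON Schema describing the tool's parameters
params_schema = {
    "type": "object",
    "properties": {"city": {"type": "string", "description": "City name"}},
    "required": ["city"],
}

# Anthropic: a flat object with an input_schema field
anthropic_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "input_schema": params_schema,
}

# OpenAI Chat Completions: wrapped in a {"type": "function"} envelope,
# with the schema under "parameters"
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": params_schema,
    },
}
```

Because the schema body is identical, a single tool registry can emit either format, which is also the interoperability gap MCP aims to close.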

Key Takeaways

- Tool use (function calling) lets an LLM act in the world by requesting external functions and APIs.
- The model never executes tools itself: it emits a structured request, your code runs it, and the result is fed back.
- Tools are defined with a name, a description, and a JSON parameter schema.
- Modern models can emit several tool calls in one turn and have them executed in parallel.
- MCP (Model Context Protocol) is an open standard for tool interoperability across providers.

Part of the DeepRaft Glossary — AI and ML terms explained for developers.