
Anthropic Claude Integration

Use The Handover as a Claude tool so the model requests human approval mid-conversation.

The Anthropic Claude API has native tool use support. You define a tool with a name, description, and input schema — Claude decides when to call it based on the conversation, and your code handles the call and returns the result. This is the cleanest pattern for giving Claude a human-in-the-loop gate.

How tool use works

The flow is a loop between your code and the API:

  1. You send a message with your tool definitions
  2. Claude responds — either with stop_reason: "end_turn" (done) or stop_reason: "tool_use" (wants to call a tool)
  3. When tool_use: extract the tool call from resp.content, execute it, send back a tool_result
  4. Claude receives the result and continues — loop repeats until end_turn

The key difference from OpenAI: Anthropic uses input_schema (not parameters), and tool results are sent as a user message containing a tool_result block.
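To make both shapes concrete, here is a minimal sketch of each as a plain Python dict. The `tool_use_id` value is a placeholder, not a real ID:

```python
# A minimal Anthropic tool definition: note "input_schema", not "parameters".
tool = {
    "name": "request_human_approval",
    "description": "Ask a human to approve an action before proceeding.",
    "input_schema": {
        "type": "object",
        "properties": {"action": {"type": "string"}},
        "required": ["action"],
    },
}

# A tool result is sent back as a *user* message containing a tool_result
# block, matched to Claude's tool_use block via tool_use_id.
tool_result_message = {
    "role": "user",
    "content": [{
        "type": "tool_result",
        "tool_use_id": "toolu_abc123",  # placeholder: id from the tool_use block
        "content": "Decision: approved.",
    }],
}
```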

Installation

pip install anthropic requests

Step 1: Define the tool

The description is the most important field — Claude's entire decision about when to call the tool comes from it. Per Anthropic's docs, aim for at least 3–4 sentences and explain exactly when the tool should and shouldn't be used.

tools = [{
    "name": "request_human_approval",
    "description": (
        "Request human approval before taking a sensitive, irreversible, or high-value action. "
        "Use this tool for: any financial transaction or refund, sending emails or messages to customers, "
        "deleting or permanently modifying data, deploying code, or any action that would be hard to undo. "
        "Do not use it for read-only operations, lookups, or internal analysis."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "action":  {"type": "string", "description": "Exactly what you want to do — be specific, not vague"},
            "context": {"type": "string", "description": "Why, relevant background, and any alternatives considered"},
            "urgency": {"type": "string", "enum": ["low", "medium", "high", "critical"], "description": "How time-sensitive this is"},
        },
        "required": ["action", "context", "urgency"],
    },
}]

Step 2: Run the agent loop

import anthropic, requests, time

client         = anthropic.Anthropic()
HANDOVER_KEY   = "ho_your_key_here"
APPROVER_EMAIL = "approver@yourcompany.com"
HEADERS        = {"Authorization": f"Bearer {HANDOVER_KEY}"}

def call_handover(action: str, context: str, urgency: str = "medium") -> str:
    # Create the decision — approver gets an email with Approve/Deny buttons
    resp = requests.post("https://thehandover.xyz/decisions", headers=HEADERS, json={
        "action": action, "context": context, "urgency": urgency,
        "approver": APPROVER_EMAIL,
    })
    resp.raise_for_status()
    did = resp.json()["id"]

    # Poll every 5 seconds for up to 10 minutes
    for _ in range(120):
        time.sleep(5)
        r = requests.get(f"https://thehandover.xyz/decisions/{did}", headers=HEADERS).json()
        if r["status"] != "pending":
            notes = r.get("response_notes") or "none"
            return f"Decision: {r['status']}. Approver notes: {notes}"
    return "Approval timed out — no response within 10 minutes. Do not proceed."

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]

    while True:
        resp = client.messages.create(
            model="claude-opus-4-6",
            max_tokens=4096,
            system=(
                "You are a helpful assistant. Before any financial transaction, "
                "customer email, data deletion, or deployment, you MUST call "
                "request_human_approval first. Do not proceed if the decision is denied."
            ),
            tools=tools,
            messages=messages,
        )

        # Always append the assistant's response before handling anything
        messages.append({"role": "assistant", "content": resp.content})

        if resp.stop_reason != "tool_use":
            # Done (end_turn, or another terminal stop like max_tokens) —
            # join the text blocks rather than assuming exactly one exists
            return "".join(b.text for b in resp.content if b.type == "text")

        # stop_reason == "tool_use": handle each tool call and send results back
        tool_results = []
        for block in resp.content:
            if block.type == "tool_use":
                result = call_handover(**block.input)
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result,
                })

        # Tool results are sent as a user message — Claude continues from here
        messages.append({"role": "user", "content": tool_results})

print(run_agent("Issue a €340 refund to customer #4821."))

Claude's judgment: Of the major models, Claude is particularly good at deciding when to call approval tools without being hand-held. A clear system prompt still matters, but Claude will often correctly infer from context — especially for high-stakes or irreversible actions. Use claude-opus-4-6 for the most reliable tool-call decisions.

JavaScript / TypeScript

Installation

npm install @anthropic-ai/sdk

import Anthropic from "@anthropic-ai/sdk";

const client         = new Anthropic();
const HANDOVER_KEY   = "ho_your_key_here";
const APPROVER_EMAIL = "approver@yourcompany.com";

const tools: Anthropic.Tool[] = [{
  name: "request_human_approval",
  description:
    "Request human approval before taking a sensitive, irreversible, or high-value action. " +
    "Use for: financial transactions, customer emails, data deletion, deployments. " +
    "Do not use for read-only lookups or internal analysis.",
  input_schema: {
    type: "object",
    properties: {
      action:  { type: "string", description: "Exactly what you want to do — be specific" },
      context: { type: "string", description: "Why, and any relevant background" },
      urgency: { type: "string", enum: ["low", "medium", "high", "critical"] },
    },
    required: ["action", "context", "urgency"],
  },
}];

async function callHandover(action: string, context: string, urgency = "medium") {
  const resp = await fetch("https://thehandover.xyz/decisions", {
    method: "POST",
    headers: { "Authorization": `Bearer ${HANDOVER_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ action, context, urgency, approver: APPROVER_EMAIL }),
  });
  if (!resp.ok) throw new Error(`Handover API error: ${resp.status}`);
  const { id } = await resp.json();
  for (let i = 0; i < 120; i++) {
    await new Promise(r => setTimeout(r, 5000));
    const r = await (await fetch(`https://thehandover.xyz/decisions/${id}`, {
      headers: { "Authorization": `Bearer ${HANDOVER_KEY}` },
    })).json();
    if (r.status !== "pending") return `Decision: ${r.status}. Notes: ${r.response_notes ?? "none"}`;
  }
  return "Timed out. Do not proceed.";
}

async function runAgent(userMessage: string) {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: userMessage },
  ];

  while (true) {
    const resp = await client.messages.create({
      model: "claude-opus-4-6",
      max_tokens: 4096,
      system: "Before any financial transaction, customer email, or data deletion, you MUST call request_human_approval. Do not proceed if the decision is denied.",
      tools,
      messages,
    });

    messages.push({ role: "assistant", content: resp.content });

    if (resp.stop_reason !== "tool_use") {
      // Done (end_turn, or another terminal stop like max_tokens)
      const text = resp.content.find(b => b.type === "text");
      return text?.type === "text" ? text.text : "";
    }

    const toolResults: Anthropic.ToolResultBlockParam[] = [];
    for (const block of resp.content) {
      if (block.type === "tool_use") {
        const input = block.input as { action: string; context: string; urgency?: string };
        const result = await callHandover(input.action, input.context, input.urgency);
        toolResults.push({ type: "tool_result", tool_use_id: block.id, content: result });
      }
    }

    messages.push({ role: "user", content: toolResults });
  }
}

console.log(await runAgent("Issue a €340 refund to customer #4821."));

Writing an effective system prompt

Claude's tool-call decisions are shaped by both the tool description and your system prompt. The description handles what the tool does — the system prompt handles when your agent should use it:

  • Name specific actions: "any refund", "any email to a customer", "any DELETE query" — not "sensitive actions"
  • Set thresholds: "any transaction over €100" is clearer than "large transactions"
  • Use MUST: "you MUST call request_human_approval" is more reliable than "you should"
  • State the consequence: "Do not proceed if the decision is denied" — Claude will otherwise sometimes continue anyway
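Putting those four points together, a system prompt for this agent might look like the following. The company name and the €100 threshold are illustrative, not recommendations:

```python
SYSTEM_PROMPT = (
    "You are a support agent for Acme Co. "
    # Name specific actions, not "sensitive actions":
    "Before any refund, any email to a customer, any DELETE query, "
    "or any deployment, "
    # Use MUST, not "should":
    "you MUST call request_human_approval and wait for the result. "
    # Set a concrete threshold:
    "Any transaction over EUR 100 always requires approval. "
    # State the consequence:
    "If the decision is denied or times out, do not proceed. "
    "Explain the situation to the user instead."
)
```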

Production tip: use a callback instead of polling

Polling inside the tool blocks the process for up to 10 minutes. For longer-running or async agents, pass a callback_url when creating the decision:

requests.post("https://thehandover.xyz/decisions", headers=HEADERS, json={
    "action": action, "context": context, "urgency": urgency,
    "approver": APPROVER_EMAIL,
    "callback_url": "https://your-server.com/webhook/handover",
})

Your webhook fires the instant the approver clicks. See the webhooks docs for the payload format.
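A minimal receiver sketch using Flask. The route path matches the callback_url above; the payload field names here ("status", "response_notes") are an assumption mirroring the GET /decisions/{id} response, so confirm them against the webhooks docs before relying on them:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook/handover", methods=["POST"])
def handover_webhook():
    payload = request.get_json(force=True)
    # Assumed field names (mirroring GET /decisions/{id}); verify against
    # the webhooks docs for the actual payload format.
    status = payload.get("status")
    notes = payload.get("response_notes")
    if status == "approved":
        pass  # resume the agent / execute the pending action
    elif status == "denied":
        pass  # abort, and surface the approver's notes to the agent
    return {"ok": True}, 200
```

Return a 2xx quickly and do any heavy work (resuming the agent) asynchronously, so the webhook delivery isn't retried unnecessarily.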

Ready to add human oversight?

Free to start. No credit card required. Five minutes to integrate.

Get Started Free