LangChain agents are remarkably capable. Given a set of tools and a goal, they plan, reason, and execute — often without any human involvement. That's the point. But it's also the risk. An agent with access to your email, database, or billing system can do real damage if it misinterprets instructions or acts on bad data.
The solution isn't to make your agent less capable. It's to give it a way to pause and ask before crossing certain lines. This is human-in-the-loop oversight, and it's straightforward to implement with The Handover API.
What we're building
We'll add a request_human_approval tool to a LangChain agent. When the agent determines that an action is sensitive enough to warrant review, it calls this tool. The designated approver gets an email with Approve / Deny buttons. The agent waits for the response and proceeds accordingly.
Prerequisites
- A Handover API key (free tier works)
- Python: pip install langchain langchain-openai langgraph requests
- JavaScript: npm install @langchain/core @langchain/openai @langchain/langgraph zod
- An approver email address
Step 1: Define the approval tool
In Python, use the @tool decorator from langchain_core.tools. The docstring is what the model reads to decide when to call it — write it clearly.
from langchain_core.tools import tool
import requests, time

HANDOVER_API_KEY = "ho_your_key_here"
APPROVER_EMAIL = "approver@yourcompany.com"

@tool
def request_human_approval(action: str, context: str, urgency: str = "medium") -> str:
    """Request human approval before taking a sensitive or irreversible action.

    Use for: financial transactions, sending emails to customers,
    deleting or modifying records, deployments, or any action the user
    would want to review before it happens.

    Args:
        action: A clear description of what you want to do — be specific
        context: Why you want to do it and any relevant background
        urgency: How time-sensitive this is — low | medium | high | critical
    """
    headers = {"Authorization": f"Bearer {HANDOVER_API_KEY}"}
    resp = requests.post("https://thehandover.xyz/decisions", headers=headers, json={
        "action": action, "context": context,
        "urgency": urgency, "approver": APPROVER_EMAIL,
    })
    did = resp.json()["id"]
    for _ in range(120):  # poll every 5 seconds, up to 10 minutes
        time.sleep(5)
        r = requests.get(f"https://thehandover.xyz/decisions/{did}", headers=headers).json()
        if r["status"] != "pending":
            notes = r.get("response_notes") or "none"
            return f"Decision: {r['status']}. Approver notes: {notes}"
    return "Approval timed out — no response within 10 minutes. Do not proceed."
In JavaScript/TypeScript, use tool() from @langchain/core/tools with a Zod schema:
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const requestHumanApproval = tool(
  async ({ action, context, urgency }) => {
    const resp = await fetch("https://thehandover.xyz/decisions", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${process.env.HANDOVER_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ action, context, urgency, approver: process.env.APPROVER_EMAIL }),
    });
    const { id } = await resp.json();
    // Poll every 5 seconds, up to 10 minutes
    for (let i = 0; i < 120; i++) {
      await new Promise((resolve) => setTimeout(resolve, 5000));
      const r = await (await fetch(`https://thehandover.xyz/decisions/${id}`, {
        headers: { "Authorization": `Bearer ${process.env.HANDOVER_API_KEY}` },
      })).json();
      if (r.status !== "pending") return `Decision: ${r.status}. Notes: ${r.response_notes ?? "none"}`;
    }
    return "Approval timed out. Do not proceed.";
  },
  {
    name: "request_human_approval",
    description: "Request human approval before a sensitive or irreversible action. Use for financial transactions, customer emails, record deletion, deployments.",
    schema: z.object({
      action: z.string().describe("What you want to do — be specific"),
      context: z.string().describe("Why, and any relevant background"),
      urgency: z.enum(["low", "medium", "high", "critical"]).default("medium"),
    }),
  }
);
Step 2: Create the agent
The current recommended pattern is create_react_agent from LangGraph — it replaces the older AgentExecutor.
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o")

system_prompt = """You are a helpful assistant.
Before taking any of the following actions, you MUST call request_human_approval first:
- Any financial transaction or refund
- Sending emails or messages to customers
- Deleting or permanently modifying records
- Any deployment or infrastructure change
Be specific in the 'action' field — describe exactly what you intend to do."""

agent = create_react_agent(llm, [request_human_approval], prompt=system_prompt)
In TypeScript:
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({ model: "gpt-4o" });
const agent = createReactAgent({ llm, tools: [requestHumanApproval] });
Step 3: Run it
# Python
result = agent.invoke({
    "messages": [{"role": "user", "content": "Send a follow-up email to the Acme Corp lead offering a 20% discount."}]
})
print(result["messages"][-1].content)
# Agent calls request_human_approval automatically
# Approver receives an email and clicks Approve or Deny
# Agent resumes with the decision
// TypeScript
const result = await agent.invoke({
messages: [
new SystemMessage("Before any customer email, refund, or data change, call request_human_approval."),
new HumanMessage("Send a follow-up email to the Acme Corp lead offering a 20% discount."),
],
});
console.log(result.messages.at(-1)?.content);
Tip: The polling loop is fine for short-lived scripts. For production agents that might wait hours for a response, use callback_url — your agent fires the decision and moves on, and The Handover POSTs the result to your server when the human decides. See the webhooks docs.
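As a sketch of that callback approach: instead of polling, the agent creates the decision with a callback URL and returns immediately. The field name callback_url and the endpoint on your server are assumptions here; check the webhooks docs for the exact contract.

```python
import requests

HANDOVER_API_KEY = "ho_your_key_here"
APPROVER_EMAIL = "approver@yourcompany.com"

def build_decision_payload(action: str, context: str, urgency: str = "medium") -> dict:
    """Payload for a fire-and-forget decision; the result arrives via webhook."""
    return {
        "action": action,
        "context": context,
        "urgency": urgency,
        "approver": APPROVER_EMAIL,
        # Assumed field name: The Handover POSTs the decision here when the human responds
        "callback_url": "https://your-app.example.com/handover/decision",
    }

def request_approval_async(action: str, context: str, urgency: str = "medium") -> str:
    """Create the decision and return its id immediately, without polling."""
    resp = requests.post(
        "https://thehandover.xyz/decisions",
        headers={"Authorization": f"Bearer {HANDOVER_API_KEY}"},
        json=build_decision_payload(action, context, urgency),
    )
    resp.raise_for_status()
    return resp.json()["id"]  # persist this id to match the incoming webhook later
```

Your server then looks up the stored id when the webhook arrives and resumes whatever workflow was waiting on it.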
What the approver sees
The moment the agent calls request_human_approval, the designated approver receives an email containing the action description, the agent's reasoning (from context), the urgency level, and one-click Approve / Deny buttons. No login required — they click directly from their inbox.
Writing an effective system prompt
The system prompt is what controls when the agent actually calls the approval tool. Vague instructions get ignored. Be explicit:
- Name specific actions: "any refund", "any email to a customer", "any DELETE operation"
- Use MUST: "you MUST call request_human_approval before..." is more reliable than "you should"
- Set thresholds: "any transaction over €100" rather than "large transactions"
- Cover edge cases: If the agent could route around approval (e.g. issuing store credit instead of a refund), close the loophole explicitly
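Putting those four guidelines together, a policy section of the system prompt might look like the following. The €100 threshold and the store-credit loophole are illustrative; adapt them to your own risk tolerance.

```python
# Example policy prompt: the specific threshold and loophole wording are
# placeholders to adapt, not recommended defaults.
APPROVAL_POLICY_PROMPT = """You are a helpful assistant.

You MUST call request_human_approval before any of the following:
- Any refund or transaction over €100, including store credit, gift cards,
  or any other substitute for a refund
- Any email or message sent to a customer
- Any DELETE operation, or permanent modification of a record
- Any deployment or infrastructure change

If an action is a workaround for one of the above, it still requires approval.
In the 'action' field, state exact amounts, recipients, and record IDs."""
```

Pass this string as the prompt when creating the agent, exactly as in Step 2.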
Ready to add human oversight to your agent?
Free to start. No credit card required. Takes five minutes.
Get Started Free