Build a basic agent
First, create a simple agent that can answer questions and call tools. The agent will use Claude Sonnet 4.5 as its language model, a basic weather function as its only tool, and a short prompt to guide its behavior. For this example you'll need a Claude (Anthropic) account and an API key; then set the ANTHROPIC_API_KEY environment variable in your terminal.
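Optionally, you can fail fast if the key isn't set before creating the agent (a minimal sketch, assuming a Node.js environment):
// Abort early with a clear message if the API key is missing.
if (!process.env.ANTHROPIC_API_KEY) {
  throw new Error("Set the ANTHROPIC_API_KEY environment variable before running this example.");
}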
import { createAgent, tool } from "langchain";
import * as z from "zod";

const getWeather = tool(
  (input) => `It's always sunny in ${input.city}!`,
  {
    name: "get_weather",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
    }),
  }
);

const agent = createAgent({
  model: "claude-sonnet-4-5-20250929",
  tools: [getWeather],
});

console.log(
  await agent.invoke({
    messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  })
);
To learn how to trace your agent with LangSmith, see the LangSmith documentation.
Build a real-world agent
Next, build a practical weather forecasting agent that demonstrates key production concepts:
- A detailed system prompt for better agent behavior
- Custom tools that integrate with external data
- Model configuration for consistent responses
- Structured output for predictable results
- Conversational memory for chat-like interactions
- Creating and running a fully functional agent
1. Define the system prompt
The system prompt defines your agent's role and behavior. Keep it specific and actionable.
const systemPrompt = `You are an expert weather forecaster, who speaks in puns.
You have access to two tools:
- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location
If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location.`;
2. Create tools
Tools are functions your agent can call. They often need to connect to external systems and, to do so, rely on runtime configuration. Note how the getUserLocation tool below does exactly that.
import { type Runtime } from "@langchain/langgraph";
import { tool } from "langchain";
import * as z from "zod";

const getWeather = tool(
  (input) => `It's always sunny in ${input.city}!`,
  {
    name: "get_weather_for_location",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
    }),
  }
);

type AgentRuntime = Runtime<{ user_id: string }>;

const getUserLocation = tool(
  (_, config: AgentRuntime) => {
    const { user_id } = config.context;
    return user_id === "1" ? "Florida" : "SF";
  },
  {
    name: "get_user_location",
    description: "Retrieve user information based on user ID",
    schema: z.object({}),
  }
);
Zod is a library for validating and parsing data against a predefined schema. You can use it to define your tools' input schemas so the agent only calls a tool with correctly shaped arguments. Alternatively, you can define the schema property as a JSON Schema object; keep in mind that JSON Schema input is not validated at runtime.
Example: using JSON Schema as the tool input
const getWeather = tool(
  ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather_for_location",
    description: "Get the weather for a given city",
    schema: {
      type: "object",
      properties: {
        city: {
          type: "string",
          description: "The city to get the weather for",
        },
      },
      required: ["city"],
    },
  }
);
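Whichever schema style you use, a tool created with tool() can also be invoked directly, which is handy for a quick sanity check before handing it to an agent (a small sketch using the getWeather tool defined above):
// Invoke the tool directly to verify its output outside of the agent loop.
const weather = await getWeather.invoke({ city: "Tokyo" });
console.log(weather); // "It's always sunny in Tokyo!"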
3. Configure the model
Configure the language model your agent will use, for example with a temperature of 0 for more consistent responses.
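The snippet below mirrors the model setup used in the full example at the end of this page:
import { initChatModel } from "langchain";

// Initialize the chat model with a fixed temperature for consistent output.
const model = await initChatModel(
  "claude-sonnet-4-5-20250929",
  { temperature: 0 }
);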
4. Define a response format
If your agent's responses need to match a specific schema, you can optionally define a structured response format.
const responseFormat = z.object({
  punny_response: z.string(),
  weather_conditions: z.string().optional(),
});
5. Add memory
To give the agent conversational memory across turns, attach a checkpointer that stores the conversation state.
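As in the full example below, an in-memory checkpointer is enough for this walkthrough:
import { MemorySaver } from "@langchain/langgraph";

// Keeps conversation state in memory; use a persistent checkpointer in production.
const checkpointer = new MemorySaver();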
6. Create and run the agent
Now assemble your agent from all of these components and run it!
import { createAgent } from "langchain";

const agent = createAgent({
  model: "claude-sonnet-4-5-20250929",
  systemPrompt: systemPrompt,
  tools: [getUserLocation, getWeather],
  responseFormat,
  checkpointer,
});

// `thread_id` is a unique identifier for a given conversation.
const config = {
  configurable: { thread_id: "1" },
  context: { user_id: "1" },
};

const response = await agent.invoke(
  { messages: [{ role: "user", content: "what is the weather outside?" }] },
  config
);

console.log(response.structuredResponse);
// {
//   punny_response: "Florida is still having a 'sun-derful' day ...",
//   weather_conditions: "It's always sunny in Florida!"
// }

// Note that we can continue the conversation using the same `thread_id`.
const thankYouResponse = await agent.invoke(
  { messages: [{ role: "user", content: "thank you!" }] },
  config
);

console.log(thankYouResponse.structuredResponse);
// {
//   punny_response: "You're 'thund-erfully' welcome! ...",
//   weather_conditions: undefined
// }
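Each thread_id identifies an independent conversation, so a new thread_id starts from a blank history. A minimal sketch reusing the same agent (the user_id value here is only illustrative):
// A different `thread_id` starts a separate conversation with no shared memory.
const otherConfig = {
  configurable: { thread_id: "2" },
  context: { user_id: "2" },
};

const freshResponse = await agent.invoke(
  { messages: [{ role: "user", content: "what's the weather outside?" }] },
  otherConfig
);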
Full example code:
import { createAgent, tool, initChatModel } from "langchain";
import { MemorySaver, type Runtime } from "@langchain/langgraph";
import * as z from "zod";

// Define system prompt
const systemPrompt = `You are an expert weather forecaster, who speaks in puns.
You have access to two tools:
- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location
If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location.`;

// Define tools
const getWeather = tool(
  ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather_for_location",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string(),
    }),
  }
);

const getUserLocation = tool(
  (_, config: Runtime<{ user_id: string }>) => {
    const { user_id } = config.context;
    return user_id === "1" ? "Florida" : "SF";
  },
  {
    name: "get_user_location",
    description: "Retrieve user information based on user ID",
    schema: z.object({}),
  }
);

// Configure model
const model = await initChatModel(
  "claude-sonnet-4-5-20250929",
  { temperature: 0 }
);

// Define response format
const responseFormat = z.object({
  punny_response: z.string(),
  weather_conditions: z.string().optional(),
});

// Set up memory
const checkpointer = new MemorySaver();

// Create agent
const agent = createAgent({
  model, // use the configured model from above
  systemPrompt: systemPrompt,
  tools: [getUserLocation, getWeather],
  responseFormat,
  checkpointer,
});

// Run agent
// `thread_id` is a unique identifier for a given conversation.
const config = {
  configurable: { thread_id: "1" },
  context: { user_id: "1" },
};

const response = await agent.invoke(
  { messages: [{ role: "user", content: "what is the weather outside?" }] },
  config
);

console.log(response.structuredResponse);
// {
//   punny_response: "Florida is still having a 'sun-derful' day! The sunshine is playing 'ray-dio' hits all day long! I'd say it's the perfect weather for some 'solar-bration'! If you were hoping for rain, I'm afraid that idea is all 'washed up' - the forecast remains 'clear-ly' brilliant!",
//   weather_conditions: "It's always sunny in Florida!"
// }

// Note that we can continue the conversation using the same `thread_id`.
const thankYouResponse = await agent.invoke(
  { messages: [{ role: "user", content: "thank you!" }] },
  config
);

console.log(thankYouResponse.structuredResponse);
// {
//   punny_response: "You're 'thund-erfully' welcome! It's always a 'breeze' to help you stay 'current' with the weather. I'm just 'cloud'-ing around waiting to 'shower' you with more forecasts whenever you need them. Have a 'sun-sational' day in the Florida sunshine!",
//   weather_conditions: undefined
// }
Your agent can now:
- Understand context and remember the conversation
- Use multiple tools intelligently
- Provide structured responses in a consistent format
- Handle user-specific information via context
- Maintain conversation state across interactions