API Reference
For detailed documentation of all features and configuration options, see the ChatAnthropic API reference.

AWS Bedrock and Google VertexAI
Note that certain Anthropic models are also accessible via AWS Bedrock and Google VertexAI. See the ChatBedrock and ChatVertexAI integrations to use Anthropic models through those services.

Overview
Integration details
| Class | Package | Local | Serializable | JS/TS support | Downloads | Latest version |
|---|---|---|---|---|---|---|
| ChatAnthropic | langchain-anthropic | ❌ | beta | ✅ (npm) | | |
Model features

Setup
To access Anthropic (Claude) models you'll need to install the langchain-anthropic integration package and get a Claude API key.

Installation
pip install -U langchain-anthropic
Credentials
Head to console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:
import getpass
import os
if "ANTHROPIC_API_KEY" not in os.environ:
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")
To enable automated tracing of your model calls, you can also set your LangSmith API key:
os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
os.environ["LANGSMITH_TRACING"] = "true"
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-haiku-4-5-20251001",
temperature=0,
max_tokens=1024,
timeout=None,
max_retries=2,
# other params...
)
Invocation
messages = [
(
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
),
("human", "I love programming."),
]
ai_msg = model.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})
print(ai_msg.text)
J'adore la programmation.
Content blocks
The content of a single Anthropic AIMessage can be either a single string or a list of content blocks when features such as tool calling or extended thinking are used. For example, when an Anthropic model invokes a tool, the tool call is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls attribute):
from langchain_anthropic import ChatAnthropic
from typing_extensions import Annotated
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
def get_weather(
location: Annotated[str, ..., "Location as city and state."]
) -> str:
"""Get the weather at a location."""
return "It's sunny."
model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke("Which city is hotter today: LA or NY?")
response.content
[{'text': "I'll help you compare the temperatures of Los Angeles and New York by checking their current weather. I'll retrieve the weather for both cities.",
'type': 'text'},
{'id': 'toolu_01CkMaXrgmsNjTso7so94RJq',
'input': {'location': 'Los Angeles, CA'},
'name': 'get_weather',
'type': 'tool_use'},
{'id': 'toolu_01SKaTBk9wHjsBTw5mrPVSQf',
'input': {'location': 'New York, NY'},
'name': 'get_weather',
'type': 'tool_use'}]
content_blocks will render the content in a standard format that is consistent across providers:
response.content_blocks
[{'type': 'text',
'text': "I'll help you compare the temperatures of Los Angeles and New York by checking their current weather. I'll retrieve the weather for both cities."},
{'type': 'tool_call',
'name': 'get_weather',
'args': {'location': 'Los Angeles, CA'},
'id': 'toolu_01CkMaXrgmsNjTso7so94RJq'},
{'type': 'tool_call',
'name': 'get_weather',
'args': {'location': 'New York, NY'},
'id': 'toolu_01SKaTBk9wHjsBTw5mrPVSQf'}]
The .tool_calls attribute provides access to tool calls specifically, in a standard format:
response.tool_calls
[{'name': 'get_weather',
'args': {'location': 'Los Angeles, CA'},
'id': 'toolu_01CkMaXrgmsNjTso7so94RJq'},
{'name': 'get_weather',
'args': {'location': 'New York, NY'},
'id': 'toolu_01SKaTBk9wHjsBTw5mrPVSQf'}]
Multimodal
Claude supports image and PDF inputs as content blocks, either in Anthropic's native format (see the docs for vision and PDF support) or in LangChain's standard format.
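For example, an image can be passed inline using LangChain's cross-provider format. A minimal sketch, assuming the standard image block shape ({"type": "image", "base64": ..., "mime_type": ...}); the URL is a placeholder:

import base64
import httpx
from langchain_anthropic import ChatAnthropic

# Fetch and base64-encode an image (placeholder URL)
image_data = base64.b64encode(
    httpx.get("https://example.com/image.jpeg").content
).decode("utf-8")

model = ChatAnthropic(model="claude-haiku-4-5-20251001")
input_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        # LangChain's provider-agnostic image block
        {"type": "image", "base64": image_data, "mime_type": "image/jpeg"},
    ],
}
model.invoke([input_message])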
Files API
Claude also supports interacting with files through its managed Files API. See the examples below. The Files API can also be used to upload files to a container for use with Claude's built-in code-execution tools; see the code execution section below for details.

Images
# Upload image
import anthropic
client = anthropic.Anthropic()
file = client.beta.files.upload(
# Supports image/jpeg, image/png, image/gif, image/webp
file=("image.png", open("/path/to/image.png", "rb"), "image/png"),
)
image_file_id = file.id
# Run inference
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["files-api-2025-04-14"],
)
input_message = {
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image.",
},
{
"type": "image",
"file_id": image_file_id,
},
],
}
model.invoke([input_message])
PDF
# Upload document
import anthropic
client = anthropic.Anthropic()
file = client.beta.files.upload(
file=("document.pdf", open("/path/to/document.pdf", "rb"), "application/pdf"),
)
pdf_file_id = file.id
# Run inference
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["files-api-2025-04-14"],
)
input_message = {
"role": "user",
"content": [
{"type": "text", "text": "Describe this document."},
{"type": "file", "file_id": pdf_file_id}
],
}
model.invoke([input_message])
Extended thinking
Some Claude models support an extended thinking feature, which will output the step-by-step reasoning process that led to the final answer. See the Anthropic guide for applicable models. To use extended thinking, specify the thinking parameter when initializing ChatAnthropic; it can also be passed in as a kwarg during invocation, as sketched after the example below. You will need to specify a token budget to use this feature. See the usage example:
import json
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
max_tokens=5000,
thinking={"type": "enabled", "budget_tokens": 2000},
)
response = model.invoke("What is the cube root of 50.653?")
print(json.dumps(response.content_blocks, indent=2))
[
{
"type": "reasoning",
"reasoning": "To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\n\nI can try to estimate this first. \n$3^3 = 27$\n$4^3 = 64$\n\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\n\nLet me try to compute this more precisely. I can use the cube root function:\n\ncube root of 50.653 = 50.653^(1/3)\n\nLet me calculate this:\n50.653^(1/3) \u2248 3.6998\n\nLet me verify:\n3.6998^3 \u2248 50.6533\n\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\n\nActually, let me compute this more precisely:\n50.653^(1/3) \u2248 3.69981\n\nLet me verify once more:\n3.69981^3 \u2248 50.652998\n\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.",
"extras": {"signature": "ErUBCkYIBxgCIkB0UjV..."}
},
{
"text": "The cube root of 50.653 is approximately 3.6998.\n\nTo verify: 3.6998\u00b3 = 50.6530, which is very close to our original number.",
"type": "text"
}
]
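As noted above, the thinking parameter can also be supplied per-invocation instead of at initialization. A minimal sketch, assuming invocation kwargs are forwarded to the underlying Anthropic API:

model = ChatAnthropic(model="claude-sonnet-4-5-20250929", max_tokens=5000)
response = model.invoke(
    "What is the cube root of 50.653?",
    # Same thinking spec as above, passed as a kwarg for this call only
    thinking={"type": "enabled", "budget_tokens": 2000},
)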
Prompt caching
Anthropic supports caching of elements of your prompts, including messages, tool definitions, tool results, images, and documents. This allows you to re-use large documents, instructions, few-shot examples, and other data to reduce latency and costs. To enable caching on an element of a prompt, mark its associated content block with the cache_control key. See the examples below:

Messages
import requests
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
# Pull LangChain readme
get_response = requests.get(
"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text
messages = [
{
"role": "system",
"content": [
{
"type": "text",
"text": "You are a technology expert.",
},
{
"type": "text",
"text": f"{readme}",
"cache_control": {"type": "ephemeral"},
},
],
},
{
"role": "user",
"content": "What's LangChain, according to its README?",
},
]
response_1 = model.invoke(messages)
response_2 = model.invoke(messages)
usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]
print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1458}
Second:
{'cache_read': 1458, 'cache_creation': 0}
Extended caching
The cache lifetime is 5 minutes by default. If this is too short, you can apply a one-hour cache by enabling the "extended-cache-ttl-2025-04-11" beta header and specifying "cache_control": {"type": "ephemeral", "ttl": "1h"} on the content blocks you want cached:
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["extended-cache-ttl-2025-04-11"],
)
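For example, re-using the system message from the messages example above, the cached content block carries the one-hour TTL (a sketch; readme is assumed from that example):

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a technology expert."},
            {
                "type": "text",
                "text": f"{readme}",
                # One-hour TTL instead of the default 5 minutes
                "cache_control": {"type": "ephemeral", "ttl": "1h"},
            },
        ],
    },
    {"role": "user", "content": "What's LangChain, according to its README?"},
]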
"cache_control": {"type": "ephemeral", "ttl": "1h"}。缓存令牌计数的详细信息将包含在响应的 usage_metadata 的 InputTokenDetails 中:复制
向 AI 提问
response = model.invoke(messages)
response.usage_metadata
{
"input_tokens": 1500,
"output_tokens": 200,
"total_tokens": 1700,
"input_token_details": {
"cache_read": 0,
"cache_creation": 1000,
"ephemeral_1h_input_tokens": 750,
"ephemeral_5m_input_tokens": 250,
}
}
Tools
Tool definitions can be cached as well; mark the converted tool with the cache_control key (readme here is re-used from the messages example above):
from langchain_anthropic import convert_to_anthropic_tool
from langchain.tools import tool
# For demonstration purposes, we artificially expand the
# tool description.
description = (
f"Get the weather at a location. By the way, check out this readme: {readme}"
)
@tool(description=description)
def get_weather(location: str) -> str:
return "It's sunny."
# Enable caching on the tool
weather_tool = convert_to_anthropic_tool(get_weather)
weather_tool["cache_control"] = {"type": "ephemeral"}
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
model_with_tools = model.bind_tools([weather_tool])
query = "What's the weather in San Francisco?"
response_1 = model_with_tools.invoke(query)
response_2 = model_with_tools.invoke(query)
usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]
print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1809}
Second:
{'cache_read': 1809, 'cache_creation': 0}
Incremental caching in conversational applications
Prompt caching can be used in multi-turn conversations to maintain context from earlier messages without redundant processing. We can enable incremental caching by marking the final message with cache_control; Claude will automatically use the longest previously-cached prefix for follow-up messages. Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain chatbot tutorial, but add a custom reducer that automatically marks the last content block in each user message with cache_control. See below:
import requests
from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, StateGraph, add_messages
from typing_extensions import Annotated, TypedDict
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
# Pull LangChain readme
get_response = requests.get(
"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text
def messages_reducer(left: list, right: list) -> list:
# Update last user message
for i in range(len(right) - 1, -1, -1):
if right[i].type == "human":
right[i].content[-1]["cache_control"] = {"type": "ephemeral"}
break
return add_messages(left, right)
class State(TypedDict):
messages: Annotated[list, messages_reducer]
workflow = StateGraph(state_schema=State)
# Define the function that calls the model
def call_model(state: State):
response = model.invoke(state["messages"])
return {"messages": [response]}
# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)
# Add memory
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
from langchain.messages import HumanMessage
config = {"configurable": {"thread_id": "abc123"}}
query = "Hi! I'm Bob."
input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================
Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?
{'cache_read': 0, 'cache_creation': 0}
query = f"Check out this readme: {readme}"
input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================
I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). Here's a summary of what the README contains:
LangChain is:
- A framework for developing LLM-powered applications
- Helps chain together components and integrations to simplify AI application development
- Provides a standard interface for models, embeddings, vector stores, etc.
Key features/benefits:
- Real-time data augmentation (connect LLMs to diverse data sources)
- Model interoperability (swap models easily as needed)
- Large ecosystem of integrations
The LangChain ecosystem includes:
- LangSmith - For evaluations and observability
- LangGraph - For building complex agents with customizable architecture
- LangGraph Platform - For deployment and scaling of agents
The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.
Is there anything specific about LangChain you'd like to know more about, Bob?
{'cache_read': 0, 'cache_creation': 1498}
query = "What was my name again?"
input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f"\n{output['messages'][-1].usage_metadata['input_token_details']}")
================================== Ai Message ==================================
Your name is Bob. You introduced yourself at the beginning of our conversation.
{'cache_read': 1498, 'cache_creation': 269}
In the LangSmith trace, toggling "raw output" will show exactly which messages are sent to the chat model, including the cache_control keys.
Token-efficient tool use
Anthropic supports a (beta) token-efficient tool use feature. To use it, specify the relevant beta header when instantiating the model:
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["token-efficient-tools-2025-02-19"],
)
@tool
def get_weather(location: str) -> str:
"""Get the weather at a location."""
return "It's sunny."
model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke("What's the weather in San Francisco?")
print(response.tool_calls)
print(f"\nTotal tokens: {response.usage_metadata['total_tokens']}")
[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01EoeE1qYaePcmNbUvMsWtmA', 'type': 'tool_call'}]
Total tokens: 408
Citations
Anthropic supports a citations feature that lets Claude attach context to its answers, grounded in source documents supplied by the user. When document or search result content blocks with "citations": {"enabled": True} are included in a query, Claude may generate citations in its response.
Simple example
In this example we pass a plain-text document. In the background, Claude automatically chunks the input text into sentences, which are used when generating citations:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
messages = [
{
"role": "user",
"content": [
{
"type": "document",
"source": {
"type": "text",
"media_type": "text/plain",
"data": "The grass is green. The sky is blue.",
},
"title": "My Document",
"context": "This is a trustworthy document.",
"citations": {"enabled": True},
},
{"type": "text", "text": "What color is the grass and sky?"},
],
}
]
response = model.invoke(messages)
response.content
[{'text': 'Based on the document, ', 'type': 'text'},
{'text': 'the grass is green',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The grass is green. ',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 0,
'end_char_index': 20}]},
{'text': ', and ', 'type': 'text'},
{'text': 'the sky is blue',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The sky is blue.',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 20,
'end_char_index': 36}]},
{'text': '.', 'type': 'text'}]
In tool results (agentic RAG)
Requires langchain-anthropic>=0.3.17
Claude can generate citations over tool results: to use this, define tools that return lists of search_result content blocks. For example:
def retrieval_tool(query: str) -> list[dict]:
"""Access my knowledge base."""
# Run a search (e.g., with a LangChain vector store)
results = vector_store.similarity_search(query=query, k=2)
# Package results into search_result blocks
return [
{
"type": "search_result",
# Customize fields as desired, using document metadata or otherwise
"title": "My Document Title",
"source": "Source description or provenance",
"citations": {"enabled": True},
"content": [{"type": "text", "text": doc.page_content}],
}
for doc in results
]
End-to-end example with LangGraph
Here we demonstrate an end-to-end example in which we populate a LangChain vector store with sample documents and equip Claude with a tool that queries those documents. The tool here takes a search query and a category string literal, but any valid tool signature can be used:
from typing import Literal
from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent
# Set up vector store
embeddings = init_embeddings("openai:text-embedding-3-small")
vector_store = InMemoryVectorStore(embeddings)
document_1 = Document(
id="1",
page_content=(
"To request vacation days, submit a leave request form through the "
"HR portal. Approval will be sent by email."
),
metadata={
"category": "HR Policy",
"doc_title": "Leave Policy",
"provenance": "Leave Policy - page 1",
},
)
document_2 = Document(
id="2",
page_content="Managers will review vacation requests within 3 business days.",
metadata={
"category": "HR Policy",
"doc_title": "Leave Policy",
"provenance": "Leave Policy - page 2",
},
)
document_3 = Document(
id="3",
page_content=(
"Employees with over 6 months tenure are eligible for 20 paid vacation days "
"per year."
),
metadata={
"category": "Benefits Policy",
"doc_title": "Benefits Guide 2025",
"provenance": "Benefits Policy - page 1",
},
)
documents = [document_1, document_2, document_3]
vector_store.add_documents(documents=documents)
# Define tool
async def retrieval_tool(
query: str, category: Literal["HR Policy", "Benefits Policy"]
) -> list[dict]:
"""Access my knowledge base."""
def _filter_function(doc: Document) -> bool:
return doc.metadata.get("category") == category
results = vector_store.similarity_search(
query=query, k=2, filter=_filter_function
)
return [
{
"type": "search_result",
"title": doc.metadata["doc_title"],
"source": doc.metadata["provenance"],
"citations": {"enabled": True},
"content": [{"type": "text", "text": doc.page_content}],
}
for doc in results
]
# Create agent
model = init_chat_model("claude-haiku-4-5-20251001")
checkpointer = InMemorySaver()
agent = create_agent(model, [retrieval_tool], checkpointer=checkpointer)
# Invoke on a query
config = {"configurable": {"thread_id": "session_1"}}
input_message = {
"role": "user",
"content": "How do I request vacation days?",
}
async for step in agent.astream(
{"messages": [input_message]},
config,
stream_mode="values",
):
step["messages"][-1].pretty_print()
Use with text splitters
Anthropic also lets you specify your own splits using a custom document type. LangChain text splitters can be used to generate meaningful splits for this purpose. See the example below, where we split the LangChain README (a markdown document) and pass it to Claude as context:
import requests
from langchain_anthropic import ChatAnthropic
from langchain_text_splitters import MarkdownTextSplitter
def format_to_anthropic_documents(documents: list[str]):
return {
"type": "document",
"source": {
"type": "content",
"content": [{"type": "text", "text": document} for document in documents],
},
"citations": {"enabled": True},
}
# Pull readme
get_response = requests.get(
"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text
# Split into chunks
splitter = MarkdownTextSplitter(
chunk_overlap=0,
chunk_size=50,
)
documents = splitter.split_text(readme)
# Construct message
message = {
"role": "user",
"content": [
format_to_anthropic_documents(documents),
{"type": "text", "text": "Give me a link to LangChain's tutorials."},
],
}
# Query model
model = ChatAnthropic(model="claude-haiku-4-5-20251001")
response = model.invoke([message])
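To see which chunks Claude cited, you can scan the response content for text blocks that carry citations, as in the simple example above (a sketch):

for block in response.content:
    if isinstance(block, dict) and block.get("citations"):
        for citation in block["citations"]:
            print(citation["cited_text"])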
Context management
Anthropic supports a context editing feature that will automatically manage the model's context window (e.g., by clearing tool results). See the Anthropic documentation for details and configuration options. Context management is supported since langchain-anthropic>=0.3.21:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["context-management-2025-06-27"],
context_management={"edits": [{"type": "clear_tool_uses_20250919"}]},
)
model_with_tools = model.bind_tools([{"type": "web_search_20250305", "name": "web_search"}])
response = model_with_tools.invoke("Search for recent developments in AI")
Built-in tools
Anthropic supports a variety of built-in tools, which can be bound to the model in the usual way. Claude will generate tool calls adhering to its internal schema for each tool.

Web search
Claude can use a web search tool to run searches and ground its responses with citations. The web search tool is supported since langchain-anthropic>=0.3.13:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
tool = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke("How do I update a web app to TypeScript 5.5?")
Web fetch
Claude can use a web fetch tool to retrieve the contents of web pages and ground its responses with citations:

from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-haiku-4-5-20251001",
betas=["web-fetch-2025-09-10"], # Enable web fetch beta
)
tool = {"type": "web_fetch_20250910", "name": "web_fetch", "max_uses": 3}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke(
"Please analyze the content at https://example.com/article"
)
You must add the "web-fetch-2025-09-10" beta header to use web fetch.

Code execution
Claude can use a code execution tool to execute Python code in a sandboxed environment. Code execution is supported since langchain-anthropic>=0.3.14:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["code-execution-2025-05-22"],
)
tool = {"type": "code_execution_20250522", "name": "code_execution"}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke(
"Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
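Execution results are returned as content blocks on the response. A minimal sketch of pulling them out, using the code_execution_tool_result block type that also appears in the Files API example below:

for block in response.content:
    if isinstance(block, dict) and block["type"] == "code_execution_tool_result":
        print(block.get("content"))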
Use with Files API
Using the Files API, Claude can write code that accesses files for data analysis and other purposes. See the example below. Note that Claude may generate files as part of code execution; you can access these files using the Files API:
# Upload file
import anthropic
client = anthropic.Anthropic()
file = client.beta.files.upload(
file=open("/path/to/sample_data.csv", "rb")
)
file_id = file.id
# Run inference
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["code-execution-2025-05-22"],
)
tool = {"type": "code_execution_20250522", "name": "code_execution"}
model_with_tools = model.bind_tools([tool])
input_message = {
"role": "user",
"content": [
{
"type": "text",
"text": "Please plot these data and tell me what you see.",
},
{
"type": "container_upload",
"file_id": file_id,
},
]
}
response = model_with_tools.invoke([input_message])
# Take all file outputs for demonstration purposes
file_ids = []
for block in response.content:
if block["type"] == "code_execution_tool_result":
file_ids.extend(
content["file_id"]
for content in block.get("content", {}).get("content", [])
if "file_id" in content
)
for i, file_id in enumerate(file_ids):
file_content = client.beta.files.download(file_id)
file_content.write_to_file(f"/path/to/file_{i}.png")
Memory tool
Claude supports a memory tool for client-side storage and retrieval of context across conversation threads. See the docs here for details. Anthropic's built-in memory tool is supported since langchain-anthropic>=0.3.21:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["context-management-2025-06-27"],
)
model_with_tools = model.bind_tools([{"type": "memory_20250818", "name": "memory"}])
response = model_with_tools.invoke("What are my interests?")
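Because storage is client-side, Claude responds with memory tool calls that your application is expected to execute against a local store (e.g., viewing or editing memory files). A minimal sketch of inspecting those calls:

for tool_call in response.tool_calls:
    # e.g. a 'view' or 'create' command targeting the memory directory
    print(tool_call["name"], tool_call["args"])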
Remote MCP
Claude can use the MCP connector tool to make model-generated calls to remote MCP servers. Remote MCP is supported since langchain-anthropic>=0.3.14:
from langchain_anthropic import ChatAnthropic
mcp_servers = [
{
"type": "url",
"url": "https://mcp.deepwiki.com/mcp",
"name": "deepwiki",
"tool_configuration": { # optional configuration
"enabled": True,
"allowed_tools": ["ask_question"],
},
"authorization_token": "PLACEHOLDER", # optional authorization
}
]
model = ChatAnthropic(
model="claude-sonnet-4-5-20250929",
betas=["mcp-client-2025-04-04"],
mcp_servers=mcp_servers,
)
response = model.invoke(
"What transport protocols does the 2025-03-26 version of the MCP "
"spec (modelcontextprotocol/modelcontextprotocol) support?"
)
Text editor
The text editor tool can be used to view and modify text files. See the docs here for details:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
model_with_tools = model.bind_tools([tool])
response = model_with_tools.invoke(
"There's a syntax error in my primes.py file. Can you help me fix it?"
)
print(response.text)
response.tool_calls
I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
[{'name': 'str_replace_editor',
'args': {'command': 'view', 'path': '/repo/primes.py'},
'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
'type': 'tool_call'}]
API Reference
For detailed documentation of all features and configuration options, see the ChatAnthropic API reference.