Overview
In this tutorial, we will build a retrieval agent using LangGraph. LangChain provides built-in agent implementations that use LangGraph primitives; if deeper customization is required, you can implement agents directly in LangGraph. This guide demonstrates an example implementation of a retrieval agent. Retrieval agents are useful when you want an LLM to decide whether to retrieve context from a vector store or respond to the user directly.
By the end of this tutorial, we will have:
- Fetched and preprocessed documents for retrieval.
- Indexed those documents for semantic search and created a retriever tool for the agent.
- Built an agentic RAG system that can decide when to use the retriever tool.

Concepts
We will cover the core concepts of agentic RAG: fetching and indexing documents, exposing retrieval as a tool, and letting an LLM decide when to retrieve context versus respond directly.
Setup
Let's download the required packages and set our API keys:
pip install -U langgraph "langchain[openai]" langchain-community langchain-text-splitters bs4
import getpass
import os
def _set_env(key: str):
    if key not in os.environ:
        os.environ[key] = getpass.getpass(f"{key}:")


_set_env("OPENAI_API_KEY")
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor the LLM applications you build with LangGraph.
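If you want traces for the rest of this tutorial, one optional, minimal sketch is to set the LangSmith environment variables before running anything else. The project name below is just an example.

# Optional: enable LangSmith tracing for the rest of the tutorial.
# LANGSMITH_TRACING / LANGSMITH_API_KEY are the current variable names;
# older setups use LANGCHAIN_TRACING_V2 / LANGCHAIN_API_KEY instead.
_set_env("LANGSMITH_API_KEY")
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "agentic-rag-tutorial"  # example project name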
1. Preprocess documents
- Fetch documents to use in our RAG system. We will use three of the most recent pages from Lilian Weng's excellent blog. We'll start by fetching the content of the pages using the WebBaseLoader utility:
from langchain_community.document_loaders import WebBaseLoader

urls = [
    "https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
    "https://lilianweng.github.io/posts/2024-07-07-hallucination/",
    "https://lilianweng.github.io/posts/2024-04-12-diffusion-video/",
]

docs = [WebBaseLoader(url).load() for url in urls]
docs[0][0].page_content.strip()[:1000]
- Split the fetched documents into smaller chunks for indexing into our vector store:
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs_list = [item for sublist in docs for item in sublist]

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=50
)
doc_splits = text_splitter.split_documents(docs_list)
doc_splits[0].page_content.strip()
2. Create a retriever tool
Now that we have the split documents, we can index them into a vector store that we will use for semantic search.
- Use an in-memory vector store and OpenAI embeddings:
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

vectorstore = InMemoryVectorStore.from_documents(
    documents=doc_splits, embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
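Before wrapping the retriever in a tool, you can optionally sanity-check it on its own. This is a minimal sketch; the example query is arbitrary.

# Optional sanity check: run the retriever directly on an example query
sample_docs = retriever.invoke("types of reward hacking")
print(len(sample_docs))
print(sample_docs[0].page_content[:200])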
- Create a retriever tool using LangChain's prebuilt create_retriever_tool:
from langchain_classic.tools.retriever import create_retriever_tool
retriever_tool = create_retriever_tool(
    retriever,
    "retrieve_blog_posts",
    "Search and return information about Lilian Weng blog posts.",
)
- Test the tool:
retriever_tool.invoke({"query": "types of reward hacking"})
3. Generate query
Now we will start building components (nodes and edges) for our agentic RAG graph. Note that the components will operate on the MessagesState: graph state that contains a messages key with a list of chat messages.
- Build a generate_query_or_respond node. It will call the LLM to generate a response based on the current graph state (a list of messages). Given the input messages, it decides whether to retrieve using the retriever tool or respond directly to the user. Note that we give the chat model access to the retriever_tool we created earlier via .bind_tools:
from langgraph.graph import MessagesState
from langchain.chat_models import init_chat_model
response_model = init_chat_model("gpt-4o", temperature=0)
def generate_query_or_respond(state: MessagesState):
    """Call the model to generate a response based on the current state. Given
    the question, it will decide to retrieve using the retriever tool, or simply respond to the user.
    """
    response = (
        response_model
        .bind_tools([retriever_tool]).invoke(state["messages"])
    )
    return {"messages": [response]}
- Try it on a random input:
input = {"messages": [{"role": "user", "content": "hello!"}]}
generate_query_or_respond(input)["messages"][-1].pretty_print()
================================== Ai Message ==================================
Hello! How can I help you today?
- Ask a question that requires semantic search:
input = {
    "messages": [
        {
            "role": "user",
            "content": "What does Lilian Weng say about types of reward hacking?",
        }
    ]
}
generate_query_or_respond(input)["messages"][-1].pretty_print()
================================== Ai Message ==================================
Tool Calls:
  retrieve_blog_posts (call_tYQxgfIlnQUDMdtAhdbXNwIM)
 Call ID: call_tYQxgfIlnQUDMdtAhdbXNwIM
  Args:
    query: types of reward hacking
4. Grade documents
- Add a conditional edge, grade_documents, to determine whether the retrieved documents are relevant to the question. We will use a model with the structured output schema GradeDocuments for document grading. The grade_documents function returns the name of the node to go to next (generate_answer or rewrite_question) based on the grading decision:
from pydantic import BaseModel, Field
from typing import Literal

GRADE_PROMPT = (
    "You are a grader assessing relevance of a retrieved document to a user question. \n "
    "Here is the retrieved document: \n\n {context} \n\n"
    "Here is the user question: {question} \n"
    "If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n"
    "Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question."
)


class GradeDocuments(BaseModel):
    """Grade documents using a binary score for relevance check."""

    binary_score: str = Field(
        description="Relevance score: 'yes' if relevant, or 'no' if not relevant"
    )


grader_model = init_chat_model("gpt-4o", temperature=0)


def grade_documents(
    state: MessagesState,
) -> Literal["generate_answer", "rewrite_question"]:
    """Determine whether the retrieved documents are relevant to the question."""
    question = state["messages"][0].content
    context = state["messages"][-1].content

    prompt = GRADE_PROMPT.format(question=question, context=context)
    response = (
        grader_model
        .with_structured_output(GradeDocuments).invoke(
            [{"role": "user", "content": prompt}]
        )
    )
    score = response.binary_score

    if score == "yes":
        return "generate_answer"
    else:
        return "rewrite_question"
- Run this with irrelevant documents in the tool response:
from langchain_core.messages import convert_to_messages

input = {
    "messages": convert_to_messages(
        [
            {
                "role": "user",
                "content": "What does Lilian Weng say about types of reward hacking?",
            },
            {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "id": "1",
                        "name": "retrieve_blog_posts",
                        "args": {"query": "types of reward hacking"},
                    }
                ],
            },
            {"role": "tool", "content": "meow", "tool_call_id": "1"},
        ]
    )
}
grade_documents(input)
- Confirm that relevant documents are classified as such:
input = {
    "messages": convert_to_messages(
        [
            {
                "role": "user",
                "content": "What does Lilian Weng say about types of reward hacking?",
            },
            {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "id": "1",
                        "name": "retrieve_blog_posts",
                        "args": {"query": "types of reward hacking"},
                    }
                ],
            },
            {
                "role": "tool",
                "content": "reward hacking can be categorized into two types: environment or goal misspecification, and reward tampering",
                "tool_call_id": "1",
            },
        ]
    )
}
grade_documents(input)
5. Rewrite question
- Build the rewrite_question node. The retriever tool can return potentially irrelevant documents, which indicates a need to improve the original user question. To do so, we call the rewrite_question node:
REWRITE_PROMPT = (
    "Look at the input and try to reason about the underlying semantic intent / meaning.\n"
    "Here is the initial question:"
    "\n ------- \n"
    "{question}"
    "\n ------- \n"
    "Formulate an improved question:"
)


def rewrite_question(state: MessagesState):
    """Rewrite the original user question."""
    messages = state["messages"]
    question = messages[0].content
    prompt = REWRITE_PROMPT.format(question=question)
    response = response_model.invoke([{"role": "user", "content": prompt}])
    return {"messages": [{"role": "user", "content": response.content}]}
- Try it out:
input = {
    "messages": convert_to_messages(
        [
            {
                "role": "user",
                "content": "What does Lilian Weng say about types of reward hacking?",
            },
            {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "id": "1",
                        "name": "retrieve_blog_posts",
                        "args": {"query": "types of reward hacking"},
                    }
                ],
            },
            {"role": "tool", "content": "meow", "tool_call_id": "1"},
        ]
    )
}

response = rewrite_question(input)
print(response["messages"][-1]["content"])
What are the different types of reward hacking described by Lilian Weng, and how does she explain them?
6. Generate an answer
- Build the generate_answer node: if we pass the grader checks, we can generate the final answer based on the original question and the retrieved context:
GENERATE_PROMPT = (
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer the question. "
    "If you don't know the answer, just say that you don't know. "
    "Use three sentences maximum and keep the answer concise.\n"
    "Question: {question} \n"
    "Context: {context}"
)


def generate_answer(state: MessagesState):
    """Generate an answer."""
    question = state["messages"][0].content
    context = state["messages"][-1].content
    prompt = GENERATE_PROMPT.format(question=question, context=context)
    response = response_model.invoke([{"role": "user", "content": prompt}])
    return {"messages": [response]}
- Try it out:
input = {
    "messages": convert_to_messages(
        [
            {
                "role": "user",
                "content": "What does Lilian Weng say about types of reward hacking?",
            },
            {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "id": "1",
                        "name": "retrieve_blog_posts",
                        "args": {"query": "types of reward hacking"},
                    }
                ],
            },
            {
                "role": "tool",
                "content": "reward hacking can be categorized into two types: environment or goal misspecification, and reward tampering",
                "tool_call_id": "1",
            },
        ]
    )
}

response = generate_answer(input)
response["messages"][-1].pretty_print()
================================== Ai Message ==================================
Lilian Weng categorizes reward hacking into two types: environment or goal misspecification, and reward tampering. She considers reward hacking as a broad concept that includes both of these categories. Reward hacking occurs when an agent exploits flaws or ambiguities in the reward function to achieve high rewards without performing the intended behaviors.
7. Assemble the graph
Now we will assemble all nodes and edges into a complete graph:
- Start with generate_query_or_respond and determine whether we need to call retriever_tool.
- Route to the next step using tools_condition:
  - If generate_query_or_respond returned tool_calls, call retriever_tool to retrieve context.
  - Otherwise, respond directly to the user.
- Grade the retrieved document content for relevance to the question (grade_documents) and route to the next step:
  - If not relevant, rewrite the question using rewrite_question and then call generate_query_or_respond again.
  - If relevant, proceed to generate_answer and generate the final response using the ToolMessage with the retrieved document context.
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition

workflow = StateGraph(MessagesState)

# Define the nodes we will cycle between
workflow.add_node(generate_query_or_respond)
workflow.add_node("retrieve", ToolNode([retriever_tool]))
workflow.add_node(rewrite_question)
workflow.add_node(generate_answer)

workflow.add_edge(START, "generate_query_or_respond")

# Decide whether to retrieve
workflow.add_conditional_edges(
    "generate_query_or_respond",
    # Assess LLM decision (call `retriever_tool` tool or respond to the user)
    tools_condition,
    {
        # Translate the condition outputs to nodes in our graph
        "tools": "retrieve",
        END: END,
    },
)

# Edges taken after the `retrieve` node is called.
workflow.add_conditional_edges(
    "retrieve",
    # Assess agent decision
    grade_documents,
)
workflow.add_edge("generate_answer", END)
workflow.add_edge("rewrite_question", "generate_query_or_respond")

# Compile
graph = workflow.compile()
from IPython.display import Image, display
display(Image(graph.get_graph().draw_mermaid_png()))
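If you are running the code outside a notebook, a minimal alternative sketch (the filename is arbitrary) is to write the PNG bytes to disk instead of displaying them inline:

# Alternative for non-notebook environments: save the diagram to a file
png_bytes = graph.get_graph().draw_mermaid_png()
with open("agentic_rag_graph.png", "wb") as f:
    f.write(png_bytes)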

8. Run the agentic RAG
Now let's test the complete graph with a question:
for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "What does Lilian Weng say about types of reward hacking?",
            }
        ]
    }
):
    for node, update in chunk.items():
        print("Update from node", node)
        update["messages"][-1].pretty_print()
        print("\n\n")
Update from node generate_query_or_respond
================================== Ai Message ==================================
Tool Calls:
  retrieve_blog_posts (call_NYu2vq4km9nNNEFqJwefWKu1)
 Call ID: call_NYu2vq4km9nNNEFqJwefWKu1
  Args:
    query: types of reward hacking
Update from node retrieve
================================= Tool Message ==================================
Name: retrieve_blog_posts
(Note: Some work defines reward tampering as a distinct category of misalignment behavior from reward hacking. But I consider reward hacking as a broader concept here.)
At a high level, reward hacking can be categorized into two types: environment or goal misspecification, and reward tampering.
Why does Reward Hacking Exist?#
Pan et al. (2022) investigated reward hacking as a function of agent capabilities, including (1) model size, (2) action space resolution, (3) observation space noise, and (4) training time. They also proposed a taxonomy of three types of misspecified proxy rewards:
Let's Define Reward Hacking#
Reward shaping in RL is challenging. Reward hacking occurs when an RL agent exploits flaws or ambiguities in the reward function to obtain high rewards without genuinely learning the intended behaviors or completing the task as designed. In recent years, several related concepts have been proposed, all referring to some form of reward hacking:
Update from node generate_answer
================================== Ai Message ==================================
Lilian Weng categorizes reward hacking into two types: environment or goal misspecification, and reward tampering. She considers reward hacking as a broad concept that includes both of these categories. Reward hacking occurs when an agent exploits flaws or ambiguities in the reward function to achieve high rewards without performing the intended behaviors.
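If you only need the final answer rather than per-node updates, you can also run the graph to completion with a single call. A minimal sketch using the same question:

# Run the graph to completion and read the last message from the final state
final_state = graph.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "What does Lilian Weng say about types of reward hacking?",
            }
        ]
    }
)
print(final_state["messages"][-1].content)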