Applications must be configured with a configuration file in order to be deployed to LangSmith (or self-hosted). This how-to guide covers the basic steps for setting up a JavaScript application for deployment, using a package.json to specify project dependencies. This walkthrough is based on this repository, which you can play around with to learn more about how to set up an application for deployment. The final repository structure will look like this:
my-app/
├── src # all project code lies within here
│   ├── utils # optional utilities for your graph
│   │   ├── tools.ts # tools for your graph
│   │   ├── nodes.ts # node functions for your graph
│   │   └── state.ts # state definition of your graph
│   └── agent.ts # code for constructing your graph
├── package.json # package dependencies
├── .env # environment variables
└── langgraph.json # configuration file for LangGraph
LangSmith deployments support deploying LangGraph graphs. However, the implementation of a graph's nodes can contain arbitrary JavaScript code. This means that any framework can be implemented within a node and deployed on a LangSmith deployment, letting you keep your core application logic outside of LangGraph while still using LangSmith for deployment, scaling, and observability.
After each step, an example file directory is provided to demonstrate how the code can be organized.

Specify dependencies

Dependencies can be specified in a package.json. If this file is not created, dependencies can be specified later in the configuration file. Example package.json file:
{
  "name": "langgraphjs-studio-starter",
  "packageManager": "yarn@1.22.22",
  "dependencies": {
    "@langchain/community": "^0.2.31",
    "@langchain/core": "^0.2.31",
    "@langchain/langgraph": "^0.2.0",
    "@langchain/openai": "^0.2.8"
  }
}
When the application is deployed, dependencies will be installed with the package manager of your choice, provided they satisfy the compatible version ranges listed below:
"@langchain/core": "^0.3.42",
"@langchain/langgraph": "^0.2.57",
"@langchain/langgraph-checkpoint": "~0.0.16",
Example file directory
my-app/
└── package.json # package dependencies

Specify environment variables

Environment variables can optionally be specified in a file (e.g. .env). See the environment variables reference for additional variables to configure for a deployment. Example .env file:
MY_ENV_VAR_1=foo
MY_ENV_VAR_2=bar
OPENAI_API_KEY=key
TAVILY_API_KEY=key_2
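At runtime these variables are available to your graph code through process.env (most LangChain integrations, such as ChatOpenAI and TavilySearchResults, read their API keys from there automatically). A minimal sketch, assuming the variable names above:
// Explicit access is only needed for your own variables;
// fail fast if a required key is missing.
const myValue = process.env.MY_ENV_VAR_1;
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set");
}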
Example file directory
my-app/
├── package.json
└── .env # environment variables

Define graphs

Implement your graphs. Graphs can be defined in a single file or across multiple files. Make a note of the variable name of each compiled graph to be included in the application; these variable names will be used later when creating the configuration file. Here is an example agent.ts:
import type { AIMessage } from "@langchain/core/messages";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";

import { MessagesAnnotation, StateGraph } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";

const tools = [new TavilySearchResults({ maxResults: 3 })];

// Define the function that calls the model
async function callModel(state: typeof MessagesAnnotation.State) {
  /**
   * Call the LLM powering our agent.
   * Feel free to customize the prompt, model, and other logic!
   */
  const model = new ChatOpenAI({
    model: "gpt-4o",
  }).bindTools(tools);

  const response = await model.invoke([
    {
      role: "system",
      content: `You are a helpful assistant. The current date is ${new Date().getTime()}.`,
    },
    ...state.messages,
  ]);

  // MessagesAnnotation supports returning a single message or array of messages
  return { messages: response };
}

// Define the function that determines whether to continue or not
function routeModelOutput(state: typeof MessagesAnnotation.State) {
  const messages = state.messages;
  const lastMessage: AIMessage = messages[messages.length - 1];
  // If the LLM is invoking tools, route there.
  if ((lastMessage?.tool_calls?.length ?? 0) > 0) {
    return "tools";
  }
  // Otherwise end the graph.
  return "__end__";
}

// Define a new graph.
// See https://langchain-ai.github.io/langgraphjs/how-tos/define-state/#getting-started for
// more on defining custom graph states.
const workflow = new StateGraph(MessagesAnnotation)
  // Define the two nodes we will cycle between
  .addNode("callModel", callModel)
  .addNode("tools", new ToolNode(tools))
  // Set the entrypoint as `callModel`
  // This means that this node is the first one called
  .addEdge("__start__", "callModel")
  .addConditionalEdges(
    // First, we define the edges' source node. We use `callModel`.
    // This means these are the edges taken after the `callModel` node is called.
    "callModel",
    // Next, we pass in the function that will determine the sink node(s), which
    // will be called after the source node is called.
    routeModelOutput,
    // List of the possible destinations the conditional edge can route to.
    // Required for conditional edges to properly render the graph in Studio
    ["tools", "__end__"]
  )
  // This means that after `tools` is called, `callModel` node is called next.
  .addEdge("tools", "callModel");

// Finally, we compile it!
// This compiles it into a graph you can invoke and deploy.
export const graph = workflow.compile();
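Before wiring up the configuration file, you can sanity-check the compiled graph locally. The following is a hypothetical smoke test, not part of the starter repository; it assumes the file lives next to agent.ts and that OPENAI_API_KEY and TAVILY_API_KEY are set:
// smoke.ts — hypothetical local check (run with e.g. `npx tsx smoke.ts`)
import { graph } from "./agent.js";

const result = await graph.invoke({
  messages: [{ role: "user", content: "What is LangGraph?" }],
});

// Print the final assistant reply.
console.log(result.messages[result.messages.length - 1].content);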
Example file directory
my-app/
├── src # all project code lies within here
│   ├── utils # optional utilities for your graph
│   │   ├── tools.ts # tools for your graph
│   │   ├── nodes.ts # node functions for your graph
│   │   └── state.ts # state definition of your graph
│   └── agent.ts # code for constructing your graph
├── package.json # package dependencies
├── .env # environment variables
└── langgraph.json # configuration file for LangGraph

Create the API config

Create a configuration file called langgraph.json. See the configuration file reference for a detailed explanation of each key in the configuration file's JSON object. Example langgraph.json file:
{
  "node_version": "20",
  "dockerfile_lines": [],
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.ts:graph"
  },
  "env": ".env"
}
Note that the variable name of the CompiledGraph appears at the end of the value of each subkey in the top-level graphs key (i.e., :<variable_name>).
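For example, if you exported a second compiled graph from a hypothetical ./src/other.ts as export const anotherGraph = workflow.compile();, you would register both graphs like this:
{
  "graphs": {
    "agent": "./src/agent.ts:graph",
    "other": "./src/other.ts:anotherGraph"
  }
}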
Configuration location: The configuration file must be placed in a directory that is at the same level as, or higher than, the TypeScript files that contain the compiled graphs and their associated dependencies.

Next steps

Once your project is set up and pushed to a GitHub repository, you are ready to deploy your application.