You can use LangSmith to trace runs from the Vercel AI SDK. This guide walks through an example.

Installation

This wrapper requires AI SDK v5 and langsmith>=0.3.63. If you are using an older version of the AI SDK or langsmith, see the OpenTelemetry (OTEL) based approach on this page.
Install the Vercel AI SDK. This guide uses Vercel's OpenAI integration for the snippets below, but you can use any of its other options as well:
npm install ai @ai-sdk/openai zod

Environment configuration

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>

# The examples use OpenAI, but you can use any LLM provider of choice
export OPENAI_API_KEY=<your-openai-api-key>

# For LangSmith API keys linked to multiple workspaces, set the LANGSMITH_WORKSPACE_ID environment variable to specify which workspace to use.
export LANGSMITH_WORKSPACE_ID=<your-workspace-id>

Basic setup

Import and wrap the AI SDK methods, then use them as you normally would:
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

await generateText({
  model: openai("gpt-5-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
});
You should see a trace in your LangSmith dashboard like this one. You can also trace runs with tool calls:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

await generateText({
  model: openai("gpt-5-nano"),
  messages: [
    {
      role: "user",
      content: "What are my orders and where are they? My user ID is 123",
    },
  ],
  tools: {
    listOrders: tool({
      description: "list all orders",
      inputSchema: z.object({ userId: z.string() }),
      execute: async ({ userId }) =>
        `User ${userId} has the following orders: 1`,
    }),
    viewTrackingInformation: tool({
      description: "view tracking information for a specific order",
      inputSchema: z.object({ orderId: z.string() }),
      execute: async ({ orderId }) =>
        `Here is the tracking information for ${orderId}`,
    }),
  },
  stopWhen: stepCountIs(5),
});
This results in a trace like this one. You can use the other AI SDK methods exactly as you normally would.
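For instance, here is a minimal sketch of streaming with the wrapped streamText, assuming the same setup and model as above; the other wrapped methods follow the same pattern:
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { streamText } = wrapAISDK(ai);

// streamText returns immediately; the trace completes once the
// stream has been fully consumed.
const { textStream } = streamText({
  model: openai("gpt-5-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}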

Using traceable

You can wrap traceable calls around AI SDK calls or within AI SDK tool calls. This is useful if you want to group runs together in LangSmith:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { traceable } from "langsmith/traceable";
import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

const wrapper = traceable(async (input: string) => {
  const { text } = await generateText({
    model: openai("gpt-5-nano"),
    messages: [
      {
        role: "user",
        content: input,
      },
    ],
    tools: {
      listOrders: tool({
        description: "list all orders",
        inputSchema: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: "view tracking information for a specific order",
        inputSchema: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    stopWhen: stepCountIs(5),
  });
  return text;
}, {
  name: "wrapper",
});

await wrapper("What are my orders and where are they? My user ID is 123.");
The resulting trace will look like this.

Tracing in serverless environments

When tracing in serverless environments, you must wait for all runs to flush before your environment shuts down. To do this, you can pass a LangSmith Client instance when wrapping the AI SDK methods, then call await client.awaitPendingTraceBatches(). Make sure to also pass the client into any traceable wrappers you create:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";
import { wrapAISDK } from "langsmith/experimental/vercel";

const client = new Client();

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai, { client });

const wrapper = traceable(async (input: string) => {
  const { text } = await generateText({
    model: openai("gpt-5-nano"),
    messages: [
      {
        role: "user",
        content: input,
      },
    ],
    tools: {
      listOrders: tool({
        description: "list all orders",
        inputSchema: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: "view tracking information for a specific order",
        inputSchema: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    stopWhen: stepCountIs(5),
  });
  return text;
}, {
  name: "wrapper",
  client,
});

try {
  await wrapper("What are my orders and where are they? My user ID is 123.");
} finally {
  await client.awaitPendingTraceBatches();
}
If you are using Next.js, there is a convenient after hook where you can put this logic:
import { after } from "next/server";
import { Client } from "langsmith";

export async function POST(request: Request) {
  const client = new Client();

  // ... run your wrapped AI SDK calls here ...

  // Flush pending trace batches after the response has been sent.
  after(async () => {
    await client.awaitPendingTraceBatches();
  });

  return new Response(JSON.stringify({ /* ... */ }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
For more details, including information on managing rate limits in serverless environments, see this page.

Passing LangSmith config

You can pass LangSmith-specific configuration to the wrapper, both when initially wrapping the AI SDK methods and at runtime via providerOptions.langsmith. This includes metadata (which you can later use to filter runs in LangSmith), a top-level run name, tags, a custom client instance, and more. Config passed at wrap time applies to all future calls you make with the wrapped methods:
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import { wrapAISDK } from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai, {
    metadata: {
      key_for_all_runs: "value",
    },
    tags: ["myrun"],
  });

await generateText({
  model: openai("gpt-5-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
});
Config passed at runtime via providerOptions.langsmith applies only to that individual run. We recommend importing createLangSmithProviderOptions and wrapping your config in it to ensure proper typing:
import { openai } from "@ai-sdk/openai";
import * as ai from "ai";

import {
  wrapAISDK,
  createLangSmithProviderOptions,
} from "langsmith/experimental/vercel";

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai);

const lsConfig = createLangSmithProviderOptions({
  metadata: {
    individual_key: "value",
  },
  name: "my_individual_run",
});

await generateText({
  model: openai("gpt-5-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  providerOptions: {
    langsmith: lsConfig,
  },
});

Redacting data

You can customize the inputs and outputs the AI SDK sends to LangSmith by specifying custom input/output processing functions. This is useful if you are working with sensitive data and want to avoid sending it to LangSmith. Because output formats differ across AI SDK methods, we recommend defining the config separately and passing it to the wrapped method. You will also need to provide separate functions for child LLM runs within the AI SDK call, since a top-level call like generateText calls the LLM internally, potentially multiple times. We also recommend passing a generic parameter to createLangSmithProviderOptions to get proper typing for inputs and outputs. Here's an example for generateText:
import {
  wrapAISDK,
  createLangSmithProviderOptions,
} from "langsmith/experimental/vercel";
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";

const { generateText } = wrapAISDK(ai);

const lsConfig = createLangSmithProviderOptions<typeof generateText>({
  processInputs: (inputs) => {
    const { messages } = inputs;
    return {
      messages: messages?.map((message) => ({
        providerMetadata: message.providerOptions,
        role: "assistant",
        content: "REDACTED",
      })),
      prompt: "REDACTED",
    };
  },
  processOutputs: (outputs) => {
    return {
      providerMetadata: outputs.providerMetadata,
      role: "assistant",
      content: "REDACTED",
    };
  },
  processChildLLMRunInputs: (inputs) => {
    const { prompt } = inputs;
    return {
      messages: prompt.map((message) => ({
        ...message,
        content: "REDACTED CHILD INPUTS",
      })),
    };
  },
  processChildLLMRunOutputs: (outputs) => {
    return {
      providerMetadata: outputs.providerMetadata,
      content: "REDACTED CHILD OUTPUTS",
      role: "assistant",
    };
  },
});

const { text } = await generateText({
  model: openai("gpt-5-nano"),
  prompt: "What is the capital of France?",
  providerOptions: {
    langsmith: lsConfig,
  },
});

// Paris.
console.log(text);
The actual return value will contain the original, unredacted result, but the trace in LangSmith will be redacted. Here's an example. To redact tool inputs/outputs, wrap your execute method in traceable like this:
import * as ai from "ai";
import { tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";
import { wrapAISDK } from "langsmith/experimental/vercel";

const client = new Client();

const { generateText, streamText, generateObject, streamObject } =
  wrapAISDK(ai, { client });

const { text } = await generateText({
  model: openai("gpt-5-nano"),
  messages: [
    {
      role: "user",
      content: "What are my orders? My user ID is 123.",
    },
  ],
  tools: {
    listOrders: tool({
      description: "list all orders",
      inputSchema: z.object({ userId: z.string() }),
      execute: traceable(
        async ({ userId }) => {
          return `User ${userId} has the following orders: 1`;
        },
        {
          processInputs: (input) => ({ text: "REDACTED" }),
          processOutputs: (outputs) => ({ text: "REDACTED" }),
          run_type: "tool",
          name: "listOrders",
        }
      ) as (input: { userId: string }) => Promise<string>,
    }),
  },
  stopWhen: stepCountIs(5),
});
The traceable return type is complex, which makes the type cast necessary. If you would like to avoid the cast, you can also omit the AI SDK tool wrapper function, as sketched below.
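For example, here is a minimal sketch of the same tool defined as a plain object rather than through tool(), reusing the imports and setup from the previous snippet. The object has the same shape tool() expects, and with the input type annotated directly on execute, the traceable-wrapped function no longer needs a cast:
const { text } = await generateText({
  model: openai("gpt-5-nano"),
  messages: [
    {
      role: "user",
      content: "What are my orders? My user ID is 123.",
    },
  ],
  tools: {
    // Plain object instead of tool(); you give up tool()'s type
    // inference, so the input type is annotated on execute directly.
    listOrders: {
      description: "list all orders",
      inputSchema: z.object({ userId: z.string() }),
      execute: traceable(
        async ({ userId }: { userId: string }) =>
          `User ${userId} has the following orders: 1`,
        {
          processInputs: (input) => ({ text: "REDACTED" }),
          processOutputs: (outputs) => ({ text: "REDACTED" }),
          run_type: "tool",
          name: "listOrders",
        }
      ),
    },
  },
  stopWhen: stepCountIs(5),
});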