This page documents a legacy approach for tracing AI SDK runs. For a simpler, more general approach that requires no OTEL setup, see the new guide.
You can use LangSmith to trace runs from the Vercel AI SDK with OpenTelemetry (OTEL). This guide will walk through an example.
Many popular JavaScript OpenTelemetry implementations are still experimental and may behave unpredictably in production, especially when instrumenting LangSmith alongside other providers. If you are using AI SDK 5, we strongly recommend our suggested approach for tracing AI SDK runs.

0. Installation

Install the Vercel AI SDK and the required OTEL packages. The snippets below use their OpenAI integration, but you can use any of their other options as well.
npm install ai @ai-sdk/openai zod
npm install @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto @opentelemetry/context-async-hooks

1. Configure your environment

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
export LANGSMITH_OTEL_ENABLED=true

# This example uses OpenAI, but you can use any LLM provider of choice
export OPENAI_API_KEY=<your-openai-api-key>
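If you'd rather set these in code for local experiments, here is a minimal sketch using process.env; note that these assignments must run before any langsmith module is imported:
// For local experiments only; prefer real environment variables in production.
// These assignments must run before importing any langsmith modules.
process.env.LANGSMITH_TRACING = "true";
process.env.LANGSMITH_API_KEY = "<your-api-key>";
process.env.LANGSMITH_OTEL_ENABLED = "true";
process.env.OPENAI_API_KEY = "<your-openai-api-key>";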

2. Log a trace

Node.js

To start tracing, import and call the initializeOTEL method at the start of your code:
import { initializeOTEL } from "langsmith/experimental/otel/setup";

const { DEFAULT_LANGSMITH_SPAN_PROCESSOR } = initializeOTEL();
Then, add the experimental_telemetry argument to the AI SDK calls you want to trace.
Before your application shuts down, don't forget to call await DEFAULT_LANGSMITH_SPAN_PROCESSOR.shutdown(); so that any remaining traces are flushed to LangSmith.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

let result;
try {
  result = await generateText({
    model: openai("gpt-4.1-nano"),
    prompt: "Write a vegetarian lasagna recipe for 4 people.",
    experimental_telemetry: {
      isEnabled: true,
    },
  });
} finally {
  await DEFAULT_LANGSMITH_SPAN_PROCESSOR.shutdown();
}
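If your process stays alive, for example in a long-running server, you can flush buffered spans without tearing the processor down. A minimal sketch using forceFlush(), which is mentioned alongside shutdown() in the traceable section below:
// Flush any buffered spans to LangSmith without shutting down the processor.
// Useful in long-running processes where shutdown() would be too final.
await DEFAULT_LANGSMITH_SPAN_PROCESSOR.forceFlush();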
You should see a trace like this one in your LangSmith dashboard. You can also trace runs with tool calls:
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

await generateText({
  model: openai("gpt-4.1-nano"),
  messages: [
    {
      role: "user",
      content: "What are my orders and where are they? My user ID is 123",
    },
  ],
  tools: {
    listOrders: tool({
      description: "list all orders",
      parameters: z.object({ userId: z.string() }),
      execute: async ({ userId }) =>
        `User ${userId} has the following orders: 1`,
    }),
    viewTrackingInformation: tool({
      description: "view tracking information for a specific order",
      parameters: z.object({ orderId: z.string() }),
      execute: async ({ orderId }) =>
        `Here is the tracking information for ${orderId}`,
    }),
  },
  experimental_telemetry: {
    isEnabled: true,
  },
  maxSteps: 10,
});
The result is a trace like this one.

Using traceable

You can wrap traceable calls around or within AI SDK tool calls. If you do, we suggest initializing a LangSmith client instance, passing it into each traceable, and then calling client.awaitPendingTraceBatches(); to make sure all traces are flushed. With this approach, you do not need to manually call shutdown() or forceFlush() on DEFAULT_LANGSMITH_SPAN_PROCESSOR. Here's an example:
import { initializeOTEL } from "langsmith/experimental/otel/setup";

initializeOTEL();

import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const client = new Client();

const wrappedText = traceable(
  async (content: string) => {
    const { text } = await generateText({
      model: openai("gpt-4.1-nano"),
      messages: [{ role: "user", content }],
      tools: {
        listOrders: tool({
          description: "list all orders",
          parameters: z.object({ userId: z.string() }),
          execute: async ({ userId }) => {
            const getOrderNumber = traceable(
              async () => {
                return "1234";
              },
              { name: "getOrderNumber" }
            );
            const orderNumber = await getOrderNumber();
            return `User ${userId} has the following order: ${orderNumber}`;
          },
        }),
      },
      experimental_telemetry: {
        isEnabled: true,
      },
      maxSteps: 10,
    });
    return { text };
  },
  { name: "parentTraceable", client }
);

let result;
try {
  result = await wrappedText("What are my orders?");
} finally {
  await client.awaitPendingTraceBatches();
}
The resulting trace will look like this:

Next.js

First, install @vercel/otel:
npm install @vercel/otel
Then, set up an instrumentation.ts file in your root directory. Call initializeOTEL and pass the resulting DEFAULT_LANGSMITH_SPAN_PROCESSOR into the spanProcessors field of your registerOTel(...) call. It should look like this:
import { registerOTel } from "@vercel/otel";
import { initializeOTEL } from "langsmith/experimental/otel/setup";

const { DEFAULT_LANGSMITH_SPAN_PROCESSOR } = initializeOTEL({});

export function register() {
  registerOTel({
    serviceName: "your-project-name",
    spanProcessors: [DEFAULT_LANGSMITH_SPAN_PROCESSOR],
  });
}
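Depending on your Next.js version, you may also need to opt in to instrumentation in your Next.js config; this experimental flag was required before Next.js 15, where instrumentation became enabled by default. A minimal sketch of a next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    // Required for instrumentation.ts to be picked up on older Next.js versions.
    instrumentationHook: true,
  },
};

module.exports = nextConfig;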
Finally, in your API routes, call initializeOTEL as well and add the experimental_telemetry field to your AI SDK calls:
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

import { initializeOTEL } from "langsmith/experimental/otel/setup";

initializeOTEL();

export async function GET() {
  const { text } = await generateText({
    model: openai("gpt-4.1-nano"),
    messages: [{ role: "user", content: "Why is the sky blue?" }],
    experimental_telemetry: {
      isEnabled: true,
    },
  });

  return new Response(text);
}
You can also wrap parts of your code in traceables for more granular control, as sketched below.
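For example, here is a minimal sketch of a route handler whose body is wrapped in a traceable; the answerQuestion name is hypothetical:
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { traceable } from "langsmith/traceable";

import { initializeOTEL } from "langsmith/experimental/otel/setup";

initializeOTEL();

// Hypothetical parent traceable that groups the model call under a named run.
const answerQuestion = traceable(
  async (question: string) => {
    const { text } = await generateText({
      model: openai("gpt-4.1-nano"),
      messages: [{ role: "user", content: question }],
      experimental_telemetry: {
        isEnabled: true,
      },
    });
    return text;
  },
  { name: "answerQuestion" }
);

export async function GET() {
  const text = await answerQuestion("Why is the sky blue?");
  return new Response(text);
}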

Sentry

If you are using Sentry, you can attach the LangSmith trace exporter to Sentry's default OpenTelemetry instrumentation as shown in the example below.
At the time of writing, Sentry only supports OTEL v1 packages. LangSmith supports both v1 and v2, but you **must** make sure you install OTEL v1 packages for the instrumentation to work properly.
npm install @opentelemetry/sdk-trace-base@1.30.1 @opentelemetry/exporter-trace-otlp-proto@0.57.2 @opentelemetry/context-async-hooks@1.30.1
import { initializeOTEL } from "langsmith/experimental/otel/setup";
import { LangSmithOTLPTraceExporter } from "langsmith/experimental/otel/exporter";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { traceable } from "langsmith/traceable";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import * as Sentry from "@sentry/node";

const exporter = new LangSmithOTLPTraceExporter();
const spanProcessor = new BatchSpanProcessor(exporter);

const sentry = Sentry.init({
  dsn: "...",
  tracesSampleRate: 1.0,
  openTelemetrySpanProcessors: [spanProcessor],
});

initializeOTEL({
  globalTracerProvider: sentry?.traceProvider,
});

const wrappedText = traceable(
  async (content: string) => {
    const { text } = await generateText({
      model: openai("gpt-4.1-nano"),
      messages: [{ role: "user", content }],
      experimental_telemetry: {
        isEnabled: true,
      },
      maxSteps: 10,
    });
    return { text };
  },
  { name: "parentTraceable" }
);

let result;
try {
  result = await wrappedText("What color is the sky?");
} finally {
  await sentry?.traceProvider?.shutdown();
}

Adding additional metadata

You can add additional metadata to your traces to help organize and filter them in the LangSmith UI:
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

await generateText({
  model: openai("gpt-4.1-nano"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  experimental_telemetry: {
    isEnabled: true,
    metadata: { userId: "123", language: "english" },
  },
});
The metadata will be visible in your LangSmith dashboard and can be used to filter and search for specific traces. Note that the AI SDK also propagates the metadata to internal child spans.
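If you are also wrapping calls in traceable, its config accepts a metadata field as well, so you can attach the same metadata to the parent run. A minimal sketch; the wrappedGenerate name is hypothetical:
import { traceable } from "langsmith/traceable";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Hypothetical wrapper that tags both the parent run and the AI SDK child spans.
const wrappedGenerate = traceable(
  async (prompt: string) => {
    const { text } = await generateText({
      model: openai("gpt-4.1-nano"),
      prompt,
      experimental_telemetry: {
        isEnabled: true,
        metadata: { userId: "123", language: "english" },
      },
    });
    return text;
  },
  { name: "wrappedGenerate", metadata: { userId: "123", language: "english" } }
);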

Customizing run names

You can customize run names by passing a metadata key named ls_run_name into experimental_telemetry.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      ls_run_name: "my-custom-run-name",
    },
  },
});
