LLM Observability is in public beta, and its API may change at any time.
With Datadog LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.

This is an experimental community implementation and is not officially supported by Datadog. It is based on the Datadog LLM Observability API.

Setup

For general instructions on installing LangChain packages, see this section.
npm
npm install @langchain/community @langchain/core
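
The usage example below also imports the OpenAI model class from @langchain/openai, so if you plan to run it you will likely need that package as well:

npm install @langchain/openai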

Usage

import { OpenAI } from "@langchain/openai";
import { DatadogLLMObsTracer } from "@langchain/community/experimental/callbacks/handlers/datadog";

/**
 * This example demonstrates how to use the DatadogLLMObsTracer with the OpenAI model.
 * It will produce a "llm" span with the input and output of the model inside the meta field.
 *
 * To run this example, you need to have a valid Datadog API key and OpenAI API key.
 */
export const run = async () => {
  const model = new OpenAI({
    model: "gpt-4",
    temperature: 0.7,
    maxTokens: 1000,
    maxRetries: 5,
  });

  const res = await model.invoke(
    "Question: What would be a good company name a company that makes colorful socks?\nAnswer:",
    {
      callbacks: [
        new DatadogLLMObsTracer({
          mlApp: "my-ml-app",
        }),
      ],
    }
  );

  console.log({ res });
};
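
Because the tracer is a standard LangChain callback handler, it can also be attached to a composed runnable, such as a prompt piped into a chat model. The following is a minimal sketch, assuming a chat-based setup with ChatOpenAI and ChatPromptTemplate; the mlApp name is an illustrative placeholder, not a required value.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { DatadogLLMObsTracer } from "@langchain/community/experimental/callbacks/handlers/datadog";

/**
 * Minimal sketch: attach the tracer to a composed chain (prompt | model).
 * Spans for the chain run are reported to Datadog LLM Observability under
 * the given mlApp name ("my-ml-app" here is an illustrative placeholder).
 */
export const runChain = async () => {
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a naming assistant."],
    ["human", "Suggest a name for a company that makes {product}."],
  ]);

  const model = new ChatOpenAI({ model: "gpt-4", temperature: 0.7 });

  const chain = prompt.pipe(model);

  const res = await chain.invoke(
    { product: "colorful socks" },
    {
      callbacks: [
        new DatadogLLMObsTracer({
          mlApp: "my-ml-app",
        }),
      ],
    }
  );

  console.log(res.content);
};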
