To evaluate your agent's performance, you can use LangSmith evaluations. You first need to define an evaluator function that judges the agent's results, such as its final output or its trajectory. Depending on your evaluation technique, this may or may not involve a reference output:
type EvaluatorParams = {
    outputs: Record<string, any>;
    referenceOutputs: Record<string, any>;
};

function evaluator({ outputs, referenceOutputs }: EvaluatorParams) {
    // compare agent outputs against reference outputs
    const outputMessages = outputs.messages;
    const referenceMessages = referenceOutputs.messages;
    const score = compareMessages(outputMessages, referenceMessages);
    return { key: "evaluator_score", score: score };
}
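`compareMessages` above stands in for whatever comparison logic you want to apply; it is not something LangSmith provides. A minimal hypothetical version might simply check that the final messages agree:

// Hypothetical helper (not part of LangSmith or AgentEvals): returns 1 if the
// last message in each history has the same content, otherwise 0.
function compareMessages(
    outputMessages: Array<{ role: string; content: string }>,
    referenceMessages: Array<{ role: string; content: string }>
): number {
    const lastOutput = outputMessages[outputMessages.length - 1];
    const lastReference = referenceMessages[referenceMessages.length - 1];
    return lastOutput?.content === lastReference?.content ? 1 : 0;
}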
To get started, you can use the prebuilt evaluators from the AgentEvals package:
npm install agentevals

Create an evaluator

A common way to evaluate agent performance is to compare its trajectory (the order in which it calls tools) against a reference trajectory:
import { createTrajectoryMatchEvaluator } from "agentevals/trajectory/match";

const outputs = [
    {
        role: "assistant",
        tool_calls: [
            {
                function: {
                    name: "get_weather",
                    arguments: JSON.stringify({ city: "san francisco" }),
                },
            },
            {
                function: {
                    name: "get_directions",
                    arguments: JSON.stringify({ destination: "presidio" }),
                },
            },
        ],
    },
];

const referenceOutputs = [
    {
        role: "assistant",
        tool_calls: [
            {
                function: {
                    name: "get_weather",
                    arguments: JSON.stringify({ city: "san francisco" }),
                },
            },
        ],
    },
];

// Create the evaluator
const evaluator = createTrajectoryMatchEvaluator({
    // Specify how the trajectories will be compared. `superset` accepts the output
    // trajectory as valid if it is a superset of the reference one. Other options
    // include: strict, unordered, and subset.
    trajectoryMatchMode: "superset",
});

// Run the evaluator
const result = await evaluator({
    outputs: outputs,
    referenceOutputs: referenceOutputs,
});
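The other match modes mentioned in the comment above are configured the same way. A minimal sketch, with the mode semantics paraphrased from their names (check the AgentEvals docs for the authoritative definitions):

// strict: the output trajectory must contain exactly the reference tool calls, in the same order.
const strictEvaluator = createTrajectoryMatchEvaluator({
    trajectoryMatchMode: "strict",
});

// unordered: the same tool calls must appear, but their order is ignored.
const unorderedEvaluator = createTrajectoryMatchEvaluator({
    trajectoryMatchMode: "unordered",
});

// subset: the output trajectory may only contain tool calls that also appear in the reference.
const subsetEvaluator = createTrajectoryMatchEvaluator({
    trajectoryMatchMode: "subset",
});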
Next, learn how to customize the trajectory match evaluator.

LLM-as-a-judge

You can use an LLM-as-a-judge evaluator, which uses an LLM to compare the trajectory against the reference output and produce a score:
import {
    createTrajectoryLlmAsJudge,
    TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE,
} from "agentevals/trajectory/llm";

const evaluator = createTrajectoryLlmAsJudge({
    prompt: TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE,
    model: "openai:o3-mini",
});
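The judge is invoked the same way as the match evaluator above. A minimal sketch, reusing the `outputs` and `referenceOutputs` trajectories from earlier (the exact fields of the returned result may vary by AgentEvals version):

// Run the LLM-as-a-judge evaluator on the trajectories defined earlier.
const judgeResult = await evaluator({
    outputs: outputs,
    referenceOutputs: referenceOutputs,
});

// Inspect the score assigned by the judge model.
console.log(judgeResult);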

Run the evaluator

To run the evaluator, you first need to create a LangSmith dataset. To use the prebuilt AgentEvals evaluators, the dataset needs the following schema (a sketch for creating one programmatically follows the list):
  • inputs: {"messages": [...]}: the input messages the agent is called with.
  • outputs: {"messages": [...]}: the expected message history in the agent's output. For trajectory evaluation, you can choose to keep only the assistant messages.
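For example, such a dataset could be created programmatically with the LangSmith client. A minimal sketch, where the dataset name and example messages are placeholders:

import { Client } from "langsmith";

const client = new Client();

// Create an empty dataset (the name here is a placeholder).
const dataset = await client.createDataset("agent-trajectory-evals");

// Add one example following the schema above: input messages for the agent,
// plus the expected output message history.
await client.createExamples({
    datasetId: dataset.id,
    inputs: [
        { messages: [{ role: "user", content: "What's the weather in San Francisco?" }] },
    ],
    outputs: [
        {
            messages: [
                {
                    role: "assistant",
                    tool_calls: [
                        {
                            function: {
                                name: "get_weather",
                                arguments: JSON.stringify({ city: "san francisco" }),
                            },
                        },
                    ],
                },
            ],
        },
    ],
});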
import { evaluate } from "langsmith/evaluation";
import { createAgent } from "langchain";
import { createTrajectoryMatchEvaluator } from "agentevals/trajectory/match";

const agent = createAgent({...});
const evaluator = createTrajectoryMatchEvaluator({...});

const experimentResults = await evaluate(
    (inputs) => agent.invoke(inputs),
    {
        // replace with your dataset name
        data: "<Name of your dataset>",
        evaluators: [evaluator],
    }
);
