Agent2Agent (A2A) is a protocol from Google for communication between conversational AI agents. LangSmith implements A2A support, allowing your agents to communicate with other A2A-compatible agents over a standardized protocol. The A2A endpoint is available in LangGraph Server at /a2a/{assistant_id}

Agent card discovery

Each assistant automatically exposes an A2A agent card that describes its capabilities and provides the information other agents need to connect. You can retrieve any assistant's agent card with:
GET /.well-known/agent-card.json?assistant_id={assistant_id}
The agent card includes the assistant's name, description, available skills, supported input/output modes, and the A2A endpoint URL to use for communication.
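As a quick check, the card can be fetched with a few lines of standard-library Python. This is a minimal sketch, assuming a local `langgraph dev` server at the default address `127.0.0.1:2024`; the assistant ID `"agent"` is a placeholder for an ID from your own deployment:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:2024"  # default `langgraph dev` address (assumption)


def agent_card_url(base_url: str, assistant_id: str) -> str:
    """Build the well-known agent-card URL for an assistant."""
    return f"{base_url}/.well-known/agent-card.json?assistant_id={assistant_id}"


def fetch_agent_card(base_url: str, assistant_id: str) -> dict:
    """Fetch and parse an assistant's A2A agent card."""
    with urllib.request.urlopen(agent_card_url(base_url, assistant_id)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # "agent" is a placeholder assistant ID.
    card = fetch_agent_card(BASE_URL, "agent")
    print(card["name"], "-", card.get("description"))
```

The same URL works from a browser or `curl`, which can be handy when verifying that a deployment exposes the assistant you expect.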

Requirements

To use A2A, make sure the following dependency is installed:
  • langgraph-api >= 0.4.9
Install it with:
pip install "langgraph-api>=0.4.9"

Usage overview

Enable A2A
  • Upgrade to langgraph-api>=0.4.9.
  • Deploy your agent with a message-based state structure.
  • Use the endpoint to connect with other A2A-compatible agents.

Create an A2A-compatible agent

This example creates an A2A-compatible agent that processes incoming messages with OpenAI's API and maintains conversation state. The agent defines a message-based state structure and handles the A2A protocol's message format. To be compatible with A2A "text" parts, the agent must have a messages key in its state. Here is an example:
"""LangGraph A2A conversational agent.

Supports the A2A protocol with messages input for conversational interactions.
"""

from __future__ import annotations

import os
from dataclasses import dataclass
from typing import Any, Dict, List, TypedDict

from langgraph.graph import StateGraph
from langgraph.runtime import Runtime
from openai import AsyncOpenAI


class Context(TypedDict):
    """Context parameters for the agent."""
    my_configurable_param: str


@dataclass
class State:
    """Input state for the agent.

    Defines the initial structure for A2A conversational messages.
    """
    messages: List[Dict[str, Any]]


async def call_model(state: State, runtime: Runtime[Context]) -> Dict[str, Any]:
    """Process conversational messages and returns output using OpenAI."""
    # Initialize OpenAI client
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    # Process the incoming messages
    latest_message = state.messages[-1] if state.messages else {}
    user_content = latest_message.get("content", "No message content")

    # Create messages for OpenAI API
    openai_messages = [
        {
            "role": "system",
            "content": "You are a helpful conversational agent. Keep responses brief and engaging."
        },
        {
            "role": "user",
            "content": user_content
        }
    ]

    try:
        # Make OpenAI API call
        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=openai_messages,
            max_tokens=100,
            temperature=0.7
        )

        ai_response = response.choices[0].message.content

    except Exception as e:
        ai_response = f"I received your message but had trouble processing it. Error: {str(e)[:50]}..."

    # Create a response message
    response_message = {
        "role": "assistant",
        "content": ai_response
    }

    return {
        "messages": state.messages + [response_message]
    }


# Define the graph
graph = (
    StateGraph(State, context_schema=Context)
    .add_node(call_model)
    .add_edge("__start__", "call_model")
    .compile()
)

Agent-to-agent communication

Once your agents are running locally via langgraph dev or deployed to production, you can use the A2A protocol to facilitate communication between them. This example demonstrates how two agents communicate by sending JSON-RPC messages to each other's A2A endpoints. The script simulates a multi-turn conversation in which each agent processes the other's response and continues the dialogue.
#!/usr/bin/env python3
"""Agent-to-Agent conversation simulation using LangGraph A2A protocol."""

import asyncio
import os
import uuid

import aiohttp

async def send_message(session, port, assistant_id, text):
    """Send a message to an agent and return the response text."""
    url = f"http://127.0.0.1:{port}/a2a/{assistant_id}"
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),  # JSON-RPC request IDs should be unique
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}]
            },
            "messageId": str(uuid.uuid4()),  # each A2A message needs its own ID
            "thread": {"threadId": ""}
        }
    }

    headers = {"Accept": "application/json"}
    async with session.post(url, json=payload, headers=headers) as response:
        try:
            result = await response.json()
            return result["result"]["artifacts"][0]["parts"][0]["text"]
        except Exception:
            body = await response.text()
            print(f"Response error from port {port}: {response.status} - {body}")
            return f"Error from port {port}: {response.status}"

async def simulate_conversation():
    """Simulate a conversation between two agents."""
    agent_a_id = os.getenv("AGENT_A_ID")
    agent_b_id = os.getenv("AGENT_B_ID")

    if not agent_a_id or not agent_b_id:
        print("Set AGENT_A_ID and AGENT_B_ID environment variables")
        return

    message = "Hello! Let's have a conversation."

    async with aiohttp.ClientSession() as session:
        for i in range(3):
            print(f"--- Round {i + 1} ---")

            # Agent A responds
            message = await send_message(session, 2024, agent_a_id, message)
            print(f"🔵 Agent A: {message}")

            # Agent B responds
            message = await send_message(session, 2025, agent_b_id, message)
            print(f"🔴 Agent B: {message}")
            print()

if __name__ == "__main__":
    asyncio.run(simulate_conversation())
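For reference, a successful `message/send` response carries the agent's reply under `result.artifacts`, which is exactly the path `send_message` above unpacks. A minimal sketch with a hand-written sample response (the field values are placeholders, not real server output):

```python
import json

# Hand-written sample illustrating the JSON-RPC response shape the
# script expects; values are placeholders, not real server output.
sample = json.loads("""
{
  "jsonrpc": "2.0",
  "id": "1",
  "result": {
    "artifacts": [
      {"parts": [{"kind": "text", "text": "Hi there!"}]}
    ]
  }
}
""")


def extract_text(response: dict) -> str:
    """Pull the first text part from the first artifact, mirroring send_message."""
    return response["result"]["artifacts"][0]["parts"][0]["text"]


print(extract_text(sample))  # prints "Hi there!"
```

If a request fails, the server instead returns a JSON-RPC `error` object, which is why the script falls back to printing the raw response body when this path is missing.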

This site is unofficial and not affiliated with LangChain, Inc.