LlamaEdge allows you to chat with LLMs of GGUF format both locally and via a chat service.
  • LlamaEdgeChatService provides developers with an OpenAI-API-compatible service for chatting with LLMs via HTTP requests.
  • LlamaEdgeChatLocal enables developers to chat with LLMs locally (coming soon).
Both LlamaEdgeChatService and LlamaEdgeChatLocal run on infrastructure driven by WasmEdge Runtime, which provides a lightweight and portable WebAssembly container environment for LLM inference tasks.
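The chat model class lives in the langchain-community package, so installing it is the only setup needed before the examples below. The command is a minimal setup sketch rather than an official requirement list; langchain-core (which provides the message classes) is pulled in as a dependency.

%pip install -qU langchain-community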

Chat via API Service

LlamaEdgeChatService works on llama-api-server. Following the steps in the llama-api-server quick start, you can host your own API service so that you can chat with any model you like on any device, anywhere the internet is available.
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
from langchain_core.messages import HumanMessage, SystemMessage
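If you host llama-api-server yourself (see the quick start above), simply point service_url at your own endpoint. The sketch below assumes a server listening on localhost:8080; the model and request_timeout fields are assumptions about the langchain_community implementation, not settings taken from this page.

# point the client at a self-hosted llama-api-server endpoint
# (localhost:8080 is an assumed example address)
local_service_url = "http://localhost:8080"

# `model` and `request_timeout` are assumed optional fields of LlamaEdgeChatService;
# drop them if your installed version does not accept them
chat = LlamaEdgeChatService(
    service_url=local_service_url,
    model="llama-2-7b-chat",  # hypothetical model name served by your API
    request_timeout=120,      # seconds to wait for a response
)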

Chat with LLMs in non-streaming mode

# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"

# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url)

# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of France?")
messages = [system_message, user_message]

# chat with wasm-chat service
response = chat.invoke(messages)

print(f"[Bot] {response.content}")
[Bot] Hello! The capital of France is Paris.
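Since the service simply consumes a list of messages, a multi-turn conversation only requires appending the assistant's reply and the next question before calling invoke again. The follow-up below is a sketch continuing the example above; the follow-up question itself is not part of the original example.

from langchain_core.messages import AIMessage

# append the assistant's reply and a follow-up question to the history
messages.append(AIMessage(content=response.content))
messages.append(HumanMessage(content="And what is the population of that city?"))

follow_up = chat.invoke(messages)
print(f"[Bot] {follow_up.content}")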

Chat with LLMs in streaming mode

# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"

# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url, streaming=True)

# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of Norway?")
messages = [
    system_message,
    user_message,
]

output = ""
for chunk in chat.stream(messages):
    # print(chunk.content, end="", flush=True)
    output += chunk.content

print(f"[Bot] {output}")
[Bot]   Hello! I'm happy to help you with your question. The capital of Norway is Oslo.
