Ray Serve is a scalable model serving library for building online inference APIs. Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic, all in pure Python code.

Goal of this notebook

This notebook shows a simple example of how to deploy an OpenAI chain into production. You can extend it to deploy your own self-hosted models, where you can easily define the hardware resources (GPUs and CPUs) needed to run your model efficiently in production (a sketch follows the general framework below). For more information on the available options, including autoscaling, see the Ray Serve documentation.

Setting up Ray Serve

Install Ray with pip install ray[serve].

General framework

The general framework for deploying a service is as follows:
# 0: Import ray serve and request from starlette
from ray import serve
from starlette.requests import Request


# 1: Define a Ray Serve deployment.
@serve.deployment
class LLMServe:
    def __init__(self) -> None:
        # All the initialization code goes here
        pass

    async def __call__(self, request: Request) -> str:
        # You can parse the request here
        # and return a response
        return "Hello World"


# 2: Bind the model to deployment
deployment = LLMServe.bind()

# 3: Run the deployment
serve.run(deployment)
# Shut down the deployment when finished
serve.shutdown()
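
The @serve.deployment decorator is also where hardware resources and replica counts are declared. A minimal sketch reusing the imports above (the SelfHostedLLM name and the resource values are illustrative, not part of the original example):
# Each replica reserves 2 CPUs and 1 GPU, and Serve keeps 2 replicas running;
# an autoscaling_config can be passed instead of a fixed num_replicas.
@serve.deployment(num_replicas=2, ray_actor_options={"num_cpus": 2, "num_gpus": 1})
class SelfHostedLLM:
    async def __call__(self, request: Request) -> str:
        return "Hello from a GPU-backed replica"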

Example of deploying an OpenAI chain with custom prompts

Get an OpenAI API key here. When you run the following code, you will be asked to provide your API key.
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from getpass import getpass

OPENAI_API_KEY = getpass()
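As an aside, langchain_openai's OpenAI also reads the OPENAI_API_KEY environment variable, so an alternative to passing the key explicitly (as done below) is to export it before constructing the model. A minimal sketch:
import os

# Make the key available process-wide; OpenAI() picks it up automatically,
# so the explicit openai_api_key argument below could then be omitted.
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY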
@serve.deployment
class DeployLLM:
    def __init__(self):
        # We initialize the LLM, template and the chain here
        llm = OpenAI(openai_api_key=OPENAI_API_KEY)
        template = "Question: {question}\n\nAnswer: Let's think step by step."
        prompt = PromptTemplate.from_template(template)
        self.chain = LLMChain(llm=llm, prompt=prompt)

    def _run_chain(self, text: str):
        return self.chain(text)

    async def __call__(self, request: Request):
        # 1. Parse the request
        text = request.query_params["text"]
        # 2. Run the chain
        resp = self._run_chain(text)
        # 3. Return the response
        return resp["text"]
Now we can bind the deployment.
# Bind the model to deployment
deployment = DeployLLM.bind()
We can assign the port number and host when we run the deployment.
# Example port number
PORT_NUMBER = 8282
# Run the deployment
serve.run(deployment, port=PORT_NUMBER)
Now that the service is deployed at localhost:8282, we can send a POST request to get the results.
import requests

text = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
response = requests.post(f"https://:{PORT_NUMBER}/?text={text}")
print(response.content.decode())
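
Once you are done, the deployment can be shut down to release its resources, mirroring the general framework above:
# Shut down the Serve application and release its resources
serve.shutdown()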
