Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one, hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: edenai.co/) This example goes over how to use LangChain to interact with Eden AI models.
Accessing the EDENAI API requires an API key, which you can get by creating an account at app.edenai.run/user/register and heading to app.edenai.run/admin/account/settings. Once you have the key, set it as an environment variable by running:
export EDENAI_API_KEY="..."
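If you'd rather set the key from within Python (for example in a notebook), a minimal sketch using only the standard library:

import os
from getpass import getpass

# Prompt for the key interactively so it never ends up in your shell history.
if "EDENAI_API_KEY" not in os.environ:
    os.environ["EDENAI_API_KEY"] = getpass("Eden AI API key: ")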
If you'd prefer not to set an environment variable, you can pass the key in directly via the `edenai_api_key` named parameter when initializing the `EdenAI LLM` class:
from langchain_community.llms import EdenAI
llm = EdenAI(edenai_api_key="...", provider="openai", temperature=0.2, max_tokens=250)

Calling a model

The EdenAI API brings together various providers, each offering multiple models. To access a specific model, you simply add "model" during instantiation. For instance, let's explore the models provided by OpenAI, such as GPT-3.5.

Text generation

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

llm = EdenAI(
    feature="text",
    provider="openai",
    model="gpt-3.5-turbo-instruct",
    temperature=0.2,
    max_tokens=250,
)

prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""

llm.invoke(prompt)

Image generation

import base64
from io import BytesIO

from PIL import Image


def print_base64_image(base64_string):
    # Decode the base64 string into binary data
    decoded_data = base64.b64decode(base64_string)

    # Create an in-memory stream to read the binary data
    image_stream = BytesIO(decoded_data)

    # Open the image using PIL
    image = Image.open(image_stream)

    # Display the image
    image.show()

text2image = EdenAI(feature="image", provider="openai", resolution="512x512")
image_output = text2image.invoke("A cat riding a motorcycle by Picasso")
print_base64_image(image_output)
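Instead of opening a viewer window, you can also write the decoded image straight to a file; a minimal sketch (the helper and file name are just examples):

def save_base64_image(base64_string, path="output.png"):
    # Decode the base64 payload and write the raw image bytes to disk.
    with open(path, "wb") as f:
        f.write(base64.b64decode(base64_string))


save_base64_image(image_output, "cat_motorcycle.png")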

Text generation with callbacks

from langchain_community.llms import EdenAI
from langchain_core.callbacks import StreamingStdOutCallbackHandler

llm = EdenAI(
    callbacks=[StreamingStdOutCallbackHandler()],
    feature="text",
    provider="openai",
    temperature=0.2,
    max_tokens=250,
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
print(llm.invoke(prompt))
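StreamingStdOutCallbackHandler simply prints each token as it arrives. If you want to capture tokens yourself, you can pass a custom handler instead; a minimal sketch (the CollectTokensHandler class below is hypothetical, not part of LangChain, and only fills up if the underlying provider streams token by token):

from langchain_core.callbacks import BaseCallbackHandler


class CollectTokensHandler(BaseCallbackHandler):
    """Hypothetical handler that accumulates streamed tokens in a list."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        # Called once for every new token emitted during generation.
        self.tokens.append(token)


handler = CollectTokensHandler()
llm = EdenAI(
    callbacks=[handler],
    feature="text",
    provider="openai",
    temperature=0.2,
    max_tokens=250,
)
llm.invoke(prompt)
print("".join(handler.tokens))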

Chaining calls

from langchain.chains import LLMChain, SimpleSequentialChain
from langchain_core.prompts import PromptTemplate
llm = EdenAI(feature="text", provider="openai", temperature=0.2, max_tokens=250)
text2image = EdenAI(feature="image", provider="openai", resolution="512x512")
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=llm, prompt=prompt)
second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a description of a logo for this company: {company_name}, the logo should not contain text at all ",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
third_prompt = PromptTemplate(
    input_variables=["company_logo_description"],
    template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)
# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(
    chains=[chain, chain_two, chain_three], verbose=True
)
output = overall_chain.run("hats")
# print the image
print_base64_image(output)
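Newer LangChain releases favor the LCEL pipe syntax over LLMChain and SimpleSequentialChain. The same three-step pipeline can be sketched roughly as follows, reusing the llm, text2image, and prompt objects defined above:

from langchain_core.output_parsers import StrOutputParser

name_chain = prompt | llm | StrOutputParser()
logo_chain = second_prompt | llm | StrOutputParser()
image_chain = third_prompt | text2image | StrOutputParser()

# Feed the generated company name into the logo prompt, then the logo
# description into the image-generation model.
lcel_chain = (
    name_chain
    | (lambda name: {"company_name": name})
    | logo_chain
    | (lambda description: {"company_logo_description": description})
    | image_chain
)
print_base64_image(lcel_chain.invoke({"product": "hats"}))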
