Friendli enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads. This tutorial guides you through integrating Friendli with LangChain.
Setup
Ensure the langchain-community and friendli-client packages are installed.
pip install -U langchain-community friendli-client
Set your Friendli personal access token as the FRIENDLI_TOKEN environment variable.
import getpass
import os
if "FRIENDLI_TOKEN" not in os.environ:
os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendi Personal Access Token: ")
Initialize a Friendli LLM instance, selecting the model you want to use. The default model is meta-llama-3.1-8b-instruct. You can check the available models at friendli.ai/docs.
from langchain_community.llms.friendli import Friendli
llm = Friendli(model="meta-llama-3.1-8b-instruct", max_tokens=100, temperature=0)
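If you prefer not to rely on the environment variable, the token can also be passed directly at construction time. A minimal sketch, assuming the friendli_token parameter accepted by langchain_community.llms.friendli.Friendli (check the class signature for your installed version):

from langchain_community.llms.friendli import Friendli

# Pass the token explicitly instead of reading FRIENDLI_TOKEN from the environment.
# "YOUR_PERSONAL_ACCESS_TOKEN" is a hypothetical placeholder value.
llm = Friendli(
    model="meta-llama-3.1-8b-instruct",
    friendli_token="YOUR_PERSONAL_ACCESS_TOKEN",
    max_tokens=100,
    temperature=0,
)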
Usage
Friendli supports all methods of LLM, including async APIs. You can use the invoke, batch, generate, and stream functionality.
llm.invoke("Tell me a joke.")
" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"
llm.batch(["Tell me a joke.", "Tell me a joke."])
[" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come",
" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"]
llm.generate(["Tell me a joke.", "Tell me a joke."])
LLMResult(generations=[[Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")], [Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")]], llm_output={'model': 'meta-llama-3.1-8b-instruct'}, run=[RunInfo(run_id=UUID('ee97984b-6eab-4d40-a56f-51d6114953de')), RunInfo(run_id=UUID('cbe501ea-a20f-4420-9301-86cdfcf898c0'))], type='LLMResult')
for chunk in llm.stream("Tell me a joke."):
print(chunk, end="", flush=True)
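The same Friendli instance also drops into a LangChain pipeline like any other LLM. A minimal sketch of chaining it with a prompt template via LCEL (the template text here is illustrative):

from langchain_core.prompts import PromptTemplate

# The prompt output feeds directly into the Friendli LLM.
prompt = PromptTemplate.from_template("Tell me a joke about {topic}.")
chain = prompt | llm

print(chain.invoke({"topic": "bicycles"}))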
You can also use all the functionality of the async APIs: ainvoke, abatch, agenerate, and astream.
await llm.ainvoke("Tell me a joke.")
" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"
await llm.abatch(["Tell me a joke.", "Tell me a joke."])
[" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come",
" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come"]
await llm.agenerate(["Tell me a joke.", "Tell me a joke."])
LLMResult(generations=[[Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")], [Generation(text=" I need a laugh.\nHere's one: Why couldn't the bicycle stand up by itself?\nBecause it was two-tired!\nI hope that made you laugh! Do you want to hear another one? I have a million of 'em! (Okay, maybe not a million, but I have a few more where that came from!) What kind of joke are you in the mood for? A pun, a play on words, or something else? Let me know and I'll try to come")]], llm_output={'model': 'meta-llama-3.1-8b-instruct'}, run=[RunInfo(run_id=UUID('857bd88e-e68a-46d2-8ad3-4a282c199a89')), RunInfo(run_id=UUID('a6ba6e7f-9a7a-4aa1-a2ac-c8fcf48309d3'))], type='LLMResult')
async for chunk in llm.astream("Tell me a joke."):
print(chunk, end="", flush=True)
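Note that top-level await works in a notebook or any other running event loop; in a plain Python script, wrap the async calls with asyncio.run, for example:

import asyncio

async def main():
    # Run the async variants inside an event loop.
    result = await llm.ainvoke("Tell me a joke.")
    print(result)

asyncio.run(main())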