Disclaimer: `LangChain decorators` is not created by the LangChain team and is not supported by it.
LangChain Decorators is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom LangChain prompts and chains. For feedback, issues, or contributions, please raise an issue here: ju-bezdek/langchain-decorators
# Main principles and benefits

- a more pythonic way of writing code
- write multiline prompts that won't break your code flow with indentation
- leverage IDE built-in support for hinting, type checking, and popup documentation to quickly peek at a function, its prompt, the parameters it consumes, etc.
- leverage all the power of the 🦜🔗 LangChain ecosystem
- add support for optional parameters
- easily share parameters between prompts by binding them to one class
Here is a simple example of code written with LangChain Decorators ✨
```python
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
```

# Quick start

## Installation

```bash
pip install langchain_decorators
```

## Examples

A good way to start is to review the examples here.

# Defining other parameters

Here we just mark a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain, instead of running it by hand. A standard LLMChain takes many more init parameters than just `inputs_variables` and `prompt`... that implementation detail is hidden in the decorator. Here is how it works:
1. Using global settings:

```python
# define global settings for all prompts (if not set, ChatGPT is the current default)
from langchain_openai import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... it will be used for streaming
)
```
2. Using predefined prompt types:

```python
# you can change the default prompt types
from langchain_openai import ChatOpenAI
from langchain_decorators import llm_prompt, PromptTypes, PromptTypeSettings

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
    ...
```

3. Defining the settings directly in the decorator:

```python
from langchain_openai import OpenAI
from langchain_decorators import llm_prompt

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    ...
    )
def creative_writer(book_title:str)->str:
    ...
```

# Passing a memory and/or callbacks

To pass either of these, just declare them in the function (or use kwargs to pass anything):

```python
from langchain.memory import SimpleMemory
from langchain_decorators import llm_prompt

@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", audience:str="developers", memory:SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
```
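
And here is a minimal sketch of actually passing a memory object in. It assumes the memory's `memory_key` matches the `{history_key}` placeholder in the docstring; `ConversationBufferMemory` is just one illustrative choice, not prescribed by the library:

```python
from langchain.memory import ConversationBufferMemory

# hypothetical usage: memory_key must match the placeholder used in the prompt docstring
memory = ConversationBufferMemory(memory_key="history_key")

# the declared memory parameter is picked up by the underlying chain
await write_me_short_post(topic="old movies", memory=memory)
```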

# Simplified streaming

If we want to leverage streaming:

- we need to define the prompt as an async function
- turn on streaming in the decorator, or define a PromptType with streaming turned on
- capture the stream using StreamingContext

This way we just mark which prompts should be streamed, without needing to think about which LLM to use, or about passing around and distributing a streaming handler into a particular part of our chain... just turn streaming on/off on the prompt / prompt type... The stream will be captured only when we call the prompt inside a streaming context... there we can define a simple function to handle the stream.
```python
# this code example is complete and should run as it is

from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass and distribute the callback handlers)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass


# just an arbitrary function to demonstrate the streaming... this will be some websockets code in the real world
tokens=[]
def capture_stream_func(new_token:str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher-level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")


print("\nWe've captured",len(tokens),"tokens🎉\n")
print("Here is the result:")
print(result)
```

# Prompt declarations

By default, the prompt is the whole function docstring, unless you mark your prompt explicitly.

## Documenting your prompt

We can specify which part of the docstring is the prompt definition by using a code block marked with the `<prompt>` language tag:
````python
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as the prompt; the rest of the docstring serves as a description for developers.
    (It also has the nice benefit that an IDE such as VS Code will display the prompt properly, instead of trying to parse it as markdown and mangling the line breaks.)
    """
    return
````

## Chat messages prompt

For chat models it is very useful to define the prompt as a set of message templates... here is how to do it:

````python
@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
    """
    ## System message
     - note the `:system` suffix inside the <prompt:_role_> tag


    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:

    ... do not reply with anything else.. just with code - respecting your role.
    ```

    # Human message
    (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
    ```<prompt:user>
    Hello, who are you
    ```
    a reply:

    ```<prompt:assistant>
    \``` python <<- escaping the inner code block with \ so that it stays part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```
    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt; the rest of the docstring serves as a description for developers.
    (It also has the nice benefit that an IDE such as VS Code will display the prompt properly, instead of trying to parse it as markdown and mangling the line breaks.)
    """
    pass
````

The roles here are the model-native roles (assistant, user, system for ChatGPT).



# Optional sections

- you can define whole sections of your prompt that should be optional
- if any input in the section is missing, the whole section won't be rendered

The syntax for this is as follows:

``` python
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "")   ?}

    you can also place it in between the words
    this too will be rendered{? , but
        this  block will be rendered only if {this_value} and {this_value}
        is not empty?} !
    """

# Output parsers

- the llm_prompt decorator natively tries to detect the best output parser based on the declared output type (if none is set, it returns the raw string)
- list, dict and pydantic outputs are also supported natively (automatically)
```python
# this code example is complete and should run as it is

from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
    """ Write me {count} good name suggestions for company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)
```

## More complex structures

For dict / pydantic outputs you need to specify the formatting instructions... this can be tedious, which is why you can let the output parser generate the instructions for you based on the (pydantic) model:
```python
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field


class TheOutputStructureWeExpect(BaseModel):
    name:str = Field(description="The name of the company")
    headline:str = Field(description="The description of the company (for landing page)")
    employees:list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ",company.name)
print("company headline: ",company.headline)
print("company employees: ",company.employees)
```

# Binding a prompt to an object

````python
from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name:str
    assistant_role:str
    field:str

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg:str=None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """


    @llm_prompt
    def introduce_your_self(self)->str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """


personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")

print(personality.introduce_your_self(personality))
````


# More examples:

- these and a few more examples are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
- including the [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators
