Neo4j is a graph database management system developed by Neo4j, Inc.
The data elements Neo4j stores are nodes, edges connecting them, and attributes of nodes and edges. Described by its developers as an ACID-compliant transactional database with native graph storage and processing, Neo4j is available in a non-open-source "community edition" licensed with a modification of the GNU General Public License, with online backup and high availability extensions licensed under a closed-source commercial license. Neo also licenses Neo4j with these extensions under closed-source commercial terms.
This notebook shows how to use large language models (LLMs) to provide a natural language interface to a graph database that you can query with the Cypher query language.
Cypher is a declarative graph query language that allows for expressive and efficient data querying in a property graph.

Setup

You will need a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or run a docker container. You can start a local docker container with the following script:
docker run \
    --name neo4j \
    -p 7474:7474 -p 7687:7687 \
    -d \
    -e NEO4J_AUTH=neo4j/password \
    -e NEO4J_PLUGINS=\[\"apoc\"\]  \
    neo4j:latest
If you are using the docker container, you need to wait a couple of seconds for the database to start.
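If you want to make sure the database is actually accepting connections before moving on, a minimal readiness check with the official neo4j Python driver could look like this (the retry loop and timings below are assumptions, not part of the original notebook):
import time

from neo4j import GraphDatabase

# Poll the Bolt endpoint exposed by the docker container above
# (bolt://localhost:7687 with the neo4j/password credentials).
for _ in range(30):
    try:
        with GraphDatabase.driver(
            "bolt://localhost:7687", auth=("neo4j", "password")
        ) as driver:
            driver.verify_connectivity()
        break
    except Exception:
        time.sleep(1)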
from langchain_neo4j import GraphCypherQAChain, Neo4jGraph
from langchain_openai import ChatOpenAI
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
We default to OpenAI models in this guide.
import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

Seeding the database

Assuming your database is empty, you can populate it using the Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same if you run it one or multiple times.
graph.query(
    """
MERGE (m:Movie {name:"Top Gun", runtime: 120})
WITH m
UNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actor
MERGE (a:Actor {name:actor})
MERGE (a)-[:ACTED_IN]->(m)
"""
)
[]
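To sanity-check the seeded data, you can run a quick read query through the same graph object; this is just an illustrative sketch, not part of the original notebook:
# Count the actors attached to each movie to confirm the MERGE statements took effect.
graph.query(
    """
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
RETURN m.name AS movie, count(a) AS actors
"""
)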

Refreshing the graph schema information

If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements.
graph.refresh_schema()
print(graph.schema)
Node properties:
Movie {runtime: INTEGER, name: STRING}
Actor {name: STRING}
Relationship properties:

The relationships:
(:Actor)-[:ACTED_IN]->(:Movie)
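For example, if you later added a new property, you would refresh the schema so that generated Cypher statements can see it. A hypothetical sketch (not executed in this notebook, so the outputs below are unaffected):
# Hypothetical schema change: add a year property to the movie node ...
graph.query('MATCH (m:Movie {name:"Top Gun"}) SET m.year = 1986')
# ... then refresh so the schema string includes Movie.year
graph.refresh_schema()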

Enhanced schema information

Choosing the enhanced schema option lets the system automatically scan the database for example values and compute some distribution metrics. For example, if a node property has fewer than 10 distinct values, all possible values are returned in the schema. Otherwise, only a single example value is returned per node and relationship property.
enhanced_graph = Neo4jGraph(
    url="bolt://:7687",
    username="neo4j",
    password="password",
    enhanced_schema=True,
)
print(enhanced_graph.schema)
Node properties:
- **Movie**
  - `runtime`: INTEGER Min: 120, Max: 120
  - `name`: STRING Available options: ['Top Gun']
- **Actor**
  - `name`: STRING Available options: ['Tom Cruise', 'Val Kilmer', 'Anthony Edwards', 'Meg Ryan']
Relationship properties:

The relationships:
(:Actor)-[:ACTED_IN]->(:Movie)

Querying the graph

We can now use the graph Cypher QA chain to ask questions about the graph.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, allow_dangerous_requests=True
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]

> Finished chain.
{'query': 'Who played in Top Gun?',
 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}
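The chain above was built on the plain graph object; if you want the LLM to see the enhanced schema description instead, you could pass enhanced_graph when constructing the chain, for example:
# Same chain, but the prompt now includes the enhanced schema with example values.
enhanced_chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=enhanced_graph,
    verbose=True,
    allow_dangerous_requests=True,
)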

Limit the number of results

You can limit the number of results from the Cypher QA chain using the top_k parameter. The default is 10.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    top_k=2,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}]

> Finished chain.
{'query': 'Who played in Top Gun?',
 'result': 'Tom Cruise, Val Kilmer played in Top Gun.'}

Return intermediate results

You can return intermediate steps from the Cypher QA chain using the return_intermediate_steps parameter.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    return_intermediate_steps=True,
    allow_dangerous_requests=True,
)
result = chain.invoke({"query": "Who played in Top Gun?"})
print(f"Intermediate steps: {result['intermediate_steps']}")
print(f"Final answer: {result['result']}")
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]

> Finished chain.
Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\nWHERE m.name = 'Top Gun'\nRETURN a.name"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}]
Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.
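Based on the structure printed above, the generated Cypher and the raw database rows can be accessed individually from the intermediate steps:
# The first step holds the generated Cypher, the second the raw query results.
generated_cypher = result["intermediate_steps"][0]["query"]
database_context = result["intermediate_steps"][1]["context"]
print(generated_cypher)
print(database_context)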

Return direct results

You can return direct results from the Cypher QA chain using the return_direct parameter.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    return_direct=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name

> Finished chain.
{'query': 'Who played in Top Gun?',
 'result': [{'a.name': 'Tom Cruise'},
  {'a.name': 'Val Kilmer'},
  {'a.name': 'Anthony Edwards'},
  {'a.name': 'Meg Ryan'}]}
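With return_direct=True, the result field is simply the list of records returned by the generated Cypher, so any post-processing is up to you. A small sketch, assuming the query returns a.name as above:
# Flatten the raw records into a plain list of actor names.
response = chain.invoke({"query": "Who played in Top Gun?"})
actor_names = [record["a.name"] for record in response["result"]]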

Add examples in the Cypher generation prompt

You can define the Cypher statements you want the LLM to generate for particular questions.
from langchain_core.prompts.prompt import PromptTemplate

CYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
Examples: Here are a few examples of generated Cypher statements for particular questions:
# How many people played in Top Gun?
MATCH (m:Movie {{name:"Top Gun"}})<-[:ACTED_IN]-()
RETURN count(*) AS numberOfActors

The question is:
{question}"""

CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE
)

chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    cypher_prompt=CYPHER_GENERATION_PROMPT,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "How many people played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (m:Movie {name:"Top Gun"})<-[:ACTED_IN]-()
RETURN count(*) AS numberOfActors
Full Context:
[{'numberOfActors': 4}]

> Finished chain.
{'query': 'How many people played in Top Gun?',
 'result': 'There were 4 actors in Top Gun.'}
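One detail worth noting: the literal braces in the example Cypher are doubled ({{ }}) because PromptTemplate treats single braces as input variables. Any additional few-shot example appended to the Examples section would need the same escaping, for instance this hypothetical one:
# Hypothetical extra few-shot example; braces are doubled for PromptTemplate.
EXTRA_EXAMPLE = """# Which actors played in Top Gun?
MATCH (m:Movie {{name:"Top Gun"}})<-[:ACTED_IN]-(a:Actor)
RETURN a.name"""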

Use separate LLMs for Cypher and answer generation

You can use the cypher_llm and qa_llm parameters to define different LLMs.
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
    verbose=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]

> Finished chain.
{'query': 'Who played in Top Gun?',
 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}

Ignore specified node and relationship types

You can use include_types or exclude_types to ignore parts of the graph schema when generating Cypher statements.
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
    verbose=True,
    exclude_types=["Movie"],
    allow_dangerous_requests=True,
)
# Inspect graph schema
print(chain.graph_schema)
Node properties are the following:
Actor {name: STRING}
Relationship properties are the following:

The relationships are the following:
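The include_types parameter works the other way around: you list only the parts of the schema that should be kept. A sketch mirroring the exclusion above, keeping just the Actor label:
# Keep only the Actor label in the schema passed to the LLM.
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
    verbose=True,
    include_types=["Actor"],
    allow_dangerous_requests=True,
)
print(chain.graph_schema)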

Validate generated Cypher statements

You can use the validate_cypher parameter to validate and correct relationship directions in generated Cypher statements.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
    validate_cypher=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]

> Finished chain.
{'query': 'Who played in Top Gun?',
 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}

Provide context from database results as tool/function output

You can use the use_function_response parameter to pass context from database results to the LLM as tool/function output. This method improves the accuracy and relevance of the answer, as the LLM follows the provided context more closely. You will need an LLM with native function-calling support to use this feature.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
    use_function_response=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]

> Finished chain.
{'query': 'Who played in Top Gun?',
 'result': 'The main actors in Top Gun are Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan.'}
When using the function response feature, you can provide a custom system message via function_response_system to instruct the model on how to generate answers. Note that qa_prompt has no effect when use_function_response is used.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
    use_function_response=True,
    function_response_system="Respond as a pirate!",
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]

> Finished chain.
{'query': 'Who played in Top Gun?',
 'result': "Arrr matey! In the film Top Gun, ye be seein' Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan sailin' the high seas of the sky! Aye, they be a fine crew of actors, they be!"}
