Kinetica is a database with integrated support for vector similarity search. It supports:
- exact and approximate nearest neighbor search
- L2 distance, inner product, and cosine distance
This notebook shows how to use the Kinetica vector store (Kinetica). It requires a running Kinetica instance, which can easily be set up by following the installation instructions.
# Pip install necessary package
pip install -qU langchain-openai langchain-community
pip install "gpudb>=7.2.2.0"
pip install -qU tiktoken
We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.
import getpass
import os
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
## Loading environment variables
from dotenv import load_dotenv
load_dotenv()
False
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import (
Kinetica,
KineticaSettings,
)
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
# Kinetica needs the connection to the database.
# This is how to set it up.
HOST = os.getenv("KINETICA_HOST", "http://127.0.0.1:9191")
USERNAME = os.getenv("KINETICA_USERNAME", "")
PASSWORD = os.getenv("KINETICA_PASSWORD", "")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
def create_config() -> KineticaSettings:
    return KineticaSettings(host=HOST, username=USERNAME, password=PASSWORD)
from uuid import uuid4
from langchain_core.documents import Document
document_1 = Document(
page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
metadata={"source": "tweet"},
)
document_2 = Document(
page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
metadata={"source": "news"},
)
document_3 = Document(
page_content="Building an exciting new project with LangChain - come check it out!",
metadata={"source": "tweet"},
)
document_4 = Document(
page_content="Robbers broke into the city bank and stole $1 million in cash.",
metadata={"source": "news"},
)
document_5 = Document(
page_content="Wow! That was an amazing movie. I can't wait to see it again.",
metadata={"source": "tweet"},
)
document_6 = Document(
page_content="Is the new iPhone worth the price? Read this review to find out.",
metadata={"source": "website"},
)
document_7 = Document(
page_content="The top 10 soccer players in the world right now.",
metadata={"source": "website"},
)
document_8 = Document(
page_content="LangGraph is the best framework for building stateful, agentic applications!",
metadata={"source": "tweet"},
)
document_9 = Document(
page_content="The stock market is down 500 points today due to fears of a recession.",
metadata={"source": "news"},
)
document_10 = Document(
page_content="I have a bad feeling I am going to get deleted :(",
metadata={"source": "tweet"},
)
documents = [
document_1,
document_2,
document_3,
document_4,
document_5,
document_6,
document_7,
document_8,
document_9,
document_10,
]
uuids = [str(uuid4()) for _ in range(len(documents))]
Similarity Search with Euclidean Distance (Default)
# The Kinetica Module will try to create a table with the name of the collection.
# So, make sure that the collection name is unique and the user has the permission to create a table.
COLLECTION_NAME = "langchain_example"
connection = create_config()
db = Kinetica(
connection,
embeddings,
collection_name=COLLECTION_NAME,
)
db.add_documents(documents=documents, ids=uuids)
['05e5a484-0273-49d1-90eb-1276baca31de',
'd98b808f-dc0b-4328-bdbf-88f6b2ab6040',
'ba0968d4-e344-4285-ae0f-f5199b56f9d6',
'a25393b8-6539-45b5-993e-ea16d01941ec',
'804a37e3-1278-4b60-8b02-36b159ee8c1a',
'9688b594-3dc6-41d2-a937-babf8ff24c2f',
'40f7b8fe-67c7-489a-a5a5-7d3965e33bba',
'b4fc1376-c113-41e9-8f16-f9320517bedd',
'4d94d089-fdde-442b-84ab-36d9fe0670c8',
'66fdb79d-49ce-4b06-901a-fda6271baf2a']
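The ids returned above can also be used to remove entries later. The snippet below is a hedged example: it assumes the Kinetica integration implements the standard VectorStore delete(ids=...) method, so check the langchain_community API reference before relying on it.
# Hedged example (assumption): Kinetica implements VectorStore.delete(ids=...).
# Remove the last document added above by its id.
db.delete(ids=[uuids[-1]])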
# query = "What did the president say about Ketanji Brown Jackson"
# docs_with_score = db.similarity_search_with_score(query)
print()
print("Similarity Search")
results = db.similarity_search(
"LangChain provides abstractions to make working with LLMs easy",
k=2,
filter={"source": "tweet"},
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")
print()
print("Similarity search with score")
results = db.similarity_search_with_score(
"Will it be hot tomorrow?", k=1, filter={"source": "news"}
)
for res, score in results:
    print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")
Similarity Search
* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]
* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]
Similarity search with score
* [SIM=0.945397] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]
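The collection above uses the default Euclidean (L2) distance. Because Kinetica also supports inner product and cosine distance, a different metric can in principle be chosen when the store is created. The sketch below is only an illustration under stated assumptions: it presumes the kinetica module exposes a DistanceStrategy enum and that the constructor accepts a distance_strategy keyword, as in comparable LangChain vector stores; verify both names against the installed version.
# Hedged sketch: DistanceStrategy and the distance_strategy keyword are assumptions
# borrowed from comparable LangChain vector stores; verify them before use.
from langchain_community.vectorstores.kinetica import DistanceStrategy

db_cosine = Kinetica(
    connection,
    embeddings,
    collection_name="langchain_example_cosine",
    distance_strategy=DistanceStrategy.COSINE,
)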
Working with the vector store
Above, we created a vector store from scratch. However, we often want to work with an existing vector store; to do that, we can initialize it directly.
store = Kinetica(
collection_name=COLLECTION_NAME,
config=connection,
embedding_function=embeddings,
)
Add documents
We can add documents to the existing vector store.
store.add_documents([Document(page_content="foo")])
['68c4c679-c4d9-4f2d-bf01-f6c4f2181503']
docs_with_score = db.similarity_search_with_score("foo")
docs_with_score[0]
(Document(metadata={}, page_content='foo'), 0.0015394920483231544)
docs_with_score[1]
(Document(metadata={'source': 'tweet'}, page_content='Building an exciting new project with LangChain - come check it out!'),
1.2609431743621826)
Overriding a vector store
If you have an existing collection, you can override it by calling from_documents with pre_delete_collection = True.
db = Kinetica.from_documents(
documents=documents,
embedding=embeddings,
collection_name=COLLECTION_NAME,
config=connection,
pre_delete_collection=True,
)
docs_with_score = db.similarity_search_with_score("foo")
docs_with_score[0]
(Document(metadata={'source': 'tweet'}, page_content='Building an exciting new project with LangChain - come check it out!'),
1.260920763015747)
Using a VectorStore as a Retriever
retriever = store.as_retriever()
print(retriever)
tags=['Kinetica', 'OpenAIEmbeddings'] vectorstore=<langchain_community.vectorstores.kinetica.Kinetica object at 0x7a48142b2230> search_kwargs={}
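The retriever can then be queried like any other LangChain retriever; invoke returns a list of matching Documents. A minimal usage example (the query string is illustrative):
# Return only the single best match and run an example query.
retriever = store.as_retriever(search_kwargs={"k": 1})
retriever.invoke("Tell me about LangChain")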