from langchain_community.embeddings import QuantizedBiEncoderEmbeddings

model_name = "Intel/bge-small-en-v1.5-rag-int8-static"
encode_kwargs = {"normalize_embeddings": True}  # set to True to compute cosine similarity

model = QuantizedBiEncoderEmbeddings(
    model_name=model_name,
    encode_kwargs=encode_kwargs,
    query_instruction="Represent this sentence for searching relevant passages: ",
)
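A minimal sketch (plain PyTorch, no model download required) of why `normalize_embeddings=True` matters here: once vectors are scaled to unit length, a plain dot product equals cosine similarity, which is what the retrieval step below relies on.

```python
import torch
import torch.nn.functional as F

# Two toy vectors standing in for embeddings.
a = torch.tensor([3.0, 4.0])
b = torch.tensor([1.0, 0.0])

# Scale each to unit length, as normalize_embeddings=True does.
a_n = F.normalize(a, dim=0)  # [0.6, 0.8]
b_n = F.normalize(b, dim=0)  # [1.0, 0.0]

# Dot product of normalized vectors == cosine similarity of the originals.
dot = (a_n @ b_n).item()
cos = F.cosine_similarity(a.unsqueeze(0), b.unsqueeze(0)).item()
print(dot, cos)  # both 0.6
```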
loading configuration file inc_config.json from cache at
INCConfig {
  "distillation": {},
  "neural_compressor_version": "2.4.1",
  "optimum_version": "1.16.2",
  "pruning": {},
  "quantization": {
    "dataset_num_samples": 50,
    "is_static": true
  },
  "save_onnx_model": false,
  "torch_version": "2.2.0",
  "transformers_version": "4.37.2"
}
Using `INCModel` to load a TorchScript model will be deprecated in v1.15.0, to load your model please use `IPEXModel` instead.
question = "How many people live in Berlin?"
documents = [
    "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "Berlin is well known for its museums.",
]
doc_vecs = model.embed_documents(documents)
Batches: 100%|██████████| 1/1 [00:00<00:00, 4.18it/s]
query_vec = model.embed_query(question)
import torch
doc_vecs_torch = torch.tensor(doc_vecs)
query_vec_torch = torch.tensor(query_vec)
query_vec_torch @ doc_vecs_torch.T
tensor([0.7980, 0.6529])
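Because the embeddings are normalized, these dot products are cosine similarities, so the highest score marks the most relevant document. A small follow-up sketch (plain PyTorch, reusing the scores printed above rather than re-running the model) shows how to turn them into a ranking:

```python
import torch

# Similarity scores from the dot product above (cosine similarities,
# since the embeddings were normalized).
scores = torch.tensor([0.7980, 0.6529])

documents = [
    "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "Berlin is well known for its museums.",
]

# Sort indices by score, highest first, and pick the best match.
ranking = torch.argsort(scores, descending=True)
best = documents[ranking[0].item()]
print(best)  # the population sentence is the closest match to the query
```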