RAG with LangChain
Step | Tech | Execution |
---|---|---|
Embedding | Hugging Face / Sentence Transformers | 💻 Local |
Vector store | Milvus | 💻 Local |
Gen AI | Hugging Face Inference API | 🌐 Remote |
This example leverages the LangChain Docling integration, along with a Milvus vector store and sentence-transformers embeddings.

The presented `DoclingLoader` component enables you to:

- use various document types in your LLM applications with ease and speed, and
- leverage Docling's rich format for advanced, document-native grounding.
`DoclingLoader` supports two different export modes:

- `ExportType.MARKDOWN`: if you want to capture each input document as a separate LangChain document, or
- `ExportType.DOC_CHUNKS` (default): if you want to have each input document chunked and then capture each individual chunk as a separate LangChain document downstream.

The example allows exploring both modes via the parameter `EXPORT_TYPE`; depending on the value set, the example pipeline is set up accordingly.
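The practical difference between the two modes can be sketched with a small stand-in (illustrative only: `Mode` and `documents_produced` are hypothetical helpers mirroring `ExportType`, not part of `langchain-docling`):

```python
from enum import Enum

class Mode(Enum):
    # Hypothetical stand-in for langchain_docling.loader.ExportType
    MARKDOWN = "markdown"
    DOC_CHUNKS = "doc_chunks"

def documents_produced(n_inputs: int, chunks_per_doc: int, mode: Mode) -> int:
    """Number of LangChain documents the loader would yield downstream."""
    if mode == Mode.MARKDOWN:
        # One LangChain document per input file (full Markdown export).
        return n_inputs
    # One LangChain document per Docling chunk.
    return n_inputs * chunks_per_doc

print(documents_produced(2, 10, Mode.MARKDOWN))   # 2
print(documents_produced(2, 10, Mode.DOC_CHUNKS)) # 20
```

With `DOC_CHUNKS` the chunking happens inside the loader, so no extra text splitter is needed afterwards; with `MARKDOWN` you split downstream, as shown later in this example.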
Setup
- 👉 For best conversion speed, use GPU acceleration wherever available; e.g. if running on Colab, use a GPU-enabled runtime.
- The notebook uses HuggingFace's Inference API; for increased LLM quota, a token can be provided via the environment variable `HF_TOKEN`.
- Requirements can be installed as shown below (`--no-warn-conflicts` is meant for Colab's pre-populated Python environment; feel free to remove for stricter usage):
In [1]:
%pip install -q --progress-bar off --no-warn-conflicts langchain-docling langchain-core langchain-huggingface langchain_milvus langchain python-dotenv
Note: you may need to restart the kernel to use updated packages.
In [2]:
import os
from pathlib import Path
from tempfile import mkdtemp
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_docling.loader import ExportType
def _get_env_from_colab_or_os(key):
try:
from google.colab import userdata
try:
return userdata.get(key)
except userdata.SecretNotFoundError:
pass
except ImportError:
pass
return os.getenv(key)
load_dotenv()
# https://github.com/huggingface/transformers/issues/5486:
os.environ["TOKENIZERS_PARALLELISM"] = "false"
HF_TOKEN = _get_env_from_colab_or_os("HF_TOKEN")
FILE_PATH = ["https://arxiv.org/pdf/2408.09869"] # Docling Technical Report
EMBED_MODEL_ID = "sentence-transformers/all-MiniLM-L6-v2"
GEN_MODEL_ID = "mistralai/Mixtral-8x7B-Instruct-v0.1"
EXPORT_TYPE = ExportType.DOC_CHUNKS
QUESTION = "Which are the main AI models in Docling?"
PROMPT = PromptTemplate.from_template(
"Context information is below.\n---------------------\n{context}\n---------------------\nGiven the context information and not prior knowledge, answer the query.\nQuery: {input}\nAnswer:\n",
)
TOP_K = 3
MILVUS_URI = str(Path(mkdtemp()) / "docling.db")
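The `PROMPT` defined above constrains the model to answer from retrieved context only. What the template produces at query time can be sketched with plain string formatting (a stand-in for `PromptTemplate`'s substitution; the context and question below are illustrative):

```python
# Same template string as PROMPT above; str.format() mimics what
# PromptTemplate substitutes for {context} and {input} at query time.
template = (
    "Context information is below.\n---------------------\n{context}\n"
    "---------------------\nGiven the context information and not prior "
    "knowledge, answer the query.\nQuery: {input}\nAnswer:\n"
)
filled = template.format(
    context="Docling converts PDFs into a rich structured format.",
    input="What does Docling do?",
)
print(filled)
```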
Document loading
Now we can instantiate our loader and load the documents.
In [3]:
from langchain_docling import DoclingLoader
from docling.chunking import HybridChunker
loader = DoclingLoader(
file_path=FILE_PATH,
export_type=EXPORT_TYPE,
chunker=HybridChunker(tokenizer=EMBED_MODEL_ID),
)
docs = loader.load()
Token indices sequence length is longer than the specified maximum sequence length for this model (1041 > 512). Running this sequence through the model will result in indexing errors
Note: a message saying `"Token indices sequence length is longer than the specified maximum sequence length..."` can be ignored in this case (more details here).
Determining the splits:
In [4]:
if EXPORT_TYPE == ExportType.DOC_CHUNKS:
splits = docs
elif EXPORT_TYPE == ExportType.MARKDOWN:
from langchain_text_splitters import MarkdownHeaderTextSplitter
splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=[
("#", "Header_1"),
("##", "Header_2"),
("###", "Header_3"),
],
)
splits = [split for doc in docs for split in splitter.split_text(doc.page_content)]
else:
raise ValueError(f"Unexpected export type: {EXPORT_TYPE}")
Inspecting some sample splits:
In [5]:
for d in splits[:3]:
print(f"- {d.page_content=}")
print("...")
- d.page_content='arXiv:2408.09869v5 [cs.CL] 9 Dec 2024'
- d.page_content='Docling Technical Report\nVersion 1.0\nChristoph Auer Maksym Lysak Ahmed Nassar Michele Dolfi Nikolaos Livathinos Panos Vagenas Cesar Berrospi Ramis Matteo Omenetti Fabian Lindlbauer Kasper Dinkla Lokesh Mishra Yusik Kim Shubham Gupta Rafael Teixeira de Lima Valery Weber Lucas Morin Ingmar Meijer Viktor Kuropiatnyk Peter W. J. Staar\nAI4K Group, IBM Research Rüschlikon, Switzerland'
- d.page_content='Abstract\nThis technical report introduces Docling , an easy to use, self-contained, MITlicensed open-source package for PDF document conversion. It is powered by state-of-the-art specialized AI models for layout analysis (DocLayNet) and table structure recognition (TableFormer), and runs efficiently on commodity hardware in a small resource budget. The code interface allows for easy extensibility and addition of new features and models.'
...
Ingestion
In [6]:
import json
from pathlib import Path
from tempfile import mkdtemp
from langchain_huggingface.embeddings import HuggingFaceEmbeddings
from langchain_milvus import Milvus
embedding = HuggingFaceEmbeddings(model_name=EMBED_MODEL_ID)
milvus_uri = str(Path(mkdtemp()) / "docling.db") # or set as needed
vectorstore = Milvus.from_documents(
documents=splits,
embedding=embedding,
collection_name="docling_demo",
connection_args={"uri": milvus_uri},
index_params={"index_type": "FLAT"},
drop_old=True,
)
RAG
In [7]:
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_huggingface import HuggingFaceEndpoint
retriever = vectorstore.as_retriever(search_kwargs={"k": TOP_K})
llm = HuggingFaceEndpoint(
repo_id=GEN_MODEL_ID,
huggingfacehub_api_token=HF_TOKEN,
)
def clip_text(text, threshold=100):
return f"{text[:threshold]}..." if len(text) > threshold else text
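The `clip_text` helper defined above simply truncates long strings for display, appending an ellipsis when the threshold is exceeded:

```python
def clip_text(text, threshold=100):
    # Same helper as above: keep the first `threshold` characters
    # and mark truncation with an ellipsis.
    return f"{text[:threshold]}..." if len(text) > threshold else text

print(clip_text("short"))                  # short
print(clip_text("x" * 120, threshold=10))  # first 10 chars + "..."
```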
Note: Environment variable `HF_TOKEN` is set and is the current active token independently from the token you've just configured.
In [8]:
question_answer_chain = create_stuff_documents_chain(llm, PROMPT)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
resp_dict = rag_chain.invoke({"input": QUESTION})
clipped_answer = clip_text(resp_dict["answer"], threshold=200)
print(f"Question:\n{resp_dict['input']}\n\nAnswer:\n{clipped_answer}")
for i, doc in enumerate(resp_dict["context"]):
print()
print(f"Source {i + 1}:")
print(f" text: {json.dumps(clip_text(doc.page_content, threshold=350))}")
for key in doc.metadata:
if key != "pk":
val = doc.metadata.get(key)
clipped_val = clip_text(val) if isinstance(val, str) else val
print(f" {key}: {clipped_val}")
Question:
Which are the main AI models in Docling?

Answer:
Docling initially releases two AI models, a layout analysis model and TableFormer. The layout analysis model is an accurate object-detector for page elements, and TableFormer is a state-of-the-art tab...

Source 1:
  text: "3.2 AI models\nAs part of Docling, we initially release two highly capable AI models to the open-source community, which have been developed and published recently by our team. The first model is a layout analysis model, an accurate object-detector for page elements [13]. The second model is TableFormer [12, 9], a state-of-the-art table structure re..."
  dl_meta: {'schema_name': 'docling_core.transforms.chunker.DocMeta', 'version': '1.0.0', 'doc_items': [{'self_ref': '#/texts/50', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 3, 'bbox': {'l': 108.0, 't': 405.1419982910156, 'r': 504.00299072265625, 'b': 330.7799987792969, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 608]}]}], 'headings': ['3.2 AI models'], 'origin': {'mimetype': 'application/pdf', 'binary_hash': 11465328351749295394, 'filename': '2408.09869v5.pdf'}}
  source: https://arxiv.org/pdf/2408.09869

Source 2:
  text: "3 Processing pipeline\nDocling implements a linear pipeline of operations, which execute sequentially on each given document (see Fig. 1). Each document is first parsed by a PDF backend, which retrieves the programmatic text tokens, consisting of string content and its coordinates on the page, and also renders a bitmap image of each page to support ..."
  dl_meta: {'schema_name': 'docling_core.transforms.chunker.DocMeta', 'version': '1.0.0', 'doc_items': [{'self_ref': '#/texts/26', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 2, 'bbox': {'l': 108.0, 't': 273.01800537109375, 'r': 504.00299072265625, 'b': 176.83799743652344, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 796]}]}], 'headings': ['3 Processing pipeline'], 'origin': {'mimetype': 'application/pdf', 'binary_hash': 11465328351749295394, 'filename': '2408.09869v5.pdf'}}
  source: https://arxiv.org/pdf/2408.09869

Source 3:
  text: "6 Future work and contributions\nDocling is designed to allow easy extension of the model library and pipelines. In the future, we plan to extend Docling with several more models, such as a figure-classifier model, an equationrecognition model, a code-recognition model and more. This will help improve the quality of conversion for specific types of ..."
  dl_meta: {'schema_name': 'docling_core.transforms.chunker.DocMeta', 'version': '1.0.0', 'doc_items': [{'self_ref': '#/texts/76', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 5, 'bbox': {'l': 108.0, 't': 322.468994140625, 'r': 504.00299072265625, 'b': 259.0169982910156, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 543]}]}, {'self_ref': '#/texts/77', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 5, 'bbox': {'l': 108.0, 't': 251.6540069580078, 'r': 504.00299072265625, 'b': 198.99200439453125, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 402]}]}], 'headings': ['6 Future work and contributions'], 'origin': {'mimetype': 'application/pdf', 'binary_hash': 11465328351749295394, 'filename': '2408.09869v5.pdf'}}
  source: https://arxiv.org/pdf/2408.09869