
Invoice data processing LLM RAG on CPU with Ollama and ChromaDB

Easy-to-Follow RAG Pipeline Tutorial: Invoice Processing with ChromaDB & LangChain

Secure and Private: On-Premise Invoice Processing with LangChain and Ollama RAG


Quickstart

The RAG pipeline runs fully offline on a local CPU.

  1. Install the requirements:

     pip install -r requirements.txt

  2. Install Ollama and pull the LLM model specified in config.yml (an illustrative config sketch follows this list).

  3. Copy text PDF files to the data folder.

  4. Run the ingestion script to convert the text into vector embeddings and save them in Chroma vector storage (see the ingestion sketch below):

     python ingest.py

  5. Run the main script to process the data with the LLM RAG pipeline and return the answer (see the query sketch below):

     python main.py "What is the invoice number value?"
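
For orientation, here is what a minimal config.yml for step 2 might look like. This is an illustrative sketch only: every key name below is an assumption, not the repo's actual schema, so check the config.yml shipped with the repository for the real fields.

```yaml
# Illustrative sketch only: key names are assumptions, not the repo's schema.
llm: mistral                                             # Ollama model to pull, e.g. `ollama pull mistral`
embedding_model: sentence-transformers/all-MiniLM-L6-v2  # local embedding model
data_path: data                                          # folder the PDFs are copied into (step 3)
db_path: vectorstore                                     # directory where Chroma persists embeddings
chunk_size: 1000                                         # characters per text chunk
chunk_overlap: 100                                       # overlap between consecutive chunks
```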

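To make step 4 concrete, here is a minimal sketch of what an ingestion script like ingest.py typically does with LangChain and Chroma. It is not the repo's actual ingest.py: the folder names, embedding model, and chunking parameters are assumptions, and it presumes langchain, langchain-community, chromadb, sentence-transformers, and pypdf are installed.

```python
# Minimal ingestion sketch, not the repo's actual ingest.py.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PyPDFDirectoryLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

DATA_DIR = "data"       # folder holding the text PDFs (step 3)
DB_DIR = "vectorstore"  # assumed Chroma persistence directory

# Load every PDF in the data folder as LangChain documents.
documents = PyPDFDirectoryLoader(DATA_DIR).load()

# Split pages into overlapping chunks so each embedding covers a small span.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(documents)

# Embed the chunks locally and persist them to a Chroma collection on disk.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
Chroma.from_documents(chunks, embeddings, persist_directory=DB_DIR)
print(f"Ingested {len(chunks)} chunks into {DB_DIR}")
```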
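
In the same spirit, a minimal sketch of the query step (step 5): reopen the persisted Chroma store with the same embedding model, then answer the question with a local Ollama model through a stuff-type RetrievalQA chain. Again a sketch under the assumptions above, not the repo's actual main.py; the repo reads the model name from config.yml rather than hard-coding it.

```python
# Minimal query sketch, not the repo's actual main.py.
import sys

from langchain.chains import RetrievalQA
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma

DB_DIR = "vectorstore"  # must match the directory used at ingestion time

# Reopen the persisted store with the same embedding model used by ingest.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma(persist_directory=DB_DIR, embedding_function=embeddings)

# Local LLM served by Ollama; the model name should match the one pulled in step 2.
llm = Ollama(model="mistral")

# "stuff" the top-k retrieved chunks into the prompt and ask the LLM.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 4}),
)

question = sys.argv[1] if len(sys.argv) > 1 else "What is the invoice number value?"
print(qa.invoke({"query": question})["result"])
```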