RAG CLI built upon 🦜🔗LangChain and 🦙Ollama to chat with your private PDF documents.
- Prepare your dataset: put all the PDF documents you want to query in the `./data/dataset` directory.
- Start the Ollama container:

  ```shell
  docker compose up -d
  ```

  (You can update the `OLLAMA_MODEL` environment variable defined in `.env` to pull the model of your choice.)
- Create the embeddings and query your custom LLM:

  ```shell
  poetry install && poetry run main
  ```
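For reference, the `OLLAMA_MODEL` variable lives in the repository's `.env` file. A minimal sketch of what that entry might look like (`llama3` is an illustrative model name, not necessarily the project's default — check the actual `.env`):

```shell
# .env — model the Ollama container will pull (name shown here is an assumption)
OLLAMA_MODEL=llama3
```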
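To give an intuition for what the "create the embeddings and query" step does under the hood, here is a toy, self-contained sketch of a RAG retrieval loop: documents are split into overlapping chunks, each chunk is embedded, and the chunks closest to the query embedding are returned as context for the LLM. The bag-of-words "embedding" below is a stand-in for a real embedding model served by Ollama; none of these function names come from this project's code.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase word counts. A real pipeline would call an
    # embedding model (e.g. one served by Ollama) here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    # Fixed-size character chunks with overlap, so text split at a chunk
    # boundary still appears whole in at least one chunk.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Return the k chunks most similar to the query embedding.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("Ollama serves local LLMs. LangChain wires models to data. "
        "PDFs are split into chunks before embedding.")
top = retrieve("how are PDFs embedded", chunk(docs))
```

The retrieved chunks would then be prepended to the user's question in the prompt sent to the local model, which is the core idea behind chatting with your own documents.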