
📌 mistral-haystack collection

Mistral + Haystack Collection: build RAG pipelines that rock 🤘

Collection of notebooks and resources to build Retrieval-Augmented Generation (RAG) pipelines (a minimal sketch follows this list) using:

  • Mistral models 🤖
  • Haystack LLM orchestration framework 🏗️
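
To give a flavor of what such a pipeline looks like, here is a minimal sketch using Haystack 2.x with a Mistral model served by the free Hugging Face Inference API. The toy documents, prompt template, and generator parameters are illustrative assumptions; the notebooks below are the authoritative, complete versions:

```python
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import HuggingFaceAPIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.utils import Secret

# Index a few toy documents in an in-memory store.
store = InMemoryDocumentStore()
store.write_documents([
    Document(content="Jimi Hendrix closed the Woodstock festival in 1969."),
    Document(content="Queen released 'Bohemian Rhapsody' in 1975."),
])

# Prompt template: retrieved documents are stuffed into the context.
template = """Answer the question using the context below.
Context:
{% for doc in documents %}- {{ doc.content }}
{% endfor %}
Question: {{ question }}
Answer:"""

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", HuggingFaceAPIGenerator(
    api_type="serverless_inference_api",
    api_params={"model": "mistralai/Mistral-7B-Instruct-v0.1"},
    token=Secret.from_env_var("HF_API_TOKEN"),  # needs a free Hugging Face token
))
pipe.connect("retriever.documents", "prompt_builder.documents")
pipe.connect("prompt_builder.prompt", "llm.prompt")

question = "Who closed Woodstock in 1969?"
result = pipe.run({"retriever": {"query": question},
                   "prompt_builder": {"question": question}})
print(result["llm"]["replies"][0])
```

Note that the 1.x notebooks use Haystack's older node-based API rather than components and connections, so refer to each notebook for version-appropriate code.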

💻 For other great Haystack notebooks, check out the 👩🏻‍🍳 Haystack Cookbook

📓 Notebooks

| Model | Haystack version | Link | Details | Author |
|---|---|---|---|---|
| Mistral-7B-Instruct-v0.1 | 1.x | 🎸 Notebook | RAG on a collection of Rock music resources, using the free Hugging Face Inference API | @anakin87 |
| Mixtral-8x7B-Instruct-v0.1 | 1.x | 📄🚀 Notebook | RAG on a PDF file, using the free Hugging Face Inference API | @AlessandroDiLauro |
| Mixtral-8x7B-Instruct-v0.1 | 1.x | 🛒 Notebook, 📊🔍 Blog post | RAG from a CSV file, product description analysis | @AlessandroDiLauro |
| Mixtral-8x7B-Instruct-v0.1 | 2.x | 🕸️💬 Notebook | RAG on the Web, using the free Hugging Face Inference API | @TuanaCelik |
| Zephyr-7B Beta | 2.x | 🪁 Article and notebook | Article on how to make this great model (fine-tuned from Mistral) run locally on Colab | @TuanaCelik @anakin87 |
| Mixtral-8x7B-Instruct-v0.1 | 2.x | 🩺💬 Article and notebook | Healthcare chatbot with Mixtral, Haystack, and PubMed | @annthurium |
| Mixtral-8x7B-Instruct-v0.1 | 2.x | 🇮🇹🇬🇧🎧 Notebook | Multilingual RAG from a podcast | @anakin87 |
| Mixtral-8x7B-Instruct-v0.1 | 2.x | 📰 Notebook | Building a Hacker News Top Stories TL;DR | @TuanaCelik |

📚 Resources

  • Mixture of Experts Explained

    A great, in-depth blog post by Hugging Face on the Mixture of Experts (MoE) architecture, which is the basis of Mixtral 8x7B (a toy routing sketch follows this list).

  • Zephyr: Direct Distillation of LM Alignment

    Technical report by the Hugging Face H4 team. They explain how they trained Zephyr, a strong 7B model fine-tuned from Mistral.

    The main topic: ⚗️ how to effectively distill the capabilities of GPT-4 into smaller models? The report is insightful and well worth reading. I have summarized it here. (The core preference-optimization objective is reproduced after this list.)
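
As a companion to the MoE post above, here is a toy, self-contained sketch of top-2 expert routing, the mechanism at the heart of Mixtral. All the details (dimensions, the linear stand-in "experts", the random router) are illustrative assumptions, not Mixtral's actual implementation:

```python
import numpy as np

# Toy top-2 Mixture-of-Experts routing (illustrative; not Mixtral's real code).
# A learned router scores each expert for a token; only the top-2 experts run,
# and their outputs are combined with the renormalized router weights.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

W_router = rng.normal(size=(d_model, n_experts))  # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # stand-in experts

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token):
    scores = token @ W_router             # one score per expert
    top = np.argsort(scores)[-top_k:]     # pick the top-2 experts
    weights = softmax(scores[top])        # renormalize over the chosen experts
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.normal(size=d_model)).shape)  # -> (8,)
```

Because only 2 of the n experts run per token, compute scales with the active experts while the parameter count scales with all of them, which is why Mixtral 8x7B is cheaper to run than a dense model of comparable total size.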
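
And for the Zephyr report: its dDPO stage optimizes the standard Direct Preference Optimization objective, reproduced here from the DPO literature (see the report for the exact setup):

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\, \pi_{\mathrm{ref}}) =
-\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
\left[ \log \sigma\!\left(
  \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
  - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right) \right]
```

Here $y_w$ and $y_l$ are the preferred and rejected completions (ranked by GPT-4 in Zephyr's AI-feedback setup), $\pi_{\mathrm{ref}}$ is the supervised fine-tuned (dSFT) model, and $\beta$ controls how far the policy may drift from it.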