Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
Updated Mar 13, 2024 · Rust
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
AubAI brings you on-device gen-AI capabilities, including offline text generation and more, directly within your app.
Social and customizable AI writing assistant! ✍️
Run GGUF LLM models in the latest version of TextGen-webui.
An experimental local web-search assistant (frontend and CLI) for ollama and llama.cpp, focused on being extremely lightweight and easy to run. The goal is something along the lines of a minimalist Perplexity.
A Thunderbird mail client extension that summarizes received emails via a locally run LLM. In early development.