

Siri LLama

Siri LLama is an Apple Shortcut that gives you access to locally running LLMs through Siri or the Shortcuts UI on any Apple device connected to the same network as your host machine. It uses LangChain 🦜🔗 and supports open-source models from both Ollama 🦙 and Fireworks AI 🎆.

Demo Video 🎬

🆕 Multimodal support 🎬

Getting Started

Ollama Installation 🦙

  1. Install Ollama on your machine. You have to run ollama serve in a terminal to start the server.

  2. Pull the models you want to use, for example:

ollama run llama3 # chat model
ollama run llava # multimodal

  3. Install LangChain and Flask:

pip install --upgrade --quiet langchain langchain-community
pip install flask

  4. In ollama_models.py, set chat_model and vchat_model to the models you pulled from Ollama (see the sketch after this list).
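For reference, here is a minimal sketch of what ollama_models.py could look like, assuming LangChain's ChatOllama wrapper and the two models pulled above. The variable names chat_model and vchat_model come from the repo; everything else is illustrative:

```python
# ollama_models.py: minimal sketch; the real file in the repo may differ.
from langchain_community.chat_models import ChatOllama

# Model names must match what you pulled with `ollama run ...`.
chat_model = ChatOllama(model="llama3")   # text-only chat
vchat_model = ChatOllama(model="llava")   # multimodal (image + text) chat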

Fireworks AI Installation 🎆

  1. Install LangChain and Flask:

pip install --upgrade --quiet langchain langchain-fireworks
pip install flask

  2. Get your Fireworks API key and put it in fireworks_models.py.

  3. In fireworks_models.py, set chat_model and vchat_model to the models you want to use from Fireworks AI (see the sketch after this list).
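As a rough sketch only (the model IDs and key handling shown here are assumptions; check the Fireworks AI model catalog for current names), fireworks_models.py could look like this:

```python
# fireworks_models.py: minimal sketch; the real file in the repo may differ.
import os
from langchain_fireworks import ChatFireworks

# The README says to put your API key in this file; ChatFireworks also
# reads the FIREWORKS_API_KEY environment variable if it is already set.
os.environ["FIREWORKS_API_KEY"] = "your-fireworks-api-key"

# Hypothetical model IDs; pick any chat / vision models Fireworks hosts.
chat_model = ChatFireworks(model="accounts/fireworks/models/llama-v3-8b-instruct")
vchat_model = ChatFireworks(model="accounts/fireworks/models/firellava-13b")
```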

Running SiriLlama 🟣🦙

  1. After setting the provider (Ollama / Fireworks), run the Flask app (its role is sketched after this list):

python3 app.py

  2. On your Apple device, download the shortcut from here.

  3. Run the shortcut through Siri or the Shortcuts UI. The first time you run the shortcut, you will be asked to enter your IP address and the port number.
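For orientation, the flow is: the shortcut sends your prompt over HTTP to the Flask app, which forwards it to the selected chat model and returns the reply. Below is a minimal sketch of such an endpoint; the route name, request shape, and port are assumptions, so see app.py in the repo for the actual implementation:

```python
# Hypothetical sketch of the Flask server the shortcut talks to.
from flask import Flask, request
from ollama_models import chat_model  # or: from fireworks_models import chat_model

app = Flask(__name__)

@app.route("/", methods=["POST"])
def chat():
    # The shortcut posts the spoken/typed prompt; we return the model's reply.
    prompt = request.get_json(force=True).get("prompt", "")
    reply = chat_model.invoke(prompt)  # LangChain chat models expose .invoke()
    return reply.content

if __name__ == "__main__":
    # Bind to 0.0.0.0 so other devices on your network can reach the server.
    app.run(host="0.0.0.0", port=5000)
```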

Common Issues 🐞

  • Even though we access the Flask app (not the Ollama server directly), some Windows users who have Ollama installed under WSL have to make sure the Ollama server is exposed to the network. Check this issue for more details.
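A common fix (assuming a standard Ollama setup; defer to the linked issue for specifics) is to start the server bound to all interfaces, e.g. OLLAMA_HOST=0.0.0.0 ollama serve, so that requests from outside WSL can reach it.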
