
Add Ollama #646 (Draft)

jtpio wants to merge 3 commits into main
Conversation

@jtpio (Member) commented Feb 16, 2024

Fixes #482

Ollama seems to be getting popular for running models locally, so it looks like a good integration to ship in Jupyter AI by default.

  • Add OllamaProvider
  • Add OllamaEmbeddingsProvider
  • Expand the list of available models
  • Add documentation
  • Mark as experimental, as was done for GPT4All?

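For reference, a minimal sketch of what the provider could look like, assuming it follows the pattern of the existing providers in `jupyter_ai_magics` and builds on LangChain's community Ollama wrapper (class and field values here are illustrative, not the final implementation):

```python
# Hypothetical sketch only; assumes the provider pattern used elsewhere in
# jupyter_ai_magics and LangChain's community Ollama LLM wrapper.
from langchain_community.llms import Ollama

from jupyter_ai_magics import BaseProvider


class OllamaProvider(BaseProvider, Ollama):
    id = "ollama"
    name = "Ollama"
    model_id_key = "model"
    # Starter list only; Ollama supports many more models and fine-tunes.
    models = ["llama2", "mistral", "codellama"]
```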

@jtpio added the enhancement (New feature or request) label on Feb 16, 2024
@jtpio (Member, Author) commented Feb 16, 2024

I guess it would also be fine to have it as a custom model provider in a separate package, if we don't want to maintain it here in this repo.

@dlqqq (Collaborator) commented Feb 17, 2024

Jeremy, thank you very much for working on this PR and for keeping up with the open source LLM ecosystem. I lack the time to engage there as much as I would like.

Let me know when this PR is ready, and I will approve & merge it once I run it locally. 👍

@lalanikarim commented

Looking forward to this integration.
Would it be feasible to make the list of supported models configurable, given that there are numerous fine-tunes out there that would be viable candidates with Ollama?

@jtpio (Member, Author) commented Feb 20, 2024

Thanks @dlqqq and @lalanikarim!

Yeah, I'll try to finish the PR soon. It was originally opened early to gauge interest in built-in support for Ollama in jupyter-ai.

> Would it be feasible to make the list of supported models configurable, given that there are numerous fine-tunes out there that would be viable candidates with Ollama?

Currently jupyter_ai seems to have support for allowing and blocking models:

```python
allowed_models = List(
    Unicode(),
    default_value=None,
    help="""
    Language models to allow, as a list of global model IDs in the format
    `<provider>:<local-model-id>`. If `None`, all are allowed. Defaults to
    `None`.

    Note: Currently, if `allowed_providers` is also set, then this field is
    ignored. This is subject to change in a future non-major release. Using
    both traits is considered to be undefined behavior at this time.
    """,
    allow_none=True,
    config=True,
)
```

Although I'm not sure whether that would be enough to use models that have not been added to the list of models in jupyter_ai.
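For context, that trait would be set like any other Jupyter server setting, e.g. in `jupyter_server_config.py` (the model IDs below are hypothetical; they assume Ollama models registered under an `ollama` provider prefix):

```python
# Hypothetical config snippet; the allowed_models trait lives on the
# AiExtension configurable, like the other jupyter-ai server settings.
c.AiExtension.allowed_models = ["ollama:mistral", "ollama:llama2"]
```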

@startakovsky (Contributor) commented Feb 24, 2024

🙇 @jtpio excited for this

@lalanikarim commented

Since Ollama exposes an OpenAI-compatible REST API, and jupyter-ai supports setting a base URL to override the default OpenAI URL, I created new local models based on models supported by Ollama and named them to match OpenAI models.

```
$ cat mistral-gpt-4.modelfile
FROM mistral

$ ollama create gpt-4 -f mistral-gpt-4.modelfile
```

This hack currently allows me to use Ollama with jupyter-ai.

Looking forward to integration to support local models hosted with Ollama.
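For anyone reproducing this, a sketch of the remaining setup, assuming a default local Ollama install; Ollama serves its OpenAI-compatible API under `/v1`, and the API key can be any dummy value since Ollama ignores it:

```
$ export OPENAI_API_KEY=sk-dummy
$ export OPENAI_API_BASE=http://localhost:11434/v1
```

(As discussed below, the chat UI takes the base URL and key from its own settings rather than from these environment variables.)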


@triinity2221 commented

@lalanikarim - I am also replicating a similar setup. Just curious, are you able to use the /learn and /generate commands in the chat?

@lalanikarim commented

> @lalanikarim - I am also replicating a similar setup. Just curious, are you able to use the /learn and /generate commands in the chat?

I haven't had any luck with /generate. I run into pydantic errors:

```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Outline
sections
  field required (type=value_error.missing)
```

I haven't tried /learn yet.

@lalanikarim commented

@jtpio I am wondering if it would make sense to provide the model name in a free-form text field for Ollama models, and to require the %%ai magic invocation to include the model name, rather than limiting models to a predefined list.

```
%%ai ollama:mistral
```

Thoughts?
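For example (the prompt text here is just an illustration), the magic invocation would then accept any locally pulled model name:

```
%%ai ollama:mistral
Explain what a Python generator is, in two sentences.
```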

@jtpio (Member, Author) commented Mar 5, 2024

> @jtpio I am wondering if it would make sense to provide the model name in a free-form text field for Ollama models, and to require the %%ai magic invocation to include the model name, rather than limiting models to a predefined list.

Yeah, given the number of available models it would be very difficult to pick just a few in the jupyter-ai package, and that would also require updating Jupyter AI regularly, increasing the maintenance load.

So yes, having a way to let users configure the list of models sounds like a good idea.

@siliconberry commented

> Since Ollama exposes an OpenAI-compatible REST API, and jupyter-ai supports setting a base URL to override the default OpenAI URL, I created new local models based on models supported by Ollama and named them to match OpenAI models. [...] This hack currently allows me to use Ollama with jupyter-ai.

Hi, thanks for the hack, it's working. But I'm curious: how did you set the OpenAI API key? It works inside the chat box (Jupyternaut) but not inside the Jupyter notebook, where it asks for OPENAI_API_KEY. Can you please help?

@Mrjaggu commented Mar 13, 2024

Any update on this PR for adding Ollama to Jupyter AI?

@jtpio (Member, Author) commented Mar 13, 2024

I'll try to finish this PR soon, and provide a way to configure the list of models (+ docs).

@dannongruver commented

@jtpio: Thank you for this Ollama integration, Jeremy! Also looking forward to configurable models. Let me know where I can buy you a coffee.

@lalanikarim commented

> Hi, thanks for the hack, it's working. But I'm curious: how did you set the OpenAI API key? It works inside the chat box (Jupyternaut) but not inside the Jupyter notebook, where it asks for OPENAI_API_KEY. Can you please help?

@siliconberry You can put any value in OPENAI_API_KEY. It is needed for this hack to work but will be ignored by Ollama.

@dannongruver commented

@lalanikarim, in answer to @siliconberry's question: have you gotten both the chat (Jupyternaut) and the notebook cell (i.e. the %%ai magic) working with Ollama?

I'm struggling to get the notebook working. When I set the OPENAI_API_KEY environment variable to a dummy key (sk-abcd1234), the notebook gives an error indicating that ChatGPT doesn't accept the key. Jupyternaut chat works fine. It seems like either jupyter_ai's magic is not using config.json the way Jupyternaut does, or Ollama doesn't support the full ChatGPT API? I don't know, maybe I'm missing something.

@dannongruver commented

Never mind, @lalanikarim @siliconberry. The %%ai magic does work.

Note: the magic doesn't use config.json; it reads the key and URL from environment variables.
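A minimal notebook setup along those lines (assuming the magic reads the standard OpenAI environment variables, and the `/v1` endpoint of a default local Ollama install):

```python
# Hypothetical notebook cell; the values are assumptions, not jupyter-ai defaults.
import os

os.environ["OPENAI_API_KEY"] = "sk-dummy"  # any value; Ollama ignores it
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"

%load_ext jupyter_ai_magics
```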

@cboettig commented

Just joining the chorus of folks who would be excited to see this, especially given the rate at which new open-weights models keep appearing for Ollama (llama3, phi3, wizardlm2). Ollama's use of the local GPU also seems a lot smoother than GPT4All's.

@orkutmuratyilmaz commented

I'm so excited for this! 😍😍😍

Labels: enhancement (New feature or request)

Successfully merging this pull request may close these issues: ollama

10 participants