Add support for custom endpoints or local LLM endpoints #198
Comments
@jagad89 We don't support this out of the box. The plan is to enable local endpoints whose APIs are compatible with OpenAI's. You could then create a provider with the base URL set to your endpoint, similar to the example below, and change the providers of your models in the Settings accordingly. Would this meet your requirements? Could you tell us more about which local LLMs you are using and what scenarios you plan to use DevChat for? In any event, we will let you know when the feature described above is ready. TODOs:
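The configuration example originally referenced here is not included in this thread. As a rough sketch of the idea, assuming a local server that speaks the OpenAI chat-completions API, overriding the client's base URL is all that is needed; the URL, key, and model name below are placeholders, not DevChat settings:

```python
# Minimal sketch (not DevChat's actual settings schema): any OpenAI-compatible
# local server can be reached by overriding the client's base URL.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed-for-local",       # many local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # whatever model name the local server exposes
    messages=[{"role": "user", "content": "Hello from the intranet"}],
)
print(response.choices[0].message.content)
```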
I will try the settings with a few custom and local endpoints and keep you posted in this thread. Basically, this would let us use a local LLM over the intranet without an internet connection.
I would like to use this extension with a local LLM, so I don't have to sign up for a paid service. Ollama is a great way to set up a local endpoint on a server or developer workstation.
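Recent Ollama versions expose an OpenAI-compatible route at `/v1/chat/completions` on the default port 11434, which would satisfy the "compatible with OpenAI's API" condition above. A quick way to check this against a running instance, assuming a model such as `llama3` has already been pulled (model name is an example, not a requirement):

```python
# Sketch: a quick compatibility check against a local Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",  # Ollama's OpenAI-compatible route
    json={
        "model": "llama3",  # example model; use whatever you have pulled locally
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```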