
Koboldcpp's OpenAI compatible API not working #556

Open
toxicer01 opened this issue Apr 22, 2024 · 10 comments

@toxicer01

I tried to use Koboldcpp's OpenAI-compatible API in the Custom Local (OpenAI format) section, but it is not working. I entered the model name, protocol, and port number. Please let me know if you need any more information. Thank you in advance.

@brianpetro
Owner

Hi @toxicer01

It would be helpful if you could open the developer console, disable and re-enable Smart Connections, then try to use the Smart Chat again, and screenshot the logs, which should show us some errors.

Before doing this, please turn on the "Debug at startup time" setting in the Obsidian Community plugin settings. This will help make sure the logs are as detailed as possible.
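It can also help to rule out the plugin by confirming the endpoint responds on its own. Here's a minimal sketch you could paste into the developer console (this assumes Koboldcpp's default port 5001 and its OpenAI-compatible route; adjust the host/port to match your setup):

```js
// Quick check: does Koboldcpp's OpenAI-compatible endpoint respond at all?
// Assumes the default Koboldcpp port 5001; change to match your launch settings.
fetch("http://localhost:5001/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "koboldcpp", // Koboldcpp generally accepts any model name here
    messages: [{ role: "user", content: "Hello" }],
  }),
})
  .then((r) => r.json())
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
```

If that returns a completion but Smart Chat still fails, the issue is likely in the plugin's settings or request format rather than in Koboldcpp itself.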

Thanks for your help in figuring this out,
🌴

@toxicer01
Author

I attached a screenshot and the log. Thanks for the help.

Log
obsidian.md-1713833167511.log

@brianpetro
Owner

@toxicer01 thanks

Unfortunately I can't see what might be going on just from that.
Please also screenshot your custom local model settings so I can see if anything stands out there.

Thanks for your help in solving this
🌴

@toxicer01
Author

Screenshots attached. Thanks.

SharedScreenshot_1
SharedScreenshot

@brianpetro
Owner

Thanks. I should've been more specific: I meant the settings in Smart Chat.
Screenshot 2024-04-22 at 10 00 49 PM

@toxicer01
Author

toxicer01 commented Apr 23, 2024

After seeing your screenshot, I changed my settings, but it is still not working (I tried both `api` and `v1` for the path; see the quick probe below).

SharedScreenshot_1
SharedScreenshot
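
In case it helps anyone else debugging the path setting, here is a quick console probe (a sketch, assuming Koboldcpp at localhost:5001) to see which base path the local server actually answers on:

```js
// Probe both candidate base paths to see which one the server actually serves.
// Assumes Koboldcpp at localhost:5001; adjust the host/port to your setup.
for (const base of ["v1", "api"]) {
  fetch(`http://localhost:5001/${base}/models`)
    .then((r) => console.log(base, "->", r.status))
    .catch((err) => console.log(base, "-> no response:", err.message));
}
```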

@Webslug

Webslug commented May 11, 2024

I have the same problem. It just refuses to work with Kobold; I guess I will try TextGenUI next. This is disappointing, so I've decided to write my own working plugin.

@brianpetro
Owner

brianpetro commented May 11, 2024

@Webslug if you have the ability to create a whole plugin, it would have been much easier to simply contribute an adapter to the open-source smart-chat-model 🤷‍♂️

Maybe I haven't reiterated this enough lately:

My focus is on creating new abilities, UI & UX, for individuals to utilize AI. I expect the user community to help by contributing adapters for lesser-known platforms; there are simply too many options, and supporting them all myself would take time away from innovating on new things that benefit everyone.

By providing access to the most popular platforms, along with easy ways to contribute adapters for additional platforms, I can help the most people with my open-source work.

🌴

@dicksensei69

dicksensei69 commented May 12, 2024

Just want to let you know that I'm also having trouble pointing this at my local/network instance of Oobabooga Text Generation WebUI. It's a complete clone of the OpenAI interface, so I don't get what the issue is, but it's all good. I looked through some of the code and tried to hard-code it to point at my local instance, and it didn't work. I'm going to keep digging into it; maybe I'll figure out a way to get it working.
I tried just about every combination I could think of in the settings for both the custom API and the local API. The only time I could see it actually hit the Ooba instance was when I put https in, which caused an invalid request because the server required http, but at least I could see it trying to reach it. Somebody is going to figure it out soon; if a high-context Llama 3 comes out and is better than any tier of Claude, it will be very useful. Llama 3 already writes well in markdown format :)
If anyone from LM Studio has gotten it to work, I'd love to see their settings, because it is very similar to Ooba.

Edit:
So I actually got this working. Inside your vault, in .obsidian/plugins/smart-connections/main.js, under var require_platforms, I found the openai endpoint and changed it to endpoint: "http://192.168.1.128:5000/v1/chat/completions". I then found the listing for the gpt-3.5-turbo model and changed its context to what my llama3 requires. I also changed the other openai endpoint that checks for the models, but I don't think that matters here. The program now defaults to gpt-3.5-turbo, and I'm talking to my local model. Very sweet! Slick software Brian, you the man!
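
For anyone wanting to replicate this, here's roughly the shape of the edit (a sketch from memory; only the endpoint value is verbatim from my change, the other field names are illustrative, and the bundled main.js may differ between versions):

```js
// Sketch of the kind of entry I edited under var require_platforms in main.js.
// Only `endpoint` is verbatim from my edit; other field names are illustrative.
var require_platforms = {
  openai: {
    models: {
      "gpt-3.5-turbo": {
        // originally pointed at https://api.openai.com/v1/chat/completions
        endpoint: "http://192.168.1.128:5000/v1/chat/completions",
        max_input_tokens: 8192, // illustrative: set to your llama3 context length
      },
    },
  },
};
```

Since the plugin defaults to gpt-3.5-turbo, rewriting that entry is what routes the chat to the local model.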

@toxicer01
Author

@dicksensei69 I tried your solution, but it still does not work for Koboldcpp. I'm really not handy with code, so I guess I'll just wait for someone smarter than me to figure it out. In the meantime, I'll keep using Gemini Pro instead. Thanks anyway.
