
Update llama_cpp_python to fix issues in mac #131

Open
DaramG wants to merge 1 commit into master

Conversation


@DaramG commented Nov 16, 2023

Running run-mac.sh on a Mac was failing with an internal server error.
To fix this, I updated llama_cpp_python to the latest version.
This will resolve #73 and #95.
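
A quick way to sanity-check the upgrade is to import the new llama-cpp-python build and load a GGUF model directly. This is only a rough sketch, not part of the PR; the model path is a placeholder for whatever GGUF file sits in your local models directory.

```python
# Sketch: verify the upgraded llama-cpp-python can load a GGUF model on macOS.
# The model path below is an assumed example; point it at a real local .gguf file.
from llama_cpp import Llama, __version__

print("llama-cpp-python version:", __version__)

# If the upgrade worked, this should load without the internal server (500) error
# that run-mac.sh was hitting.
llm = Llama(model_path="./models/code-llama-7b.gguf", n_ctx=2048)

out = llm("Q: What does run-mac.sh start? A:", max_tokens=16)
print(out["choices"][0]["text"])
```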

@henriquezago

It didn't solve my issue (#95).

@adevart commented Apr 14, 2024

This worked for me, thanks. I was having the same issue as #95. I updated the version number, restarted the server, and it loaded the model OK.

I get a similar error when loading the 7b chat model, but that is because it is in .bin format instead of .gguf like code-7b. It produces the following error, which surfaces as the 500 loading error in the UI:
gguf_init_from_file: invalid magic characters tjgg.
error loading model: llama_model_loader: failed to load model from ./models/llama-2-7b-chat.bin
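
For context (not from the PR itself): the "invalid magic characters tjgg" message is about the file's container format. GGUF files begin with the ASCII bytes GGUF, while the older GGJT-era .bin files begin with the bytes tjgg (the GGJT magic stored little-endian), so current llama.cpp rejects them. Below is a small sketch that inspects the first four bytes to tell the two apart; the paths are illustrative only.

```python
# Sketch: distinguish a GGUF model file from a legacy GGJT-era .bin file by
# reading its 4-byte magic. Paths are examples, not files shipped with the repo.
def model_container(path: str) -> str:
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":
        return "GGUF (loads with current llama.cpp / llama-cpp-python)"
    if magic == b"tjgg":
        return "legacy GGJT .bin (needs conversion to GGUF)"
    return f"unknown container (magic={magic!r})"

print(model_container("./models/llama-2-7b-chat.bin"))  # expected: legacy GGJT .bin
print(model_container("./models/code-llama-7b.gguf"))   # expected: GGUF
```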

Successfully merging this pull request may close these issues: M2 macbook air Internal Error