WizardCoder llama assert failure #417

jacohend opened this issue Aug 28, 2023 · 3 comments
@jacohend

Trying to run a variety of ggml models from TheBloke leads to this error:
GGML_ASSERT: llama-cpp/ggml.c:6270: ggml_nelements(a) == ne0*ne1*ne2

Is anyone else experiencing this, and does anyone know what the issue might be?
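For context, this looks like the element-count check in ggml's 3-D reshape (ggml_reshape_3d): the tensor being reshaped must contain exactly ne0*ne1*ne2 elements, so if the converted file ships a tensor with a different shape than the loader expects (a different vocab size or hidden dimension, say), the product no longer matches. A minimal standalone sketch of that invariant, using made-up numbers rather than real WizardCoder shapes:

```c
/* Illustrative sketch of the invariant behind
 * GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2).
 * The shapes below are hypothetical, not real WizardCoder dimensions. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* elements actually stored in the loaded tensor (hypothetical) */
    const int64_t loaded_elements = (int64_t) 32001 * 4096;

    /* shape the evaluation graph tries to reshape that tensor into (hypothetical) */
    const int64_t ne0 = 4096, ne1 = 32000, ne2 = 1;

    /* ggml's reshape enforces this exact equality; when the converted file's
     * tensor shape differs from what the graph expects, the product no longer
     * matches and GGML_ASSERT aborts the process. */
    if (loaded_elements == ne0 * ne1 * ne2) {
        printf("shapes agree, reshape would succeed\n");
    } else {
        printf("mismatch: tensor has %lld elements, graph expects %lld -> GGML_ASSERT fires\n",
               (long long) loaded_elements, (long long) (ne0 * ne1 * ne2));
    }
    return 0;
}
```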

@jacohend (Author)

Related: ggerganov/llama.cpp#2445 (comment)

jacohend changed the title from "WizardCoder llama assert" to "WizardCoder llama assert failure" on Aug 28, 2023
@LLukas22 (Contributor)

Probably another issue with the ggml version currently in use; a re-sync with the current main branch of llama.cpp is likely needed.

@jacohend (Author) commented Aug 28, 2023

I actually did that and hit a failure on the same assert line. The linked comment said rolling the version back worked best.

I'm wondering whether this assert assumes fixed layer sizes, so a modification like the ones TheBloke makes to these models could be what triggers the failure.
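If it helps narrow things down, a hypothetical diagnostic helper (not code that exists in this repo) could dump the loaded tensor's dimensions next to the shape the graph asks for, using only public ggml fields (tensor->ne, tensor->name) and ggml_nelements():

```c
/* Hypothetical diagnostic helper (not part of this repo): compare the
 * dimensions of a loaded tensor against the shape the evaluation graph
 * is about to reshape it into. A mismatch here is what trips
 * GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2). */
#include <stdio.h>
#include "ggml.h"

static void report_reshape_mismatch(const struct ggml_tensor * a,
                                    int64_t ne0, int64_t ne1, int64_t ne2) {
    const int64_t have = ggml_nelements(a);
    const int64_t want = ne0 * ne1 * ne2;
    if (have != want) {
        fprintf(stderr,
                "tensor '%s': loaded shape %lld x %lld x %lld x %lld (%lld elements), "
                "graph expects %lld x %lld x %lld (%lld elements)\n",
                a->name,
                (long long) a->ne[0], (long long) a->ne[1],
                (long long) a->ne[2], (long long) a->ne[3],
                (long long) have,
                (long long) ne0, (long long) ne1, (long long) ne2,
                (long long) want);
    }
}
```

Calling something like that right before the reshape that asserts would at least show which tensor the converted file and the loader disagree on.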
