Converting exported models to GGUF #706

Answered by psinger
cemremengu asked this question in Q&A

Is this happening for all models or only specific ones?

It seems this is a known issue for Llama 3:
ggerganov/llama.cpp#6747 (comment)

It is best to research this directly in llama.cpp, as it does not seem related to LLM Studio.

That said, simply adding --vocab-type bpe to the convert script might solve it.
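As a rough sketch, the invocation might look like the following. The model directory and output filename here are placeholders, and the exact flags depend on your llama.cpp version:

```shell
# Hypothetical example: convert an exported Hugging Face-format model to GGUF
# using llama.cpp's convert script, forcing the BPE vocabulary type.
# Adjust paths to your own checkout and model directory.
python convert.py /path/to/exported-model \
    --vocab-type bpe \
    --outfile exported-model.gguf
```

If your llama.cpp checkout no longer has this flag, check the current convert script's `--help` output, since the conversion tooling has changed between versions.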

Replies: 1 comment 3 replies

Answer selected by cemremengu