
Convert to GGUF #159

Open
yanxon opened this issue May 10, 2024 · 0 comments

yanxon commented May 10, 2024

Can you please convert this model to GGUF?

I tried llama.cpp's convert.py with the following command:

python convert.py pythia-12b/ --outfile pythia-12b/pythia-12b-f16.gguf --outtype f16

It gives me this error:

Loading model file ../pythia/pythia-hf/pytorch_model-00001-of-00003.bin
Traceback (most recent call last):
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1483, in <module>
    main()
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1419, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1278, in load_some_model
    models_plus.append(lazy_load_file(path))
                       ^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 887, in lazy_load_file
    return lazy_load_torch_file(fp, path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 843, in lazy_load_torch_file
    model = unpickler.load()
            ^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 832, in find_class
    return self.CLASSES[(module, name)]
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^
KeyError: ('torch', 'ByteStorage')
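The KeyError comes from convert.py's restricted lazy unpickler: find_class resolves every pickled class through a fixed CLASSES whitelist, so a checkpoint containing a storage type the table doesn't list (here torch.ByteStorage, the storage behind uint8 tensors) fails the lookup. A minimal sketch of that mechanism, with hypothetical table contents rather than the actual convert.py code:

```python
import io
import pickle

# Sketch of a restricted unpickler like the one in convert.py
# (hypothetical placeholder entries, for illustration only):
# find_class resolves pickled classes through a fixed whitelist, so a
# checkpoint referencing a storage type outside the table raises KeyError.
class RestrictedUnpickler(pickle.Unpickler):
    CLASSES = {
        ("torch", "FloatStorage"): "FloatStorage",
        ("torch", "HalfStorage"): "HalfStorage",
    }

    def find_class(self, module, name):
        # No fallback: ('torch', 'ByteStorage') is absent, hence the KeyError.
        return self.CLASSES[(module, name)]

unpickler = RestrictedUnpickler(io.BytesIO(b""))
print(unpickler.find_class("torch", "FloatStorage"))  # whitelisted: resolves
try:
    unpickler.find_class("torch", "ByteStorage")
except KeyError as e:
    print("KeyError:", e)  # reproduces the reported failure
```

If that is the root cause, one path is extending the whitelist; alternatively, since Pythia is a GPT-NeoX architecture rather than LLaMA, llama.cpp's convert-hf-to-gguf.py script may be the intended converter (assuming a checkout where it supports GPTNeoXForCausalLM).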