Playground does not work for whisper #1013

Open
rrbanda opened this issue Apr 28, 2024 · 1 comment

rrbanda commented Apr 28, 2024

[Screenshot: 2024-04-28 at 11:30:55 AM]

cdrage commented Apr 29, 2024

[Screenshot quoted from the report above: 2024-04-28 at 11:30:55 AM]

You can see in the top right that it says "Model Service not running".

I have a feeling the Whisper GGUF model isn't loading anymore?

If I go to the pod controlling the AI Lab, I get this error:

```
gguf_init_from_file: invalid magic characters 'lmgg'
llama_model_load: error loading model: llama_model_loader: failed to load model from /models/ggml-small.bin

llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/__main__.py", line 88, in <module>
    main()
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/__main__.py", line 74, in main
    app = create_app(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/app.py", line 138, in create_app
    set_llama_proxy(model_settings=model_settings)
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/app.py", line 75, in set_llama_proxy
    _llama_proxy = LlamaProxy(models=model_settings)
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/model.py", line 31, in __init__
    self._current_model = self.load_llama_from_model_settings(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/server/model.py", line 138, in load_llama_from_model_settings
    _model = create_fn(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/llama.py", line 314, in __init__
    self._model = _LlamaModel(
  File "/usr/local/lib64/python3.9/site-packages/llama_cpp/_internals.py", line 55, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: /models/ggml-small.bin
```
(The same traceback repeats two more times in the pod log.)
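
For what it's worth, the `invalid magic characters 'lmgg'` line suggests a format mismatch rather than a corrupt download: `ggml-small.bin` looks like a legacy whisper.cpp GGML file (whisper.cpp writes its magic `0x67676d6c` as a little-endian uint32, so the first four bytes on disk read `lmgg`), while the llama-cpp-python server only accepts GGUF files, which start with the literal bytes `GGUF`. Here is a minimal diagnostic sketch to check which container format a model file uses; the path is taken from the traceback above, and this is a guess at the cause, not a confirmed fix:

```python
# Inspect the magic bytes of a model file to tell GGUF
# (what llama-cpp-python expects) apart from legacy GGML
# (the container whisper.cpp uses for ggml-small.bin).
MODEL_PATH = "/models/ggml-small.bin"  # path from the traceback above

with open(MODEL_PATH, "rb") as f:
    magic = f.read(4)

if magic == b"GGUF":
    print("GGUF file: llama-cpp-python should be able to load it")
elif magic == b"lmgg":
    # 0x67676d6c ("ggml") stored little-endian, i.e. legacy GGML
    print("legacy GGML file: needs whisper.cpp, not llama-cpp-python")
else:
    print(f"unrecognized magic: {magic!r}")
```

If that prints the GGML case, the playground would need to serve this model with a whisper.cpp-based server (or a GGUF conversion of the model) instead of the llama-cpp-python server shown in the traceback.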
