
Error when deploying in Kubernetes: Model path does not exist #153

Open
brpaz opened this issue Mar 24, 2024 · 0 comments

brpaz commented Mar 24, 2024

Hello.

I am trying to run llama-gpt on Kubernetes, using manifests based on the ones provided in the repo, but I can't get the api container to start.

It fails with the error `ValueError: Model path does not exist: /models/llama-2-7b-chat.bin`.

Am I supposed to download the model manually?

```
Using /usr/local/lib/python3.11/site-packages
Finished processing dependencies for llama-cpp-python==0.1.77
Initializing server with:
Batch size: 2096
Number of CPU threads: 8
Number of GPU layers: 0
Context window: 4096
/usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:126: UserWarning: Field "model_alias" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ('settings_',)`.
  warnings.warn(
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/app/llama_cpp/server/__main__.py", line 46, in <module>
    app = create_app(settings=settings)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/llama_cpp/server/app.py", line 313, in create_app
    llama = llama_cpp.Llama(
            ^^^^^^^^^^^^^^^^
  File "/app/llama_cpp/llama.py", line 308, in __init__
    raise ValueError(f"Model path does not exist: {model_path}")
ValueError: Model path does not exist: /models/llama-2-7b-chat.bin
Exception ignored in: <function Llama.__del__ at 0x7ff10d6a6ca0>
Traceback (most recent call last):
  File "/app/llama_cpp/llama.py", line 1507, in __del__
    if self.model is not None:
       ^^^^^^^^^^
AttributeError: 'Llama' object has no attribute 'model'
```
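For reference, one way to pre-populate the model volume is an initContainer that downloads the model file into the `/models` mount before the api container starts. This is only a sketch: the container names, image, environment variable, and download URL below are placeholders I am assuming, not values taken from the repo's manifests.

```yaml
# Hypothetical initContainer sketch for the api Deployment.
# Names, image, and URL are placeholders, not from the llama-gpt manifests.
initContainers:
  - name: download-model
    image: curlimages/curl:latest
    command:
      - sh
      - -c
      - |
        # Skip the download if the model file already exists on the volume
        if [ ! -f /models/llama-2-7b-chat.bin ]; then
          curl -L -o /models/llama-2-7b-chat.bin "$MODEL_DOWNLOAD_URL"
        fi
    env:
      - name: MODEL_DOWNLOAD_URL
        value: "<model download URL>"  # placeholder: supply the real URL
    volumeMounts:
      - name: models          # must match the volume mounted by the api container
        mountPath: /models
```

With a setup like this, the api container only starts once the model file is present on the shared volume, which would avoid the `Model path does not exist` error.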