
Local Model not working as Expected #39

Closed
haseeb-heaven opened this issue Mar 22, 2024 · 2 comments · Fixed by #49

Comments

@haseeb-heaven

I am trying to run local models using Ollama. I have the code-llama, Gemma, and Phi-2 models available, and I got this error while running:

Selected Model: codellama:7b-code (7B - Q4_0)

haseeb-mir@Haseebs-MacBook-Pro devika % python devika.py
24.03.22 11:14:19: root: INFO   : Booting up... This may take a few seconds
24.03.22 11:14:19: root: INFO   : Initializing Devika...
24.03.22 11:14:19: root: INFO   : Initializing Prerequisites Jobs...
24.03.22 11:14:19: root: INFO   : Loading sentence-transformer BERT models...
24.03.22 11:14:22: root: INFO   : BERT model loaded successfully.
 * Serving Flask app 'devika'
 * Debug mode: on
24.03.22 11:14:28: root: INFO   : Booting up... This may take a few seconds
24.03.22 11:14:28: root: INFO   : Initializing Devika...
24.03.22 11:14:28: root: INFO   : Initializing Prerequisites Jobs...
24.03.22 11:14:28: root: INFO   : Loading sentence-transformer BERT models...
24.03.22 11:14:32: root: INFO   : BERT model loaded successfully.
Token usage: 321
Model id: 7B - Q4_0
Exception in thread Thread-588 (<lambda>):
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniforge/base/envs/heaven-env/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Caskroom/miniforge/base/envs/heaven-env/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/haseeb-mir/Documents/Code/Python/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/haseeb-mir/Documents/Code/Python/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/haseeb-mir/Documents/Code/Python/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/haseeb-mir/Documents/Code/Python/devika/src/llm/llm.py", line 54, in inference
    model = self.model_id_to_enum_mapping()[self.model_id]
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
KeyError: '7B - Q4_0'
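
For context, the crash is a plain dictionary lookup miss: the frontend stored the display label "7B - Q4_0" as the selected model, and model_id_to_enum_mapping() in src/llm/llm.py has no such key. Below is a minimal sketch of the failure mode, assuming a mapping keyed by real model ids; the mapping contents and the names MODEL_ID_TO_ENUM and resolve_model are illustrative, not Devika's actual code.

# Illustrative reconstruction; the real mapping lives in src/llm/llm.py.
MODEL_ID_TO_ENUM = {
    "codellama:7b-code": "OLLAMA",  # hypothetical entries
    "gpt-4-turbo": "OPENAI",
}

def resolve_model(model_id: str) -> str:
    # MODEL_ID_TO_ENUM["7B - Q4_0"] raises KeyError because the UI sent
    # the quantization label, not a model id. Failing with a readable
    # message makes the mismatch obvious instead of killing the thread.
    try:
        return MODEL_ID_TO_ENUM[model_id]
    except KeyError:
        raise ValueError(
            f"Unknown model id {model_id!r}; expected one of {sorted(MODEL_ID_TO_ENUM)}"
        ) from None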
@kertL

kertL commented Mar 22, 2024

Updating the selectModel method in ui/src/components/ControlPanel.svelte as below (changing model[1] to model[0]) gets rid of the error, but I still got stuck in the following steps.

function selectModel(model) {
  selectedModel = `${model[0]} (${model[1]})`;
  localStorage.setItem("selectedModel", model[0]);
}
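
If I'm reading the Svelte code right, model is an [id, label] pair: model[0] (e.g. "codellama:7b-code") is the key the backend's model_id_to_enum_mapping() expects, while model[1] (e.g. "7B - Q4_0") is only the display string, so persisting the label is what produced the KeyError above.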

@haseeb-heaven
Author

> Updating the selectModel method in ui/src/components/ControlPanel.svelte as below (changing model[1] to model[0]) gets rid of the error, but I still got stuck in the following steps.
>
> function selectModel(model) {
>   selectedModel = `${model[0]} (${model[1]})`;
>   localStorage.setItem("selectedModel", model[0]);
> }

Did that solve the issue for you?
