
[Question] Resolving LLM Configuration Error - .env file config - LM Studio, local model (Mac) #307

Open
nic0711 opened this issue Mar 28, 2024 · 1 comment
Labels
question Further information is requested

Comments

@nic0711

nic0711 commented Mar 28, 2024

What is your question?

Hello,
somehow I can't get any further, especially since this worked once before, but not since a few updates.

I always get the following error message, and I guess it has to do (among other things) with the settings in my .env.

(base) USER@MBP-von-USER fabric % fabric --listmodels 
Traceback (most recent call last):
  File "/opt/homebrew/bin/fabric", line 6, in <module>
    sys.exit(cli())
             ^^^^^
  File "/Users/USER/AI/fabric/installer/client/cli/fabric.py", line 101, in main
    standalone = Standalone(args, args.pattern)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/USER/AI/fabric/installer/client/cli/utils.py", line 56, in __init__
    sorted_gpt_models, ollamaList, claudeList = self.fetch_available_models()
                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/USER/AI/fabric/installer/client/cli/utils.py", line 297, in fetch_available_models
    if "/" in models[0] or "\\" in models[0]:
              ~~~~~~^^^
IndexError: list index out of range
(base) USER@MBP-von-USER fabric % fabric --listmodels
Error: Connection error. trying to access /models: ("Request URL is missing an 'http://' or 'https://' protocol.",)
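For context, the IndexError in the traceback fires because fetch_available_models apparently ends up with an empty model list and then indexes models[0] unguarded. A minimal sketch of the failure and a guarded check (variable names are illustrative, not fabric's actual code):

```python
models = []  # what the code may end up with when no /models endpoint answers

# Unguarded access, as in the traceback, raises IndexError on an empty list:
try:
    slash_in_name = "/" in models[0] or "\\" in models[0]
except IndexError as exc:
    print(f"IndexError: {exc}")  # list index out of range

# A guarded version short-circuits before indexing:
slash_in_name = bool(models) and ("/" in models[0] or "\\" in models[0])
print(slash_in_name)  # False
```

So the underlying problem is not the check itself but that no models were fetched, which points back at the .env / endpoint configuration.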

My current .env looks like this:

OPENAI_API_KEY=lmstudio
OPENAI_BASE_URL=http://localhost:1234/v1

DEFAULT_MODEL=lmstudio

YOUTUBE_API_KEY=AI

Can someone please tell me what the .env should look like so that fabric primarily uses the local LLM (via LM Studio), and in the future also Claude?

Thank you!

@nic0711 nic0711 added the question Further information is requested label Mar 28, 2024
@randomBullets

randomBullets commented Mar 31, 2024

OK, first off, I am a complete novice at this, but I think I finally got it working by removing the DEFAULT_MODEL line.

This is what my .env looks like now:

OPENAI_API_KEY=lmstudio

OPENAI_BASE_URL=http://localhost:1234/v1/
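As a quick sanity check for the second error ("Request URL is missing an 'http://' or 'https://' protocol"), the base URL can be validated before use. A small sketch, assuming only that OPENAI_BASE_URL must carry an explicit scheme (the helper name is mine, not part of fabric):

```python
from urllib.parse import urlparse

def has_http_scheme(url: str) -> bool:
    """Return True only if the URL carries an explicit http/https scheme."""
    return urlparse(url).scheme in ("http", "https")

print(has_http_scheme("http://localhost:1234/v1/"))  # True
print(has_http_scheme("localhost:1234/v1"))          # False: protocol missing
```

If this prints False for the value in your .env, that would explain the "missing protocol" connection error.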
