
[Bug]: When using llamaIndex to access a local Ollama LLM service on Windows, an error occurs #13526

Open
cnrevol opened this issue May 16, 2024 · 5 comments
Labels
bug Something isn't working ollama triage Issue needs to be triaged/prioritized

Comments

@cnrevol

cnrevol commented May 16, 2024

Bug Description

In a Windows environment, when the OS system proxy is set, using llamaIndex to access a local Ollama LLM results in an access error:
httpx.HTTPStatusError: Client error '403 Forbidden' for url 'http://localhost:11434/api/chat'

Version

0.10.33

Steps to Reproduce

When using llamaIndex to access a local LLM, I encountered a minor issue, and I found a solution for it.

Problem Description:
In a Windows environment, when the OS system proxy is set, using llamaIndex to access a local LLM (such as one running on Ollama) results in an access error.

Reason:
llamaIndex uses httpx to call the LLM's URL.
By default, httpx honours the system proxy settings when making requests.
As a result, Windows routes the request to the localhost service provided by Ollama through the proxy, which causes the error.
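
As a quick, untested illustration of the behaviour described above (assuming a default Ollama install listening on port 11434; the /api/tags model-listing endpoint is used here only as a lightweight probe):

```python
import httpx

# With the default trust_env=True, httpx honours proxy-related environment
# variables (HTTP_PROXY/HTTPS_PROXY/ALL_PROXY), so even a request to
# localhost may be routed through the configured system proxy.
proxied_client = httpx.Client()

# With trust_env=False, those settings are ignored and the request goes
# straight to the local Ollama server.
direct_client = httpx.Client(trust_env=False)

# /api/tags simply lists the locally installed models; any cheap endpoint
# works for checking connectivity.
resp = direct_client.get("http://localhost:11434/api/tags", timeout=10.0)
print(resp.status_code)
```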

Solution:
I solved this by creating a subclass of CustomLLM (llama_index.core.llms.custom.CustomLLM)
and passing trust_env=False to httpx.Client when making the call.

There might be better solutions available.

I hope that trust_env=False can be exposed as an initialization parameter (or preserved in the call) when subclassing CustomLLM; that also seems to solve this issue. A sketch of the workaround is shown below.
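
Below is a minimal, untested sketch of the subclass approach described above. The class name (ProxylessOllama), the default model, and the use of Ollama's /api/generate endpoint are illustrative assumptions, not the exact code from my environment:

```python
from typing import Any

import httpx
from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class ProxylessOllama(CustomLLM):
    """CustomLLM subclass that calls Ollama with trust_env=False."""

    base_url: str = "http://localhost:11434"
    model: str = "llama3"

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(model_name=self.model)

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # trust_env=False makes httpx ignore the system proxy settings, so
        # the request to localhost is sent directly to Ollama.
        with httpx.Client(trust_env=False) as client:
            resp = client.post(
                f"{self.base_url}/api/generate",
                json={"model": self.model, "prompt": prompt, "stream": False},
                timeout=120.0,
            )
            resp.raise_for_status()
            return CompletionResponse(text=resp.json().get("response", ""))

    @llm_completion_callback()
    def stream_complete(
        self, prompt: str, **kwargs: Any
    ) -> CompletionResponseGen:
        # Streaming is omitted in this sketch; yield the full completion once.
        yield self.complete(prompt, **kwargs)
```

With this in place, the subclass can be used wherever llamaIndex expects an LLM, e.g. llm = ProxylessOllama(); print(llm.complete("hello").text).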

Relevant Logs/Tracebacks

No response

@cnrevol cnrevol added bug Something isn't working triage Issue needs to be triaged/prioritized labels May 16, 2024

dosubot bot commented May 16, 2024

Hey there @cnrevol! 👋 I'm Dosu, your friendly neighborhood bot, here to lend a hand with bugs, answer your queries, and guide you through contributions while we wait for a human to join the convo. Just wanted to say hi and let you know I'm diving into your issue with the LlamaIndex and the Ollama local LLM service. Hang tight, I'll get back to you with some insights shortly!

@logan-markewich
Collaborator

I don't think this is a llamaindex issue. More of an ollama/windows issue?

You can try changing the base_url in the class to point to the more Windows-friendly http://127.0.0.1
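
For reference, a hedged sketch of what that looks like with the llama-index-llms-ollama integration (parameter names assumed from the version current at the time; adjust the model name to whatever is pulled locally):

```python
from llama_index.llms.ollama import Ollama

# Point the client at 127.0.0.1 explicitly rather than localhost.
llm = Ollama(
    model="llama3",
    base_url="http://127.0.0.1:11434",
    request_timeout=120.0,
)

print(llm.complete("Hello").text)
```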


dosubot bot commented May 16, 2024

Changing the base_url in your llamaIndex configuration to http://127.0.0.1 instead of using http://localhost is a practical solution to the issue you're encountering with accessing the local LLM service on Windows. This approach should help bypass the system proxy settings that are causing the access error, without needing to modify the library's source code or implement a workaround involving trust_env=false.

Your initial solution of creating a subclass and setting trust_env=false in the httpx.client parameters effectively addresses the proxy issue by preventing httpx from using the system's proxy settings. However, this requires modifying the llamaIndex library or your implementation, which might not be ideal for all users.

In summary, adjusting the base_url to http://127.0.0.1 is a straightforward and effective way to resolve the access issue without the need for additional configuration or code changes to handle proxy settings.


@cnrevol
Author

cnrevol commented May 17, 2024

Thank you, @dosu, @logan-markewich.

I tried modifying base_url to http://127.0.0.1, but I still encounter the same error:
httpx.HTTPStatusError: Client error '403 Forbidden' for url 'http://127.0.0.1:11434/api/chat'.

Is there something I'm misunderstanding, or some other error on my side?

Or could this indeed be a Windows issue?

I need to investigate further how Windows proxy settings handle external IPs, local IPs, and localhost differently...

Thank you.


dosubot bot commented May 17, 2024

If you're still encountering a 403 Forbidden error after changing the base_url to http://127.0.0.1, it suggests the issue might not be solely related to proxy settings or the specific address used. This error typically indicates that the server understood the request but refuses to authorize it, which can be due to several reasons beyond just network configuration.

Here are some focused steps to troubleshoot and potentially resolve the issue:

  1. Check CORS Configuration: Ensure your application allows requests from the same origin, especially if accessed via a browser. This is crucial for local development environments.

  2. Middleware and Security Settings: Review any middleware or security settings in your application that might be rejecting requests, including API keys, authentication requirements, or IP whitelisting.

  3. Firewall and Network Configuration: Verify that your Windows firewall or any network security software isn't blocking connections on the port your service is running on.

  4. Application Routing: Make sure your application routes are correctly set up to respond to /api/chat. If the route is not properly defined, it could lead to a 403 Forbidden error.

  5. Debugging and Logs: Use logging to capture more details about incoming requests and the reasons they might be rejected. This can offer specific insights into the cause of the error.

It's also important to ensure that any application-specific configurations, such as those for interacting with local LLM services, are correctly set up. If after checking these areas the issue persists, providing more information about your setup, including specific error messages from your application logs and any changes made to default settings, would be helpful for further diagnosis.
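
As a concrete starting point for step 5, enabling debug logging for httpx and httpcore shows the actual connection target and whether a proxy is in the path (standard library logging only; nothing llamaIndex-specific is assumed):

```python
import logging

# httpx and httpcore log request URLs and low-level connection events
# (target host/port) at DEBUG level, which makes it easy to see whether
# calls to Ollama are going direct or through a proxy.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("httpx").setLevel(logging.DEBUG)
logging.getLogger("httpcore").setLevel(logging.DEBUG)
```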

