
openai.error.APIConnectionError: Error communicating with OpenAI #8

Open
YaxinFAN1 opened this issue Jul 31, 2023 · 3 comments

@YaxinFAN1

Hello, I encountered the following error.

0
1 retry left...
Traceback (most recent call last):
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\aiohttp\connector.py", line 980, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\base_events.py", line 1070, in create_connection
    raise exceptions[0]
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\base_events.py", line 1054, in create_connection
    sock = await self._connect_sock(
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\base_events.py", line 963, in _connect_sock
    await self.sock_connect(sock, address)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\proactor_events.py", line 709, in sock_connect
    return await self._proactor.connect(sock, address)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\windows_events.py", line 821, in _poll
    value = callback(transferred, key, ov)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\windows_events.py", line 608, in finish_connect
    ov.getresult()
OSError: [WinError 121] 信号灯超时时间已到 (the semaphore timeout period has expired)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\openai\api_requestor.py", line 672, in arequest_raw
    result = await session.request(**request_kwargs)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\aiohttp\client.py", line 536, in _request
    conn = await self._connector.connect(
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\aiohttp\connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\aiohttp\connector.py", line 901, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\aiohttp\connector.py", line 1206, in _create_direct_connection
    raise last_exc
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\aiohttp\connector.py", line 1175, in _create_direct_connection
    transp, proto = await self._wrap_create_connection(
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\aiohttp\connector.py", line 988, in _wrap_create_connection
    raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host api.openai.com:443 ssl:default [信号灯超时时间已到] (the semaphore timeout period has expired)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\Users\FanYaxin\OneDrive\桌面\factool_test.py", line 23, in <module>
    response_list = factool_instance.run(inputs)
  File "c:\users\fanyaxin\onedrive\文档\vscode\2023\factool\factool\factool.py", line 55, in run
    batch_results = asyncio.run(
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "c:\users\fanyaxin\onedrive\文档\vscode\2023\factool\factool\knowledge_qa\pipeline.py", line 100, in run_with_tool_api_call
    claims_in_responses, queries_in_responses, evidences_in_responses, verifications_in_responses = await self.run_with_tool_live(responses[batch_start:batch_end])
  File "c:\users\fanyaxin\onedrive\文档\vscode\2023\factool\factool\knowledge_qa\pipeline.py", line 63, in run_with_tool_live
    claims_in_responses = await self._claim_extraction(responses)
  File "c:\users\fanyaxin\onedrive\文档\vscode\2023\factool\factool\knowledge_qa\pipeline.py", line 38, in _claim_extraction
    return await self.chat.async_run(messages_list, List)
  File "c:\users\fanyaxin\onedrive\文档\vscode\2023\factool\factool\utils\openai_wrapper.py", line 109, in async_run
    predictions = await self.dispatch_openai_requests(
  File "c:\users\fanyaxin\onedrive\文档\vscode\2023\factool\factool\utils\openai_wrapper.py", line 96, in dispatch_openai_requests
    return await asyncio.gather(*async_responses)
  File "c:\users\fanyaxin\onedrive\文档\vscode\2023\factool\factool\utils\openai_wrapper.py", line 66, in _request_with_retry
    response = await openai.ChatCompletion.acreate(
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\openai\api_resources\chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\openai\api_requestor.py", line 372, in arequest
    result = await self.arequest_raw(
  File "C:\Users\FanYaxin\AppData\Local\anaconda3\lib\site-packages\openai\api_requestor.py", line 689, in arequest_raw
    raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI

I ran the following code on its own and it works fine:

import openai

messages = [{"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"}]

response = openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=messages, max_tokens=2000, temperature=0.5)
print(response)

I also tried the solutions below, but none of them worked:
https://zhuanlan.zhihu.com/p/611080662
https://blog.csdn.net/weixin_43937790/article/details/131121974

How can I solve this problem?

@EthanC111
Collaborator

@YaxinFAN1, thank you for your interest in our work!!!

I just modified the openai_wrapper to catch openai.error.APIConnectionError.
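
For reference, here is a minimal sketch of the kind of retry logic involved (the function name _request_with_retry appears in the traceback above, but the actual code in factool/utils/openai_wrapper.py may differ):

import asyncio
import openai

async def _request_with_retry(messages, retries=3):
    # Sketch only: retry the async chat call when the connection to
    # api.openai.com fails, instead of letting the error propagate.
    for attempt in range(retries):
        try:
            return await openai.ChatCompletion.acreate(
                model="gpt-3.5-turbo",
                messages=messages,
            )
        except (openai.error.APIConnectionError,
                openai.error.Timeout,
                openai.error.RateLimitError):
            print(f"{retries - attempt - 1} retry left...")
            await asyncio.sleep(2 ** attempt)
    return None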

If this doesn't work, the issue is likely still related to the proxy, as I mentioned in this issue.

We are working diligently on support for open-source models as an alternative to the OpenAI API. Stay tuned!

Let me know if you have any more questions!

Thank you!

@YaxinFAN1
Author

Hi, thank you for your reply.

You are right, it was indeed a proxy problem: I hadn't configured the proxies correctly before.

When I set the proxies as follows, it worked:


import openai
from factool import Factool

# Route the OpenAI requests through the local proxy.
proxies = {'http': "http://127.0.0.1:7890",
           'https': "http://127.0.0.1:7890"}
openai.proxy = proxies

# Initialize a Factool instance with the specified keys.
# foundation_model could be either "gpt-3.5-turbo" or "gpt-4".
factool_instance = Factool("gpt-3.5-turbo")

inputs = [
    {
        "prompt": "Introduce Graham Neubig",
        "response": "Graham Neubig is a professor at MIT",
        "category": "kbqa"
    },
]
response_list = factool_instance.run(inputs)
print(response_list)
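
For completeness, here is a minimal check that the proxy also reaches the async path, since the original failure came from openai.ChatCompletion.acreate rather than the synchronous call (this assumes the same local proxy on 127.0.0.1:7890 and an OPENAI_API_KEY in the environment):

import asyncio
import openai

openai.proxy = {'http': "http://127.0.0.1:7890",
                'https': "http://127.0.0.1:7890"}

async def main():
    # The traceback above failed inside ChatCompletion.acreate (the aiohttp path),
    # so exercise the async call specifically, not just the sync .create one.
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
        max_tokens=16,
    )
    print(response["choices"][0]["message"]["content"])

asyncio.run(main())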

Thanks for your awesome contributions!!!

@krrishdholakia

Hey @YaxinFAN1 @EthanC111 - why proxy the OpenAI base?

To support local models like llama2, wouldn't they need to be deployed on GPUs / deployment providers, which have their own client libraries?
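
For example, one common pattern (an assumption on my side, not something this repo confirms) is to serve the local model behind an OpenAI-compatible endpoint and point the existing client at it by overriding openai.api_base; the URL, key, and model name below are placeholders:

import openai

# Hypothetical local OpenAI-compatible server (e.g. one serving a llama2 chat model).
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "placeholder-key"

response = openai.ChatCompletion.create(
    model="llama-2-7b-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])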
