This issue was moved to a discussion.
Problem with Concurrency Request: High Response time #5846
Comments
A few questions
Hi @jgould22 The test script is executed from another server; the FastAPI service runs on a dedicated server. Thanks
Hi @Pazzeo Did you check around run_in_threadpool? Opening a thread is slow, and too many threads can lead to a crash.
Can you try the same with a sync route and see if the same issue exists? I usually face such issues when I mix sync and async functions.
Also, very likely relevant: the default AnyIO capacity limiter is 40, IIRC. Here is the call: https://github.com/encode/starlette/blob/master/starlette/concurrency.py#L35 and the docs indicate that if it is None https://anyio.readthedocs.io/en/stable/api.html#anyio.to_thread.run_sync then the default is used.
First, thanks a lot for the help. Over the last few days I have tried what you proposed, but I don't observe much improvement. I don't know how I can improve my code to get better response times with concurrent requests. Paz
The problem in your benchmark is on the client side. Use a single AsyncClient and pass it to the task that is going to run multiple times.
Hi @Kludex Indeed, I have removed the run_in_threadpool and the service is working better. Paz
In your code you create 1000 instances of httpx.AsyncClient, but you should create one and let that one make the 1000 requests, like this:
A full example of how to use and test a FastAPI endpoint with an httpx client: https://github.com/raphaelauv/fastAPI-httpx-example
First Check
Commit to Help
Example Code
main.py:
Description
Hello,
I'm facing an issue when testing my FastAPI service under concurrent requests (for example, 1000 req/s).
The issue is a high response time: around 3 s, when normally the service answers in less than 0.025 s.
The service is quite simple: it generates an AWS token. So I suspect that I haven't defined some functions properly.
The service is deployed in a Docker image; I'm using the tiangolo uvicorn-gunicorn-fastapi image with Python 3.9.
The file I posted is main.py, which is inside the app directory.
The app directory has the following structure:
the directory model/ contains the file ModelUserToken.py:
the directory lib/ has the files:
UserID.py
the file tools.py
Then I'm using this script to test the performance at 1000 req/s.
During the execution, I observe that all requests have a response time between 1 s and 2.5 s.
Could you help me figure out where the problem could be?
Thanks in advance,
Paz
Operating System
Linux
Operating System Details
Docker Image uvicorn-gunicorn-fastapi with gunicorn 20.1.0 and 8 workers
FastAPI Version
0.88.0
Python Version
3.9.16
Additional Context
No response