Sometimes uses over 100% CPU #126
Which one are you using, the async or sync session?

The sync session.
I guess it depends on the quality of your proxies and the number of concurrent requests. If too many in-flight requests get stuck, CPU time is wasted until a 100% situation occurs. Lowering the timeout option reduces the wasted CPU time; other options are using better proxies or fewer threads, if possible.
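The timeout advice above can be sketched as a small wrapper: instead of letting one stuck transfer hold a thread for the full 30 s, pass a short per-request timeout and retry. This is a minimal illustration, not the library's API; `fetch_with_timeout` and the callable it wraps are hypothetical (in practice the callable would be something like a curl_cffi `session.get` bound to a URL and proxy).

```python
def fetch_with_timeout(do_request, timeout=10, retries=2):
    # Call `do_request` with a short per-request timeout and retry a few
    # times, rather than blocking for the default 30 s on a bad proxy.
    # `do_request` is any callable accepting a `timeout` kwarg
    # (hypothetical; e.g. a bound curl_cffi session.get).
    last_err = None
    for _ in range(retries + 1):
        try:
            return do_request(timeout=timeout)
        except Exception as err:  # e.g. the library's timeout error (ErrCode 28)
            last_err = err
    raise last_err
```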
I don't think it is simply multi-threaded access that pushes CPU usage over 100%, because once that error is triggered, every subsequent request in that session times out and reports an error. The log also shows that after the abnormal session times out, all other threads resume running, then become blocked again as soon as the abnormal session makes another request. The number of threads is small and the machine has plenty of headroom; normally only about 3% CPU is needed, but once the exception occurs, usage soars to 100%. Unfortunately I cannot trigger the bug on demand. I will update this issue when I discover anything new.
How did you share sessions across threads: one session per thread, or all threads sharing the same session? When CPU usage was high, did you observe a memory leak?
One session per thread. When CPU is over 100%, memory stays around 20%, so I think memory is behaving normally.
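The one-session-per-thread setup described above is commonly implemented with `threading.local`, so each worker lazily creates and reuses its own session. A minimal sketch, assuming any zero-argument session constructor (e.g. the library's `Session` class; here `factory` and `get_session` are illustrative names, not the library's API):

```python
import threading

_local = threading.local()

def get_session(factory):
    # Return this thread's own session, creating it on first use.
    # `factory` is any zero-argument session constructor (hypothetical;
    # a plain object() works for demonstration).
    if not hasattr(_local, "session"):
        _local.session = factory()
    return _local.session
```

Within one thread, repeated calls return the same session; different threads get different sessions, so no session object is ever shared across threads.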
Some new findings (version 0.6.4):
Can you reproduce this situation when proxies are not used? |
After sending some requests (through a proxy), the following errors appear continuously. CPU usage then stays above 100% and the system can barely do anything:
```
Failed to perform, ErrCode: 28, Reason: 'Operation timed out after 30001 milliseconds with 0 bytes received'. This may be a libcurl error, See https://curl.se/libcurl/c/libcurl-errors.html first for more details.
Failed to perform, ErrCode: 28, Reason: 'Operation timed out after 30001 milliseconds with 0 bytes received'. This may be a libcurl error, See https://curl.se/libcurl/c/libcurl-errors.html first for more details.
Failed to perform, ErrCode: 16, Reason: ''. This may be a libcurl error, See https://curl.se/libcurl/c/libcurl-errors.html first for more details.
```
While the CPU is heavily occupied, the other threads of the Python program are suspended; it seems the global lock is held by something the whole time. Each time the 30 s timeout expires, the other code resumes running.
Once this error occurs, any request made with that session fails, even a normal one, and even one that does not use the proxy.
Btw: this does not happen after creating a new session and transferring the old session's cookies to it; the new session works normally.
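The workaround described above (replace the stuck session, keep its cookies) can be sketched like this. It assumes the session exposes a dict-like `.cookies` with `.items()` and a `.close()` method, as a requests-compatible session does; `replace_session` is an illustrative helper, not part of the library:

```python
def replace_session(old_session, session_factory):
    # Workaround sketch: build a fresh session, carry the cookies over
    # from the stuck one, then close the stuck session so its state is
    # discarded. Assumes a dict-like `.cookies` and a `.close()` method
    # (hypothetical interface, modeled on requests-compatible sessions).
    new_session = session_factory()
    for name, value in old_session.cookies.items():
        new_session.cookies[name] = value
    old_session.close()
    return new_session
```

Callers would swap the returned session in wherever the old one was used; any logged-in state carried by cookies survives the swap.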
So far I have not found code that reproduces this condition, because it only happens occasionally (under heavy concurrency, perhaps 5-6 times a month), but setting a timeout seems to reduce the abnormal CPU usage when it does occur. I hope the observations above help you locate the problem.