Large file (2.2 GB) gets stuck when trying to upload #269
Hi @jorgetamayo21 ! Did the container fail? How long was it processing until you declared it as stale? Uploading such a big file is expected to take a long time. Btw did you try https://jtlreporter.site/docs/integrations/locust#continuous-results-uploading ?
It was processing for about 40 minutes, then the jtl-reporter page crashed. I restarted the machine and the page came back up, but the item is still stuck at "item in progress". I will implement that continuous upload method, but I am still reporting the problem to see if I can rescue the results of my test.
I managed to upload the file using https://github.com/ludeknovy/jtl-reporter/blob/main/scripts/upload_jtl.py. The problem occurred because the machine had only 8 GB of memory, and jtl-reporter exceeded that while processing the file. I increased the memory and it processed without problems. The only remaining detail is that the failed run still appears stuck as "item in progress"; a way should be added to "cancel" items that failed to be processed.
The stale item can be changed only directly in the DB for now. The only mechanism I can think of for cleaning up failed processing is to set a time threshold (e.g. 4 or 8 hours) after which the item would be declared failed and its status would be changed.
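The time-threshold cleanup described above could be sketched roughly like this. This is a minimal sketch only: jtl-reporter's real schema is not shown in this thread, so the `items` table, its `status`/`started_at` columns, and the use of SQLite (as a stand-in for the actual database) are all assumptions made for illustration.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

STALE_THRESHOLD = timedelta(hours=8)

def expire_stale_items(conn):
    """Flip items stuck 'in_progress' past the threshold to 'failed'."""
    cutoff = (datetime.now(timezone.utc) - STALE_THRESHOLD).isoformat()
    cur = conn.execute(
        "UPDATE items SET status = 'failed' "
        "WHERE status = 'in_progress' AND started_at < ?",
        (cutoff,),
    )
    conn.commit()
    return cur.rowcount  # number of items expired

# Demo against an in-memory database with a hypothetical schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id TEXT, status TEXT, started_at TEXT)")
old = (datetime.now(timezone.utc) - timedelta(hours=9)).isoformat()
recent = datetime.now(timezone.utc).isoformat()
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [("a", "in_progress", old), ("b", "in_progress", recent)],
)
expired = expire_stale_items(conn)
print(expired)  # 1 — only the 9-hour-old item is expired
```

Such a sweep could run periodically (e.g. on a scheduler) so that crashed uploads eventually leave the "item in progress" state on their own.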
Also, this seems to be the same problem as #259
Having the failed process expire after 8 hours sounds perfect to me. Regarding the memory problem, I think that if the machine running it has enough memory, it should not cause problems.
@jorgetamayo21 I've created new FR: #271
Was it processing more files at once during that time? 8 GB of RAM should be enough to process one 2.2 GB file. As mentioned, I made some changes to the part that streams the file into the DB. Do you think you could remove any sensitive information from your file and share it with me? I don't have such a big file at my disposal, and I would like to perform more tests and see if there's still a problem I could fix in the memory management.
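The memory problem described in this thread typically comes from reading an entire results file into memory before inserting it. A bounded-memory alternative is to stream the JTL (CSV) file row by row and hand it off in fixed-size batches. This is a sketch only, not jtl-reporter's actual implementation; the `handle_batch` callback and the batch size are hypothetical.

```python
import csv
import io

def process_jtl_in_batches(fileobj, handle_batch, batch_size=10_000):
    """Stream a JTL (CSV) results file, passing rows to handle_batch in
    fixed-size chunks so memory use stays bounded regardless of file size."""
    reader = csv.DictReader(fileobj)
    batch = []
    for row in reader:
        batch.append(row)
        if len(batch) >= batch_size:
            handle_batch(batch)
            batch = []
    if batch:  # flush the remainder
        handle_batch(batch)

# Demo with a small in-memory sample; a real 2.2 GB file would be opened
# with open(path, newline="") and streamed the same way.
sample = "timeStamp,elapsed,label\n" + "".join(f"{i},10,req\n" for i in range(25))
counts = []
process_jtl_in_batches(io.StringIO(sample), lambda b: counts.append(len(b)), batch_size=10)
print(counts)  # [10, 10, 5]
```

With this shape, peak memory is proportional to the batch size, not the file size, which is why an 8 GB machine can handle a 2.2 GB file.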
Give me your email and I'll send you the file.
Please send it to ludek @ jtlreporter.site
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Closing this issue; the further steps can be followed here: #259
Describe the bug
I am using the integration with Locust, and at the end of a 2 hour 30 minute test the upload started but then stalled. The file is 2.2 GB and I have not found a way to upload it; in the web UI it was stuck as "item in progress".
To Reproduce
Run a test that generates a 2.2 GB result file
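Since reproducing this requires a multi-gigabyte results file, one option is to synthesize one instead of running a 2.5-hour test. The sketch below is an assumption for illustration: the columns follow the standard JMeter CSV result fields (trimmed to a few), and the file name and values are arbitrary.

```python
import csv
import random

def generate_jtl(path, rows):
    """Write a synthetic JTL results file with the given number of sample rows."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(
            ["timeStamp", "elapsed", "label", "responseCode", "success", "bytes"]
        )
        ts = 1_700_000_000_000  # arbitrary epoch milliseconds
        for i in range(rows):
            writer.writerow(
                [ts + i * 10, random.randint(5, 500), "GET /api", 200, "true", 1024]
            )

generate_jtl("synthetic.jtl", 1000)  # scale the row count up to reach gigabytes
```

Scaling the row count into the tens of millions should produce a file in the 2 GB range suitable for exercising the upload path.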
Expected behavior
The file uploads without problems and can be viewed in the reporter.
Additional context
Upload log at the end of the Locust test:
uploading
{"itemId":"33118836-eb29-47fd-a51c-6bdd7893dd79"}
uploading
{"itemId":"3517c50d-9f1d-4138-ae53-f790199becf3"}