Saving data #150
The exact problem that Alvaro is talking about is the following: we have a queue of three tasks that together form one long experiment. In some cases we have to end the experiment prematurely during the first task. We use window.location.href = '/finish'; to skip the rest of the queue and finish the experiment, but in that case the data from part 1 are not saved. How can the data be saved and the rest of the queue still be skipped?
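One way to approach this (a sketch, not the official expfactory API) is to POST the collected results to the /save endpoint yourself and only redirect to /finish once the server has acknowledged the save. The JSON payload shape below is an assumption; match whatever your experiment normally sends on completion:

```javascript
// Sketch: save first, then skip the rest of the queue.
// The payload shape ({ trial: ... } etc.) is an assumption -- inspect what
// your expfactory experiment normally POSTs to /save and mirror that.
function saveAndFinish(data) {
  return fetch('/save', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  }).then(function (response) {
    if (!response.ok) {
      throw new Error('Save failed with HTTP ' + response.status);
    }
    // Data is stored; now it is safe to abandon the queue.
    window.location.href = '/finish';
  });
}
```

The point of chaining the redirect onto the fetch promise is that navigating away immediately can cancel an in-flight save request, which would explain data silently going missing.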
Did you mean to post this to https://github.com/expfactory/expfactory or are you using the Docker compose setup here?
We are using the Docker setup.
Not Docker compose? I'm trying to understand if you are using the second version of expfactory, or the first one here.
We deploy ExpFact using docker run ... quay.io/vanessa/expfactory-builder build |
and we use a docker nginx-proxy to manage several instances on one server. |
Okay, this is the wrong repo then - it's the newer version of expfactory, as I suspected from your description. I'm going to transfer the issue.
So I'm not sure why you opened a new issue - is this not the same as #148?
And we had this discussion about the repo last time - please remember for next time! My graduate school lab maintains the expfactory-docker repo (the first version), and I suspect they might be annoyed by the extra noise. I'm happy to help you here. :)
Could your saving issues be related to having three deployed on one server?
I would also check the size of the data that is failing to post - it might just be too big: https://stackoverflow.com/questions/2880722/can-http-post-be-limitless. If this is the issue and you want to share what the data looks like, we could discuss a method for incremental saving.
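A client-side version of that incremental-save idea might look like the following sketch. Note that this is hypothetical: the index/total/piece payload shape is invented for illustration, and the expfactory server would need matching support to reassemble the pieces:

```javascript
// Hypothetical sketch of an incremental save: split a large result string
// into fixed-size pieces and POST them one at a time, so that no single
// request exceeds the server's body-size limit. The payload fields
// (index/total/piece) are an assumption -- the server would need an
// endpoint that reassembles them in order.
function splitIntoChunks(text, chunkSize) {
  var chunks = [];
  for (var i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

function saveInChunks(dataString, chunkSize) {
  var chunks = splitIntoChunks(dataString, chunkSize);
  // Send the chunks sequentially so the server can append them in order.
  return chunks.reduce(function (previous, chunk, index) {
    return previous.then(function () {
      return fetch('/save', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ index: index, total: chunks.length, piece: chunk }),
      });
    });
  }, Promise.resolve());
}
```

Sending sequentially (via the promise chain) rather than in parallel keeps the pieces ordered, at the cost of a round-trip per chunk.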
We have about 10 instances running. Since the tasks don't consume many resources, it's better to host them all on one machine. Most of the instances work fine with this setup; it's just this last one causing trouble :-( I thought about the upload size limit and increased client_max_body_size in the nginx files inside expfactory/script/, but that didn't work... @SaschaFroelich how big do your files get? The limit is now 200 MB.
I also think the file size was too large, because when I didn't use the command window.location.href = '/next';, the experiment got stuck with a 413 server error. I don't know how big the data files get, since I haven't been able to store the data yet, but I think it shouldn't be more than 10 MB. @AlvaroAguilera: what was the limit before?
Does the solution to #134 help?
413 is indeed "Payload Too Large", so if that is the error you are facing, the server is still not accepting the upload. I do think we need to support additional endpoints for uploading data in pieces, e.g.,
And then you would probably still use
Is there a way to probe your
Indeed this is a good question - the default nginx in the container only allows 20M: https://github.com/expfactory/expfactory/blob/master/script/nginx.gunicorn.conf. @earcanal if this fixes the issue for them, it might make sense to make this the default, or to expose it as a variable for the builder. The experiments are rather chonky!
We fixed the problem by adding/increasing client_max_body_size in both the nginx config files of expfactory and in the nginx-proxy. Thank you for your helpful input, as usual.
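For reference, the directive in question looks like this (200m mirrors the limit mentioned above; pick a value that fits your data). The key point from this thread is that it must be raised in every nginx layer the request passes through, not just one:

```nginx
# In the http, server, or location block of EACH nginx config on the
# request path: the expfactory container's own nginx AND the nginx-proxy
# in front of it. The default is small (the container ships with 20M).
client_max_body_size 200m;
```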
Dear Vanessa et al.
we are experiencing problems with long tasks from lab.js. Sometimes the files with the results are not stored, or the automatic forwarding to the next task in the queue doesn't work. How can we go about this? We have increased all the timeout values we could find, but this doesn't solve the problem, and there is no usable information in the logs. Have you experienced this behavior before? Do you have a code snippet for manually saving data with /save?
Thanks