
Memory wise / chunked file upload #1480

Open
mikehouse opened this issue Jul 28, 2021 · 2 comments

@mikehouse

Describe the bug
Out of memory error when uploading large files.

Expected behavior
A file of any size can be uploaded.

Actual Behavior
Out of memory error.

To Reproduce
Create an HTTP handler with the nng_http_handler_alloc_file function for a large file (larger than the available RAM) and serve it in response to an HTTP download request.
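
A minimal reproduction sketch (the URL, route, and file path below are placeholders, not taken from the report): register a file handler for a file larger than the available RAM and start the server.

	#include <nng/nng.h>
	#include <nng/supplemental/http/http.h>

	// Serve a single large file at /bigfile using the stock file handler.
	int
	serve_large_file(void)
	{
		nng_url *         url;
		nng_http_server * server;
		nng_http_handler *handler;
		int               rv;

		if ((rv = nng_url_parse(&url, "http://127.0.0.1:8080")) != 0) {
			return (rv);
		}
		if ((rv = nng_http_server_hold(&server, url)) != 0) {
			return (rv);
		}
		// The stock handler builds the whole response body in memory,
		// so a file bigger than RAM makes the allocation fail.
		if ((rv = nng_http_handler_alloc_file(
		         &handler, "/bigfile", "/path/to/huge-file.bin")) != 0) {
			return (rv);
		}
		if ((rv = nng_http_server_add_handler(server, handler)) != 0) {
			return (rv);
		}
		rv = nng_http_server_start(server);
		nng_url_free(url);
		return (rv);
	}

Fetching http://127.0.0.1:8080/bigfile with any HTTP client then triggers the out-of-memory failure.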

Environment Details

  • 1.5.1
  • All

Hi. I found this comment in the source code, inside the static void http_handle_file(nni_aio *aio) function:

	// This is a very simplistic file server, suitable only for small
	// files.  In the future we can use an AIO based file read, where
	// we read files a bit at a time, or even mmap them, and serve
	// them up chunkwise.  Applications could even come up with their own
	// caching version of the http handler.

I've tried different aio-based ways to read the file and write it to the response, but it didn't work. Is there an example I can look at to learn how to write data to an nni_http_res object buffer by buffer using an nni_aio object? I'd like to fork your library and modify that function (nng_http_handler_alloc_file) to make it work for large files, but I cannot find a working solution.

I've tried something like the following, and other variations, with no luck:

	// Chunked attempt: write the file to the connection's underlying stream
	// in 8 KiB pieces instead of loading the whole file into the response.
	nni_http_conn *conn = nni_aio_get_input(aio, 2);
	size_t         buf_size = 8192;
	uint8_t        buffer[buf_size];

	FILE *   file = fopen(hf->path, "r");
	size_t   read;
	nng_iov  iov;
	nng_aio *aio2 = NULL;

	nng_aio_alloc(&aio2, NULL, NULL);
	// nng_stream *stream = nng_aio_get_output(aio, 0);
	nng_stream *stream = nng_http_conn_get_stream(conn);

	// Read a buffer at a time and send it synchronously on the stream.
	while ((read = fread(buffer, 1, buf_size, file)) > 0) {
		iov.iov_len = read;
		iov.iov_buf = buffer;
		nng_aio_set_iov(aio2, 1, &iov);
		nng_stream_send(stream, aio2);
		nni_aio_wait(aio2);
	}

	fclose(file);

	nng_aio_stop(aio2);
	nng_aio_free(aio2);

	// `res` is the response built earlier in http_handle_file.
	nni_aio_set_output(aio, 0, res);
	nni_aio_finish(aio, 0, 0);
I'm not a C/C++ developer, and some things look unfamiliar to me and are hard to follow. This is the one problem left unsolved, and I cannot figure out how to approach it. The link to the function:

http_handle_file(nni_aio *aio)

I will appreciate any help.
Thank you very much for your work!

@gdamore
Contributor

gdamore commented Jul 31, 2021

First, I assume you want to download, not upload. (Meaning NNG is the server, and is sending the file to an HTTP user agent client.)

You'll have to replace http_handle_file entirely to provide streaming file support, and you'll need to implement a callback-based scheme to do this properly.

I don't think you can just get the stream and send to it, because there is important work involved in ensuring that the headers are sent out properly, and you need to send the ultimate file size a priori in a Content-Length header first.

All of this sounds somewhat complicated, and it is, which is why I opted not to do it initially. But maybe I'll find some time to fix it properly so that this works regardless of file size.
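
To make the ordering described above concrete, here is a rough sketch of a replacement handler (an illustration, not code from this issue). It assumes the public API: a custom handler registered with nng_http_handler_alloc(), the connection available as input 2 of the aio, and that a handler may write the response on the connection itself and finish the aio without setting an output response. The file path, handler name, and the blocking fread/nng_aio_wait loop are placeholders; a real fix would chain the reads and writes through aio callbacks instead of blocking a server thread.

	#include <stdio.h>
	#include <stdint.h>

	#include <nng/nng.h>
	#include <nng/supplemental/http/http.h>

	static void
	stream_file_cb(nng_aio *aio)
	{
		nng_http_conn *conn = nng_aio_get_input(aio, 2);
		nng_http_res * res  = NULL;
		nng_aio *      waio = NULL;
		FILE *         f    = NULL;
		const char *   path = "/path/to/huge-file.bin"; // placeholder
		char           clen[32];
		uint8_t        buf[8192];
		size_t         n;
		long           size;

		if ((nng_http_res_alloc(&res) != 0) ||
		    (nng_aio_alloc(&waio, NULL, NULL) != 0) ||
		    ((f = fopen(path, "rb")) == NULL)) {
			goto fail;
		}

		// Find the total size first so Content-Length can go out
		// before any body bytes.
		fseek(f, 0, SEEK_END);
		size = ftell(f);
		rewind(f);
		snprintf(clen, sizeof(clen), "%ld", size);
		nng_http_res_set_header(res, "Content-Length", clen);
		nng_http_res_set_header(
		    res, "Content-Type", "application/octet-stream");

		// 1. Write the response headers on the connection.
		nng_http_conn_write_res(conn, res, waio);
		nng_aio_wait(waio);
		if (nng_aio_result(waio) != 0) {
			goto fail;
		}

		// 2. Stream the body one fixed-size buffer at a time.
		while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
			nng_iov iov = { .iov_buf = buf, .iov_len = n };
			nng_aio_set_iov(waio, 1, &iov);
			nng_http_conn_write_all(conn, waio);
			nng_aio_wait(waio);
			if (nng_aio_result(waio) != 0) {
				goto fail;
			}
		}

		fclose(f);
		nng_aio_free(waio);
		nng_http_res_free(res);
		// 3. Signal that the reply was already sent (assumption: a NULL
		// output tells the server not to send a response of its own).
		nng_aio_set_output(aio, 0, NULL);
		nng_aio_finish(aio, 0);
		return;

	fail:
		if (f != NULL) {
			fclose(f);
		}
		if (waio != NULL) {
			nng_aio_free(waio);
		}
		if (res != NULL) {
			nng_http_res_free(res);
		}
		nng_aio_finish(aio, NNG_EINTERNAL);
	}

The handler would then be registered with nng_http_handler_alloc(&h, "/bigfile", stream_file_cb) in place of nng_http_handler_alloc_file.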

@mikehouse
Author

First, I assume you want to download, not upload. (Meaning NNG is the server, and is sending the file to an HTTP user agent client.)

Yes, I mixed it up.

Thank you for such quick feedback on the issue.

I don't think you can just get the stream and send to it, because there is important work involved in ensuring that the headers are sent out properly, and you need to send the ultimate file size a priori in a Content-Length header first.
All of this sounds somewhat complicated, and it is ...

Yes, it sounds almost impossible for me to implement all of this myself :)

But maybe I'll find some time to fix it properly so that this works regardless of file size.

Thank you, I will definitely follow the project's changes.
