Sending a zip created on-the-fly via Plug.Conn.chunk fails after around 200Mb #1220
It is hard to say what the root cause is. First, can you include the versions of Erlang, Elixir, Plug, and the web server you are using (Cowboy or Bandit)? I also believe you are supposed to retain the connection returned from `Plug.Conn.chunk/2`.
Also try downloading it in other browsers and with curl, to rule out a Chrome-specific issue.
Sorry, I forgot to add the library versions: Erlang 26.2.1. I was not using the connection returned by `Plug.Conn.chunk/2` before; my code now looks like this:

```elixir
conn =
  stream
  |> Enum.reduce_while(conn, fn chunk, conn ->
    case Plug.Conn.chunk(conn, chunk) do
      {:ok, conn} ->
        IO.puts("got here!!!")
        {:cont, conn}

      _ ->
        {:halt, conn}
    end
  end)
```

I then tried again in Chrome but got the same error. Firefox also fails, and curl reports an error as well.
I cannot reproduce it.
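The maintainer's reproduction script itself was not captured in this thread. A minimal chunked-response script in the same spirit might look like the following sketch, where the module name, payload size, and chunk count are made up, and the conn returned by each `Plug.Conn.chunk/2` call is retained as suggested above:

```elixir
# Sketch only: stream ~1GB in 1MB chunks over a chunked HTTP response.
Mix.install([:plug, :plug_cowboy])

defmodule ChunkPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    conn =
      conn
      |> put_resp_content_type("application/octet-stream")
      |> send_chunked(200)

    # Retain the conn returned by chunk/2 on every iteration.
    Enum.reduce_while(1..1024, conn, fn _i, conn ->
      case Plug.Conn.chunk(conn, :binary.copy(<<0>>, 1024 * 1024)) do
        {:ok, conn} -> {:cont, conn}
        {:error, :closed} -> {:halt, conn}
      end
    end)
  end
end

{:ok, _} =
  Supervisor.start_link(
    [{Plug.Cowboy, plug: ChunkPlug, scheme: :http, options: [port: 4000]}],
    strategy: :one_for_one
  )

Process.sleep(:infinity)
```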
Thanks for the help so far. Tomorrow I will try to create a small project that reproduces the issue and post it here.
So, I changed your script to something that I think is closer to what my code does when it triggers the issue:

```elixir
Mix.install([:plug, :plug_cowboy, :zstream])

defmodule MyPlug do
  import Plug.Conn

  def init(options) do
    # initialize options
    options
  end

  def call(conn, _opts) do
    conn =
      conn
      |> put_resp_content_type("text/plain")
      |> send_chunked(200)

    # Fake 5MB binary representing some image
    fake_binary = :crypto.strong_rand_bytes(5_242_880)

    1..150
    |> Enum.map(fn index ->
      {:ok, pid} = StringIO.open(fake_binary)

      stream =
        IO.binstream(pid, :line)
        |> Stream.map(fn chunk ->
          Process.sleep(1)
          chunk
        end)

      Zstream.entry("#{index}.jpg", stream)
    end)
    |> Zstream.zip()
    |> Enum.reduce_while(conn, fn chunk, conn ->
      case Plug.Conn.chunk(conn, chunk) do
        {:ok, conn} ->
          {:cont, conn}

        {:error, :closed} ->
          {:halt, conn}
      end
    end)
  end
end

require Logger

webserver = {Plug.Cowboy, plug: MyPlug, scheme: :http, options: [port: 4000]}
{:ok, _} = Supervisor.start_link([webserver], strategy: :one_for_one)
Logger.info("Plug now running on localhost:4000")
Process.sleep(:infinity)
```

As you can see, I created a random binary of around 5MB, since that is the mean size of the images I'm putting into the zip file. Then, for each entry in the zip, I added a sleep. The reason is that in my real code I download each image from S3 as a stream, which of course introduces some latency, so I'm using `Process.sleep(1)` to simulate it.
Thank you. I was able to reproduce the issue with Cowboy, but I could not reproduce it with Bandit. This makes me believe it is not a Plug issue but rather a web server issue. You can try Bandit in your application, or isolate this report directly on top of Erlang+Cowboy and file it upstream (if you do, I'd start by reproducing the issue without zstream).
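For reference, switching the reproduction script above from Cowboy to Bandit is a small change to the child spec. This is a sketch based on Bandit's documented options, which are passed at the top level rather than under `options:`:

```elixir
# Sketch: run the same plug under Bandit instead of Plug.Cowboy.
# Requires :bandit in Mix.install instead of :plug_cowboy.
webserver = {Bandit, plug: MyPlug, scheme: :http, port: 4000}
{:ok, _} = Supervisor.start_link([webserver], strategy: :one_for_one)
```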
Using Bandit worked great for me, thanks! I will try my hand at Erlang+Cowboy to create a small example that triggers the issue and open a new issue in their repo.
I'm having a problem when using `Plug.Conn.chunk/2` to send a stream of a zip file generated on the fly to Chrome. I'm not sure if I'm doing something wrong, but all the examples I've seen look pretty similar to mine. Here is my controller module:
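The controller code itself was attached as an image and is not captured in this thread. Based on the description (stream images from S3, zip them with Zstream, send the result chunked), a Phoenix controller along these lines would match the shape described; the module, action, and helper names here are all hypothetical:

```elixir
# Hypothetical reconstruction; not the reporter's actual code.
defmodule MyAppWeb.DownloadController do
  use MyAppWeb, :controller

  def download(conn, _params) do
    conn =
      conn
      |> put_resp_content_type("application/zip")
      |> put_resp_header("content-disposition", ~s(attachment; filename="images.zip"))
      |> send_chunked(200)

    # images/0 and fetch_image_stream/1 are hypothetical helpers:
    # one lists the images to include, the other streams an image's
    # bytes from S3.
    images()
    |> Enum.map(fn image -> Zstream.entry(image.name, fetch_image_stream(image)) end)
    |> Zstream.zip()
    |> Enum.reduce_while(conn, fn chunk, conn ->
      case Plug.Conn.chunk(conn, chunk) do
        {:ok, conn} -> {:cont, conn}
        {:error, :closed} -> {:halt, conn}
      end
    end)
  end
end
```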
As you can see, I get some images, create a stream that generates a zip from them, and send it via `Plug.Conn.chunk/2`. When I request this from Chrome, the download starts and Chrome shows it as "resuming". When the download reaches around 200MB, it fails with a network error in Chrome. I also don't get any error message in the backend; it just stops sending chunks.