
Range-Requests aren't properly handled #83

Open
gsauthof opened this issue May 25, 2019 · 0 comments · Fixed by klauspost/compress#383


When wrapping a handler that supports HTTP Range-Requests (e.g. http.FileServer), gziphandler relays them as-is, thus violating the HTTP standard and breaking clients.

That is, gziphandler currently compresses the ranges returned by the wrapped handler, instead of returning ranges of the compressed output of the complete wrapped content.

The HTTP standard (RFC 7233) specifies that Content-Length is the size of the Content-Encoding output, and that range requests likewise address byte offsets into the Content-Encoding encoded representation.
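Under those semantics, a correct response to `Range: bytes=0-300` against a gzip-encoded representation whose total compressed size is 373 bytes (the size seen in the reproduction below) would carry headers like these (illustrative, not actual output):

```
HTTP/1.1 206 Partial Content
Content-Encoding: gzip
Content-Range: bytes 0-300/373
Content-Length: 301
```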

Expected behavior: either (a) gziphandler filters out the Accept-Ranges: bytes header from wrapped handler responses (and any Range: headers from incoming requests), or (b) it handles range requests itself and doesn't pass them down to the wrapped handler.

Note that implementing (b) would be considerably more involved, e.g. a range that is smaller than the configured minSize would still have to trigger compression up to the range end.

How to reproduce:

Create a handler like this:

gz_handler, err := gziphandler.NewGzipLevelAndMinSize(gzip.BestCompression, 100)
if err != nil {
    log.Fatal(err)
}
http.Handle("/static/", http.StripPrefix("/static/",
        gz_handler(http.FileServer(http.Dir("static")))))
log.Fatal(http.ListenAndServe(":8080", nil))

Request a full compressed page:

$ curl --header 'Accept-Encoding: gzip' -v -o f http://localhost:8080/static/page.css
[..]
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Encoding: gzip
< Content-Type: text/css; charset=utf-8
< Vary: Accept-Encoding
< Content-Length: 373
[..]
$ file f
f: gzip compressed data, max compression, original size 804

Note how the Content-Length is the size of the compressed content, as expected.

Getting a range:

$ curl --header 'Accept-Encoding: gzip' --header 'Range: bytes=0-300' -v -o a http://localhost:8080/static/page.css
[..]
< HTTP/1.1 206 Partial Content
< Accept-Ranges: bytes
< Content-Encoding: gzip
< Content-Range: bytes 0-300/804
< Content-Type: text/css; charset=utf-8
< Content-Length: 201
[..]

Note the following issues:

  1. The Content-Length is wrong: 201 vs. 301 (`bytes=0-300` is an inclusive range covering 301 bytes)
  2. The Content-Range total doesn't match the Content-Length of the previous response: it reports 804 (the size of the uncompressed content), while the full compressed response had Content-Length 373

And the range isn't a prefix of the previous result:

$ cmp f a
f a differ: byte 11, line 1

Another range request:

$ curl --header 'Accept-Encoding: gzip' --header 'Range: bytes=301-372' -v -o b http://localhost:8080/static/page.css
[..]
< HTTP/1.1 206 Partial Content
< Accept-Ranges: bytes
< Content-Length: 72
< Content-Range: bytes 301-372/804
< Content-Type: text/css; charset=utf-8
[... no Content-Encoding header ...]

Similar issues as before, and additionally this range isn't gzip-compressed at all.

klauspost added a commit to klauspost/compress that referenced this issue Jun 2, 2021
Fixes nytimes/gziphandler#83

Adds `KeepAcceptRanges()` for cases where, for whatever reason, you want to keep them.
klauspost added a commit to klauspost/compress that referenced this issue Jun 2, 2021
Fork and clean up+extend the dead `nytimes/gziphandler` project.

* Adds `http.Transport` wrapper.
* Includes nytimes/gziphandler#106 as well as support for stateless encoding.
* Implements a variant of nytimes/gziphandler#81
* Fixes nytimes/gziphandler#103
* Strip "Accept-Ranges" on compressed content. Fixes nytimes/gziphandler#83
* Removes testify from deps.
* Constructors boiled down.
* Defaults to this gzip package.
* Allocations reduced.

Default settings comparison:
```
λ benchcmp before.txt after.txt                                        
benchmark                         old ns/op     new ns/op     delta    
BenchmarkGzipHandler_S2k-32       51302         25554         -50.19%  
BenchmarkGzipHandler_S20k-32      301426        174900        -41.98%  
BenchmarkGzipHandler_S100k-32     1546203       912349        -40.99%  
BenchmarkGzipHandler_P2k-32       3973          2116          -46.74%  
BenchmarkGzipHandler_P20k-32      20319         12237         -39.78%  
BenchmarkGzipHandler_P100k-32     96079         57348         -40.31%  
                                                                       
benchmark                         old MB/s     new MB/s     speedup    
BenchmarkGzipHandler_S2k-32       39.92        80.14        2.01x      
BenchmarkGzipHandler_S20k-32      67.94        117.10       1.72x      
BenchmarkGzipHandler_S100k-32     66.23        112.24       1.69x      
BenchmarkGzipHandler_P2k-32       515.44       967.76       1.88x      
BenchmarkGzipHandler_P20k-32      1007.92      1673.55      1.66x      
BenchmarkGzipHandler_P100k-32     1065.79      1785.58      1.68x      
                                                                       
benchmark                         old allocs     new allocs     delta  
BenchmarkGzipHandler_S2k-32       22             19             -13.64%
BenchmarkGzipHandler_S20k-32      25             22             -12.00%
BenchmarkGzipHandler_S100k-32     28             25             -10.71%
BenchmarkGzipHandler_P2k-32       22             19             -13.64%
BenchmarkGzipHandler_P20k-32      25             22             -12.00%
BenchmarkGzipHandler_P100k-32     27             24             -11.11%
```

Client Transport:

Speed compared to the standard library for an approximately 127 KB payload:

```
BenchmarkTransport

Single core:
BenchmarkTransport/gzhttp-32         	    1995	    609791 ns/op	 214.14 MB/s	   10129 B/op	      73 allocs/op
BenchmarkTransport/stdlib-32         	    1567	    772161 ns/op	 169.11 MB/s	   53950 B/op	      99 allocs/op

Multi Core:
BenchmarkTransport/gzhttp-par-32     	   29113	     36802 ns/op	3548.27 MB/s	   11061 B/op	      73 allocs/op
BenchmarkTransport/stdlib-par-32     	   16114	     66442 ns/op	1965.38 MB/s	   54971 B/op	      99 allocs/op
```

This includes serving the HTTP request, parsing the response, and decompressing it.
CAFxX added a commit to CAFxX/httpcompression that referenced this issue Jan 25, 2022