
Post IDCT rounding needs to go - it induces noise & generational loss and takes extra time to compute #1782

Open
br3aker opened this issue Oct 19, 2021 · 29 comments


@br3aker (Contributor) commented Oct 19, 2021

The new AAN IDCT implementation I'm working on as part of the jpeg decoder optimization/refactoring provides a faster and more accurate IDCT calculation.

The current conversion pipeline has this "conform to libjpeg" step:

// To conform better to libjpeg we actually NEED TO lose precision here.
// This is because they store blocks as Int16 between all the operations.
// To be "more accurate", we need to emulate this by rounding!
block.NormalizeColorsAndRoundInPlace(maximumValue);

This takes a real quality hit (well, not 'huge' unless we're talking about a couple of re-encodings) in the form of added noise. It is especially noticeable as so-called 'generational loss'. Below is a comparison of rounding vs. no-rounding jpeg re-encoding without any change to the original image - no quality or subsampling changes.

Original image
[image: tree]

5 generations
[image: gen5_comparison]

*The bottom row is the difference between the original image and the re-encoded one.

25 generations
[image: gen25_comparison]


Of course users won't re-encode their images 5 or even 25 times, but this example shows the uselessness of IDCT rounding. Right now ImageSharp has a slower but more accurate IDCT, which should be a 'feature', not a 'downside' that must be hidden. On top of that, the rounding takes extra time to compute, so removing it is a win-win :P
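
For clarity, this is roughly what the two per-sample paths look like. This is only a sketch for illustration: the method names and the level-shift constants are assumptions, not the actual ImageSharp internals.

    // No-rounding variant: keep full float precision until the final
    // float -> byte conversion at the very end of the pipeline.
    static float NormalizeSample(float idctOutput, float maximumValue)
    {
        float value = idctOutput + 128f;                // undo the level shift
        return Math.Clamp(value, 0f, maximumValue);     // clamp into the valid range
    }

    // "Conform to libjpeg" variant: additionally snap every sample to an integer
    // right after the IDCT, emulating libjpeg's Int16 intermediate storage.
    static float NormalizeAndRoundSample(float idctOutput, float maximumValue)
    {
        float value = MathF.Round(idctOutput + 128f);
        return Math.Clamp(value, 0f, maximumValue);
    }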

Issues like #245 simply can't be solved because JFIF is a lossy image container - a 1/255 pixel miss is an expected trade-off.

Update
The same applies to the downscaling decoding from #2076.

The no-rounding code provides slightly better PSNR results, but the linked PR will ship IDCT algorithms with rounding until this issue is resolved.
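
For anyone who wants to reproduce the generational-loss figures above, a simple re-encode loop is enough. This is a sketch against the public ImageSharp API; the file names are illustrative, and whether the encoder picks up the source quality/subsampling from metadata is not handled here.

    using SixLabors.ImageSharp;

    // Re-encode the same jpeg N times. Every pass is a full decode -> IDCT -> FDCT -> encode
    // round trip, so any systematic error in the IDCT accumulates generation after generation.
    const int generations = 25;
    string current = "tree.jpg";

    for (int i = 1; i <= generations; i++)
    {
        string next = $"tree_gen{i}.jpg";
        using (Image image = Image.Load(current))
        {
            image.SaveAsJpeg(next);
        }

        current = next;
    }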

@JimBobSquarePants @antonfirsov

@JimBobSquarePants (Member) commented:

@br3aker Absolutely agree. I wasn't aware how detrimental the rounding was to quality. Feel free to remove it as part of your optimization (which I'm very excited about!)

@antonfirsov (Member) commented Oct 20, 2021

I'm generally fine with sacrificing some of our libjpeg conformance in favor of performance and actual quality, but I would prefer to see some quantitative data to understand the impact better.

Our reference PNGs were originally created by resaving libjpeg-turbo output, and we should keep doing that. The questions we should ask: What is the average change in decoder "accuracy" in percent (master vs. your branch)? Do any of our decoder tests fail with the current tolerance settings after the change?

@antonfirsov (Member) commented:

Related:

I just noticed #1694 (comment) and 2e5b0ad. As far as I remember, I used ImageMagick to generate the output, which uses libjpeg-turbo internally. Does this mean that the bug is actually not in our decoder but in libjpeg-turbo? Does Photoshop maintain its own codec?

@JimBobSquarePants (Member) commented:

I wonder whether it was an issue with png optimization?

@br3aker (Contributor, Author) commented Oct 21, 2021

Does this mean that the bug is actually not in our decoder but in libjpeg-turbo?

They have 3 different implementations for IDCT:

  1. 'slow accurate' aka float AAN with intrinsics
  2. 'fast integer' aka 16-bit integer with intrinsics
  3. 'accurate integer' aka 16-bit integer with intrinsics with more accurate but slower algorithm

The 16-bit integer path is a quality tradeoff for performance; if you didn't specify the accurate float IDCT switch manually, then ImageMagick should have generated inaccurate results.

Oh, and that test image had bad edges between colors; I guess it was saved with 4:2:0 subsampling, which led to averaged colors.

Does Photoshop maintain its own codec?

Most likely yes but we can only guess.

Do any of our decoder tests fail with the current tolerance settings after the change?

No

I would prefer to see some quantitative data to understand the impact better

Current pixel comparison tests (master -> no rounding branch):

badeof.jpg                0,3756% -> 0,4222%
Calliphora.jpg            0,0000% -> 0,1084%
cmyk.jpg                  0,0000% -> 0,0167%
jpeg400jfif               0,0000% -> 0,0001%
jpeg420small.jpg          0,2863% -> 0,3807%
jpeg444.jpg               0% -> 0,0511%
MultiScanBaselineCMYK.jpg 0% -> 0,0084%
testorig.jpg              0,3754% -> 0,4219%
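
(For reference, these percentages are the kind of number you get from a per-pixel comparison of the decoded jpeg against its reference png. A naive standalone version of such a check is sketched below; it is not the tolerance-based comparer the test suite actually uses.)

    using SixLabors.ImageSharp;
    using SixLabors.ImageSharp.PixelFormats;

    // Naive "percentage of differing pixels" between decoder output and the reference png.
    static float PixelDifferencePercent(string jpegPath, string referencePngPath)
    {
        using Image<Rgba32> actual = Image.Load<Rgba32>(jpegPath);
        using Image<Rgba32> expected = Image.Load<Rgba32>(referencePngPath);

        long different = 0;
        for (int y = 0; y < actual.Height; y++)
        {
            for (int x = 0; x < actual.Width; x++)
            {
                if (!actual[x, y].Equals(expected[x, y]))
                {
                    different++;
                }
            }
        }

        return 100f * different / (actual.Width * (long)actual.Height);
    }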

I would actually like to dig into this more, because there's clearly some extra noise in no-rounding mode; maybe it's even caused by encoder rounding errors. The problem with the current rounding implementation on master is that it produces a bit less noise but leaves some contrast stains, which are then amplified over multiple re-encodings.

[image: master_gen1]

Compare that to the more evenly distributed noise from no-rounding mode:

[image: no_rounding_gen1]

This noise stays the same across further encoding generations.

*The differences shown above are normalized; they are not spottable by the human eye in normal conditions.

@br3aker (Contributor, Author) commented Oct 24, 2021

I found a problem in the current tests - spoiled 'reference' PNG images.

Here are some example diffs between the original jpeg and the reference png:
[images: progress_difference, testorig_difference]

These are not normalized, so you can see that the difference is very much spottable by the human eye. Can I have permission to overwrite all of the jpeg reference images with a Photoshop lossless png save? @antonfirsov

@JimBobSquarePants (Member) commented:

There's a strong chance those images were spoilt by bad optimization. Since we don't store the images in the repository directly anymore, I have no issue with updating the output references if you have found issues with them.

That's assuming these are encoding references yeah?

@br3aker (Contributor, Author) commented Oct 24, 2021

That's assuming these are encoding references yeah?

Nope, decoder. I didn't check all of them, but every image I did check has lossy compression artifacts.

@JimBobSquarePants (Member) commented:

Ah sorry, I misread. So those are the references we generate by decoding a jpeg and saving it as png. Sure, go ahead then.

@antonfirsov (Member) commented:

There's a strong chance those images were spoilt by bad optimization.

If this is the case, it should be a bug in optipng, which is very unlikely IMO, since I haven't seen an issue in any other case. I think what we see is likely a difference between the proprietary Photoshop codec being accurate vs. libjpeg-turbo on default (fast integer?) settings.

@br3aker can you check, for some example image, whether there is a difference between resaving with Photoshop vs. resaving with Chrome? The latter definitely uses libjpeg-turbo with defaults.

Unless it causes unreasonable pain in finishing the optimization work, I would prefer to create reference images with an open tool that any contributor has access to, instead of Photoshop. Maybe we can utilize libjpeg-turbo with the "slow accurate" IDCT? @dlemstra is there a way to achieve this with ImageMagick?

@br3aker (Contributor, Author) commented Oct 25, 2021

I have a strong feeling that those PNGs were saved with lossy compression before entering optipng. You can see the difference without any amplification. Look at these images, for example:

.\ImageSharp\tests\Images\Input\Jpg\baseline\testorig.jpg
.\ImageSharp\tests\Images\External\ReferenceOutput\JpegDecoderTests\DecodeBaselineJpeg_testorig.png

They differ quite a bit, while resaving the initial jpeg to png via Photoshop yields no difference.

can you check, for some example image, whether there is a difference between resaving with Photoshop vs. resaving with Chrome? The latter definitely uses libjpeg-turbo with defaults.

But the reference png(s) are spoiled, not the jpeg(s). Chrome/PS would produce different results due to internal quality settings. What we want is reference pixels for testing, which come from the lossless png images - and those currently have a diff compared to the initial jpegs.

The Photoshop lossless png matters even on current master. Test results comparing pixels after jpeg decoding:

Current png reference
*** Jpg/baseline/testorig.jpg ***
Difference: 0,3754%
PS jpeg resave to lossless png
*** Jpg/baseline/testorig.jpg ***
Difference: 0,1936%

Test results on my branch with new IDCT and no rounding:

Original png reference
*** Jpg/baseline/testorig.jpg ***
Difference: 0,4219%
PS jpeg resave to lossless png
*** Jpg/baseline/testorig.jpg ***
Difference: 0,2683%

Unless it causes unreasonable pain in finishing the optimization work

It does. I didn't check all of the comparison tests, and some of them actually fail without rounding. So we have three options: leave rounding in place (I would prefer not to), raise the test pass difference threshold, or re-encode the reference images.

@JimBobSquarePants (Member) commented:

If this is the case, it should be a bug in optipng, which is very unlikely IMO, since I haven't seen an issue in any other case.

Putting my hand up here. That could have been me. I have run a different tool at times (https://css-ig.net/pinga), which I may have used incorrectly. The variance would not have been noticeable enough during our current testing, but it is noticeable enough now that we're changing things.

@br3aker (Contributor, Author) commented Oct 25, 2021

What's the point of these tests?
[screenshot: the Decoder_PixelBufferComparison test group]

They decode an image and compare pixels; this other test group does the same thing, even with the same images. Do we really need them?
[screenshot: the DecodeBaselineJpeg test group]

@antonfirsov (Member) commented:

What's the point of these tests?

They look like local debugging tests that report the difference file-by-file; we can delete them.

@br3aker (Contributor, Author) commented Oct 25, 2021

Tests from the DecodeBaselineJpeg group do the same, but they only log the difference if it's over the threshold.

@antonfirsov (Member) commented:

Tests from the DecodeBaselineJpeg group do the same, but they only log the difference if it's over the threshold.

That's what we normally need. I probably added Decoder_PixelBufferComparison to have something that prints the difference for every image and never deleted it. We don't really need it in the CI suite; we can mark it Skip = "Debug only", or even delete it.
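
In xUnit that is just a matter of setting the attribute's Skip property; the class name and body below are illustrative only:

    using Xunit;

    public class JpegDebugTests
    {
        // Kept around for local debugging only; skipped so it never runs in CI.
        [Fact(Skip = "Debug only: prints the per-image difference, DecodeBaselineJpeg covers CI validation")]
        public void Decoder_PixelBufferComparison_DebugOnly()
        {
            // ... per-image pixel comparison and difference reporting would go here ...
        }
    }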

@antonfirsov (Member) commented:

leave rounding in place (I would prefer not to), raise the test pass difference threshold, or re-encode the reference images.

I agree now that we should re-encode the images with a codec run that uses an accurate DCT. I'm only arguing about what tool to use. If we could avoid using a proprietary tool and establish some standard to make reference image creation reproducible (e.g. with the ImageMagick CLI or a custom .exe), that would be great.

This is not critical enough to block or significantly slow down the optimization PR, though. We can use Photoshop there and keep the reference image creation tooling as tech debt, with a tracking issue documenting it.

Putting my hand up here. That could have been me. I have run a different tool at times (https://css-ig.net/pinga), which I may have used incorrectly.

That doesn't explain the difference for testorig.jpg though; that difference has been there from the very beginning. I probably made a mistake creating the original file, since even a resave with ImageMagick Display reports a difference against the PNG in the repo.

Here are re-encoding results with various other tools; I wonder how they compare to the Photoshop output:

ImageMagickDisplay-LIBJPEG-LIBPNG:
[image: testorig-ImageMagickDisplay-LIBJPEG]

MsPaint-WIC:
[image: testorig-MsPaint-WIC]

@br3aker (Contributor, Author) commented Oct 25, 2021

Photoshop png
*** Jpg/baseline/testorig.jpg ***
Difference: 0,2683%
ImageMagickDisplay-LIBJPEG-LIBPNG
*** Jpg/baseline/testorig.jpg ***
Difference: 0,4220%
MsPaint-WIC
*** Jpg/baseline/testorig.jpg ***
Difference: 0,4220%

The Paint diff number is strange; the image is clearly a bit lighter than the ImageMagick one, hmmmm...

PS shows there's a difference between MsPaint and ImageMagick:
[screenshot: Photoshop showing a difference between the MsPaint and ImageMagick outputs]

Normalized diff:
[image: normalized diff]

But it has the same diff in the tests against the jpeg decoder? That is some dark magic right there.

@br3aker (Contributor, Author) commented Oct 25, 2021

They have the same artifacts compared to the initial jpeg, according to Photoshop:
[image: diff_jpeg]

@antonfirsov can you run a diff in ImageMagick between the initial testorig.jpeg and the png generated by ImageMagick?

@antonfirsov (Member) commented Oct 25, 2021

can you run a diff in ImageMagick between the initial testorig.jpeg and the png generated by ImageMagick?

I don't know how to do diffs with ImageMagick, so the output would have to be re-encoded before I feed it to a diff tool. If I compare the BMP output with the PNG output they are identical, which rules out a PNG encoder bug IMO.

@br3aker (Contributor, Author) commented Oct 25, 2021

Funnily enough, different online services produce different diffs of the current reference pngs. I guess we simply can't get a confirmed result here. Question is, are you comfortable with raising the percentage thresholds a bit?

@antonfirsov (Member) commented:

Raising the threshold can be an acceptable temporary solution, though I also consider it tech debt. The long-term solution would be a reference image generator (an ImageMagick script? a small C++ app?) that uses libjpeg-turbo with accurate IDCT settings.
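
As a sketch of what that could look like without a full C++ app: a small C# helper could shell out to libjpeg-turbo's djpeg tool (which supports -dct int/fast/float, -bmp, and -outfile) and only use ImageSharp for the lossless BMP -> PNG conversion. The paths, names, and the choice of -dct float here are assumptions, not a settled design.

    using System.Diagnostics;
    using System.IO;
    using SixLabors.ImageSharp;

    // Decode with libjpeg-turbo's accurate DCT path, then do a lossless BMP -> PNG conversion.
    static void GenerateReference(string jpegPath, string referencePngPath)
    {
        string bmpPath = Path.ChangeExtension(referencePngPath, ".bmp");

        // -dct float selects the floating point IDCT; -dct int would be the "slow accurate" integer one.
        var psi = new ProcessStartInfo("djpeg", $"-dct float -bmp -outfile \"{bmpPath}\" \"{jpegPath}\"");
        using (Process djpeg = Process.Start(psi))
        {
            djpeg.WaitForExit();
        }

        using (Image decoded = Image.Load(bmpPath))
        {
            decoded.SaveAsPng(referencePngPath); // PNG is lossless, so no extra drift is introduced
        }

        File.Delete(bmpPath);
    }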

@br3aker (Contributor, Author) commented Oct 27, 2021

Adding a little bit more fuel to the fire of my jpeg misunderstanding. This was taken directly from the ITU spec:
[screenshot: ITU spec excerpt on rounding]

It is explicitly stated that rounding is only necessary for FDCT -> Quantization, but not for Dequantization -> IDCT.

The only 'note' about dequantization is that we need to clamp dequantized values:
[screenshot: ITU spec note on clamping]
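
In code, that note boils down to a plain clamp of each reconstructed sample into the valid range; a minimal sketch for the 8-bit baseline case (the level-shift constant and method name are assumptions):

    // After dequantization + IDCT and the +128 level shift, every sample only needs
    // to be clamped to [0, 255] - the spec mandates no extra rounding step here.
    static byte ClampSample(float reconstructed)
    {
        float shifted = reconstructed + 128f;
        return (byte)Math.Clamp(shifted, 0f, 255f);
    }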

But I still don't understand why the no-rounding approach produces worse results on the first couple of images but wins in the long run :/

@JimBobSquarePants (Member) commented:

I guess we don't really have a choice then but to follow the spec ☹️

I'll admit I'm not sure why rounding would be considered beneficial, since it's destructive.

@br3aker (Contributor, Author) commented Oct 29, 2021

I'd say we should wait a bit with this. I found a couple of bottlenecks and will push fixes in smaller PRs without drastic changes like the rounding removal.

@antonfirsov (Member) commented:

But I still don't understand why the no-rounding approach produces worse results on the first couple of images

Wait, aren't the results "worse" only compared to the libjpeg and WIC runs with integer IDCT, and good when you compare to the floating point Photoshop codec run?

@br3aker (Contributor, Author) commented Oct 29, 2021

There's no way to determine what IDCT Photoshop uses. All the info I gathered is that ImageMagick has no option to set the jpeg decoder/encoder to float DCT mode.

I've posted comparison results against the PS/libjpeg references, with and without rounding, somewhere above:

Current master

Current png reference
*** Jpg/baseline/testorig.jpg ***
Difference: 0,3754%
PS jpeg resave to lossless png
*** Jpg/baseline/testorig.jpg ***
Difference: 0,1936%

No rounding & new IDCT

Original png reference
*** Jpg/baseline/testorig.jpg ***
Difference: 0,4219%
PS jpeg resave to lossless png
*** Jpg/baseline/testorig.jpg ***
Difference: 0,2683%

@antonfirsov (Member) commented:

I see, I missed that the removal of rounding also increases the difference to the PS reference.

I won't exclude then that PS also uses integer algorithms and would accumulate some generational loss the same way our current rounding codec does.

I agree that the potential removal of rounding is something we should keep on our radar instead of addressing immediately. I can contribute by creating a reference image generator app that directly uses libjpeg and libpng, but I can't commit to any timeline before finishing my memory PR and a bunch of other things.

@br3aker (Contributor, Author) commented Oct 29, 2021

I can contribute by creating a reference image generator app that directly uses libjpeg and libpng, but I can't commit to any timeline before finishing my memory PR and a bunch of other things.

This would be awesome! Don't rush; the memory work is a lot more important than this. My main concern was about generational loss; removing rounding does improve performance, but the gain is too small to spend time on it right now.
