lodepng::decode - bug with Greyscale 16 bit #149

Open · Mikez2015 opened this issue May 25, 2021 · 3 comments

@Mikez2015 commented May 25, 2021

Hello.

I generate Perlin noise and save it to a PNG:

std::vector<u_char> pixels;
lodepng::encode("noise.png", pixels, width, height, LCT_GREY, 16);

It works correctly, I look in Photoshop and see that everything is fine.

Then I load this PNG:

std::vector<u_char> pixels;
lodepng::decode(pixels, width, height, "noise.png", 16);

And this is where the problem comes in. It looks like lodepng::decode confuses the high and low bytes.
If I manually swap the high and low bytes like this:

	for (u_int i = 0; i < pixels.size(); i += 2) {
		std::swap(pixels[i], pixels[i + 1]);
	}

Then the result is correct and everything works fine.
Is this a bug? If not, what am I doing wrong?

@cosinekitty (Contributor) commented
I just wrote a quick test and lodepng seems to be working correctly in this case. I suspect you might have a bug in how you create your buffer pixels. Also, the line of code you have for decoding does not compile, because there is no version of the decode function that takes that combination of parameters.

lodepng::decode(pixels, width, height, "noise.png", 16);

I suspect this is what you meant:

lodepng::decode(pixels, width, height, "noise.png", LCT_GREY, 16);

Here is a test program I wrote that confirms the byte order between encoding and decoding is consistent. I hope this helps. It prints the output PASS when I run it.

/*
    issue_149.cpp  -  Don Cross  <cosinekitty@gmail.com>
*/

#include <cstdio>
#include <vector>
#include "lodepng.h"

unsigned func(unsigned x, unsigned y)
{
    return (x * y) & 0xffff;
}

int main()
{
    const unsigned width = 600;
    const unsigned height = 400;
    const char * const filename = "image.png";

    // Generate a reference image in 'pixels'
    std::vector<unsigned char> pixels;
    for (unsigned y = 0; y < height; ++y)
    {
        for (unsigned x = 0; x < width; ++x)
        {
            unsigned p = func(x, y);
            pixels.push_back(p & 0xff);
            pixels.push_back((p >> 8) & 0xff);
        }
    }

    // Encode to output PNG file.
    lodepng::encode(filename, pixels, width, height, LCT_GREY, 16);

    // Read the image back in to a different buffer 'check'.
    std::vector<unsigned char> check;
    unsigned check_width, check_height;
    lodepng::decode(check, check_width, check_height, filename, LCT_GREY, 16);

    // Compare 'check' against 'pixels'.
    unsigned index = 0;
    for (unsigned y = 0; y < height; ++y)
    {
        for (unsigned x = 0; x < width; ++x)
        {
            unsigned p = func(x, y);
            unsigned q = check[index] | (check[index+1] << 8);
            if (p != q)
            {
                printf("FAIL: p=0x%x, q=0x%x, x=%d, y=%d\n", p, q, x, y);
                return 1;
            }
            index += 2;
        }
    }

    printf("PASS\n");
    return 0;
}
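
(For reference, I build the test by compiling it together with lodepng.cpp, e.g. g++ issue_149.cpp lodepng.cpp -o issue_149, assuming lodepng.cpp and lodepng.h are in the same directory as the test file.)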

@Mikez2015 (Author) commented May 25, 2021

I just wrote a quick test and lodepng seems to be working correctly in this case. I suspect you might have a bug in how you create your buffer pixels.

Thanks for the answer. Your test also works correctly. So it turns out the problem is elsewhere: Photoshop's 16-bit greyscale PNG uses the opposite byte order.
Could you tell me where the code could be changed in the library so that lodepng::decode uses the opposite byte order, so that I can use PNG files made in Photoshop without swapping the bytes manually?

@cosinekitty (Contributor) commented

I don't know much about the details of lodepng, but I did find the following comments inside lodepng.h. It looks like what you are encountering is an unfortunate consequence of using a big-endian file format on a little-endian computer. A similar unpleasant reality happens with network programming, where it is common to have to swap byte orders between network traffic and locally used multi-byte integers.

6.4. A note about 16-bits per channel and endianness

LodePNG uses unsigned char arrays for 16-bit per channel colors too, just like
for any other color format. The 16-bit values are stored in big endian (most
significant byte first) in these arrays. This is the opposite order of the
little endian used by x86 CPU's.

LodePNG always uses big endian because the PNG file format does so internally.
Conversions to other formats than PNG uses internally are not supported by
LodePNG on purpose, there are myriads of formats, including endianness of 16-bit
colors, the order in which you store R, G, B and A, and so on. Supporting and
converting to/from all that is outside the scope of LodePNG.

This may mean that, depending on your use case, you may want to convert the big
endian output of LodePNG to little endian with a for loop. This is certainly not
always needed, many applications and libraries support big endian 16-bit colors
anyway, but it means you cannot simply cast the unsigned char* buffer to an
unsigned short* buffer on x86 CPUs.
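
If you need native little-endian values in memory, here is a minimal sketch of that conversion loop, assuming the buffer came from lodepng::decode with LCT_GREY and 16 bits per channel (the helper name is just for illustration, it is not part of lodepng):

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper, not part of lodepng: turn the big-endian
// 16-bit greyscale bytes returned by lodepng::decode into native
// uint16_t values, one value per pixel.
std::vector<std::uint16_t> to_native_grey16(const std::vector<unsigned char>& bytes)
{
    std::vector<std::uint16_t> out;
    out.reserve(bytes.size() / 2);
    for (std::size_t i = 0; i + 1 < bytes.size(); i += 2)
    {
        // The most significant byte comes first in the lodepng buffer.
        out.push_back(static_cast<std::uint16_t>((bytes[i] << 8) | bytes[i + 1]));
    }
    return out;
}

After that, out[y * width + x] is the 16-bit intensity of the pixel at (x, y).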
