
Feature request: Averaging pixels across the input files #201

Open
nathanm210 opened this issue Jun 18, 2020 · 8 comments

Comments

@nathanm210

As I understand it, HDRMerge selects each pixel of the final output from one of the input RAW LDR images, so that every pixel in the result comes from exactly one of the input files.

It would be very useful to also allow averaging across the LDR images. This helps greatly with noise reduction. The final HDR file would be a pixel by pixel weighted average of the input images.

A simple average (mean) is one approach. Some things work better with median, so ideally both could be supported.

A typical HDR scenario has N files taken at different exposures (say N = 3, and at -2 , 0, +2 EV).

In the scenario I am asking for, you would take M * N files, i.e. repeat the HDR sequence M times. For example, 3 averaged shots at ISO 400 are usually superior, with respect to noise, to 1 shot at ISO 100.

@fanckush
Contributor

Hello, I'm not sure I understand the benefit of averaging the pixels. For each pixel, HDRMerge chooses the one with the least noise possible, so if this pixel value is averaged with the rest it will end up being noisier. Example: N = 3

| Pixel | px-1 | px-2 | px-3 | Average |
|-------|------|------|------|---------|
| Noise | 20%  | 40%  | 70%  | > 20%   |

In this case HDRMerge will pick px-1 for the output file, which would be less noisy than taking the average.

Could you explain in other words what you mean by your last example of M * N?
If N = the number of exposures (say dark, normal, bright),
what is M? The same bracket at a different ISO?

@nathanm210
Author

There are several issues here.

First, the "zero noise" approach is not actually zero noise - it means "optimum noise for the ISO".
Even ISO 100 (or 50, for cameras that support it) has considerable noise. The noise might be tolerable, but it is NOT zero.

For any ISO, the amount of noise depends on the exposure - in general the brighter the area, the lower the noise. The "zero noise" approach to HDR that HDRMerge does attempts to make the shadow areas of a picture have a similar SNR (signal to noise ratio) as the brightest parts of an ISO 100 picture.

If you want really low noise you must average multiple shots. If you have 2 shots at ISO 100, the SNR will be higher by a factor of about 1.4 - the square root of 2.

Averaging 4 shots drops noise by a factor of 2. Averaging 16 shots by a factor of 4.
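The 1/sqrt(N) behavior is easy to verify numerically. A minimal sketch (hypothetical pixel value and noise level, using NumPy) that simulates repeated measurements of a single pixel:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_VALUE = 100.0   # the noise-free pixel value (hypothetical)
NOISE_STD = 10.0     # per-shot noise standard deviation (hypothetical)

def averaged_noise_std(n_frames, n_trials=50000):
    # Simulate n_trials independent stacks of n_frames noisy shots of the
    # same pixel, average each stack, and measure the remaining noise.
    shots = TRUE_VALUE + rng.normal(0.0, NOISE_STD, size=(n_trials, n_frames))
    return shots.mean(axis=1).std()

# Noise of the average falls as 1/sqrt(N):
# 4 frames -> roughly NOISE_STD / 2, 16 frames -> roughly NOISE_STD / 4.
```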

So if you have a simple HDR that has 5 brackets - at EV -2, -1, 0, +1, +2, suppose that you take 4 such brackets. So now you have four shots at EV -2, four shots at EV - 1 and so forth.

In my previous post N = number of brackets in the HDR, M = number of copies, so this would be N = 5, M = 4.

If you average the shots at each EV level, i.e. average the EV -2 shots, average the EV -1 shots, and so on, then feed these into HDRMerge (in its current form), you wind up with an HDR with a factor of 2 less noise than if you had used only one of each.

So, you might say - well the average is a pre-processing step but you still want to pick the "best" pixel.

But it is actually slightly more complicated than that.

Your example of px-1, px-2, px-3 has a problem. It is not correct to say that the average will always be > 20%. Random errors do not average that way.

Suppose that you have px-1, px-2, px-3 with the same noise levels you assume 20%, 40%, 70%.

Formally speaking, I assume that by 20% you mean that the standard deviation of the noise is 0.2.

In this case you will get the lowest noise level by making a weighted average of px-1, px-2, px-3 with weights proportional to 1/V, where V is the variance which is the square of standard deviation.

1/V in your case = 25, 6.25, 2.041, so the weights are 1/V divided by the total.

weights = {0.751, 0.188, 0.0613}

So, the average becomes the sum 0.751*px-1 + 0.188*px-2 + 0.0613*px-3

In that case the noise (standard deviation) of the average is about 0.173, which is lower than just picking the px-1 pixels, which are at 0.2
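The inverse-variance weighting is a couple of lines in NumPy; a quick check of the numbers above (same 20%/40%/70% noise levels):

```python
import numpy as np

stds = np.array([0.2, 0.4, 0.7])    # noise std of px-1, px-2, px-3
inv_var = 1.0 / stds**2             # 1/V = [25, 6.25, ~2.04]
weights = inv_var / inv_var.sum()   # ~[0.751, 0.188, 0.061]

# Std of the optimally weighted average: 1/sqrt(sum(1/V))
avg_std = 1.0 / np.sqrt(inv_var.sum())
# ~0.173 -- lower than the 0.2 of the best single pixel
```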

There is considerable academic work on HDR that bears this out. The lowest noise case is NOT obtained by picking the lowest noise pixels. It is an admirably simple strategy but it is not mathematically the way to get the lowest noise.

This paper http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.380.8576&rep=rep1&type=pdf analyzes different schemes, including the "pick the best pixel" approach that HDRMerge does.

See their Table 1. The case of RAW files is essentially the "linear camera" case discussed in this paper. They show that the variance based weighting is the best.

@nathanm210
Author

Whoops, I didn't mean to close the topic

@nathanm210 nathanm210 reopened this Jun 20, 2020
@fanckush
Contributor

Thanks for the thorough explanation. Unfortunately, my brain stopped working after this line:

> So, you might say - well the average is a pre-processing step but you still want to pick the "best" pixel.

When I read that, I thought "yes, exactly!"

From what I understood, the key word here is "random", as in "Random errors do not average that way", and that makes sense. So averaging 4 images with the same EV level will result in a higher SNR because the noise (error) varies and will be "canceled out". That's probably an inaccurate way to put it, but that's how I got it; after all, that's why we use averages: to remove noise/error.

I don't agree, however, on applying the logic above to images of different EVs. Before we do any averaging, shouldn't the images first have the same EV? I would scale the dark image by a scalar S to have the same exposure, and only then does it make sense (for me) to do any averaging.

I suppose this is where I am wrong, and that's where weighted averages come in handy, right?

By definition, pixels in the darker images of a scene, say -2 EV, will have a lower SNR than those in the brighter images. Maybe that won't be the case 100% of the time, since noise is random, but still, I find it hard to understand how averaging a dark, noisy pixel with its brighter, less noisy counterpart is better than just using the bright pixel in the first place.

This is just me not getting it; I'm sure that you are correct and that a weighted average is indeed a better approach than simply picking the brightest non-clipped pixels. Hopefully it will come to me :)

@nathanm210
Author

Your intuition that the random errors "cancel out" is roughly speaking correct.

When we talk about a pixel having noise, what we mean is that we believe that there is a true correct value for that pixel (the zero noise value) plus some noise from the camera. Every time we measure a pixel the true value is the same but the random noise is different. If we measure it many times (i.e. many different frames), then when we average the measurements we will get a lower noise estimate of the true value.

Yes this works even if there is a difference in EV.

In that case each exposure at a different EV is still a measurement of the value of the pixel. It may have more or less noise, but it is still an estimate.

Simply picking the value you expect is best is not actually the way to get the lowest noise as the paper shows.

It is better to do a weighted average - multiply each image (or each pixel) by a weighting factor, which depends on various attributes you can calculate - the paper at the link shows that using the estimated variance as the weight works best.

@kmilos

kmilos commented Aug 28, 2020

It would indeed be better (more formal) to do some sort of a weighted average, but I don't think the paper mentioned provides an out-of-the-box solution for HDRMerge, unfortunately. Some of the assumptions the authors make, like constant shutter time and constant read noise across the frames, do not universally hold in real-life applications and on "modern" CMOS sensors.

For starters, an accurate noise model must be used, and one could start with the numbers provided by the input DNGs (see the NoiseProfile tag in the spec), which I think are based on this approach: http://www.cs.tut.fi/~foi/papers/Foi-PoissonianGaussianClippedRaw-2007-IEEE_TIP.pdf (note that both the signal dependent and signal independent part are a function of analog gain, i.e. ISO).
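For reference, the DNG NoiseProfile tag encodes exactly this two-component model: the noise standard deviation of the normalized linear raw signal x is sqrt(scale*x + offset), where the scale term captures the signal-dependent (Poissonian) part and the offset term the signal-independent (read noise) part, both varying with analog gain. A minimal sketch with hypothetical parameter values:

```python
import math

def raw_noise_std(signal, scale, offset):
    # DNG NoiseProfile model: variance = scale * signal + offset,
    # where signal is the linear raw value normalized to [0, 1].
    # Both parameters depend on the analog gain (ISO) of the shot.
    return math.sqrt(scale * signal + offset)

# Hypothetical profile values; real ones would come from the DNG metadata.
# Brighter signal -> more shot noise, but higher SNR overall.
```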

But since HDRMerge deals most of the time with camera vendor raw files instead, this information is usually not exposed AFAIK, so an accurate calculation of the noise in each pixel is not readily possible. That is not to say a set of some other assumptions could not improve the current binary scheme ;)

@kmilos

kmilos commented Aug 28, 2020

@fanckush Noise adds in quadrature, so yes, it is sometimes possible to mix different variances and end up with one smaller than the constituents. Of course, if they are extremely different, you end up with most/all the weight assigned to the smallest one and you cannot improve much.

For example, let's consider two pixels from the same location in two exposures (assuming neither is clipping, of course), one carrying noise of std1=1 (let's normalize for simplicity) and the other std2=2 (after exposure compensation). The weighted average results in noise std=sqrt((std1*w)^2 + (std2*(1-w))^2). For w~0.8 you get std~0.89 (the exact minimum can be derived, of course). This translates to an additional boost of ~12% in SNR and dynamic range, or ~0.16 EV. Subtle, yes, but maybe important for some.

For std2=4 we're already at best std~0.97, so maybe we are talking about diminishing returns here... A more rigorous exploration of realistic std1 vs std2 ratios is of course needed, as is bringing in and optimizing for more frames.
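The two-frame arithmetic above can be checked in a few lines, using the inverse-variance weight w = std2²/(std1² + std2²) on the cleaner frame:

```python
import math

def merged_std(std1, std2):
    # Optimal (inverse-variance) weight on the cleaner frame, then the
    # noise of the weighted average, added in quadrature.
    w = std2**2 / (std1**2 + std2**2)
    return math.sqrt((w * std1)**2 + ((1 - w) * std2)**2)

# std2 = 2 -> ~0.894 (about 12% SNR gain, ~0.16 EV)
# std2 = 4 -> ~0.970 (diminishing returns)
```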

A good first step for HDRMerge would be to offer a basic feature: a simple average of multiple frames of the same exposure (a sqrt(N) benefit in SNR).

@nathanm210
Author

nathanm210 commented Aug 28, 2020 via email
