
Film Negative - Improper assumption of mapping between raw values and transmission coefficient #7063

Open
Entropy512 opened this issue Apr 29, 2024 · 11 comments


Entropy512 commented Apr 29, 2024

Short description
The film negative tool tends to produce a black-level offset for negatives that were exposed darker. This has previously been described as intended behavior, consistent with Wikipedia's article at https://en.wikipedia.org/wiki/Photographic_film#Film_basics, but it is not.

Specifically, Wikipedia states that "the transmission coefficient of the developed film is proportional to a power of the reciprocal of the brightness of the original exposure", e.g.:
light = (1/t)^p = t^-p
where t is the transmission coefficient of the film.

The film negative tool instead implements:
light = (1/v)^p = v^-p

where v is the raw value after white balance. v will only equal t if one of the following holds (see the numeric sketch after this list):

  • White balance was disabled and the backlight and camera exposure are adjusted so that the orange mask is EXACTLY the WhitePoint of the camera, or
  • White balance is enabled and the orange mask is a value of (1.0,1.0,1.0) after white balance multipliers are applied
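
To make that relationship concrete, here is a minimal numpy sketch (illustrative numbers only, not RT code): when the orange mask does not sit at 1.0 after white balance, v = k*t for some per-channel constant k, and the inverted output picks up an extra factor of k^-p:

import numpy as np

p = 1.5                          # inversion exponent (illustrative)
t = np.array([1.0, 0.5, 0.1])    # true transmission coefficients; 1.0 = unexposed orange mask
k = 0.25                         # orange-mask value after white balance (not at the white point)
v = k * t                        # what the tool actually sees

print(np.power(t, -p))           # simple model applied to t: approximately [1.0, 2.83, 31.6]
print(np.power(v, -p))           # applied to v: every value gains a factor of k**-p = 8
print(np.power(v / k, -p))       # rescaling v by the sampled mask value recovers the t-based result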

Steps to reproduce
Invert a film negative with significant dark regions (for example, pure orange mask). Observe that the output values are not black but grey

Expected behavior
Negatives captured with the orange mask not exactly at the camera white point should still have the orange mask invert to black, not grey.

Additional information
Applies to current git master; I'll add an example image if desired.

The inversion tool needs to bring back sampling of the orange mask in addition to the two neutral points, so that input to the inversion algorithm can be scaled such that the values of the orange mask are (1.0, 1.0, 1.0)

This could be achieved in the low-level processing by adding the following to doProcess in filmnegproc.cc:

// Scale factors chosen so that the sampled orange-mask values map to 1.0
rscale = 1.0/rmask;
gscale = 1.0/gmask;
bscale = 1.0/bmask;

Where rmask, gmask, and bmask are the sampled values of an orange mask location

followed by changing the actual implementation code to:

            rlineout[j] = CLIP(rmult * pow_F(rlinein[j]*rscale, rexp));
            glineout[j] = CLIP(gmult * pow_F(glinein[j]*gscale, gexp));
            blineout[j] = CLIP(bmult * pow_F(blinein[j]*bscale, bexp));

I can easily make the processing code changes, but I'm pretty horrible at GUI changes

Entropy512 added a commit to Entropy512/RawTherapee that referenced this issue Apr 29, 2024
A raw white point or white-balanced raw data do not correspond to a transmission coefficient of 1

Prescale all input data so that the sampled values of an orange mask map to full scale.

Fixes issue Beep6581#7063

NOT COMPLETE:  Need to add text for UI elements

Entropy512 commented Apr 30, 2024

I bodged together a proof of concept, and realized that my prescaling was constantly fighting against exposure compensation.

This is a rough duplicate of what I just edited into my post on Pixls, but even Wikipedia puts information immediately to the right of its "simple model" formula showing that the formula is not valid in the shadows. Unlike the idealized model, real film has a toe in the shadows.

This can be observed by plotting the characteristic curves for Fuji Superia X-Tra 400 from https://asset.fujifilm.com/master/emea/files/2020-10/9a958fdcc6bd1442a06f71e134b811f6/films_superia-xtra400_datasheet_01.pdf against the model. (I digitized the data with WebPlotDigitizer; I'll attach the CSV and put my plotting script in a gist.)
[Plot: scene_vs_tcoeff]

Note that at the right-hand side of the graph (tcoeff = 1), the simple model implies 4 EV more scene light than what was actually recorded.

[Attachment: fuji_superia_400_density.csv]

Plotting script at https://gist.github.com/Entropy512/5c82cb57408ac70ba263a3a0cb410b43
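
For reference, a minimal sketch of that comparison (my own illustration, not the gist's code), assuming logexp and density are the digitized (log10 exposure, density) columns for one channel, with density measured relative to the film base so that tcoeff = 1 for unexposed film:

import numpy as np

def simple_model_gap_ev(logexp, density, p=1.5):
    # Compare digitized characteristic-curve data against the "simple model" light ~ t^-p
    tcoeff = np.power(10.0, -density)              # density D = -log10(t)
    model_logexp = np.log10(np.power(tcoeff, -p))  # = p * D, up to an arbitrary scale factor
    # Roughly anchor the model to the main body of the data, as in the plot above
    model_logexp += np.median(logexp - model_logexp)
    # EV by which the simple model overstates the scene light; several EV near tcoeff = 1
    return (model_logexp - logexp) / np.log10(2.0)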

I'm not sure what the best way to handle this is. A painful workaround is to use a tone curve in Standard mode to compensate for this effect by adding a toe, but since the tone curve is applied AFTER exposure compensation and also after the Output Level adjustment in filmneg, it must be re-adjusted any time exposure is adjusted in either tool.

I'm not sure what the best way to handle this is. Another curve tool???

I realize that @rom9 probably wasn't pinged when I created this issue


Entropy512 commented Apr 30, 2024

I've come up with an enhanced model that seems to fit published film response curves pretty well (but there might be a better formula for this?). It's Python with numpy currently.

EDIT: The original model I posted has serious problems at certain scale factors. The below code is a simplified version. Plots have not yet been updated.

import numpy as np

tcoeff = np.linspace(0.01, 1.0, 500)  # transmission coefficient of the film (example input range)

scalefac = np.power(10, -2.8)  # calculated so the "simple model" matches the main body of the film data; needs to be chosen differently for RT
exp = 1.5                      # the exponent from the current implementation
scenelin = np.power(tcoeff, -exp) * scalefac
refin = scalefac
refout = np.power(10, -3.62)   # this probably needs to be derived from scalefac rather than scalefac being derived from Fuji's units
curvestr = 2.0                 # depends on the film; for Fuji Superia 400 it's 1.2 for red, 2.0 for green, 2.6 for blue
adj = np.power(refin, curvestr) - np.power(refout, curvestr)
new_lin = np.power(np.power(scenelin, curvestr) - adj, 1.0 / curvestr)

[Plot: enhanced_film_model]

The question is how to implement this model without blowing up into slider-itis... (a rough sketch of the full pipeline follows below)

  • First we clearly need normalization of the orange mask to 1.0 (or an appropriate consistent reference value that corresponds to a transmission coefficient of 1).
  • Then we need to choose the reference point in relation to the "simple" model at a transmission coefficient of 1.0. This is about 3 EV down for Fuji, although it's HIGHLY nonlinear there; maybe just choosing 3 EV will be enough for most films, with the curve strength being more important???
  • Then we need to expose the curve strengths for each color, unless it turns out that having the same strength for everything is "good enough" for most applications (I'm not so sure this is the case, though, since there's a significant delta between red, green, and blue in the Fuji data I have...)

Or do we just start putting film profiles with the curve strength, reference ratio, and powers in rtdata as presets?
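
For discussion, a rough end-to-end sketch of that pipeline: normalize by the sampled orange mask so it corresponds to a transmission coefficient of 1, invert with the simple power model, then bend the dark end down with the toe correction. This is my own illustration, not RT code; the function name, the default exponent, the 3 EV delta, and the curve strength are all just example values.

import numpy as np

def invert_channel(raw, mask, exponent=1.5, ev_delta=3.0, curvestr=2.0):
    # Illustrative per-channel inversion: raw and mask are white-balanced linear values
    # for one channel; mask is the sampled orange-mask value for that channel.
    tcoeff = raw / mask                        # normalize so the orange mask maps to t = 1.0
    simple = np.power(tcoeff, -exponent)       # simple model: scene light ~ t^-p

    # Toe correction: at t = 1 the real film recorded roughly ev_delta EV less scene light
    # than the simple model predicts, so pull the dark end down accordingly.
    refin = 1.0                                # simple-model output at t = 1 (overall scale is arbitrary)
    refout = refin * np.power(2.0, -ev_delta)  # where the toe-corrected curve should land at t = 1
    adj = np.power(refin, curvestr) - np.power(refout, curvestr)
    return np.power(np.power(simple, curvestr) - adj, 1.0 / curvestr)

# The orange mask itself now inverts far below the simple model's 1.0 (about 0.125 here)
print(invert_channel(np.array([0.35, 0.2, 0.05]), mask=0.35))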

Entropy512 commented:

Rough proof of concept at https://github.com/Entropy512/RawTherapee/tree/filmneg_tcoeff

Needs some serious UI work. I'm probably going to make the transmission coefficient scaling factors behave similarly to the exponents - a "main" exponent and then two ratios, allowing someone to scale all three points with one slider and then finetune the ratios if necessary.

Actual film has differing behaviors in the toe, but so far I'm not getting any serious color cast issues by using a single curve "bendiness".

I'm getting incredible results with a fairly low amount of work/individual image fiddling (basically just exposure and white balance) now, and not having to try and enhance contrast. I'll try to dig up a sample that highlights the improved shadow behavior but doesn't include people who might not want to be used as an example. Unfortunately, most of my best examples that I've digitized so far have people in them.


nomar500 commented May 1, 2024

Hi @Entropy512, thanks for filing this one!
I haven't had time to pause and read through your considerations yet.
But if you need sample shots, maybe I can help.
Here's a typically nasty shot on fresh Kodak Gold 200: https://drive.google.com/file/d/1UKrVS9Bjh_toiefFjFvP6YOyIrTeYlhl/view?usp=drive_link

Here's an easier one, on Fuji Industrial 400, with many more possible reference points: https://drive.google.com/file/d/1sqtB62nLU0hnJkAPhH_myyjYbWF4Iixr/view?usp=drive_link

Let me know if these are of any use or if I should look for other shots I might have in my library.


nomar500 commented May 1, 2024

Also, please let @rom9 know directly on pixls :)


Entropy512 commented May 2, 2024

I THINK he should have been notified with my reply at https://discuss.pixls.us/t/any-interest-in-a-film-negative-feature-in-rt/12569/373?u=entropy512 but who knows. Sometimes pixls notifications are inconsistent.

Poking at your examples soon. Keep in mind that having a sample of unexposed film captured under identical conditions (same backlight, same shutter time, etc.) is EXTREMELY beneficial here.

The Gold 200 happens to be something I have existing data for, so the biggest challenge is figuring out what the black point (white in RAW) is. I'll actually upload an example taken with Gold 400 back in the late 1990s soon (tomorrow morning?), as the only subject in the photo is me in high school.

Looking at your examples - what did you use as a backlight??? Both shots are SEVERELY lacking in green illumination. As a result it's really hard to get a decent balance in the shadows because I don't know what the actual "orange mask" balance is. It REALLY helps to have an unexposed film sample taken with the EXACT same backlight conditions here... Also helps to have a more balanced backlight. I've been doing monochromatic backlight captures (see https://discuss.pixls.us/t/digitizing-film-using-dslr-and-rgb-led-lights/18825/31?u=entropy512) and so I adjust the backlight so that the orange mask is nearly white in my captures.

I can probably work around a highly unbalanced backlight, but I really need an unexposed film sample of the orange mask taken under identical conditions. Monochromatic RGB capture REALLY does help here though.

The Gold 200 shot really needs a better (more balanced) backlight setup and/or at least an "orange mask" reference. The BMW shot seemed to come out pretty well despite not being able to find ANY datasheet for Fuji "Industrial 400" and using Superia X-Tra 400 data instead.
[Image: D75_2657]
[Image: fuji_indus400_bmw320i]
And an example of me back in high school (Kodak Gold 400 for this one), approximately 30 years ago:
[Image: neg10-1]


rom9 commented May 2, 2024

Hi, sorry for the late reply. I also saw the post on pixls, but I didn't have the time to respond, sorry :_(
The concept is interesting, I'll take a look at the code this Sunday, I promise :-)
Sorry for the delay, and thanks for your work!

rom9 self-assigned this May 2, 2024

nomar500 commented May 2, 2024

hi @Entropy512,
Thanks for your feedback.
The backlight I'm using has a CN-T96 LED panel (video light), and I'm doing ETTR as much as possible.

I'm not sure I understand:

  • the raw histogram for the greenery shot does not look like it's missing much green,
  • I intentionally shared this shot because it has a bit of the much-needed border.
    [Image]

Now about matching the specs from the datasheet: I'm all for doing something about non-linearities, but,

  1. are we sure we want to compensate for this toe (and perhaps shoulder as well?) non-linearity based on specs?
  2. depending on film aging and development (has it been push-processed?), how reliable is this toe characterisation? And is it reasonable to apply a different film's datasheet to another type of film? (Typically, Fuji Superia 400 is not Industrial 400.)


Entropy512 commented May 2, 2024

@rom9 - No problem, we're all busy, and no hurry. I'll probably rework the mask scaling to be ratiometric by then. Or maybe remove the mask scale slider entirely and require whitebalancing on the mask first.

@nomar500 - Whoops, I said green but I actually meant blue. Normally before inversion, un-whitebalanced shots are rich in green leading to a greenish tint, but this was yellow before inversion due to the blue peak being way lower than red and green. I also missed that bit of border.

Interestingly, while my mask picker SHOULD render the white balance tool redundant, I get better results if I white balance on the mask and THEN use the mask picker to finetune.

BTW, you might want to look into a backlight that has monochromatic R, G, and B components. A lot of people are using things like phone displays (good uniformity, and many are OLED now). Using a broad-spectrum source means that, in addition to the original spectral response of the film to the scene, you have the spectral response of the dyes AND the spectral response of the camera. Using monochromatic illumination for each channel greatly reduces crosstalk between the channels, leaving the original SSF of the film as more of a factor relative to everything else. A bit more discussion on the topic at https://discuss.pixls.us/t/digitizing-film-using-dslr-and-rgb-led-lights/18825

I've gone a step further than just having three monochromatic components: I composite a raw file from three single-color captures, as described in the link above. My last post in that thread links to a script which automates the process for Sony cameras (it might work for others, but I need to make it more robust to different behaviors of shutter speed and capture target) and a Neewer RGB176 light, which is pretty cheap. The light that I use is sadly discontinued, but other Neewer lights may use the same protocol. This completely eliminates the camera's SSF and leaves the only crosstalk coming from the dye spectral response (which is small).

As to film data - so far in actual usage, I'm finding:

  • While the datasheets for the film show very different curve behaviors for each color in the toe, using a single curve "strength" seems to give really good results, to the point where I'm not sure if going to separate strengths is really that useful except in ????? corner cases
  • I'm finding myself using far lower curve "strengths" than a model fit implies - sometimes less than 1.0. "strength" isn't really a good word for this, as lower numbers have a more pronounced effect. For example the model fit for Fuji Superia X-Tra 400 has an EV delta of 2.72 and a curve strength of 1.5 for R, with G at 2.0 and B at 2.6. But I'm using a fixed value of 1.05 in the marching band shot I posted and an EV delta of 4.
  • A benefit of a gentler curve is that it increases tolerance to errors in estimating the mask's maximum transmission coefficient/maximum input value.

Where the film datasheet is REALLY useful, and still seems valid even for aged/pushed/whatever film, is the scene spectral sensitivity response. When using the capture technique I described above, instead of using the camera color profile as the input profile, I use a color profile for the film derived from the published SSFs, and it seems dead on for the Superia, and also dead on for Kodak despite me applying Gold 200 SSF data to Gold 400.


rom9 commented May 11, 2024

Hi @Entropy512, I finally managed to try your modification. As you mentioned:

Or maybe remove the mask scale slider entirely and require whitebalancing on the mask first.

I don't think that the prescale sliders are doing exactly the same job as the WB tool; keep in mind that the WB multipliers are applied to the raw values, before applying the camera input profile. If you have a DCP profile, the WB multipliers are also parameters for the input profile processing.
That said, I also think that the prescale multipliers here are redundant: they do the same job as the output scale sliders (the Cool/Warm and Magenta/Green sliders in the filmneg tool).
Since the negative inversion process is a simple exponentiation, applying a coefficient to the input should be the same as applying a coefficient to the output, just a different number:
(x*k)^n = (x^n) * (k^n)
So, I would keep the "Pick orange mask spot" button, save the spot values in the profile parameters, and remove the sliders.
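
(For reference, a quick numeric check of that identity with arbitrary values; both lines print the same number, up to floating-point rounding:)

import numpy as np

x, k, n = 0.42, 1.7, 1.5
print(np.power(x * k, n))                # coefficient applied to the input
print(np.power(x, n) * np.power(k, n))   # equivalent coefficient applied to the output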

Regarding the toe curve, I've noticed you used the same pair of parameters for the R, G and B channels; is this intended? Or do you think that each channel could have different toe curve parameters?

Previously, I also noticed the non-linearity at the edges of the density range (see my posts on pixls around March/April 2021), and I wanted to take that into account. I thought that each channel would need a dedicated toe AND shoulder curve, and that would create an amazing number of sliders... maybe it was a bit too much :-D


Entropy512 commented May 11, 2024

Hi @Entropy512, I finally managed to try your modification. As you mentioned:

Or maybe remove the mask scale slider entirely and require whitebalancing on the mask first.

I don't think that the prescale sliders are doing exactly the same job as the WB tool; keep in mind that the WB multipliers are applied to the raw values, before applying the camera input profile.

In a proper color management workflow for film inversion (the tool is set to work in the input color space), the prescale sliders are also applied to raw values. Note that this workflow requires consideration of the camera's influence on things at the time of capture; that's why many people recommend a backlight that consists of three monochromatic sources (such as an OLED display) to reduce the impact of the camera SSF and the film dyes, and all professional film scanners take this a step further and composite three captures, each with a monochromatic backlight. See https://discuss.pixls.us/t/digitizing-film-using-dslr-and-rgb-led-lights/18825 (I think I already linked this before?), also https://github.com/Entropy512/rgb_led_filmscan/blob/main/capture_negative.py. Capturing in this manner almost entirely eliminates the camera SSF and dye SSF from the picture except for some VERY minor crosstalk, so that color management becomes a matter of inverting in the input color space and then applying a color profile for the film and not the camera!

If you have a DCP profile, the WB multipliers are also parameters for the input profile processing.

Not when your DCP profile is generated from the film SSF ( https://github.com/Entropy512/rgb_led_filmscan/blob/main/ssfcsv_to_json.py )

That said, I also think that the prescale multipliers here are redundant: they do the same job as the output scale sliders (the Cool/Warm and Magenta/Green sliders in the filmneg tool). Since the negative inversion process is a simple exponentiation, applying a coefficient to the input should be the same as applying a coefficient to the output, just a different number: (x*k)^n = (x^n) * (k^n). So, I would keep the "Pick orange mask spot" button, save the spot values in the profile parameters, and remove the sliders.

After further investigation, this is indeed true for the simple model. It ceases to be true for the "enhanced" model: at that point, having the correct Dmin/maximum transmission coefficient point accurately selected becomes critical.
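
To illustrate why, here is a small numeric sketch using the same illustrative toe correction as the pipeline sketch earlier in the thread (not the RT code): with a pure power law an input scale factor can be moved to the output, but once the toe term is subtracted after exponentiation, an error in the chosen Dmin/input scale changes the shape of the curve rather than just its level.

import numpy as np

def toe(simple, curvestr=2.0, refin=1.0, ev_delta=3.0):
    # Toe correction from the illustrative enhanced model (example parameters)
    refout = refin * np.power(2.0, -ev_delta)
    adj = np.power(refin, curvestr) - np.power(refout, curvestr)
    return np.power(np.power(simple, curvestr) - adj, 1.0 / curvestr)

p = 1.5
t = np.array([1.0, 0.5, 0.1])     # transmission coefficients; 1.0 = orange mask
k = 0.8                           # error in the chosen Dmin / maximum-transmission point

a = toe(np.power(t * k, -p))          # toe applied to the mis-scaled input
b = toe(np.power(t, -p)) * k ** (-p)  # same factor moved to the output instead
print(a / b)                          # not a constant ratio: the two are no longer equivalent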

So far, when white balancing on the orange mask, it seems like only a single slider is necessary to determine the appropriate white level, which SHOULD be the same across all channels. I'm considering moving to a single slider here for this reason, although I have concerns that in some capture cases the white balance tool's lack of direct R, G, and B tuning could be problematic. It's already known to be problematic for "nontraditional" scenarios: https://discuss.pixls.us/t/extreme-camera-white-balance-is-off-in-rawtherapee/43193

Regarding the toe curve, I've noticed you used the same pair of parameters for the R, G and B channels; is this intended? Or do you think that each channel could have different toe curve parameters?

I think I already commented that this is something up for discussion/further thought. My observations so far:

  • While the actual film model has three very different curve strengths, having a single "strength" seems to work OK in most scenarios
  • A lower "strength" actually has a more pronounced visual effect, so I think a better name is needed for this
  • I've found that a "strength" much lower than the model fit often works better for me, such as a "strength" of 1.0 or even less when the model says 2.0 for one channel, 1.8 for another, and 2.8 for another

All in all it's a tradeoff between the ability to fine tune for corner cases, and having way too many sliders in the GUI for normal use cases. I don't know if there's some way to have an "advanced mode" switch that exposes more sliders when desired but hides them by default...

Previously, I also noticed the non-linearity at the edges of the density range (see my posts on pixls around March/April 2021), and I wanted to take that into account. I thought that each channel would need a dedicated toe AND shoulder curve, and that would create an amazing number of sliders... maybe it was a bit too much :-D

The toe curve seems to be significantly more important than the shoulder curve. It WOULD be nice to somehow take into account the shoulder that is seen for Kodak film (but, interestingly, NOT for Fuji Superia X-Tra 400), but not correcting for it seems acceptable, since doing so would make the model even MORE complex.

Me in December 1998:
[Image: neg13-1]
[Attachment: neg13-1.jpg.out.pp3.txt]

Grumble, GitHub doesn't allow attaching DNGs, and I had to rename the pp3 to pp3.txt; I'll find a way to upload the DNG later (maybe I'll just put these on the pixls thread?)

I have now implemented autofit of models in my density analyzer (https://github.com/Entropy512/rgb_led_filmscan/blob/main/density_plot.py).

Fuji Superia X-Tra 400:
[Plot: fuji_superia_400_model]
Note that there is no shoulder!

Kodak Gold 200:
[Plot: kodak_gold200_model]
Extremely strong shoulder

Kodak Ektar 100:
[Plot: kodak_ektar100_density]
I originally thought "wow no shoulder for this one" when looking at the PDF, but there is a shoulder, just FAR less pronounced than for Gold 200
