
OpenEXR export support #197

Open
Rab-C opened this issue Jan 6, 2020 · 57 comments

@Rab-C

Rab-C commented Jan 6, 2020

Hi there, really impressed with the app. It's been mentioned a couple of times in passing (most recently in #166, iirc) but I would also really appreciate EXR export.

The relevant library is IlmImf, which is now maintained by the Academy Software Foundation as part of the OpenEXR repository at:

https://github.com/AcademySoftwareFoundation/openexr

where the 2.4.0 release is currently getting hacked on.

There's a helpful, but not hugely up-to-date library integration guide here:

https://www.openexr.com/documentation/ReadingAndWritingImageFiles.pdf

And also an EXR best practices section at the end of the technical introduction guide here (also sadly a bit long-in-the-tooth):

https://www.openexr.com/documentation/TechnicalIntroduction.pdf

Finally, I don't know what your lead dev environment is, but I almost came a cropper trying to build OpenEXR on Win10, until I read the first few posts on the issues page & grabbed the latest code.

I'm leaving this info here as much as a public signpost as anything (and OK, I confess, a secret prayer to the code pixies) but I will dig in and try to put some time & effort in when I (ha ha ha) have a bit of free time.

Cheers for reading,

Rab.

@ilia3101
Owner

ilia3101 commented Jan 6, 2020

I love the idea of EXR, but the whole point of it is high dynamic range isn't it?

MLV App compresses the high dynamic range raw image to a low dynamic range image (tonemapping), and after that all processing is done in 16 bit integer, so everything stays within the 0-1 range. It's great for making nice videos, but not for HDR output. But I do like the idea of converting processing to floating point, I think MLV App has outgrown int16.
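To make that contrast concrete, here's a minimal sketch (hypothetical helper names, not MLV App code) of why a 0-1 integer pipeline can't carry overbrights while a float pipeline can:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch, not MLV App code: in a 16-bit integer pipeline
 * where 1.0 maps to 65535, any scene-linear value above 1.0 (an
 * "overbright") has to be clipped before the store, so the extra
 * highlight information is gone for good. */
static uint16_t encode_int16(float linear)
{
    if (linear < 0.0f) linear = 0.0f;
    if (linear > 1.0f) linear = 1.0f;   /* overbrights clip here */
    return (uint16_t)(linear * 65535.0f + 0.5f);
}

/* A floating point pipeline just carries the value through unchanged,
 * so a highlight at, say, 3.2x diffuse white survives into an EXR. */
static float encode_float(float linear)
{
    return linear;  /* no clipping, no quantisation */
}
```

So `encode_int16(3.2f)` and `encode_int16(1.0f)` both come out as 65535, while the float path keeps them distinct.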

Keeping this in mind, is EXR still useful in your opinion?

@Rab-C
Author

Rab-C commented Jan 6, 2020

[Woop, woop - WARNING! Wall'o'text incoming! - Woop, woop... Warning!]

[Also, I'm very junior in my area of work - I hope I'm not teaching anyone to suck eggs here! Not trying to sound super-expert - just wondering if passing on what I've seen might be helpful...]

For me, EXR is absolutely essential, but I'm aware my case might be a bit niche. Over the last year or so I've been working my way into the more high-end post industry, and at least in my experience in the movie/TV/ad space, EXR sequences are now pretty much all I see. So I have to pull DNGs out from the MLV container & convert to EXR either way, I don't have much choice in the matter.

I know from my own background that DNG sequences & import into Resolve are a perfectly normal workflow in the prosumer/indie sector. And they suit their purpose, with known work-arounds for their downsides. I don't think they'll disappear soon in that area.

But for my type of work, EXR has won out, and it's just a matter of how many tools I need to use to get into EXR. And I'd prefer it to just be 1, & for that 1 to be MLV-App, because it's a nice friendly interface, with rapid updates to MLV format support (Fast DNG Converter still can't support anamorphic MLV-resize, for instance, despite the pricetag), and although I'll always pay for a professional tool, I believe in open source.

I'd say the big reasons for EXR winning out in the sector I'm getting into (& therefore being valuable to me to ask for export support) are:

  • EXR is explicitly defined by SMPTE ST 2065-4 as the format for ACES. That relationship is getting even closer in ACES 2.0, with the same people maintaining both codebases. And ACES is pretty widely expected now.
  • EXR is always scene-linear. Moving from software to software, and vendor to vendor, I've seen some weird, weird (mis)interpretations of DNG gamma & colorspace - EXRs have always come out the same.
  • EXR supports 16-bit float, 32-bit integer, or 32-bit float channels, any of which allows full-range representation of any extant camera system, with overbrights. I guess that HDR-ness is only going to get more important as we move to the various HDR deliverable formats that different broadcasters are currently rolling out. Current DNG workflows aren't going to cut it.
  • I know DNG - after a 7 year gap - finally put out a new spec a few months ago to support depth maps and additional image data. I personally think that's likely to make the misinterpretation problem worse, and it's just too late for trying to have a format war for HDR imaging - EXR already got adopted, and has already supported everything I've read in the new DNG spec for years. I think that's a common view among my acquaintances.
  • The canonical implementation of DNG is Adobe's, the canonical EXR is open source & now maintained under the umbrella of a non-commercially-competitive part of the industry.
  • Because of that, Nuke, Scratch, Smoke etc. have all felt free to build in excellent EXR import, standardization & sequence scrubbing out of the box. DNG... Not so much. You end up having to use helper apps. So another layer of translation tools or import codec-providers - with the attendant workload & possibility of error - disappears if you can jump straight into EXR from source.
  • The multi-channel nature of EXRs (with an arbitrary number of channels of any arbitrary types) has become so, so useful. It's used for everything when you're compositing a plate with CGI elements, different passes, even different grades, without messing with the original data. Especially when feeding back across different sites or vendors. All with good lossless compression & colour-handling. It's the de facto pipeline, at least in the places I've been. That all starts in my line of work with an EXR plate that ideally has been taken as directly & as "purely" as possible from capture to ingest.
  • When something has come up that hasn't been EXR, it's been .DPX, because that's the 'traditional' format from a film scanner, I guess. DNG is probably a very distant 3rd at this point *in the material I've seen, **in my relatively short time, ***as a pretty junior assistant (enough asterisks to limit any claim to omniscience, I hope!).
  • EXR's even become the file format of choice for a new generation of grading tools - for instance Baselight, or the Avid & Nuke Baselight plugins, save their grades in a '.BLG' file format that's actually just an EXR. Which means you can take it back apart again in any EXR-supporting software like Nuke: Original picture on one channel/"layer", alternate grades on others, none destructive of the original plate.

At all of these stages, any tonemapping for viewing, or the very few operations that are better done while temporarily converted out of linear-usually-float-land, are handled internally by the software package, rather than applied to the underlying original image data. The pipeline explicitly expects the linear EXR - just with varying channels representing progress - all the way along until final, when a look is baked in during the only non-internal-&-temporary colourspace/gamma transformation the image should go through since entering the pipeline. It's a good way to work, IMO, and EXR is what makes it manageable.

And I guess at a meta-level, having been involved in maintaining a few projects in my time: the fact that there's no currently-up-to-date, user-friendly, cross-platform pipeline straight from MLV to EXR with the features MLV App offers in a nice usable interface means there's a sizeable professional audience out there waiting for a tool they can use for just that job daily, and support, and recommend to others. MLV App is already great, but this is another area where it could become a great de facto choice. Again, in my opinion. (I do have a vested interest, after all!)

Blimey. I'd better stop writing before this text box crashes Github...

Thanks for your time! All the best,

Rab

@masc4ii
Collaborator

masc4ii commented Jan 7, 2020

The question was not how nice EXR is. The question was whether it might be useful to fill an EXR file (meant for linear, high dynamic range picture information) with processed, non-linear, low dynamic range picture information. In other words, it might be a kind of Ferrari without a motor.

@ilia3101
Owner

ilia3101 commented Jan 7, 2020

@Rab-C It seems like supporting EXR would add significant value to MLV App for your industry. I would like to do it if there's documentation for the exr library. Also do you know if EXR has built in colour gamut metadata?

@masc4ii If we do this, I will do a wrapper library like I did for AVFoundation, to make it as easy as possible from the Qt side.

Also: it would actually be quite easy to add non-baked linear floating point output for EXR to MLV App, by skipping most of the processing. Unfortunately this means you won't be able to use most of the adjustments - saturation, contrast, dark/light strength - is that important to you, Rab? They are just grading tools and technically modify the true linearity. But it could be possible to adapt some of them to HDR, just so that you can do very small corrections while mostly keeping the linear nature.

@Rab-C
Author

Rab-C commented Jan 7, 2020

Sorry, I misunderstood.

I thought you meant you were considering that it might be time to change how extended highlight data within the raw readout from the Canon sensor is piped through MLV App's processing engine. I jumped to the thought of a new, additional internal float "pipe-through" option to keep overbrights where they are, above 1 - potentially offering future access to MLV App's really helpful de-squeeze, demosaic, chroma, gamut etc. choices without getting involved in the compression curves, remapped values, or clipping that currently transform image data out to its export container. That would also be an opportunity to add EXR export. I got carried away, I suspect, seeing what I wanted to see in your comment, rather than what you were actually saying.

Is the tonemapping/compression stage unavoidable in the App engine? If so, you're absolutely right, dropping to EXR rather than DNG wouldn't really be any advantage.

At the moment the only converter I know of that I feel I can really trust to pull raw Canon data directly to EXR without affecting the scene-linear values along the way is rawtoaces, which has its own downsides, such as forcing a single gamut option, and not having all MLV App's extra helpful capabilities. But it does still keep those >1 overbrights in place. Most other apps I've tried have involved either remapping LUTs/curves (such as Andy600's Cinelog solution to get overbrights into Adobe), which is the sort of intermediary I'd like to eliminate, or end up losing the extended highlight info (Adobe's own apps), or have downsides that don't ultimately make them very attractive (e.g. Fast DNG, as noted in my now-completely-irrelevant essay above).

I was thinking in a "wouldn't it be great if MLV App could do it all in one go!" sort of way, I guess. Which I have to admit may well be me - as I tend to! - dreaming of a perfect solution rather than paying enough attention to the words in front of me.

Sorry if that's what happened, or if I've still misunderstood.

@Rab-C
Author

Rab-C commented Jan 7, 2020

Ah, managed to be writing the above when you posted your reply update! Will do a bit of work & read-through later!

@Rab-C
Author

Rab-C commented Jan 7, 2020

OK, work can wait a little bit longer :) And great, it does sound like there's potentially a way forwards, which is a fantastic thought.

So yes, EXR export support, omitting the MLV App functions which need a transform out of linear to operate, but therefore preserving the overbrights, would be fantastic. Any of the "creative"-type functions - saturation, contrast, dark / light strength etc. - would always be saved for later in the pipeline & the app/operator/pre-written template assigned/permitted that kind of choice anyway. This is more about needing a good technical tool.

I do have to get on with some work, but I shall look through MLV-App again later & pop back with a suggestion list of what functions would be helpful to keep available if they could still work in scene-linear (top of my head: Cut in-out, debayer algorithm, focus pixel map), (have to think about white balance point & gamut a bit more). And which ones would pretty much always be put aside in favour of the tools in Nuke/Scratch/Baselight/whatever even if they could be made to work in linear within MLV App (e.g. all grading-type functions from colour tint downwards, and probably de-squeezing due to company preferences on interpolation algorithm, definitely denoising, for instance).

Great stuff, thanks!

@Rab-C
Author

Rab-C commented Jan 7, 2020

Ah, and yes, Ilia, EXR does allow for extensive metadata, including gamut - and I believe even per-channel colorspace, since there can be several per file (not something MLV App would ever have to think about tho, as far as I can see, since it would be originating single-channel-colorspace files). I have Nuke scripts that read out & branch on metadata.

There's also a flag that can be set to say the EXR is ACES compliant, if it's the correct colorspace etc.

Tho, to be honest, those scripts also have to have error-catching because I do see EXRs where the gamut is in the filename rather than properly passed through metadata.

@masc4ii
Collaborator

masc4ii commented Jan 7, 2020

@ilia3101 : yes, exactly this way, but multi-platform, if possible.
@Rab-C : for now we can generate (RAW-)DNG files (all processing is skipped) or processed clips (ffmpeg/AVFoundation, all processing done). I expect EXR to be a kind of debayered DNG, but not processed any further?! Otherwise linearity will be lost and there is no advantage at all compared to the existing export options (at least I don't see any advantage).

@bouncyball-git
Collaborator

bouncyball-git commented Jan 7, 2020

Liked the "motorless Ferrari" analogy :)

In short, as Ilia said, MLV App is not ready for prime time (true EXR export) until a full floating point processing pipeline is implemented (IMHO preferably based on the reliable/compatible OCIO lib) to correctly handle overbrights etc.

@ilia3101
Owner

ilia3101 commented Jan 7, 2020

@ilia3101 : yes, exactly this way, but multi-platform, if possible.

Definitely would be multiplatform

@Rab-C : for now we can generate (RAW-)DNG files (all processing is skipped) or processed clips (ffmpeg/AVFoundation, all processing done). I expect EXR to be a kind of debayered DNG, but not processed any further?! Otherwise linearity will be lost and there is no advantage at all compared to the existing export options (at least I don't see any advantage).

Debayer, exposure, highlight reconstruction, and camera matrix (white balance + gamut conversion) can all be done and it's still linear. As well as raw corrections like stripes and dark frames.
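A sketch of why those stages are safe (illustrative code and values - not MLV App's actual implementation or matrix): exposure and the camera matrix are linear maps, so they scale overbrights right along with everything else instead of bending or clipping them.

```c
#include <assert.h>

/* Illustrative sketch, not MLV App code: a 3x3 camera matrix plus an
 * exposure gain is one combined linear map. Doubling the input exactly
 * doubles the output, so scene-linear ratios - including values above
 * 1.0 - pass through untouched. */
static void apply_matrix_and_exposure(const float m[9], float exposure,
                                      const float in[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = exposure *
                 (m[r*3 + 0]*in[0] + m[r*3 + 1]*in[1] + m[r*3 + 2]*in[2]);
}
```

With the identity matrix and an exposure of 2.0, an overbright input channel of 3.0 simply becomes 6.0 - nothing is remapped or lost.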

In short, as Ilia said, MLV App is not ready for prime time (true EXR export) until a full floating point processing pipeline is implemented (IMHO preferably based on the reliable/compatible OCIO lib) to correctly handle overbrights etc.

True EXR export can actually be added quite easily: if most of the processing is skipped, we get everything preserved. The first stage (matrix) is done in floating point anyway, and people who need EXR don't need the rest of the processing. I think it's a good idea to add EXR as soon as possible, because it adds a lot of value for high end post production.

@Rab-C What do you think about OCIO? Does rawtoaces use it?

What debayer does rawtoaces use? MLV App could possibly become a better rawtoaces...

@Rab-C
Author

Rab-C commented Jan 7, 2020

I think @ilia3101 @masc4ii & myself are managing to get onto a wavelength here, which is pretty exciting.

But for my own clarity, and maybe for anyone who still thinks this is a pointless idea, I'll try to write a reply to @bouncyball-git that cashes out what I'm suggesting.

With ref to your last comment, @bouncyball-git: actually, my understanding (always with the proviso that I'm often an idiot, but I do try not to be! I get stuff wrong, I might be wrong about this!) is that this is about taking the data out of MLV App's processing pipeline earlier, while it is still the original scene-linear. The whole point is to avoid using much of a processing pipeline beyond that point, not to build a new one within MLV App. Avoiding pulling in anything like OCIO is kind of the whole point. The OCIO transforms are - well, they're data transforms. Exactly what we're trying to get out before.

I'm not a huge fan of the Ferrari metaphor, but it is vivid. So let's see if my brief perusal of the repo helps me tie it to actual code a bit better. Obviously you guys will know your code base waaay better than my amateur riffling-through, but if I've got the wrong end of the stick, please let me know & I'll be able to grasp things a bit better.

If I understand correctly

  1. MLV-App starts by bringing in raw data as 32-bit unsigned int (e.g. src/mlv/raw.h: lines 46-134). So to tie in the metaphor, we are starting with a pro-spec Ferrari. And

  2. by the time we get through the first operation - debayer - (e.g. src/debayer/debayer.c lines 115-121, because we're at "debayerto[ j ] = MIN((uint32_t)red1d[i], 65535);") we still have our Ferrari.

To answer @masc4ii - yes, to simply take the data out at this point - basically untouched other than the debayer - to a valid EXR would be immensely helpful. Especially with a lossless compression method & metadata, but even without. But because MLV App is an app that offers much more than that, it continues to

  1. Add the things people like in a car, such as aircon & gps, & looking through src/processing/raw_processing.c & src/processing/processing.c I can see that
    a) some of those can still leave our Ferrari engine intact (e.g. setting in and out points) and
    b) some mean that we have to take out a few cylinders from our racetrack Ferrari either to make space to work, or to make the resulting car road-legal (for example src/processing/denoiser/denoiser_2d_median.c: line 52, denoise_2D_median needs to be able to bring image data in as an uint16_t).
    c) working out which helper functions can be easily offered without messing with the engine, and indeed which ones would even be valued by likely end-users if they can be, is what I was suggesting I make a start on above. So

  2. At the moment, by the time we end up exporting, we have modified to a helpful, useful road-legal car, but not the Ferrari we could have had if we exited the processing chain earlier, sure in the knowledge that it was only going to be used on a custom-built track by people with a licence.

@Rab-C
Author

Rab-C commented Jan 7, 2020

@ilia3101 Ninja'd me again ;) Will go back & read the new post...

@ilia3101
Owner

ilia3101 commented Jan 7, 2020

this is about taking the data out of MLV-App's processing pipeline earlier, while it is still the original scene-linear. The whole point is to avoid using much of a processing pipeline beyond that point

We are certainly on the same wavelength!

@ilia3101
Owner

ilia3101 commented Jan 7, 2020

If I understand correctly

1. MLV-App starts by bringing in raw data as 32-bit unsigned int (e.g. src/mlv/raw.h: lines 46-134). So to tie in the metaphor, we are starting with a pro-spec Ferrari. And

2. by the time we get through the first operation - debayer - (e.g. src/debayer/debayer.c lines 115-121) because we're at " debayerto[ j ] = MIN((uint32_t)red1d[i], 65535);") we still have our Ferrari.

Small corrections:

  1. raw data is brought in (unpacked) to 16 bit, not 32 bit - this is more than enough, as the camera's pixels are only 10, 12 or 14 bit anyway.

  2. For debayer, the data is converted to floating point, as most of the algorithms need it to be that way. We may also do a temporary white balance/channel gain, so that debayer can correlate channels and detect details better. After debayer is done, the channel gains are brought back to original levels. Then the data is converted back to 16 bit integers. By this point, the debayer may have added some imaginary information, so the 16 bits are put to slightly better use.

The line of code MIN((uint32_t)red1d[i], 65535); looks like it's converting to 32 bit int, but really it's converting a floating point value to a 32 bit int, then limiting its maximum value to 65535, so that it can be converted to 16 bit without overflow - this is not limiting the dynamic range in any way. The only reason an overflow may happen is the debayering algorithm overestimating a pixel.

After these first two stages, the rest of the processing happens, if I remember correctly, starting with white balance/gamut conversion, highlight reconstruction and exposure - right after this part, we can export to EXR.
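For reference, the guard described above can be sketched like this (a standalone reconstruction for illustration, not the actual MLV App source):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone reconstruction of the quoted debayer guard: the float
 * result is cast to a 32-bit unsigned int first, then limited to 65535,
 * purely so the store into a uint16_t cannot wrap around if the debayer
 * slightly overshoots a pixel. It does not reduce dynamic range - the
 * data at this point is already normalised to the 16-bit range. */
static uint16_t store_u16(float debayer_result)
{
    uint32_t i = (uint32_t)debayer_result;
    return (uint16_t)(i < 65535u ? i : 65535u);
}
```

Without the intermediate 32-bit cast and MIN, an overshoot like 70000.0 would wrap around to a small value when stored into 16 bits.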

@masc4ii
Collaborator

masc4ii commented Jan 7, 2020

Debayer, exposure, highlight reconstruction, and camera matrix (white balance + gamut conversion) can all be done and it's still linear. As well as raw corrections like stripes and dark frames.

Yapp... this would be best. So this sounds indeed more or less like EXR = "a debayered DNG".

After these first two stages, the rest of the processing happens, if I remember correctly, starting with white balance/gamut conversion, highlight reconstruction and exposure - right after this part, we can export to EXR.

So whitebalance is already baked into the EXR?

@Rab-C
Author

Rab-C commented Jan 7, 2020

Thanks for the corrections, @ilia3101.

I believe rawtoaces brings in through libraw, and I've suddenly realized I don't know what the details of their implementation are, other than that it complies with SMPTE ST 2065-4.

I shall go and have a look at the AMPAS source & ST 2065-4 to see what's what.

Other than that, it's an early start for me tomorrow morning, so I'll just say a big thank you for a fruitful & interesting day's discussion!

@ilia3101
Owner

ilia3101 commented Jan 7, 2020

Debayer, exposure, highlight reconstruction, and camera matrix (white balance + gamut conversion) can all be done and it's still linear. As well as raw corrections like stripes and dark frames.

Yapp... this would be best. So this sounds indeed more or less like EXR = "a debayered DNG".

Kind of. EXR is also more straightforward to use, as the image is already in a standard colour space like ACES, not weird camera colour that needs different conversions for different temperatures.

After these first two stages, the rest of the processing happens, if I remember correctly, starting with white balance/gamut conversion, highlight reconstruction and exposure - right after this part, we can export to EXR.

So whitebalance is already baked into the EXR?

I think there is no rule to say it can't be. The only rule is that it's linear - whatever suits the user. Even if not white balancing for the light, it could be used to compensate for the tint of a lens. And because EXR is linear and floating point, any white balance can be undone and changed very easily afterwards... even if the EXR is in a small gamut like rec709, because the channels can still go negative, therefore all out of gamut colours will be preserved.
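A tiny sketch of that point (hypothetical code and values, not MLV App's): in linear float, white balance is just per-channel gains, so it round-trips exactly, and negative channel values - out-of-gamut colours in a small gamut - survive too.

```c
#include <assert.h>

/* Sketch, not MLV App code: white balance in a linear float image is a
 * per-channel multiply, so it is exactly invertible later on. The
 * channels are also free to go negative, which is how out-of-gamut
 * colours survive a small-gamut encoding like Rec.709. */
static void white_balance(const float gain[3], float rgb[3])
{
    for (int c = 0; c < 3; c++) rgb[c] *= gain[c];
}

static void undo_white_balance(const float gain[3], float rgb[3])
{
    for (int c = 0; c < 3; c++) rgb[c] /= gain[c];
}
```

With power-of-two gains the round trip is bit-exact; with arbitrary gains it recovers the original to within float precision, which is still far better than anything an integer pipeline can offer.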

@Rab-C do you think white balance is useful with EXR? Or do you need to keep all colours at their exact chromaticities and do it later at the "grading" stage?

I believe rawtoaces brings in through libraw, and I've suddenly realized I don't know what the details of their implementation are, other than that it complies with SMPTE ST 2065-4.

Ah ok, so it could be using AMaZE debayer like MLV App.

@bouncyball-git
Collaborator

bouncyball-git commented Jan 8, 2020

Well, count me in on that wavelength as well, because I want any possible pro format implemented in MLV App. And I agree that tapping the RGB data before any post processing is the only correct way to save untouched data to the EXR format.

@Rab-C: It's just that you are not aware of some problems we had/have in the colour pipeline:

  1. some remaining post-debayer aliasing, which should've been reduced by white balance before the debayer algo, but that is not actually working 100%.
  2. no accurate colors below 3000K (channel clipping etc.)

Well, Ilia knows about it better than me 😁

@ilia3101

After these first two stages, the rest of the processing happens, if I remember correctly, starting with white balance/gamut conversion, highlight reconstruction and exposure - right after this part, we can export to EXR.

After this, processing is done in 16 bit again. So if we need to write half/full float EXR, we have to convert it back to float, right?

@bouncyball-git
Collaborator

bouncyball-git commented Jan 8, 2020

@Rab-C

Finally, what I wanted to say is that the 2 processing approaches you described are not the same at all.

  1. Export DNG, hence untouched raw data, and convert it with some PRO editing/compositing software. In this case, the debayer and RAW-to-desired-colour-space conversion are done by that software.
  2. Export EXR directly from MLV App. The debayer and colour conversion are done by MLV App's pipeline (which is great, but as I've said, not ready for prime time yet).

The results will be definitely different...

@Rab-C
Author

Rab-C commented Jan 8, 2020

Just a quick answer, @ilia3101 bcos I'm at work, but I've been thinking about white balance & chromaticities.

I think in getting EXR export up and running, we should think in terms of a 'principle of minimum intervention'. There will be potential features that would be nice-to-haves - white balance picking, colorspace transforms, an 18% picker are all things that immediately spring to mind as candidates for a sort of 'Swiss Army knife'-type tool, for instance.

But on reflection, that's not the priority, and those functions are perhaps better left, as you say, to other tools further down the post pipeline, so that people who want to adopt MLV App as part of their process can set their own templates for how they treat/translate the incoming data, to get it into whatever format their pipeline requires. Anyone we're considering as a potential user for this feature has plenty of options for that kind of operation already available to them. So if we leave teams to use their own (audited, internally-project-coordinated) tools of choice for any other types of transform, rather than us trying to give people the tools inside MLV App to tailor that data to their pipeline in advance, I think that will make development easier & suit most use-cases well.

That way also, if they don't like the computational approach we might choose on this side of the ingest, or if they have in-house proprietary algorithms that they would prefer to use over the open source approaches MLV-App offers, they can simply write a Nuke Script/whatever to do it the way they would prefer, rather than having to dig into how MLV-App has done it, and reverse / compensate accordingly.

And I think that - at least in any initial feature release - would include white balance.

I would suggest a feature roadmap that looked something like:

.01 Get scene-linear data out as untouched as possible into a scanline EXR; no compression, skeleton metadata structures.
.02 Offer the additional lossless compression methods from within the EXR library; all relevant metadata available from the MLV source correctly translated into EXR.
.03 EXR formatting & metadata now at a stage to validly offer an option to flag exported EXRs as ACES-compliant.

would mean that even by the time .02 was out, & .03 was getting hacked on, MLV-App would be the best available option for the job.

[Ha, ha, "just a quick answer" ;)]

@ilia3101
Owner

ilia3101 commented Jan 8, 2020

@bouncyball-git

After this, processing is done on 16bit again. So if we need to write Half/Full Float EXR we have to convert it back to float right?

Correct, but it is still floating point right after the matrix, which is when we'll export to EXR - all overbrights and whatever else are preserved. The EXR will not get the rest of the processing afterwards, and doesn't really need it.

And yes, there is a channel clipping problem under 3000K, but with float and wide gamut EXR nothing would end up clipped, and it should be handled correctly when graded with software at a later stage.

The clipping is caused by out of gamut colours + individual channel clipping. I have been working on some fixes for this issue. Seems like the best solution is to intersect with the RGB cube on the line between the out of gamut colour and grey of the same luminance. I will add this to MLV App soon. This is only relevant to normal processing, not for EXR.
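That intersection can be sketched roughly like this (my reading of the description above, using Rec.709 luma weights as a stand-in for "grey of the same luminance" - not the actual fix being worked on):

```c
#include <assert.h>

/* Rough sketch of the described fix, not the real implementation:
 * pull an out-of-gamut colour along the straight line towards grey of
 * the same luminance, stopping where that line meets the RGB cube.
 * Assumes the luminance itself lies inside [0,1]. */
static void clip_towards_grey(float rgb[3])
{
    const float w[3] = {0.2126f, 0.7152f, 0.0722f}; /* Rec.709 luma weights */
    float y = w[0]*rgb[0] + w[1]*rgb[1] + w[2]*rgb[2]; /* grey is (y,y,y) */
    float t = 1.0f; /* fraction of the way from grey back out towards rgb */
    for (int c = 0; c < 3; c++) {
        float d = rgb[c] - y;
        if (rgb[c] > 1.0f) { float tc = (1.0f - y) / d; if (tc < t) t = tc; }
        if (rgb[c] < 0.0f) { float tc = (0.0f - y) / d; if (tc < t) t = tc; }
    }
    for (int c = 0; c < 3; c++) rgb[c] = y + t * (rgb[c] - y);
}
```

Because the whole pixel slides toward grey by one common factor t, hue shifts stay small compared with clipping each channel independently.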

@ilia3101
Owner

ilia3101 commented Jan 8, 2020

@Rab-C Thanks for the quick answer :)

@ilia3101
Owner

ilia3101 commented Jan 8, 2020

Looks like rawtoaces does do white balance.

@bouncyball-git
Collaborator

bouncyball-git commented Jan 8, 2020

@ilia3101

Good to know!
Eagerly awaiting the all-float solution!!! 😀

@Rab-C
Author

Rab-C commented Jan 8, 2020

I noticed that when I started poking into the repo as well, @ilia3101.

The next question I was going to try to answer for myself, but ran out of time on, was whether that was because they are hardcoding to always put out ACES-spec EXRs, or whether it was intrinsic in the debayering to any kind of EXR.

@Rab-C
Author

Rab-C commented Jan 8, 2020

The answer to that might be more obvious to someone who knows the maths better than I do - as we go through this I'm learning how much I thought I knew, but actually just had a general overview of. In some areas, the minute I dig down to "yes, but how is that actually done mathematically" I basically just find a little note from my brain that says "here be dragons".

@Rab-C
Author

Rab-C commented Jan 8, 2020

Just saw your rawtoaces issues question in time to avoid being ninja'd again.

I've spent a few hours poking around in the rawtoaces repo & digging out a couple of my old textbooks, trying to build an understanding of what the white balance maths in rawtoaces, and the different command-line white balance source options, are doing.

And whether the adjustment made to the chromaticity matrix based on the variable white balance is a specific part of the adjustment they make to the IDT transform that supports standardizing into ACES AP0.

And if so, whether non-ACES EXRs (in their various different colorspaces) would want the same adjustments done, or whether they would want to use the camera-specific default chromaticity matrix - whether libraw, Adobe or Canon-supplied - without any other variable adjustment being applied to reflect different white balances.

In other words, whether variable white balance needs to be included in all variations of any debayer-and-export-EXR support from the "0.1" above beginning, or whether it is an optional part of the code that will become relevant for ACES-compliance at "0.3".

I'm still trying to build that understanding to my own satisfaction. If anyone knows the answer, please feel free to put me out of my headache-inducing tech-spec-trance!

@Rab-C
Author

Rab-C commented Jan 8, 2020

If it's not clear why I'm worrying, the only really acceptable way to find per-shot white balance in most places I've seen is to use a colorchecker. Because there's a high number of shots to do that on (at least one colorchecker per lighting set up) people either use the tools built-in to their post apps, or script their own gizmos, to auto-find the colorchecker in-shot & extract the correct per-shot white balance adjustment automagically in batch. And then some workflows apply the same adjustment to other shots in the same setup (the card isn't shot each time). Using whatever method / algorithm / colorspace is chosen for their pipeline. Which may well not be ACES.

If the goal is to let people do whatever operations they want with whatever algorithm they specify at whatever stage of their pipeline they choose, I was thinking it was better to default to not applying any transformation - or optional part of a transformation - automatically if there's a putative valid EXR variant that wouldn't want it applied earlier in the process. If so, it's important to know whether producing an EXR by applying the default chromaticities to the linear debayer data, with no further reference to variable white balance, is a valid option.

If so, I'd say it's desirable to explicitly separate out the maths rawtoaces does in their case to involve variable white balance in IDT into AP0 from the rest of the process MLV-App would use to debayer & create an EXR.

Again, please let me know if I seem to be worrying unnecessarily or have got the wrong end of the stick.

@Rab-C
Copy link
Author

Rab-C commented Jan 9, 2020

Think I've reached my conclusion.

I don't believe variable white balance is relevant for building a basic EXR from the linear bayer data, just the default chromaticity matrix. If the chromaticity values used in that process are properly added into the EXR metadata, then when an end-user takes the file into their pipeline, they have everything they need to do their work in the default colorspace, or use their standard toolset for transforming into ACES using the tools already chosen for that purpose, and the recorded chromaticities from the metadata.
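To illustrate what a downstream tool can do with those recorded chromaticities, here's a numpy sketch of deriving the RGB-to-XYZ matrix from the kind of xy values an EXR `chromaticities` attribute carries (Rec.709 primaries are used purely as example values; a camera-native colorspace would supply its own):

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Derive an RGB->XYZ matrix from xy chromaticities - the same maths a
    downstream tool applies to an EXR 'chromaticities' attribute."""
    # Unscaled XYZ columns for each primary (Y normalised to 1)
    M = np.array([[x / y, 1.0, (1.0 - x - y) / y] for x, y in primaries]).T
    xw, yw = white
    W = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])
    S = np.linalg.solve(M, W)   # scale factors so that RGB (1,1,1) -> white
    return M * S                # scale each column by its factor

# Example: Rec.709 primaries + D65 white reproduce the familiar sRGB matrix
M709 = rgb_to_xyz_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                         (0.3127, 0.3290))
```

Given that matrix (or its inverse), the end user can move the linear data into whatever working space their pipeline uses, ACES or otherwise.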

I think we should consider even putting aside my suggested "0.3". ACES is hard, and it's about to change considerably with the introduction of the new parametric transforms (although not - expectedly - on the input side) and there are already tools to manage that transform process into ACES AP0 available to end users in the apps we're targeting here, as long as they're given enough starting info. Which the linear data & chromaticity metadata will do.

My 10c-worth, anyway.

@ilia3101
Copy link
Owner

ilia3101 commented Jan 9, 2020

I think it can't be hard to export true ACES. All it needs to be is linear, and in ACES AP0 gamut. What else could be needed to comply with ACES standard?

@Rab-C
Copy link
Author

Rab-C commented Jan 9, 2020

it can't be hard to export true ACES

Well, I guess "hard" is always relative to brain power ;) I find it hard. You, my friend, may find it a walk in the park with your coding brain.

There's quite a lot more to it to meet the standard for an ACES-compliant EXR, yes. This is one of those things that I keep at an overview level for myself, because I don't believe anybody can grasp everything at its lowest & highest levels of comprehension, and because quite frankly some of the transform details & dimensional stuff makes my head spin.

But the TLDR version is that you apply an IDT matrix, with exactly which one you apply depending on the camera model, what the scene illuminant was, what colorspace and input format we're in. And then transform the now spectral-response-standardized values into AP0. Add in the minimum-acceptable header flags & metadata as you export the EXR, and - hey presto - ACES-compliant EXR.
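For illustration, those two hops are just matrix multiplies. A numpy sketch follows: the XYZ-to-AP0 matrix is the ACES project's published one, while the IDT argument is a placeholder for whichever per-camera matrix applies (Adobe, Canon, or measured):

```python
import numpy as np

# Published ACES matrix: CIE XYZ -> ACES AP0 (ACES white point)
XYZ_TO_AP0 = np.array([
    [ 1.0498110175,  0.0000000000, -0.0000974845],
    [-0.4959030231,  1.3733130458,  0.0982400361],
    [ 0.0000000000,  0.0000000000,  0.9912520182],
])

def camera_rgb_to_aces(cam_rgb, idt_camera_to_xyz):
    """The two matrix hops of the TLDR: camera RGB -> XYZ via a per-camera
    IDT matrix (placeholder argument here), then XYZ -> AP0."""
    cam_rgb = np.asarray(cam_rgb, dtype=np.float64)
    full = XYZ_TO_AP0 @ np.asarray(idt_camera_to_xyz, dtype=np.float64)
    return cam_rgb @ full.T    # works per-pixel on (..., 3) arrays
```

The real complexity is in choosing the IDT and filling in the required container metadata, not in the multiplies themselves.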

I like the Top Gear style "How hard can it be?" but I know people who've found it too much of a maze & have bailed. But if you are up to the challenge, I know I would be impressed & would use the feature at some point.

I just would also be happy to take an EXR feature that didn't go that far & handle the transform to ACES inside my other tools, which would be less work to implement.

But wow, if it's of interest I can point you to the reference implementation:

https://github.com/ampas/aces_container

And see if I can nab a copy of SMPTE ST2065-4 from work's IEEE sub, if that would help?

@Rab-C
Copy link
Author

Rab-C commented Jan 9, 2020

(Plus of course you've already seen an example of the IDT process in the rawtoaces code we've been discussing.)

@ilia3101
Copy link
Owner

ilia3101 commented Jan 10, 2020

But the TLDR version is that you apply an IDT matrix, with exactly which one you apply depending on the camera model, what the scene illuminant was, what colorspace and input format we're in. And then transform the now spectral-response-standardized values into AP0.

Seems like this is exactly what MLV App can do already :) Our IDT matrices are currently the daylight/tungsten ones from Adobe (rawtoaces uses these for most cameras, but can generate its own from spectral data).

MLV App does these things:

  1. Convert camera RGB to XYZ using the Adobe IDT matrix
  2. Transform to LMS space using the CIECAT02 matrix
  3. Do chromatic adaptation (white balance) in LMS space
  4. Transform back to XYZ
  5. Transform XYZ into the user-selected output gamut (AP0 is already one of the options!)
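The white-balance core of those steps (2-4) can be sketched in a few lines of numpy. The CAT02 matrix values are the published CIECAM02 ones; the source/destination whites and the surrounding IDT/output matrices are left out as placeholders:

```python
import numpy as np

# CIECAT02 matrix (XYZ -> LMS), as published with CIECAM02
M_CAT02 = np.array([
    [ 0.7328,  0.4296, -0.1624],
    [-0.7036,  1.6975,  0.0061],
    [ 0.0030,  0.0136,  0.9834],
])

def white_balance_cat02(xyz, src_white, dst_white):
    """Von Kries-style chromatic adaptation in CAT02 LMS space
    (steps 2-4 above)."""
    lms_src = M_CAT02 @ src_white
    lms_dst = M_CAT02 @ dst_white
    gains = lms_dst / lms_src   # per-channel gains in LMS, not in raw RGB
    return np.linalg.inv(M_CAT02) @ (gains * (M_CAT02 @ xyz))
```

The key point versus naive white balance is that the per-channel gains are applied in LMS, after the transform, rather than directly on the raw RGB channels.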

So we might be close to ACES support...

But wow, if it's of interest I can point you to the reference implementation:

https://github.com/ampas/aces_container

How is this library different from ilmimf? Does it use ilmimf internally? Is it used by rawtoaces?

I will try and get EXR export working as soon as possible. Then we can compare with rawtoaces and make sure it works correctly.

@Rab-C
Copy link
Author

Rab-C commented Jan 10, 2020

ilmimf is the generic EXR library, supporting all types & features. ACES deliberately sets limits (e.g. not all compression types are allowed in ACES-compliant files; PIZ_COMPRESSION & B44A aren't) and adds requirements.

So using the ilmimf code with extra constraints (either manually specced to match the aces_container defaults, or - though I haven't checked whether the two have been kept properly feature-synced - just by calling the options specced within ilmimf's own OpenEXR/IlmImf/ImfAcesFile.cpp) will produce ACES-compliant EXRs as a subset of all possible output files. Whereas the aces_container implementation looks to have been deliberately coded to only ever produce ACES EXRs (e.g. aces_Writer.cpp only has code to support scanline EXRs, nothing for tile-based).

@Rab-C
Copy link
Author

Rab-C commented Jan 10, 2020

And thanks for the clear workflow explanation of what's happening within MLV-App, @ilia3101 - that helps my mental map a lot!

Some houses will have built their own IDTs for ACES using the spectral response stuff that Charles Poynton & others started training people on a few years ago. But they may not have those IDTs as standalone files that could be offered to MLV-App as an alternative transform matrix, they may be embedded within scripted gizmos & not easily-extractable by people who don't have a maths-level understanding of what's going on. And as I've mentioned, there are plenty of people who don't use or want to go near ACES. But as long as we preserve the option for that group of users to have a file with the minimum intervention we've been talking about, so they can use their existing tooling & choices after they've brought the EXR into their pipeline, I actually think many or even most people will be fine with a 'developers' choice' default selection of IDTs & will welcome a single-stage jump to ACES EXRs. If I can prove it isometrically, I know I will.

If we actually get this up and running,

i) supporting the 2020 ML shooting options (10 & 12-bit, lossless compressed, anamorphic arbitrary-dimensions, HDR, dual-iso etc.) that MLVs & MLV-App currently enable,
ii) with the kind of e.g. .fpm & in-out mark 'minimally-intrusive nice-to-have' MLV-App features we talked about earlier, and
iii) I can prove the MLV-App output files using Nuke & the reference implementations to show isometric output, & offer those proofs for open examination by potential users,

I'm convinced it will jump MLV-App right to the top of available tools.

Googling around for this discussion I've seen so many threads from people on various forums looking for a way to bring MLVs into their existing (ACES or non-ACES) pipelines as EXRs with their overbrights intact. Many of whom you can see in those threads getting caught up in slightly baroque multi-stage, multi-app prep-processes they don't feel confident in, getting frustrated & having to give up. This would give them a very simple, straightforward answer as to what to use & what to do. And I'm sure will be met with great gratitude.

I don't see a donation link on the MLV-App site, or even know if the main voices adding their development time & effort to MLV-App morally approve of donation links for open source projects. But I personally think that if you manage to start offering that kind of professional option, and it ends up being used by post houses in commercial pipelines, it would be entirely fitting for people to be able to put a bit into the kitty to say thank you. Just a quiet suggestion for when the feature launches.

Exciting times!

@ilia3101
Copy link
Owner

ilia3101 commented Jan 10, 2020

@Rab-C Thanks for clearing up the different EXR libraries. Not sure which option I'll go with yet. Whichever one will be simpler to implement.

iii) I can prove the MLV-App output files using Nuke & the reference implementations to show isometric output, & offer those proofs for open examination by potential users,

MLV App and rawtoaces will produce slightly different colour (might not be a visible difference) - because it appears rawtoaces does white balance simply on the RAW RGB channels (less good for accuracy), while MLV App does it in LMS. Do you think this could be an issue for getting "isometric" results?

I could be wrong about rawtoaces white balance though, I don't fully understand their code.

Some houses will have built their own IDTs for ACES using the spectral response stuff that Charles Poynton & others started training people on a few years ago.

What is this Charles Poynton spectral stuff?

But they may not have those IDTs as standalone files that could be offered to MLV-App as an alternative transform matrix, they may be embedded within scripted gizmos & not easily-extractable by people who don't have a maths-level understanding of what's going on.

I've not released anything yet, but I have (slowly) been working on spectral IDT methods (better than just matrix based). It could become part of MLV App eventually.

Googling around for this discussion I've seen so many threads from people on various forums looking for a way to bring MLVs into their existing (ACES or non-ACES) pipelines as EXRs with their overbrights intact.

That is very surprising and nice to hear!

@ilia3101
Copy link
Owner

I don't see a donation link on the MLV-App site, or even know if the main voices adding their development time & effort to MLV-App morally approve of donation links for open source projects. But I personally think that if you manage to start offering that kind of professional option, and it ends up being used by post houses in commercial pipelines, it would be entirely fitting for people to be able to put a bit into the kitty to say thank you. Just a quiet suggestion for when the feature launches.

Good point about the post houses. IMO the hardest thing about donations for open source is distributing them fairly between the contributors. And some percentage would definitely deserve to go to Magic Lantern as well.

Anyone have thoughts?

@ilia3101 ilia3101 pinned this issue Jan 15, 2020
@Rab-C
Copy link
Author

Rab-C commented Jan 15, 2020

Just a quick update @ilia3101 - I've effectively taken myself back to school on the low-level specifics of the transforms we're going to need. I'm re-reading Poynton's book Digital Video & HD & re-doing the FXPhD modules he wrote & presented, to fill in those gaps in my knowledge that have become apparent. About 5 years ago he was the first signal processing/color tech guy I know to start offering workshops on how to use (relatively!) inexpensive equipment to produce your own IDTs, which I didn't get to attend, but which I know a few post houses & vfx shops used as the basis for their own pipeline inputs.

I'm also trying to get myself up-to-date on what's going on with ACES. I wasn't aware, but there's a little shopping list of complaints that has built up amongst users, and actually quite big changes are being discussed even on the ingest side for v2. There may even be a change of working colorspace in the next version, since there are serious concerns about just how many camera plates contain significant data that ends up out-of-gamut when following the default workflow. I'm trying to get my head around what all the issues being raised might mean in practical terms.

But it will definitely be desirable in MLV-App's EXR support to be able to pull data out from MLV to EXR before any transform into XYZ - because there have been problems with non-orthogonality & colour-mismatching in that process that have led people to write very specific ways of handling & checking that transform in their tools, which they will want to keep using.

A couple of the key posts that really started the conversation on that concern are here:

https://www.colour-science.org/posts/about-rendering-engines-colourspaces-agnosticism/

and here:

https://nbviewer.jupyter.org/gist/sagland/3c791e79353673fd24fa

But yeah, the TLDR is that an option to pull out MLV frame data simply demosaicked to linear RGB straight to an EXR before any transform into XYZ would be good.
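To make that "demosaicked to linear RGB, nothing else" starting point concrete, here's a toy bilinear-debayer sketch in numpy. It assumes an RGGB pattern for illustration; a real debayer (like MLV App's) is of course far more sophisticated, but this is the kind of minimal reference step a proving tool can compare against:

```python
import numpy as np

def _conv3x3(a, k):
    """3x3 convolution with reflected borders (pure numpy)."""
    p = np.pad(a, 1, mode="reflect")
    h, w = a.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def debayer_bilinear(mosaic):
    """Minimal bilinear demosaic of an RGGB Bayer mosaic to linear RGB -
    a toy sketch for proving purposes, not a production debayer."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1    # red sample sites
    masks[0::2, 1::2, 1] = 1    # green on red rows
    masks[1::2, 0::2, 1] = 1    # green on blue rows
    masks[1::2, 1::2, 2] = 1    # blue sample sites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        # normalised convolution: average each channel from its own samples
        num = _conv3x3(mosaic * masks[..., c], kernel)
        den = _conv3x3(masks[..., c], kernel)
        rgb[..., c] = num / den
    return rgb
```

The output of this stage, dumped straight to half-float EXR channels with no XYZ transform, is exactly the "minimal intervention" file discussed above.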

My thought is that the biggest contribution I can make to the whole area is to write a Nuke script based on the MLRawViewer python code that will allow - very slowly, just for technical proving purposes - the extraction, display & inspection of single frames from MLVs at each stage of the process: raw linear grayscale, demosaicked to linear RGB, standardised through IDT, transformed to AP0 etc. and so on. So that the results of the various existing ML, OpenEXR & ACES tools & implementations can be inspected & compared with one another & a "known good" reference, and anyone developing a new process has a data set of example images to check their results against at each stage.

Does that sound useful?

@ilia3101
Copy link
Owner

@Rab-C That would be very useful. Also sorry about my slow reply here!

About 5 years ago he was the first signal processing/color tech guy I know to start offering workshops on how to use (relatively!) inexpensive equipment to produce your own IDTs, which I didn't get to attend, but which I know a few post houses & vfx shops used as the basis for their own pipeline inputs.

Is it all based on spectral measurements of the sensor? Are the most advanced IDTs still only matrix based? Surely they must have something better by now.

But it will definitely be desirable in MLV-App's EXR support to be able to pull data out from MLV to EXR before any transform into XYZ - because there have been problems with non-orthogonality & colour-mismatching in that process that have led people to write very specific ways of handling & checking that transform in their tools, which they will want to keep using.

I was thinking about using aces_container as it is small and easy to add to mlv app, compared to ilmimf, but if the ability to export camera RGB is desirable, maybe aces_container is not the best option, as it can only export aces tagged exr, which camera RGB is not. Luckily I haven't written any code for exr exporting so I can still decide.

Or would it be ok to export it in an aces container as long as the user is aware that the data has not been transformed to aces (because they would have selected that option themselves).

I'm also trying to get myself up-to-date on what's going on with ACES. I wasn't aware, but there's a little shopping list of complaints that have built up amongst users, and actually quite big changes are being discussed even on the ingest side for v2. There also may even be a change of working colorspace in the next version, since there are serious concerns about just how many camera plates contain significant data that ends up out-of-gamut when following the default workflow. I'm trying to get my head around what all the issues being raised might mean in practical terms.

I heard they are looking in to gamut mapping to fix the colour clipping you sometimes get with matrices.

A couple of the key posts that really started the conversation on that concern are here:

https://www.colour-science.org/posts/about-rendering-engines-colourspaces-agnosticism/

and here:

https://nbviewer.jupyter.org/gist/sagland/3c791e79353673fd24fa

Those pages look familiar, I may have seen them before when I was looking in to gamuts and 3d rendering. Very interesting to read.

I hope to get started on actually implementing EXR support as soon as possible.

Good night.

@Rab-C
Copy link
Author

Rab-C commented Jan 22, 2020

@ilia3101

sorry about my slow reply here!

No problem! I'm also finding myself slowing down as I dig into the details of what's needed to actually implement this. The colour science is quite advanced, and I'm realising I can't just rely on other libraries - if I want to be sure my 'proving' tool is correct, I have to make sure it's doing each bit of maths the way I need it to. The libraries can be a guide, but I'm going to have to go through and verify each operation explicitly. Fair enough, though. That's a good operating principle for creating a tool that is itself supposed to be a 'sanity check' & the material is certainly interesting!

I heard they are looking in to gamut mapping to fix the colour clipping you sometimes get with matrices.

There are two types of problem, one of which we already have easy answers for, the other of which is a bit harder to create a generic fix for. The first is the issue I'm sure from our chats you're familiar with in, for example, BT.709 or P1, where the transform into the colorspace from scene linear gives you values for some pixels just outside the legal triangle for the color standard, i.e. hues that are more saturated than is allowed. Back in the past that would have just been hard-clipped at the legal limit for saturation along that axis. But these days a common better approach fixes it with "gain" - multiplying the data along that axis by some <1 factor to bring the whole range back into legal. Or, if you're feeling extra careful, a non-linear component is added to the multiplication factor to protect the original recorded values of the pixels whose saturation is only midway out towards the axis limit, and just scale back the top x% of pixels' saturation.

But an additional problem with the newer ACES definition is that its bounds are dramatically wider than the 709/P1/Wide Gamut triangles of the past, by design. Thinking about the familiar color "horseshoe", ACES inbound transforms deliberately scale values far further out than e.g. 709, well down towards the line of purples. What's new is that this region is outside the familiar, basically-planar x,y area previous color triangles are plotted in, and gets into the part of the horseshoe that's actually 'dropping away' into the projected 3D Y-space that we can usually ignore. So some of the iso-hue lines that we're used to being simply straight lines pointing away from the white point actually become curves in the 2D x,y space in this region. So a straight-line "gain" multiplication to change the distance from white will, for some of the hues in this region, not just change the saturation of the pixel but also the hue. That requires a new approach to bringing pixel values back into gamut.
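A hedged sketch of that gain-with-a-non-linear-component idea (my own illustrative parameters, not whatever algorithm ACES eventually standardises): distance is measured from the achromatic axis, with values above 1 meaning a channel has gone negative, and only the distances beyond a threshold get rolled off:

```python
import numpy as np

def compress_distance(d, threshold=0.8):
    """Identity below the threshold; Reinhard-style roll-off above it.
    The curve is C1-continuous at the threshold and asymptotes at 1.0,
    so compressed distances never leave the gamut."""
    d = np.asarray(d, dtype=np.float64)
    x = np.maximum(d - threshold, 0.0)
    rolled = threshold + x / (1.0 + x / (1.0 - threshold))
    return np.where(d > threshold, rolled, d)

def soft_gamut_compress(rgb, threshold=0.8):
    """Pull out-of-gamut (negative-component) pixels back towards the
    achromatic axis, leaving pixels below the threshold untouched."""
    rgb = np.asarray(rgb, dtype=np.float64)
    ach = np.max(rgb, axis=-1, keepdims=True)
    ach = np.where(ach == 0.0, 1.0, ach)   # guard against divide-by-zero
    dist = (ach - rgb) / ach               # 0 on the axis, >1 when negative
    return ach - compress_distance(dist, threshold) * ach
```

Mildly saturated pixels pass through untouched, while the most saturated (including negative-channel) pixels are squeezed smoothly back inside, which is the "protect the midway pixels" behaviour described above.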

I was thinking about using aces_container

It's been a little while since I poked through the codebase, but iirc, aces_container skips whole types of EXR that it would be useful to be able to output, and non-optionally does some transforming that it would be more attractive to keep optional. If we imagine a bright future - an over-brights future? ;-) - in which MLV-App is the primary way to get data out of an MLV for pro-spec work, ACES-only EXR output would be pretty restrictive. Only IMO, as always. I'll try to take another look & get a more specific answer - unless there's a strong reason to avoid ilmimf? I'm also wondering now what library ffmpeg uses for its EXR output? I'll add that to the investigation list.

Is it all based on spectral measurements of the sensor?

I found Scott Dyer's post here very interesting reading, I think you might enjoy it as well - it's proper hands-on work:

https://acescentral.com/t/results-from-an-idt-evaluation/2229

@DeafEyeJedi
Copy link

DeafEyeJedi commented Jan 22, 2020 via email

@Rab-C
Copy link
Author

Rab-C commented Jan 22, 2020

Glad you're finding it interesting, @DeafEyeJedi !

Update for the amusement of @ilia3101 - went digging into the FFmpeg code. Found exr.c & followed through the definitions & the codec/format interactions with libav.

Got a bit puzzled because I could see the decode functions, and the code to pass the instruction to the img2 output format method to call for output in an EXR. But I couldn't, no matter where I looked, find the encoder. Turns out that's because they don't have one. Their decoder uses the ilm code, lightly adapted, but in a conversation that might be familiar from a couple of weeks ago, it's explained on the mailing list:

FFmpeg doesn't have an exr encoder yet.
The most difficult part, is to add float/half pixel format in ffmpeg,
before adding an exr encoder.
(for now ffmpeg exr decoder, decode data in 16bit (int) not in float)

Looks like a bit of a technology gap has opened up between the open source and commercial worlds here...
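A quick numpy illustration of why that gap matters for our overbrights (hypothetical sample values): a 16-bit integer pipeline normalised to 0-1 must clip everything above 1.0, while EXR's half float keeps the highlights:

```python
import numpy as np

# Hypothetical scene-linear sample values, including overbrights (> 1.0)
linear = np.array([0.18, 0.9, 1.0, 4.0, 18.5])

# A 16-bit integer pipeline normalised to 0..1 must clip anything above 1.0
as_int16 = np.round(np.clip(linear, 0.0, 1.0) * 65535).astype(np.uint16)
back_from_int = as_int16 / 65535.0    # the overbrights are gone

# EXR's half float keeps them (finite range up to 65504, ~3 decimal digits)
as_half = linear.astype(np.float16)
back_from_half = as_half.astype(np.float64)
```

Half trades precision for range, which is exactly the trade scene-linear EXR data wants; a 0-1 integer encoding has neither the range nor a home for values above diffuse white.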

@ilia3101
Copy link
Owner

ilia3101 commented Jan 22, 2020

@Rab-C interesting about ffmpeg. So it uses integer internally :( Not good, they need to add float support.

I found Scott Dyer's post here very interesting reading, I think you might enjoy it as well - it's proper hands-on work:

https://acescentral.com/t/results-from-an-idt-evaluation/2229

Ah yes, saw that, it is very interesting. I really hope the ACES guys will capture and release spectral data for more cameras some day.

BT.709 or P1 where the transform into the colorspace from scene linear gives you values for some pixels just outside the legal triangle for the color standard. i.e. Hues that are more saturated than is allowed. Back in the past that would have just been hard-clipped at the legal limit for saturation along that axis. But these days a common better approach sees that fixed by "gain" - multiplying the data along that axis by some <1 factor

Yep, just recently added the multiplying thing to MLV App. I did it with a smooth Reinhard curve so that the factor adapts (it does make grass a tiny bit desaturated, something I need to fix). It doesn't work perfectly; there might still be some issues I haven't found.

there's a strong reason to avoid ilmimf?

Not really, it might just be some extra effort to link it, but I can sort that out. Also seems a bit more complicated to use. It might mean linux users will have to install an additional package before compiling mlv app. Let's use ilmimf though, the features seem worth it.

@Rab-C
Copy link
Author

Rab-C commented Jan 30, 2020

Bit of an update, @ilia3101 - I got quite excited by OpenImageIO for a while because it has raw input and EXR output. So I was thinking all we'd need to do was patch OpenImageIO's raw input plugin to support MLV files & we'd have an all-in-one do-it-all library.

But then I went digging into the code and found the raw input plugin is based on some libraw code that used to clip overbrights (don't know if it's patched now, but it was a red flag to look closer) and the openimageio toolset functions themselves (colorspace transforms, interpolation etc.) are definitely not float-friendly.

BUT the EXR output plugin is absolutely float-correct. In fact it's just a wrapper for ilmimf. But it's a wrapper that exposes the ilmimf functions in quite a friendly & easily-called way within the openimageio lib itself.

So for MLV-App purposes, where the data is already available in memory & it's just an output library that's needed, if you find ilmimf putting up any resistance, it looks possible to me to use openimageio as a friendlier wrapper for it instead.

I'm now planning on doing that myself for the proofing tool, at least initially. So let me know if you end up looking at openimageio & finding some show-stopper I've missed, if you'd be so kind! Thank you!

And on that front I just have one obstacle to overcome now for a hacked-together proof-of-concept, in that I don't quite understand the MLV RAWF data block format for arbitrary resolution & bit-depth yet. Just need more time reading the .h I think...

I'll of course share anything I can get working.

@ilia3101
Copy link
Owner

ilia3101 commented Feb 3, 2020

Thanks! I will try out ilmimf first, but good to know about openimageio

@ilia3101
Copy link
Owner

ilia3101 commented Feb 3, 2020

And on that front I just have one obstacle to overcome now for a hacked-together proof-of-concept, in that I don't quite understand the MLV RAWF data block format for arbitrary resolution & bit-depth yet. Just need more time reading the .h I think...

If you need to read MLV files, you can try using my new "libMLV" library I'm working on: https://github.com/ilia3101/libMLV/

It is a C library, I can help you out with getting it to work, it should be quite easy to do.

It can get raw frames; I can help you out with a simple debayer, and you can do the rest (conversion to ACES)

@ilia3101
Copy link
Owner

ilia3101 commented Feb 3, 2020

@Rab-C I think it will be easiest to get it working on Mac and Linux, in terms of compiling and linking everything.

What system do you use? And what about most EXR users? I heard Linux is popular in the industry...

@Rab-C
Copy link
Author

Rab-C commented Feb 8, 2020

Apologies, @ilia3101 - I'm on a rather fraught & badly-prepped shoot without my proper gear or comms, should be finishing this weekend (Hallelujah). V.excited to see the new code in action when I get back!

I'm on Windows, but I use Debian thru SUA, so I'll be able to test. (Looking forward to it!)

Most people I know are on Win, but that may just be coincidence - I'll be genuinely interested to see which platform sees the most activity internationally. Where I am, Linux is standard for anything headless or farmed, e.g. dedicated ingest/transcode, or stations which are pre-configured software-hardware combos from a supplier (e.g. a Baselight station). Most general- or mixed-purpose workstations that need decent grunt, decent value & networked administration tend to be Win. Apart from the shops or solo acts that swear by - and will generally stick by - their Macs. But some jobs & software within the pipeline will be almost exclusively Mac almost anywhere. Unless someone mainly Win-based feels they have a strong reason to avoid Macs - in which case they'll increasingly add a couple of Linux workstations to do whatever the Macs would otherwise be used for. Assumptions on that front also seem to vary by region. So "clear as mud"!

Back soon, fingers crossed!

@ilia3101
Copy link
Owner

Ok then I will do my best to make sure Windows is working straight away.

@cedricp
Copy link

cedricp commented Jan 11, 2021

Hi,
I'm currently working on MLVApp to add ACES2065-4 support. The idea is to use the rawtoaces library to do the job. I've got a working version right now and am trying to make an AppImage of it.
I had to make some changes to the rawtoaces library, so my fork can be downloaded here: https://github.com/cedricp/rawaces (not really easy to compile, though)
The MLVApp repo: https://github.com/cedricp/mlvapp
It's only for Linux boxes so far. I did some tests and it looks OK.

@cedricp
Copy link

cedricp commented Jan 12, 2021

AppImage here for testing : https://drive.google.com/file/d/1QpL2UTNbiHkmRgG67B6MG9Y9yqD5bItE/view?usp=sharing
Did some tests in Blender with the OpenColorIO ACES configuration, working great! Dunno how exposure values are computed, but it does the job :)

@bouncyball-git
Copy link
Collaborator

bouncyball-git commented Jan 17, 2021

Hey man! Just saw it. This is great!!!

I have a few questions though (I more or less understand the additional export settings):

  1. Can you explain them in detail?
  2. Why do we need the supported camera setting if the previous 3 options indirectly define the camera by a specific matrix etc., and there is only the 5D Mark II - what about other cameras?
  3. I had no time to look at the sources yet, so do you use LIBRAW to debayer or MLV App's built-in one?
  4. I guess writing to EXR happens just after raw correction/debayer and before the image processing, right?

@bouncyball-git
Copy link
Collaborator

Well, after a brief code examination, the answers to questions 3 and 4 are clear. "startExportEXR" hooks the DNG RAW buffer and rawtoaces does its job internally, e.g. debayer/wb/processing, and after that exports the EXR file. Raw corrections are done before buffer hooking.

Questions 1 and 2 are still relevant.

@cedricp
Copy link

cedricp commented Jan 18, 2021

Hi,
You're right about buffer processing: I generate a DNG buffer and give it to the rawtoaces lib. Only the first two comboboxes are relevant for the moment (WB method and matrix mode); the third/fourth actually show the supported camera/illuminant (informative only).
About supported cameras, you need to look at the rawtoaces lib. Hope it helps, and I know it's very WIP right now.

@KeygenLLC
Copy link

KeygenLLC commented Dec 26, 2021

I can also make use of EXR export in certain cases. Running on macOS Mojave here with Canon 5D III.
