
Color Processing

This page tracks the state, plans and policies for display colour management, calibration, colour-aware rendering, blending, and deep-/high-definition rendering.

TODO

[p] = partial

[x] Scanout buffer allocation for R10G10B10A2, FP16 formats
[x] Color Management page allocation and transfer
  [x] WM API 
[ ] Calibration Builtin / Hook-script helper
[x] sRGB and floating-point FBO allocation
[p] FBO color attachment from scanout capable memory
[x] 3D texture creation and WM API for 3D texture definition
[ ] HDR Display metadata configuration
[ ] Reference shaders for 3D/1D LUT colour grading/correction
[ ] HDR page allocation and transfer
  [x] structure / name / binding / WM event
  [ ] client metadata format definition (luminance, ...)
[ ] Add HDR format support to aloadimage (as reference / test)
[ ] Add HDR format support to afsrv_decode (mostly copy the aloadimage conversion)
[ ] Tone-map arcan-lwa to SDR or forwarded scRGB FP16 when nested

Clients

Clients are, by default, bound to a compile-time colour space, in practice linear interleaved RGBA8888 with linear alpha, though there have been specialised builds that use a full YUV-only path for embedded use. This is built on the assumption that composition and blending are the default, that sRGB-sourced texture transfers or FBOs are not guaranteed by a wide enough range of accelerated drawing APIs, and that many client developers simply are not aware of which space they are operating in. There is an explicit toggle to indicate that a buffer is sRGB (following the principle that if you ask for something, you signal some level of awareness).

Subsequently, the client can request a subprotocol for colour management controls as well as HDR metadata. Subprotocol requests are synchronous and default-fail; if the request is rejected, the compile-time default is the only path.

If CM is permitted, a memory page for receiving display profiles and setting ramps is returned. The WM indicates how/which displays are mapped into this page, as well as read-only vs read-write permissions; it also needs to explicitly enable permission for this through target_flags. Xarcan, for instance, will try to use this setting to map the XRandr-exposed virtual display so that legacy colour management clients keep working.
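
As a rough illustration of the WM side, the sketch below gates that permission per client in a connection handler. It is a minimal sketch assuming the TARGET_ALLOWCM flag and the "bridge-x11" segment kind that Xarcan registers with; check the current Lua API documentation before relying on either name.

```lua
-- minimal sketch: permit the CM subprotocol for a trusted client
-- (flag name and segment kind are assumptions, not taken from this page)
local function client_handler(source, status)
	if status.kind == "registered" then
-- only grant colour management controls to clients trusted to set ramps,
-- e.g. an Xarcan instance bridging legacy XRandr based calibration tools
		if status.segkind == "bridge-x11" then
			target_flags(source, TARGET_ALLOWCM)
		end
	end
end

target_alloc("colormgmt", client_handler)
```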

If HDR is permitted, a memory page for the extended metadata is provided and synced per frame (to deal with variations in content luminance and so on).

Server/WM

The WM is mainly responsible for configuring the mapping, though it should be trivial to write an external helper script, or a WM-agnostic hook script, that exposes enough controls for an external calibration tool to be possible. This could be done by interposing the _display_event handler (to detect new displays) and the map_video_display function (to track what is visible on a screen), then exposing a connection path or running a tagged launch_target (arcan_db add_target color_manager BIN -colormgmt /path/to/my/calibtool) so that calibration contents can be shown at explicit screen positions and specific orders (overlays), after which the profile can be retrieved and stored.
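
A hook-script sketch of that flow follows. It assumes the appl's display entry point can be wrapped through _G[APPLID .. "_display_event"], that new displays arrive with an "added" action, and that the launch_target / map_video_display argument forms match the current Lua API; treat the names as things to verify rather than copy.

```lua
-- hook-script sketch: wrap the display handler, run the tagged calibration
-- target on newly added displays and map its output to that display
-- (entry point name, action string and argument forms are assumptions)
local base = _G[APPLID .. "_display_event"]

_G[APPLID .. "_display_event"] = function(action, id, ...)
	if base then
		base(action, id, ...)
	end

	if action == "added" then
-- registered via: arcan_db add_target color_manager BIN -colormgmt /path/to/my/calibtool
		launch_target("color_manager", LAUNCH_INTERNAL,
			function(source, status)
				if status.kind == "resized" then
-- show the calibration contents on the new display; the profile is then
-- retrieved through the CM page and stored by the helper
					map_video_display(source, id)
				end
			end
		)
	end
end
```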

The platform layer can deliver analog sensor data from ambient light sensors to help guide backlight controls, but there is currently no tool that translates the Intel sensor buses, and the AMD ones are entirely(?) absent.

For applying a correction profile, there is a facility for setting ramps manually per display, and if that fails, image_shader with a correction shader that has access to a 1D/3D LUT. Similarly, there is a virtual LED controller exposed per display for controlling backlight strength.
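
For the shader fallback, a sketch could look like the block below. It assumes a 3D LUT has already been uploaded as a texture (e.g. through the 3D texture WM API from the TODO list) and bound as a second texture unit; the sampler names and the exact binding calls should be checked against the current shader conventions.

```lua
-- fallback path sketch: correct through a 3D LUT in the fragment stage
-- (sampler names and binding mechanism are assumptions; prefer the
--  ramp based path when the display/platform supports it)
local lut_shader = build_shader(nil, [[
uniform sampler2D map_tu0; /* source contents   */
uniform sampler3D map_tu1; /* 3D correction LUT */
varying vec2 texco;

void main()
{
	vec3 col = texture2D(map_tu0, texco).rgb;
	gl_FragColor = vec4(texture3D(map_tu1, col).rgb, 1.0);
}
]], "lut_correct")

-- bind the LUT texture as a second frameset slot of the mapped vid
-- (e.g. image_framesetsize + set_image_as_frame), then apply:
image_shader(display_vid, lut_shader)
```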

Server/platform

This is dependent on the progress of kms/gbm and a number of other system layers. Hopefully we can extend the backlight controls to deal with EDR controls, possibly with stepping and range (the default led format is just r8g8b8, so either base-256 encode the level or add something less awful), expose the property to the WM, and react to/enable the relevant settings if an FP16/R10G10B10 buffer is mapped on a map_video_display. Testing should then also be a one-liner: have a reference FP16 source and a regular one, and alternate map calls.
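
For the base-256 idea, the packing itself is trivial; a plain Lua sketch (no led API calls shown):

```lua
-- pack a wider backlight / EDR level into the r8g8b8 led format by
-- treating the three channels as base-256 digits
local function pack_level(level)
	local r = math.floor(level / 65536) % 256
	local g = math.floor(level / 256) % 256
	local b = level % 256
	return r, g, b
end

-- e.g. a level of 123456 packs to 1, 226, 64 (1*65536 + 226*256 + 64)
print(pack_level(123456))
```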