Issues with Color-correcting Kinect and RealSense recordings #85
Thanks! I think https://stackoverflow.com/questions/70233645/color-correction-using-opencv-and-color-cards looks the most promising. I will look at it.
Some ideas on implementation. Let's assume 8 cameras for now.
We should color-correct our recordings made with the offline RealSense or Kinect capturers, but that is currently impossible because of a number of distinct issues:
We haven't captured a color reference, so we would have to make an educated guess as to what the correction should be. But we have captured the Aruco Origin target, which has areas of white, black, red, green, blue and yellow, so maybe we can use that for our educated guess? Or if that doesn't work we could do it "by taste" if we have serious software like BlackMagic DaVinci.
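If we can segment the six colored areas of the Aruco Origin target in a frame, the "educated guess" could be a 3x3 matrix fitted by least squares. A minimal sketch, assuming the per-patch average colors have already been extracted from the recording (all measured numbers below are made up for illustration):

```python
import numpy as np

# Nominal colors of the six areas on the Aruco Origin target
# (white, black, red, green, blue, yellow), in linear RGB 0..1.
reference = np.array([
    [1.0, 1.0, 1.0],   # white
    [0.0, 0.0, 0.0],   # black
    [1.0, 0.0, 0.0],   # red
    [0.0, 1.0, 0.0],   # green
    [0.0, 0.0, 1.0],   # blue
    [1.0, 1.0, 0.0],   # yellow
])

# Average colors measured for the same areas in one camera's recording
# (hypothetical values, for illustration only).
measured = np.array([
    [0.92, 0.95, 0.88],
    [0.05, 0.04, 0.06],
    [0.80, 0.12, 0.10],
    [0.10, 0.78, 0.15],
    [0.08, 0.11, 0.75],
    [0.85, 0.82, 0.12],
])

# Solve measured @ M ~= reference for a 3x3 matrix M, least-squares.
M, residuals, rank, _ = np.linalg.lstsq(measured, reference, rcond=None)

# Applying M should bring the measured colors closer to the reference.
corrected = measured @ M
```

With only six patches the fit is over-determined but crude; per-camera matrices would be needed, one per capturer.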
If we want to use standard software like DaVinci I guess we have to isolate the color track out of our recording, process that in DaVinci, and then re-insert the color track into the recording. But this re-inserting is going to be a problem:
- The Kinect recordings are `.mkv` files. I can use a very long `ffmpeg` command line to extract the RGB track, but I have not been able to find the command line to re-insert the track: the Azure Kinect MKV file is rather non-standard-compliant. Help here would be needed. (Incidentally, the ability to reconstruct Kinect MKV files would also help @ashutosh3308 with his VQEG experiments.)
- The RealSense recordings are in `.bag` format, which isn't really a video file format but more a stream of events. I don't think we have any chance of recreating this.

An alternative would be to do the conversion in code, as we are reading the RGB data from the recording to construct the point clouds anyway. I presume that we could multiply each `(r, g, b)` value by a 3x3 matrix (possibly after de-gamma-correcting it). But to do this I would need a reference to working code, or to an algorithm.
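A sketch of what that in-code conversion could look like, assuming the color track uses the standard sRGB transfer function (the Kinect and RealSense color pipelines may use a different gamma, so this is an assumption to verify):

```python
import numpy as np

def srgb_to_linear(c):
    # Inverse sRGB transfer curve; c is a float array in 0..1.
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    # Forward sRGB transfer curve; c is a float array in 0..1.
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def color_correct(image_u8, matrix):
    """Apply a 3x3 correction matrix to an HxWx3 uint8 image,
    working in linear light (de-gamma, multiply, re-gamma)."""
    rgb = image_u8.astype(np.float64) / 255.0
    lin = srgb_to_linear(rgb)
    corrected = lin @ matrix.T          # each (r, g, b) times the matrix
    corrected = np.clip(corrected, 0.0, 1.0)
    return (linear_to_srgb(corrected) * 255.0 + 0.5).astype(np.uint8)

# Sanity check: the identity matrix leaves the image unchanged.
img = np.full((2, 2, 3), [200, 100, 50], dtype=np.uint8)
assert np.array_equal(color_correct(img, np.eye(3)), img)
```

Since we already touch every pixel while building the point clouds, this per-frame matrix multiply should add little overhead compared to re-encoding the recordings.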