
When will the AzureKinect branch be finalized? #52

Open
VisionaryMind opened this issue Dec 20, 2020 · 10 comments

@VisionaryMind

This branch seems to be in an incomplete state. It compiles properly, but the server-client pair does not produce any RGBD data in the viewer, and upon saving point clouds, it streams thousands of 1 KB PLY files to the out directory -- it is impossible to stop this save process by clicking on "Stop Saving". The program is unresponsive and must be manually shut down. Without looking through the code, it appears you are missing the libraries for the Azure Kinect SDK v2.0 (specifically k4a and k4arecord).

@MarekKowalski
Owner

Hi, thanks for reaching out. Since I am working from home, I unfortunately do not have access to a Kinect, so it is hard for me to make fixes to this branch. Having said that, last time I checked, the app worked fine. Let's try to debug this. Here are some questions:

  1. I'm not really sure what you mean by Azure Kinect SDK v2.0; the project is currently set to use Azure Kinect SDK v1.4.1, which should be downloaded via NuGet. This is also the latest version available in the Azure Kinect repo. Could you elaborate on what you meant there?
  2. This issue sounds like the client is not reading any data from the Kinect. When you open the client app, do you see the Kinect's camera image in the app window?

@VisionaryMind
Author

Apologies for the confusion. I am working with multiple SDKs here and mistakenly wrote 2.0, but yes -- I am using v1.4.1 directly from the repo. Also, the client is not reading data from the Kinect at all. I see a "capture device failed to initialize" error.

@MarekKowalski
Owner

I see, it looks like bInitialized is set to false in bool AzureKinectCapture::Initialize(). It might be set on one of the following lines: 36, 54, or 114.

Can you try setting breakpoints on those lines and seeing which one it is? Here are some possible causes, depending on the line (a sketch of the corresponding SDK calls follows the list):

  • if it's line 36, then it looks like the SDK can't open the device. Did you try other Azure Kinect apps? Did they work?
  • if it's line 54, then it looks like there is an issue with the Kinect's internal calibration. We'd have to see what the solutions are in this case.
  • if it's line 114, then everything else worked, but the frames are still not arriving. This might indicate some sort of issue with how LiveScan3D initializes the Kinect.
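
For reference, the initialization roughly follows the usual Sensor SDK sequence below; the configuration values are illustrative, not the exact ones LiveScan3D uses, and the line-number comments just map to the cases above.

```cpp
#include <k4a/k4a.h>
#include <stdio.h>

int main()
{
    k4a_device_t device = NULL;

    // Case 1 (line 36): the SDK cannot open the device, e.g. because
    // another application holds it or it is not enumerated at all.
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
    {
        printf("k4a_device_open failed\n");
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    config.synchronized_images_only = true;

    // Case 2 (line 54): reading the device's internal calibration fails.
    k4a_calibration_t calibration;
    if (K4A_FAILED(k4a_device_get_calibration(device, config.depth_mode,
                                              config.color_resolution,
                                              &calibration)))
    {
        printf("k4a_device_get_calibration failed\n");
        return 1;
    }

    if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
    {
        printf("k4a_device_start_cameras failed\n");
        return 1;
    }

    // Case 3 (line 114): the cameras started, but no capture arrives
    // within the timeout.
    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 1000) != K4A_WAIT_RESULT_SUCCEEDED)
    {
        printf("no frames are arriving\n");
        return 1;
    }

    k4a_capture_release(capture);
    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```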

@VisionaryMind
Author

Thank you for the additional guidance. Looking through the code, I noticed that there was no AzureKinectCapture class, and it suddenly became apparent that I had not switched from the Master to the AzureKinect branch. Once I made that switch, the client was able to capture and record frames. Thank you for taking the time to respond, and sorry for the distraction!

@VisionaryMind
Author

I would like to ask one last question regarding point clouds. Is there a reason they are rendered upside down? Would you be able to provide a tip as to where in the code the rotation around the Z axis could be turned 180 degrees? The live view also displays the point cloud upside down.

@MarekKowalski
Owner

Hi, this is due to the Azure Kinect's coordinate system being different from that of the Kinect v2. The simplest way to change it is to perform calibration using the markers in the docs section. The easiest way to do this would be:

  • go to the server settings, add a marker with id 0, and set its rotation around the Z axis to 180 degrees (see the sketch after these steps for what that rotation amounts to).
  • print the marker, place it in a position where it is seen by the Kinect, and press calibrate in the server. If you have no way to print it, you can show it on your phone.
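
For reference, a 180-degree rotation around the Z axis simply negates the X and Y coordinates of every vertex; a minimal sketch (not LiveScan3D code):

```cpp
#include <vector>

struct Point3f { float x, y, z; };

// Rotating 180 degrees around Z maps (x, y, z) to (-x, -y, z).
void FlipAroundZ(std::vector<Point3f> &cloud)
{
    for (Point3f &p : cloud)
    {
        p.x = -p.x;
        p.y = -p.y;
    }
}
```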

Marek

@VisionaryMind
Author

This is not specifically related to the original issue, but I want to keep it in the AzureKinect branch category. I am attempting to capture timestamps for all frames and am storing them in an array inside the KinectServer GetStoredFrame method. Unfortunately, when these are streamed to file along with the PLYs, the timestamps appear to start after the recording is stopped. Is there a more appropriate place to capture a timestamp for each frame? I noticed you are storing the current time in tFPSUpdateTimer inside OpenGLWindow.cs; however, I would have presumed this happens after the frame is received and GetStoredFrame is invoked. If you have a moment, please let me know what I have missed, as I would very much like to be able to move the depth camera around and know, to the microsecond, when each frame is being captured.
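
For concreteness, what I have in mind is stamping each frame with the wall clock at the moment it is received, something like this (a hypothetical sketch, not LiveScan3D code):

```cpp
#include <chrono>
#include <cstdint>

// Wall-clock time in microseconds, to be recorded at the point where a
// frame is first received, not where it is later written to disk.
uint64_t WallClockMicros()
{
    using namespace std::chrono;
    return duration_cast<microseconds>(
        system_clock::now().time_since_epoch()).count();
}
```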

@ChristopherRemde

ChristopherRemde commented Dec 23, 2020

I don't know if this helps you, but in PR #49 I changed the timestamp to be taken directly from the Kinect device, rather than from the PC it runs on. That could be a bit more accurate.

But please note that this timestamp is not a synchronized global time (e.g. 11:12 AM) but rather a timer which starts when the device starts its capture (e.g. 5.7192 seconds after the device start).
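
In the SDK, that timestamp is read from the image itself; roughly along these lines (a sketch, not the exact code from the PR):

```cpp
#include <k4a/k4a.h>
#include <stdint.h>

// Returns the depth image's device timestamp in microseconds, or 0 if the
// capture has no depth image. This is time since the device started
// streaming, not wall-clock time.
uint64_t GetDeviceTimestampUsec(k4a_capture_t capture)
{
    k4a_image_t depth = k4a_capture_get_depth_image(capture);
    if (depth == NULL)
        return 0;
    uint64_t ts = k4a_image_get_device_timestamp_usec(depth);
    k4a_image_release(depth);
    return ts;
}
```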

@VisionaryMind
Author

VisionaryMind commented Dec 23, 2020

But please note that this timestamp is not a synchronized global time (e.g. 11:12 AM) but rather a timer which starts when the device starts its capture (e.g. 5.7192 seconds after the device start).

Yes, I was aware of this caveat, and therefore did not try to work with the device's time. Our workflow uses multiple capture devices (audio, video, LiDAR, depth, DSLR), and everything is being synced to LTC / SMPTE timecode. Even if I were to capture the system time and then increment it by the Kinect's timer, I am quite certain it would not be in sync with any other device capturing at the same time. System time is about as close as we can get, especially if the devices are on a single system.

It sounds to me like the answer here is to feed an LTC timesync into the Kinect's audio stream. Do you implement such a stream anywhere in your code? I did not see a feature to capture audio, but the Kinect has 360-degree spatial capability, and it might be quite novel to use it, should multiple "clusters" of Kinects be implemented for parallel volumetric capture.

I will be happy to move this into the feature request section, should you feel it is something worth pursuing. Most of our code is written in Python, so if you have any C++/C# snippets lying around that implement such a feature, please let me know. Thank you for your time!

@MarekKowalski
Owner

Unfortunately, the app does not read the audio stream anywhere in the code, and I don't think I have any snippets of such code lying around.
I feel that an alternative solution for you might be to synchronize the devices using an external trigger, as discussed here. You could have the trigger pulse generated at a precise time, which would provide you with the frame's timestamp.
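
For what it's worth, the wired sync modes are set through the device configuration; a sketch with illustrative values (see Microsoft's multi-device synchronization docs for the details):

```cpp
#include <k4a/k4a.h>

// Sketch: one master device drives the sync signal, each subordinate
// listens on its sync-in jack.
void ConfigureWiredSync(k4a_device_configuration_t &master_config,
                        k4a_device_configuration_t &sub_config)
{
    master_config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    master_config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;

    sub_config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    sub_config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
    // Offset the subordinate's capture so overlapping depth cameras do not
    // interfere with each other (the docs suggest multiples of 160 us).
    sub_config.subordinate_delay_off_master_usec = 160;
}
```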
