Place the calibrated cameras in Unity #60

Open
carlafernandez00 opened this issue Feb 9, 2022 · 6 comments

@carlafernandez00

Hey!

I am using LiveScan3D to calibrate two Azure Kinects and, after that, I wanted to place both cameras in Unity. To do so, I extracted the rotation matrix and translation vector from the .txt files and performed the computations needed to set the rotation and position of the camera objects in Unity (code attached below). But the resulting position and rotation in Unity didn't correspond to reality.

// rotationMatrixCV is the 3x3 rotation matrix and translation the translation
// vector, both read from the LiveScan3D calibration .txt file.
var rotationMatrix = new Matrix4x4();
for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; j++)
    {
        rotationMatrix[i, j] = rotationMatrixCV[i][j];
    }
}
rotationMatrix[3, 3] = 1f;

// Compose a local-to-world matrix, assuming the x' = R*x + t convention.
var localToWorldMatrix = Matrix4x4.Translate(translation) * rotationMatrix;

// The camera position is the fourth column of the matrix.
Vector3 position;
position.x = localToWorldMatrix.m03;
position.y = localToWorldMatrix.m13;
position.z = localToWorldMatrix.m23;
transform.position = position;

// The third and second columns give the forward and up axes.
Vector3 forward;
forward.x = localToWorldMatrix.m02;
forward.y = localToWorldMatrix.m12;
forward.z = localToWorldMatrix.m22;

Vector3 upwards;
upwards.x = localToWorldMatrix.m01;
upwards.y = localToWorldMatrix.m11;
upwards.z = localToWorldMatrix.m21;

transform.rotation = Quaternion.LookRotation(forward, upwards);

I think the problem may be that the coordinate systems of Unity and LiveScan3D are different. Any suggestion would be appreciated!
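
For context, Unity uses a left-handed, Y-up coordinate system while OpenCV-style calibrations are right-handed, so if that is the cause, the usual fix is to conjugate the pose with F = diag(1, -1, 1). A minimal sketch, assuming the two frames differ only by a flipped Y axis (the ConvertPose helper is hypothetical, not part of LiveScan3D):

// Convert a right-handed (OpenCV-style) pose to Unity's left-handed, Y-up
// frame: R_unity = F * R * F and t_unity = F * t, with F = diag(1, -1, 1).
// Hypothetical helper; assumes the frames differ only by a flipped Y axis.
static void ConvertPose(float[][] R, Vector3 t,
                        out Matrix4x4 unityR, out Vector3 unityT)
{
    unityR = Matrix4x4.identity;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
        {
            // F*R*F negates the entries where exactly one index is the Y axis.
            float sign = ((i == 1) ^ (j == 1)) ? -1f : 1f;
            unityR[i, j] = sign * R[i][j];
        }
    unityT = new Vector3(t.x, -t.y, t.z); // F * t
}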

Thank you in advance!

@ChristopherRemde

Hey Carla,

If I remember correctly, you have to invert the matrix: https://docs.unity3d.com/ScriptReference/Matrix4x4-inverse.html
Does that do anything useful?
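
A rough sketch of that idea, assuming the .txt actually stores the world-to-local transform:

// If the stored transform maps world to local coordinates, invert the
// composed matrix before extracting the position and axes from it.
var worldToLocal = Matrix4x4.Translate(translation) * rotationMatrix;
var localToWorldMatrix = worldToLocal.inverse;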

@MarekKowalski
Owner

Hi Carla,
sorry, I missed this message earlier. I'm just going through the code trying to remember the format in which the calibration is stored in the .txt file. The relevant code is in lines 710-714 of liveScanClient.cpp and in utils.cpp.
It appears that a point is transformed from local coordinates to world coordinates as follows:
x' = R(x + t).
The 4x4 matrix you use in Unity assumes a transform of the form x' = Rx + t. Thus, I believe what you need to do is multiply the translation you get from the .txt file by the rotation matrix you got from the same file. The same operation is also done in lines 138-145 of KinectSocket.cs.
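
In Unity code that conversion could look like this (a sketch reusing the variable names from the snippet above):

// x' = R(x + t) = R*x + R*t, so pre-multiply the stored translation by R
// to match the x' = R*x + t convention the 4x4 matrix assumes.
Vector3 convertedTranslation = rotationMatrix.MultiplyVector(translation);
var localToWorldMatrix = Matrix4x4.Translate(convertedTranslation) * rotationMatrix;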

Once you do that the position of the camera in Unity should match what you get in the app.

Marek

@carlafernandez00
Author

Hi Marek!

Thank you for your response. I tried computing the transformation as you suggested (x' = R(x + t)), but I still don't get a correct reconstruction.
I'm attaching the new code and the result in Unity so you can see it more clearly and maybe spot an error.

Code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TestCalib : MonoBehaviour
{
    public Vector3 position;
    public Vector3 rotation;

    // Selects which of the two calibrated cameras this object represents.
    public int index = 0;

    Vector3 translation = new Vector3(0.0f, 0.0f, 0.0f);
    float[][] rotationMatrixCV =
    {
        new float[3],
        new float[3],
        new float[3],
    };

    private void Update()
    {
        // Earlier experiments, left disabled:
        //transform.rotation = Quaternion.Euler(cam1.TransformDirection(rotation));
        //transform.position = cam1.TransformPoint(position);
    }

    void Start()
    {
        // Rotation matrix and translation vector of the camera,
        // copied from the LiveScan3D calibration .txt files.
        if (index == 0)
        {
            rotationMatrixCV[0] = new float[3] { 0.25563f, 0.25744f, -1.51961f };
            rotationMatrixCV[1] = new float[3] { -0.192701f, -0.819072f, -0.540359f };
            rotationMatrixCV[2] = new float[3] { -0.981055f, 0.172012f, 0.0891256f };

            translation = new Vector3(0.0199481f, 0.547296f, -0.836701f);
        }
        else if (index == 1)
        {
            rotationMatrixCV[0] = new float[3] { 0.15183f, -0.06187f, -1.14252f };
            rotationMatrixCV[1] = new float[3] { -0.0408712f, -0.954359f, 0.295853f };
            rotationMatrixCV[2] = new float[3] { -0.997299f, 0.0208791f, -0.0704222f };

            translation = new Vector3(0.0610309f, -0.297932f, -0.952634f);
        }

        // Embed the 3x3 rotation into a 4x4 matrix.
        var rotationMatrix = new Matrix4x4();
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                rotationMatrix[i, j] = rotationMatrixCV[i][j];
            }
        }
        rotationMatrix[3, 3] = 1f;

        // x' = R(x + t) = R*x + R*t, so pre-multiply the translation by R.
        Vector4 translationVector = rotationMatrix * new Vector4(translation.x, translation.y, translation.z, 1.0f);
        var localToWorldMatrix = Matrix4x4.Translate(translationVector) * rotationMatrix;

        // The camera position is the fourth column of the matrix.
        Vector3 position;
        position.x = localToWorldMatrix.m03;
        position.y = localToWorldMatrix.m13;
        position.z = localToWorldMatrix.m23;
        transform.position = position;

        // The third and second columns give the forward and up axes.
        Vector3 forward;
        forward.x = localToWorldMatrix.m02;
        forward.y = localToWorldMatrix.m12;
        forward.z = localToWorldMatrix.m22;

        Vector3 upwards;
        upwards.x = localToWorldMatrix.m01;
        upwards.y = localToWorldMatrix.m11;
        upwards.z = localToWorldMatrix.m21;

        transform.rotation = Quaternion.LookRotation(forward, upwards);
    }
}

Result:

[screenshot "camLiveScan": the two camera objects as placed in Unity]

The idea is to place the two cameras in Unity and then project the point clouds captured by each of them. If the cameras were placed correctly, the two point clouds would merge and reconstruct the whole scene. But here's the result of placing the two cameras with the script above; clearly something is wrong.

Thank you again!

@ChristopherRemde

Hey, I'm just curious: you took the point clouds from the .ply files and then imported them into Unity, right?
Because the point clouds already have the transformations applied, you wouldn't need to change their transforms at all. For example, if you load all of the unmerged .ply frames into MeshLab, they should already appear as one "stitched" point cloud.

@carlafernandez00
Author

Hi Christopher! No, I'm not using the .ply files; I have the Kinects connected to Unity and I'm getting the point clouds from there in real time.

@ChristopherRemde

Ah I see!

Maybe you can take a look at this project/script? As far as I know it works: it imports the camera extrinsics from a modified version of LiveScan into Unity. You don't need the modified version of LiveScan, though; it only saves the camera extrinsics in a slightly different format, and the values are the same as in the .calib file.

https://github.com/Elite-Volumetric-Capture-Sqad/volumetricpipeline/blob/main/Unity/LS3D_ExtrinsicsViewer/Assets/ViewCameras.cs
