
Question, how do we get the real size of the object? #136

Open
Jiakui opened this issue May 29, 2021 · 3 comments

Comments


Jiakui commented May 29, 2021

Dear Author,

Following the example "fitting a hand mesh to several RGB sensor images", I am trying to fit a mesh of an object to several RGB images. My question is, how can we obtain the mesh at real-world scale? For example, if I measure the length or width of the mesh, can I get the same values as when I measure the real object with a ruler?

It seems that we can set the intrinsic parameters of the camera. Does that guarantee that the resulting mesh has the same size as the real object?

Thanks so much!

@martinResearch
Owner

With only one camera it is not possible to estimate the absolute size of the object: a small object close to the camera and a large object far from the camera can produce the same image. With multiple cameras you need at least one quantity expressed in physical units (meters, for example). The only values with units in the camera model are the translation parts of the extrinsics (the fourth column). If your multiple cameras actually correspond to multiple positions of a single physical camera along a trajectory, then you will need some measurement of the length of the camera displacement. This could potentially be done by having a second visible object of known size in the scene. If you really have multiple physical cameras, you will need to calibrate their relative positions using a calibration object of known size (a checkerboard, for example). You could use OpenCV to do the calibration. I hope this clarifies your options.
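To make the scale ambiguity concrete: once one known length is available, the whole reconstruction (mesh vertices and camera translations) can be rescaled by a single factor without changing any image projection. A minimal numpy sketch, assuming an up-to-scale reconstruction and a reference object whose true length is known (the function name is illustrative, not part of this library):

```python
import numpy as np

def rescale_to_metric(mesh_vertices, cam_translations,
                      ref_measured_length, ref_true_length_m):
    """Rescale an up-to-scale reconstruction to metric units.

    ref_measured_length: length of the reference object in the
    reconstruction's arbitrary units; ref_true_length_m: the same
    length measured with a ruler, in meters.
    """
    s = ref_true_length_m / ref_measured_length
    # Scaling every 3D point and every camera translation by the
    # same factor leaves all image projections unchanged (this is
    # the scale ambiguity), but makes all distances metric.
    return mesh_vertices * s, [t * s for t in cam_translations]

# Example: a reference edge measures 2.0 units in the fit but
# 0.5 m with a ruler, so the whole scene shrinks by a factor of 4.
verts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
verts_m, trans_m = rescale_to_metric(verts, [np.array([0.0, 0.0, 8.0])],
                                     ref_measured_length=2.0,
                                     ref_true_length_m=0.5)
print(np.linalg.norm(verts_m[1] - verts_m[0]))  # 0.5
```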


Jiakui commented May 29, 2021

Thanks so much for your very rapid answer. If I go with your first option, "This could potentially be done by having a second visible object of known size in the scene", should I simultaneously fit two meshes to the given multiple images? (I actually have a second visible object of known shape and size, so I would only need to fit its position.) This is not supported by your code now; how should I rewrite your mesh_fitter.py?

For your second option, I can calibrate the position of each camera using a checkerboard, but I don't know the relative pose between the object and the checkerboard, so I would still need to optimize for it. Do you have any suggestions about how to modify your mesh_fitter.py?

Thanks again!

@martinResearch
Owner

In order to estimate the camera positions with a known object I would not use my library but OpenCV; mesh fitting through differentiable rasterization is not the right tool here. First estimate your camera intrinsics beforehand with classical intrinsic camera calibration using a checkerboard in OpenCV. Then, once you have the intrinsics, you can estimate the camera trajectory in a sequence containing both the checkerboard and your object, keeping the checkerboard at a fixed location and in view of the cameras, using OpenCV's perspective-n-point solver.
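With the checkerboard fixed, it defines a common metric world frame: per image, a PnP solve (e.g. OpenCV's `cv2.solvePnP` with the board corners) returns a rotation and a translation mapping board coordinates to camera coordinates, and because the board's square size is given in meters, the resulting extrinsics are metric. A minimal numpy sketch of turning such a PnP result into an extrinsic matrix and a camera position, assuming the PnP rotation has already been converted to a matrix (function names are illustrative):

```python
import numpy as np

def extrinsic_from_pnp(rmat, tvec):
    """Build a 4x4 world-to-camera matrix from a PnP result.

    With the checkerboard defining the world frame, PnP gives a
    rotation matrix and a translation mapping board coordinates to
    camera coordinates; tvec is in the same physical units as the
    board's square size, so the extrinsics are metric.
    """
    T = np.eye(4)
    T[:3, :3] = rmat
    T[:3, 3] = tvec
    return T

def camera_center(T):
    """Camera position in the (checkerboard) world frame: -R^T t."""
    R, t = T[:3, :3], T[:3, 3]
    return -R.T @ t

# Example: a camera looking straight at the board from 2 m away
# (identity rotation, board 2 m along the camera's +Z axis) sits
# at z = -2 m in the board's frame.
T = extrinsic_from_pnp(np.eye(3), np.array([0.0, 0.0, 2.0]))
print(camera_center(T))  # [ 0.  0. -2.]
```

Feeding these per-image extrinsics into the mesh fitter fixes the cameras in a shared metric frame, so only the object pose and shape remain to be optimized.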
