
Fountain_p11 dataset #407

Closed
HelliceSaouli opened this issue Mar 10, 2018 · 14 comments

@HelliceSaouli

HelliceSaouli commented Mar 10, 2018

Hello,
I wanted to reconstruct fountain_p11 using the ground-truth camera calibration, so I used the middlebury.sh script to create the scene and then ran featurerecon, and it seems the ground-truth cameras are not correct. I also tried them with SMVS and it didn't work. I even tried re-photographing the ground-truth model with the ground-truth cameras (with my own code, which works fine on other scenes) and it did not work either. Can someone please tell me how to use the Strecha dataset correctly?

EDIT:
It seems the ground-truth camera parameters are correct after all, since I used them with PMVS-2 and it worked fine. This is weird.

@simonfuhrmann
Owner

So the way you described it sounds good -- you create a scene with the script and run featurerecon. The script, however, is for Middlebury and not for Strecha datasets. You may have to adapt it a little.

  • Can you take a look at the meta.ini file yourself and compare with the parameters from Strecha?
  • Before running featurerecon, can you inspect the scene with UMVE?

@HelliceSaouli
Author

Well, Strecha's fountain_p11 contains only 11 images, so I manually created a fountain_par.txt. I think the Middlebury script runs fine and the meta.ini files are correct. UMVE also runs fine: if I open the scene before featurerecon, it shows cameras with weird rotations, and when I run featurerecon I get only a thousand or so points forming a cone-like shape. Check the image below.

[screenshot: cone-shaped point cloud]

Also, if you noticed, Strecha's dataset provides two files per camera: one containing K, R, and t, and another file called P containing the projection matrix. If I compute P = K * [R | t], I get a matrix similar to the one in the P file, but with one column different. Check the image below.

[screenshot: computed P vs. the P file, differing in one column]

This dataset is bugging me. How did people use it to validate anything?
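As a side note, a single differing column is exactly what you would see if the vector stored in the file were the camera center c rather than the translation t: K·[R | c] and K·[R | −R·c] share their first three columns and differ only in the fourth. A small NumPy sketch with made-up values (not the real fountain_p11 parameters):

```python
import numpy as np

# Made-up intrinsics and pose, purely for illustration.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                     # identity rotation keeps the example readable
c = np.array([1.0, 2.0, 3.0])     # camera center, as a Strecha file stores it

t = -R @ c                        # proper translation for P = K [R | t]

P_wrong = K @ np.hstack([R, c[:, None]])  # mistakenly using c as t
P_right = K @ np.hstack([R, t[:, None]])
```

The rotation part K·R is identical in both matrices; only the last column, which carries the translation, disagrees.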

@simonfuhrmann
Owner

It's difficult to see from the image what is wrong. If you get features in a cone-like shape, some of the camera parameters are probably wrong. Maybe the cameras are inverted? Your math in the second screenshot looks wrong to me. Shouldn't [R|t] have all zeros in the last row, with a one in the bottom-right corner?

@HelliceSaouli
Author

The math is correct according to the pinhole camera model: R is 3x3, t is 3x1, and K is 3x3, so why should I augment [R|t] with a row? I mean, you can augment the projection P if you want to do some homogeneous transformation.
Also, I just checked on the net and found out that Strecha computes his projection P like this: P = K * [R^T | -R^T t], where "^T" means transpose, but I don't get this.
Back to the problem:
fountain_par.txt — here is the file. You can try running the script and featurerecon on it yourself if you have time. I think the camera parameters given in the dataset are wrong, or they are not compatible with MVE and SMVS.

@simonfuhrmann
Owner

Maybe someone on the team has time to look into this, I don't. @nmoehrle, @flanggut?

@HelliceSaouli
Author

Thank you. I also found out that none of the Strecha datasets, even the new ones here: https://cvlab.epfl.ch/data/strechamvs, work either. This rules out the assumption that the camera parameters of just this dataset are wrong, and leads me to believe that the ground-truth camera parameters are somehow not compatible with MVE and SMVS.

@simonfuhrmann
Owner

Please post one of your meta.ini files here.

@nmoehrle
Contributor

When I look at the screenshot of UMVE, I can see that the rotations are incorrect; the views are supposed to form an arc looking towards the center. When I experimented with Strecha, I had my own conversion scripts, and since they are written in Python I didn't try to integrate them into MVE. Can you show me the script that you used to convert the camera parameters, or give a link?

@nmoehrle
Contributor

My best guess is that you did not convert the camera position (c, stored in the Strecha camera files) into the translation (t = -R * c). Furthermore, I think there was some oddity with the Strecha camera files: the camera matrix is in row-major order and the rotation matrix is in column-major order, or, if you will, the transposed rotation matrix (R^T) is stored.
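The conversion described here can be sketched as follows (a minimal NumPy sketch; the function name and the sanity check are mine, not part of MVE):

```python
import numpy as np

def strecha_to_mve(R_stored: np.ndarray, c: np.ndarray):
    """Convert a pose from a Strecha .camera file to MVE's convention.

    The file stores the transposed (camera-to-world) rotation and the
    camera center c; MVE expects the world-to-camera rotation R and the
    translation t = -R @ c.
    """
    R = R_stored.T      # undo the transposition
    t = -R @ c          # translation from camera center
    return R, t

# Sanity check with an arbitrary rotation: the camera center must map
# to the origin of the camera frame, i.e. R @ c + t == 0.
theta = 0.3
R_w2c = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
c = np.array([0.5, -1.0, 2.0])

R, t = strecha_to_mve(R_w2c.T, c)
```

After the conversion, `R @ c + t` is the zero vector, which is the quickest way to confirm the pose is consistent.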

@HelliceSaouli
Author

HelliceSaouli commented Mar 14, 2018

@simonfuhrmann here is the meta file:
meta.txt
@nmoehrle I used this: https://github.com/simonfuhrmann/mve/wiki/Middlebury-Datasets to get the camera parameters.
And indeed, I didn't apply (t = -R * c); I thought Strecha gives you the translation vector t. What confused me is that the Strecha readme says P = K * [R^T | -R^T t], when it should really say c instead of t -_-. I will try to use this information and see what it gives.

@nmoehrle
Contributor

This script cannot parse the .camera files of the Strecha benchmark; it just reads the Middlebury camera parameter format, a single line that looks like this:
"imgname.png k11 k12 k13 k21 k22 k23 k31 k32 k33 r11 r12 r13 r21 r22 r23 r31 r32 r33 t1 t2 t3"

The .camera files have an entirely different structure:

| Lines | Content |
|-------|--------------|
| 1–3   | K matrix     |
| 4     | unknown      |
| 5–7   | R^T          |
| 8     | c            |
| 9     | width height |

@HelliceSaouli
Author

@nmoehrle yes, I'm aware of that. I manually created the fountain_par.txt file from the .camera files for the 11 images (too lazy to write my own parser). The only thing I didn't account for is (t = -R * c); I used the c given in .camera as t. I will fix this later and post the results.

@HelliceSaouli
Author

@nmoehrle well, I think the problem is solved, thanks to you!

[screenshot: correct reconstruction]

@nmoehrle
Contributor

Yes, this is how I remember it :-)
