We manually create COLMAP sparse folder structures using intrinsics and extrinsics from external hardware, via a script that generates the necessary images.txt, cameras.txt, points3D.txt, and project.ini files in the formats and conventions COLMAP expects. Next, we use 'File:Import model' and launch 'Processing:Feature Extraction:Pinhole' with a 'Shared' camera and 'Custom' COLMAP Pinhole camera parameters from our hardware (e.g. fx, fy, cx, cy = 3222.910, 3222.437, 2052.824, 1548.820). We then run 'Processing:Feature Matching:Exhaustive', and finally call triangulation from the CLI (see the log below).
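For context, the model-writing step can be sketched roughly as below. The file layout follows COLMAP's documented text format (cameras.txt: `CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]`; images.txt: one pose line plus one 2D-point line per image). The function name, the width/height values, and the omission of project.ini are placeholders for illustration, not our exact script:

```python
from pathlib import Path

def write_sparse_model(out_dir, width, height, params, poses):
    """Write a minimal COLMAP text model with no 3D points.

    poses: list of (name, qvec, tvec), where qvec = (qw, qx, qy, qz) and
    (qvec, tvec) describe the WORLD-TO-CAMERA transform, as COLMAP expects.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]
    # For the PINHOLE model, PARAMS[] is fx fy cx cy.
    with open(out / "cameras.txt", "w") as f:
        f.write("1 PINHOLE {} {} {}\n".format(
            width, height, " ".join(map(str, params))))

    # images.txt: two lines per image; the second (2D point) line is left
    # empty so that `colmap point_triangulator` can fill in observations.
    with open(out / "images.txt", "w") as f:
        for i, (name, q, t) in enumerate(poses, start=1):
            f.write("{} {} {} 1 {}\n\n".format(
                i, " ".join(map(str, q)), " ".join(map(str, t)), name))

    # points3D.txt must exist but starts empty.
    (out / "points3D.txt").touch()
```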
We use the supplied [python scripts] to visualize the triangulated 3D points and cameras, and export them to other software (including Agisoft Metashape and MeshLab) to cross-check the visualization.
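As a quick cross-check alongside visualize_model.py, the counts can also be recovered by parsing the text model directly; a rough sketch, assuming the model was exported in COLMAP's text format (each image contributes a pose line plus a 2D-point line, and '#' lines are comments):

```python
from pathlib import Path

def model_summary(sparse_dir):
    """Count images and 3D points in a COLMAP *text* model folder."""
    sparse = Path(sparse_dir)

    # images.txt: two lines per image (pose line + 2D-point line, which
    # may be empty), so the image count is half the non-comment lines.
    image_lines = [l for l in (sparse / "images.txt").read_text().splitlines()
                   if not l.startswith("#")]
    n_images = len(image_lines) // 2

    # points3D.txt: one non-comment line per triangulated point.
    n_points = sum(1 for l in (sparse / "points3D.txt").read_text().splitlines()
                   if l.strip() and not l.startswith("#"))
    return n_images, n_points
```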
For this example, visualize_model.py reports 17 images with 1 camera, as expected, and 678 triangulated points. However, the Open3D visualization suggests these points are not correctly triangulated relative to our camera frustums:
For reference, here is a similar view of the same cameras imported and triangulated in Metashape:
The camera positions, rotations, and local axes match between the Open3D and Metashape visualizations.
The triangulation step in COLMAP computes a fair number of valid points, as shown in the log below, but perhaps the 3D points from each view fail the geometric cross-check because we need to scale our extrinsics? If so, how can we find the correct scaling value?
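One thing worth noting: triangulation should be invariant to a uniform scaling of the extrinsics, since scaling all camera centers simply scales the triangulated points by the same factor without changing pixel reprojection errors. A more common culprit for misplaced points is the transform direction: COLMAP's images.txt stores the world-to-camera transform, while many hardware pipelines export camera-to-world poses. A minimal sanity check, assuming COLMAP's (QW, QX, QY, QZ) quaternion convention:

```python
import numpy as np

def qvec2rotmat(qvec):
    # COLMAP quaternion order: (qw, qx, qy, qz), unit-normalized.
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y]])

def camera_center(qvec, tvec):
    """Camera center in world coordinates for a COLMAP images.txt entry.

    Since (qvec, tvec) is the WORLD-TO-CAMERA transform, the center is
    C = -R^T t. If these centers do not match the positions the hardware
    reports, the poses were likely written camera-to-world and need to be
    inverted before import.
    """
    R = qvec2rotmat(qvec)
    return -R.T @ np.asarray(tvec, dtype=float)
```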