COLMAP rotation matrix vs drone data #2468
Replies: 1 comment
So what you're saying is that you get poses from your drone and poses from a reconstruction (using images captured by the drone) and they look different?
Hi!
I'm doing a project comparing COLMAP to using a drone's own pose data when creating NeRFs. I'm quite new to this and need some help with the coordinate systems.
My drone provides pitch, roll, and yaw data in the usual aircraft body frame: x points through the front of the aircraft, y out the side, and z down. Roll, pitch, and yaw are rotations around the x, y, and z axes respectively.
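For reference, the rotation matrix implied by that convention can be built as below. This is a minimal sketch assuming the standard yaw-pitch-roll application order (intrinsic z-y'-x'', i.e. extrinsic Rz @ Ry @ Rx); the drone's documentation would need to confirm that order:

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Body-to-world rotation for the aircraft convention described
    above: x forward, y out the side, z down, with angles applied in
    yaw-pitch-roll order (an assumed but common convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw
    return Rz @ Ry @ Rx
```

With all angles zero this returns the identity, and a yaw of +90° rotates the body's forward axis onto its side axis, which is a quick sanity check that the order matches the drone's data.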
When I calculate rotation matrices from this data, they look very different from COLMAP's for the same frames, and I don't understand exactly why. Since COLMAP uses the OpenCV camera convention, how do I make sure my rotation matrices are expressed in the same coordinate system? How does COLMAP choose its origin, and how can I find it?
To summarize: how can I turn my drone data into rotation matrices in the convention COLMAP expects, so I can use them as input instead of COLMAP's own estimates?
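To illustrate what such a conversion might look like, here is a sketch that maps a body-frame attitude into a world-to-camera rotation and translation of the kind COLMAP stores. Everything beyond the angle conventions stated above is an assumption: the mount rotation `R_cam_from_body` assumes a forward-looking, rigidly mounted camera (a gimbal or a different mounting changes that matrix), and it treats the drone's world frame as the reconstruction frame, whereas COLMAP actually picks its own arbitrary, scale-free origin, so the two frames differ by a global similarity transform that has to be estimated separately:

```python
import numpy as np

def drone_pose_to_colmap(roll, pitch, yaw, cam_center_world):
    """Sketch: drone attitude (body frame: x forward, y side, z down,
    yaw-pitch-roll order) plus camera position in the drone's world
    frame -> world-to-camera rotation R and translation t.

    ASSUMPTIONS (not from COLMAP itself): forward-looking camera
    rigidly mounted to the body, and the drone's world frame used
    directly as the reconstruction frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R_world_from_body = Rz @ Ry @ Rx            # body -> world

    # Camera axes written in body coordinates for a forward-looking
    # OpenCV-style camera (x right, y down, z forward):
    #   cam x = body y,  cam y = body z,  cam z = body x
    R_cam_from_body = np.array([[0., 1., 0.],
                                [0., 0., 1.],
                                [1., 0., 0.]])

    R = R_cam_from_body @ R_world_from_body.T   # world -> camera
    # COLMAP stores t = -R @ C, with C the camera center in world coords.
    t = -R @ np.asarray(cam_center_world, dtype=float)
    return R, t
```

Even with a correct mount rotation, the resulting poses and COLMAP's output will only agree up to that global similarity, so comparing them per-frame requires aligning the two trajectories first.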
Let me know if any additional information is needed in order to answer this question. Thank you in advance! :)