
Allowing External Angle Sources in Pose Estimation #1309

Open · wants to merge 11 commits into base: master

Conversation

@InterstellarDrift commented Apr 13, 2024

This pull request adds the IMU_SOLVE_BEST_TAG PoseStrategy. It takes the target returned by the getBestTarget method of PhotonPipelineResult and solves the robot pose from that tag alone. When tested in simulation, the regular multi-tag solve could be off by half a meter or more with roughly ±0.1 m of jitter, while IMU_SOLVE_BEST_TAG maintained both high accuracy and low noise: roughly ±0.01-0.02 m. (These figures are approximate; I'm recalling them from memory.)

#746
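The core idea can be reduced to 2D math: with the robot's field heading known from the IMU, the camera-frame offset to a single tag plus that tag's known field position fully determines the robot's position. The sketch below illustrates this with plain doubles; the class and method names are illustrative, not the PR's actual API.

```java
// Sketch of a single-tag + known-yaw pose solve, reduced to 2D.
// Convention: x forward, y left, angles CCW-positive in radians.
public class SingleTagYawSolve {
    /**
     * Given the camera-frame offset to a tag, the tag's known field position,
     * and the robot's field heading from the IMU, return the camera's field position.
     */
    static double[] solve(double tagCamX, double tagCamY,
                          double tagFieldX, double tagFieldY,
                          double robotYawRad) {
        // Rotate the camera-frame offset into the field frame using the IMU yaw...
        double cos = Math.cos(robotYawRad), sin = Math.sin(robotYawRad);
        double fieldOffX = cos * tagCamX - sin * tagCamY;
        double fieldOffY = sin * tagCamX + cos * tagCamY;
        // ...then subtract it from the tag's known field position.
        return new double[] {tagFieldX - fieldOffX, tagFieldY - fieldOffY};
    }

    public static void main(String[] args) {
        // Robot at (2, 3) facing +90 deg; tag at (2, 8) is 5 m straight ahead.
        double[] pos = solve(5.0, 0.0, 2.0, 8.0, Math.PI / 2);
        System.out.printf("%.3f %.3f%n", pos[0], pos[1]); // prints "2.000 3.000"
    }
}
```

Note that the solve is only as good as the yaw fed into it, which is what the latency discussion below is about.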

@InterstellarDrift requested a review from a team as a code owner April 13, 2024 19:51
@mdurrani808 (Contributor)
How does this perform when the robot is rotating? A major problem with MegaTag2 is that you have no idea when the gyro reading is from without the timestamp, leading to inaccuracies. This is fine when you're sitting in place, but once you're moving, things start getting dicey.

@InterstellarDrift (Author) commented Apr 13, 2024

Unfortunately I don't have much information on that at the moment: all of my testing so far has been done in simulation, and something is currently broken in my simulation that prevents me from rotating the robot in teleop. I'll look into compensating for latency, though.

@mcm001 (Contributor) commented Apr 13, 2024

The other thing to note is that this still only uses a single tag. How does using the multitag pose but with the robot yaw perform?

@srimanachanta (Member)
I suggest a refactor here to support an interpolatable buffer of rotation angles (i.e. WPILib's TimeInterpolatableBuffer), which would solve the timestamp issues and also let this work better with multi-tag via timestamp synchronization across measurements. I'd prefer to internalize this, though, as PhotonPoseEstimator has grown a lot of different strategies, which can make it confusing.
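The buffer idea is straightforward to sketch: store timestamped yaw samples and linearly interpolate when the estimator asks for the heading at a frame's capture time. This is a minimal stand-in for what WPILib's TimeInterpolatableBuffer provides; the names are illustrative, and a production version would also need to handle angle wraparound and trim old samples.

```java
import java.util.TreeMap;

// Minimal sketch of a time-interpolated heading buffer, assuming linear
// interpolation between samples is acceptable and angles stay far from the
// +/-pi wraparound (a real implementation must handle wraparound).
public class HeadingBuffer {
    private final TreeMap<Double, Double> samples = new TreeMap<>(); // time (s) -> yaw (rad)

    void addSample(double timestampSec, double yawRad) {
        samples.put(timestampSec, yawRad);
    }

    /** Returns the yaw at the requested time, linearly interpolated between samples. */
    double sample(double timestampSec) {
        var floor = samples.floorEntry(timestampSec);
        var ceil = samples.ceilingEntry(timestampSec);
        if (floor == null) return ceil.getValue();   // before the first sample
        if (ceil == null) return floor.getValue();   // after the last sample
        if (floor.getKey().equals(ceil.getKey())) return floor.getValue(); // exact hit
        double t = (timestampSec - floor.getKey()) / (ceil.getKey() - floor.getKey());
        return floor.getValue() + t * (ceil.getValue() - floor.getValue());
    }

    public static void main(String[] args) {
        HeadingBuffer buf = new HeadingBuffer();
        buf.addSample(0.0, 0.0);
        buf.addSample(1.0, 1.0);
        System.out.printf("%.2f%n", buf.sample(0.25)); // prints "0.25"
    }
}
```

The estimator would then sample the buffer at the vision result's capture timestamp rather than reading the gyro at processing time, so the yaw used in the solve matches the image.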

@InterstellarDrift (Author)
> I'd prefer to internalize this though as the PhotonPoseEstimator has grown a lot with different strategies which can lead to it being confusing.

I definitely agree that moving the register function to a different class would be more viable; otherwise it would become tedious to work with when multiple PoseEstimators are used, in addition to adding unnecessary options. Would PhotonUtils be an acceptable place to store the heading register?

I plan on pushing the commits I've made over the past month this afternoon or evening.

@InterstellarDrift (Author)

> How does this perform when the robot is rotating? A major problem with MegaTag2 is that you have no idea when the gyro reading is from without the timestamp, leading to inaccuracies. This is fine when you're sitting in place, but once you're moving, things start getting dicey.

Last night I finally did a side-by-side comparison of the estimated pose's position before and after latency compensation was applied, and the difference is substantial. At the test location I was using in simulation, between the speaker and the amp, the estimated pose without latency compensation would get flung as far as the opposite side of the stage. With compensation applied at the same speeds, the farthest the estimate was thrown was no more than 2-2.5 feet, and most of that came from detecting tags halfway across the field, where the error spikes significantly. It also appeared to hold up similarly when the simulated robot was rotating at 5 radians per second instead of the 2 radians per second used in the first two tests.
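A back-of-envelope calculation shows why a stale yaw flings the pose so far: for a tag at distance d, a yaw error of Δθ displaces the solved position by roughly d·Δθ (small-angle approximation). The latency figure below is an assumption for illustration, not a measured value from this PR.

```java
// Back-of-envelope: position error from using a stale yaw in a yaw-based tag solve.
// A yaw error of dTheta misplaces the solve for a tag at distance d by roughly
// d * dTheta. The latency value is an assumed figure for illustration.
public class StaleYawError {
    public static void main(String[] args) {
        double omega = 5.0;        // rad/s, the spin rate from the test above
        double latency = 0.1;      // s, assumed camera pipeline latency
        double tagDistance = 8.0;  // m, e.g. a tag halfway across the field
        double yawError = omega * latency;        // 0.5 rad of stale-yaw error
        double posError = tagDistance * yawError; // ~4 m of position error
        System.out.printf("%.1f m%n", posError);  // prints "4.0 m"
    }
}
```

That magnitude of error is consistent with the pose being thrown across the field when no compensation is applied, and with distant tags dominating the residual error once it is.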

I intend to upload the screen recordings I collected of the AdvantageScope visualization later this afternoon.

@InterstellarDrift (Author)

> The other thing to note is that this still only uses a single tag. How does using the multitag pose but with the robot yaw perform?

I haven't tested what using the robot's yaw in a multi-tag solve looks like yet, but I did develop a solver algorithm that calculates the robot's position from only two tags, without needing the robot's heading. It sidesteps the reflection ambiguity inherent in using only two tags by taking the leftmost (in the camera's perspective) of the best two tags as the anchor tag. This forces a single perspective out of the system: it calculates the angle between the anchor tag and the secondary tag, rotates the camera-relative offset from the anchor tag (derived using trigonometry) by that angle to obtain the field-relative offset, and then applies that offset to the anchor tag's field-relative position from the tag layout, yielding the robot's position on the field.
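The two-tag geometry described above can be sketched in 2D with plain doubles. The names and conventions here are illustrative, not the PR's code: the key step is comparing the anchor-to-secondary direction as seen by the camera against that vector's known direction on the field, which recovers the camera's heading and therefore its position.

```java
// Sketch of a two-tag, heading-free pose solve (2D, CCW-positive radians).
public class TwoTagSolve {
    /**
     * d1,b1 / d2,b2: camera-measured distance (m) and bearing (rad) to the
     * anchor (leftmost) and secondary tags. (t1x,t1y)/(t2x,t2y): their known
     * field positions. Returns {x, y, heading} of the camera in the field frame.
     */
    static double[] solve(double d1, double b1, double d2, double b2,
                          double t1x, double t1y, double t2x, double t2y) {
        // Camera-frame positions of both tags from distance + bearing.
        double p1x = d1 * Math.cos(b1), p1y = d1 * Math.sin(b1);
        double p2x = d2 * Math.cos(b2), p2y = d2 * Math.sin(b2);
        // Angle of the anchor->secondary vector as seen by the camera,
        // versus the same vector's known angle on the field.
        double camAngle = Math.atan2(p2y - p1y, p2x - p1x);
        double fieldAngle = Math.atan2(t2y - t1y, t2x - t1x);
        double heading = fieldAngle - camAngle;
        // Rotate the camera-frame offset to the anchor into the field frame,
        // then subtract it from the anchor's known field position.
        double cos = Math.cos(heading), sin = Math.sin(heading);
        double offX = cos * p1x - sin * p1y;
        double offY = sin * p1x + cos * p1y;
        return new double[] {t1x - offX, t1y - offY, heading};
    }

    public static void main(String[] args) {
        // Camera at (1, 1) with heading 0; tag1 at (4, 2), tag2 at (4, 0),
        // so the camera-frame offsets are (3, 1) and (3, -1).
        double[] pose = solve(Math.hypot(3, 1), Math.atan2(1, 3),
                              Math.hypot(3, -1), Math.atan2(-1, 3),
                              4, 2, 4, 0);
        System.out.printf("%.3f %.3f %.3f%n", pose[0], pose[1], pose[2]); // prints "1.000 1.000 0.000"
    }
}
```

Fixing which tag is the anchor by its left/right ordering in the image is what pins down a single solution; swapping the two tags would flip the sign of camAngle and produce the mirrored pose.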

Both the one-tag and two-tag solvers achieve greater stability by calculating the offset from the tag purely from its center and the camera's height and angular offsets, rather than from the SolvePnP transform of the tag's corners.
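The height-and-angle distance calculation referenced here is the same geometry PhotonUtils.calculateDistanceToTargetMeters uses: the range to the tag follows from the camera's mounting height and pitch plus the target's pitch in the image, with no dependence on the corner detections. A minimal sketch:

```java
// Range to a tag from camera height/pitch and the target's pitch in the image
// (the geometry behind PhotonUtils.calculateDistanceToTargetMeters).
public class CenterDistance {
    static double distanceMeters(double cameraHeightM, double targetHeightM,
                                 double cameraPitchRad, double targetPitchRad) {
        // Vertical offset divided by the tangent of the total elevation angle.
        return (targetHeightM - cameraHeightM) / Math.tan(cameraPitchRad + targetPitchRad);
    }

    public static void main(String[] args) {
        // Camera 0.5 m high with zero pitch; a tag center at 1.5 m that appears
        // 45 deg up in the image is 1 m away horizontally.
        System.out.printf("%.3f%n", distanceMeters(0.5, 1.5, 0.0, Math.PI / 4)); // prints "1.000"
    }
}
```

Because this uses only the tag's center pixel, it avoids the corner-noise sensitivity of SolvePnP, which is where the stability gain comes from.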

@InterstellarDrift (Author)

No latency compensation, 2 radians per second
Screencast from 05-13-2024 11:27:45 PM.webm

@InterstellarDrift (Author)

@InterstellarDrift (Author)

This is what it looks like when it's stationary:
[image]

[image]
