Plotting Update #887
base: main
Conversation
Carlson-J
commented
Nov 6, 2023
(edited)
- Adding measurement uncertainty to plotly plots.
- Adding override for plotting non-uniform time steps.
- Bug fixes
- Quality of life updates
Added a check that the measurement model has the correct dimensions for the current implementation. Added and updated unit tests.
Query regarding the change to 3D plotting errors, but otherwise looks good.
Thanks for the contribution.
x_err = np.sqrt(cov[0, 0])
y_err = np.sqrt(cov[1, 1])
z_err = np.sqrt(cov[2, 2])
Do you have more details on the reason for this change?
The previous way of plotting the uncertainty was incorrect, as it assumed the eigenvalues/eigenvectors were aligned with the xyz axes, which is not generally the case. The error bars would really need to be rotated to be correct. To avoid having to do that, we now just plot the standard deviation along each axis. This does not perfectly show where the uncertainty is, but it is better than what we had.
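The axis-aligned approach described here amounts to taking the square roots of the covariance diagonal (the marginal standard deviations). A minimal NumPy sketch, with an illustrative `cov` matrix that is not from the PR:

```python
import numpy as np

# Illustrative 3x3 position covariance with off-diagonal (correlation) terms.
cov = np.array([[4.0, 1.5, 0.0],
                [1.5, 2.0, 0.5],
                [0.0, 0.5, 1.0]])

# Axis-aligned 1-sigma error bars: square roots of the marginal variances.
# Correct for each axis in isolation, but ignores the off-diagonal structure.
x_err, y_err, z_err = np.sqrt(np.diag(cov))
```

These values feed directly into per-axis error bars (e.g. plotly's `error_x`/`error_y`/`error_z`), which is why no rotation step is needed.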
This is adequate for situations where the off-diagonal elements (covariance) aren't significant. But, again, not generally. Is it that much more effort to plot the eigenvectors?
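For reference, recovering the rotated principal axes the reviewer mentions is a single eigendecomposition. A hedged sketch (not code from this PR; `cov` is an illustrative symmetric covariance):

```python
import numpy as np

# Illustrative 3x3 covariance with correlation between axes.
cov = np.array([[4.0, 1.5, 0.0],
                [1.5, 2.0, 0.5],
                [0.0, 0.5, 1.0]])

# For a symmetric covariance, eigh returns the principal axes of the
# uncertainty ellipsoid: eigenvectors are the axis directions, and
# sqrt(eigenvalues) the 1-sigma semi-axis lengths.
eigvals, eigvecs = np.linalg.eigh(cov)
semi_axes = np.sqrt(eigvals)

# Each column eigvecs[:, i] scaled by semi_axes[i] is one principal
# semi-axis; drawing these rotated segments (instead of axis-aligned
# bars) captures the off-diagonal correlation structure.
```

Whether the extra plotting plumbing for rotated segments is worth it over axis-aligned bars is exactly the trade-off being debated here.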
I'm not in favour of adding 'measurement uncertainty'. I think it conflates measurement -- a realisation of a sensing action -- with measurement model uncertainty. A single measurement is a sample drawn from a distribution defined by the measurement model. There's no uncertainty -- that's what the sample was. If you want to visualise how likely other samples are to appear in that vicinity you have to know what the likelihood of that sample was, which means you need the ground truth. And we're plotting in state space. There isn't necessarily a linear relationship between state and measurement space, which makes the ellipse you're trying to plot non-elliptical.
@jmbarr I think the piece you are missing is that sometimes the uncertainty of a sensor is not constant. In real sensors, one can update the uncertainty based on the measurement. For example, given the SNR of a signal one can derive and update the uncertainty.
No, I understand this perfectly well. Stone Soup, with its ability to attach specific models to measurements, handles this quite well. The issue with plotting is that error bars (ellipses, etc) about a particular measurement do not accurately reflect the likelihood that this measurement was realised. Consider a low-likelihood realisation of a measurement, i.e. one from the edge of the sensor model distribution. Drawing a symmetrical error bound corresponding to the sensor uncertainty about this point would be incorrect. You are effectively saying that measurements from even lower-likelihood regions of the distribution are as likely as those from higher-likelihood regions. (This is easier to see with a diagram.)
@jmbarr I see your point. The use case I am trying to address here is that I have some real radar data with uncertainty for each measurement. I would like to visualize how the tracker associates with and updates its internal uncertainty based on these measurements. Plotting the measurement uncertainty seems like the best way to visualize this. How would you recommend displaying this information?
@Carlson-J it's a good question to which no wholly satisfactory (to my mind) answer exists. Assuming the uncertainty is intrinsic to the sensor (i.e. in no way a statistical representation of a datum made by, say, repeated individual measurements), then the 'correct' thing to do is transform the ground truth into measurement space and represent uncertainty about that point -- that's where the object really is in measurement space. Of course you can't do that with real data. As a compromise it might be instructive to plot the predicted measurement mean with the uncertainty about that. It's not strictly speaking the correct place to put the uncertainty, but it would allow you to visualise the (potentially evolving) measurement uncertainty.
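The compromise suggested here -- centring the sensor uncertainty on the predicted measurement rather than the realised one -- can be sketched for a linear-Gaussian case. This is a generic Kalman-style illustration, not Stone Soup API code; `H`, `R`, and the state layout are assumptions:

```python
import numpy as np

# Hypothetical linear measurement model: H maps state to measurement
# space, R is the (possibly per-measurement) sensor noise covariance.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])   # observe x and y position only
R = np.diag([0.5, 0.8])

def predicted_measurement(x_pred, P_pred):
    """Mean and covariance of the measurement prediction:
    z ~ N(H x_pred, H P_pred H^T + R)."""
    z_mean = H @ x_pred
    S = H @ P_pred @ H.T + R          # innovation covariance
    return z_mean, S

x_pred = np.array([10.0, 1.0, 5.0, -0.5])   # assumed layout [x, vx, y, vy]
P_pred = np.eye(4)

z_mean, S = predicted_measurement(x_pred, P_pred)
# Plot z_mean with an ellipse/error bars derived from S, rather than
# centring the sensor uncertainty on the realised measurement itself.
```

The ellipse drawn from `S` then visualises the evolving measurement uncertainty at the point where the tracker expects the measurement, which is the compromise being proposed.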
@jmbarr How is this different from what I have implemented? Given a datum and its uncertainty, we are plotting the point and its uncertainty in measurement space, which we can optionally transform into XYZ.
The difference is that you're plotting the mean of the measurement prediction, not the measurement itself. Whilst not a reflection of where the measurement is likely to occur (which requires the ground truth), it's at least a reflection of where you think the measurement is likely to occur. And, crucially, it allows you to visualise the sensor uncertainty.
@jmbarr When I plot it I am plotting the actual measurement, i.e., I don't add any noise to it. I do not see another way to plot actual data in the current framework. Is there a way to plot the actual measurement that is more to your liking?
I think you plot the measurement via |