
2023 COPRs seem to be incorrect on the insights tab #5327

Open
Nicholas-Stokes opened this issue Jun 13, 2023 · 0 comments

I have been doing some testing with the Blue Alliance API, computing component game piece counts (cOPRs) via linear regression to get a better understanding of an individual robot's performance. But when I went to check my work against the Insights tab, I noticed some inconsistencies, though only on the tele-op side.

Expected Behavior

[screenshot: excerpt from matchstats_helper.py]
This code in matchstats_helper.py indicates that the Total Game Piece Count stat is built by summing teleOpGamePieceCount from each alliance's score_breakdown; multiplying by the inverted match matrix then yields the estimated total game piece count contributed by each team.

[screenshots: score matrix construction and regression output]
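Since the screenshots above aren't readable here, the following is a minimal sketch of the least-squares setup being described. It is an assumption-laden reconstruction, not the actual matchstats_helper.py code: only the stat key teleOpGamePieceCount comes from the issue, and the design-matrix construction follows the standard OPR formulation.

```python
import numpy as np

def component_opr(rows, stat_key):
    """Estimate each team's contribution to `stat_key` by least squares.

    `rows` is a list of (team_list, breakdown_dict) pairs, one per alliance
    per match. This mirrors the standard OPR formulation: each alliance's
    stat total is modeled as the sum of its teams' contributions.
    """
    teams = sorted({t for alliance, _ in rows for t in alliance})
    index = {t: i for i, t in enumerate(teams)}

    # A[i][j] = 1 if team j played on alliance i; b[i] = alliance stat total.
    A = np.zeros((len(rows), len(teams)))
    b = np.zeros(len(rows))
    for i, (alliance, breakdown) in enumerate(rows):
        for t in alliance:
            A[i, index[t]] = 1.0
        b[i] = breakdown[stat_key]

    # Solve the least-squares system, i.e. "multiplying with the inverted
    # matrix" as described above.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(teams, x))
```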

Here is some code I wrote to test this hypothesis; when I run it with the event code "2023midet" and the team key "frc2337", I get an expected total game piece count of 9.37.
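Since that test code is also only available as a screenshot, here is a hedged sketch of an equivalent end-to-end check. The endpoint and X-TBA-Auth-Key header are the real TBA API v3 conventions; YOUR_TBA_API_KEY is a placeholder, the stat key is spelled as it appears in this issue (the casing in the live score_breakdown may differ), and component_opr() is the hypothetical helper from the previous sketch.

```python
import requests
# Assumes component_opr() from the sketch above.

TBA = "https://www.thebluealliance.com/api/v3"
HEADERS = {"X-TBA-Auth-Key": "YOUR_TBA_API_KEY"}  # placeholder key

def alliance_rows(event_key, stat_key):
    """Flatten an event's qual matches into (team_list, breakdown) rows."""
    matches = requests.get(f"{TBA}/event/{event_key}/matches",
                           headers=HEADERS).json()
    rows = []
    for m in matches:
        if m["comp_level"] != "qm" or not m.get("score_breakdown"):
            continue
        for color in ("red", "blue"):
            teams = m["alliances"][color]["team_keys"]
            total = m["score_breakdown"][color][stat_key]
            rows.append((teams, {stat_key: total}))
    return rows

copr = component_opr(alliance_rows("2023midet", "teleOpGamePieceCount"),
                     "teleOpGamePieceCount")
print(copr["frc2337"])  # ~9.37 per the test described above
```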

Current Behavior

Currently, on the Insights tab for "2023midet", the Total Game Piece Count for 2337 is 11.26, which, as much as I (and probably Zach Orr) would love to be true, does not seem to be the case.
Here is the link to the Insights page:
https://www.thebluealliance.com/event/2023midet#event-insights

Possible Solution

I wanted to figure out where this 11.26 figure comes from, so I reproduced the score matrix above for autoGamePieceCount, added that result to the totalGamePieceCount result from earlier, and got the result below.
(Note that the auto value is the same as the one in Insights.)

[screenshot: summed auto + tele-op result]

Here is the test code
[screenshot: test code]
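For readers without the screenshot, that check can be sketched as follows, reusing the hypothetical helpers above; the 9.37 figure is the result of the earlier test, and the 11.26 comparison comes from this issue.

```python
# Reusing the hypothetical alliance_rows()/component_opr() helpers above.
auto_copr = component_opr(alliance_rows("2023midet", "autoGamePieceCount"),
                          "autoGamePieceCount")

# Per the issue, adding the auto estimate to the earlier totalGamePieceCount
# estimate (9.37) reproduces the 11.26 shown on the Insights tab.
print(auto_copr["frc2337"] + 9.37)
```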

What I believe is happening is that, somewhere in the Insights code, the teleOpGrid and its associated fields are assumed to represent the scoring on the grid during tele-op, when they actually represent the final state of the grid (tele + auto). The simplest solution would be to go back in and check whether this is the case, and if so, subtract the auto scoring from the tele (total) scoring to get the actual tele-op scoring, which should provide more accurate cOPR stats for those who are interested; a sketch of that subtraction follows.
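A minimal sketch of the proposed fix, under the assumption that the diagnosis above is correct; the helper names and stat keys are hypothetical, carried over from the earlier sketches.

```python
def tele_only_rows(event_key):
    """Per-alliance tele-op counts with the auto contribution removed."""
    tele = alliance_rows(event_key, "teleOpGamePieceCount")
    auto = alliance_rows(event_key, "autoGamePieceCount")
    rows = []
    # Pair up the tele and auto rows for each alliance in each match and
    # subtract the auto count from the (final-grid) tele count.
    for (teams, t), (_, a) in zip(tele, auto):
        count = t["teleOpGamePieceCount"] - a["autoGamePieceCount"]
        rows.append((teams, {"teleOpGamePieceCount": count}))
    return rows

copr = component_opr(tele_only_rows("2023midet"), "teleOpGamePieceCount")
```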

Context

I just want people who look at Insights to not be confused or misled about a team's actual capabilities, and to get a fairer assessment of its scoring (at least as much as linear regression allows).
