I have been doing some testing with the Blue Alliance API, looking at the component game-piece counts produced by linear regression in order to get a better understanding of an individual robot's performance. But when I went to check my work against the Insights tab, I noticed some inconsistencies, though only on the tele-op side.
Expected Behavior
This bit of code from matchstats_helper.py indicates that Total Game Piece Count should be computed by summing teleOpGamePieceCount from each alliance's score_breakdown; multiplying by the inverted match matrix then gives the estimated total game piece count contributed by each team.
Here is some code I wrote to test this hypothesis. When I run it with the event code "2023midet" and the team key "frc2337", I get an expected total piece count of 9.37.
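For reference, here is a minimal sketch of the component-OPR calculation described above. The function name, data layout, and test values are illustrative (not TBA's actual code); it solves the same least-squares system that matchstats_helper.py builds by inverting the match matrix.

```python
import numpy as np

def component_opr(matches, teams):
    """Estimate each team's contribution to a per-alliance total by least squares.

    matches: list of dicts with keys "red"/"blue", each holding
             {"teams": [team_key, ...], "value": alliance total for the
             chosen component, e.g. teleOpGamePieceCount}.
    teams:   ordered list of team keys appearing in the matches.
    """
    index = {t: i for i, t in enumerate(teams)}
    rows, totals = [], []
    for match in matches:
        for alliance in ("red", "blue"):
            # One row per alliance: 1 for each team on it, 0 otherwise.
            row = np.zeros(len(teams))
            for t in match[alliance]["teams"]:
                row[index[t]] = 1.0
            rows.append(row)
            totals.append(match[alliance]["value"])
    # lstsq solves the normal equations (A^T A) x = A^T b, which is the
    # same estimate as multiplying by the inverted match matrix.
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(totals), rcond=None)
    return dict(zip(teams, x))
```

Feeding this the per-match teleOpGamePieceCount values for 2023midet is what produces the 9.37 figure for frc2337.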
Current Behavior
Currently, on the Insights tab for "2023midet", the Total Game Piece Count for 2337 is 11.26. As much as I, and probably Zach Orr, would love that to be true, it does not seem to be the case.
Here is the link to the Insights page: https://www.thebluealliance.com/event/2023midet#event-insights
Possible Solution
I wanted to figure out where this 11.26 figure was coming from, so I reproduced the score matrix above for autoGamePieceCount, then added that result to the total (tele-op) game piece count result from earlier, and the sum matched the Insights figure.
(Note that the auto value is the same as the one shown in Insights.)
Here is the test code
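The check can be summarized in a few lines. This is a hedged reconstruction, not the original test code; the auto estimate of 1.89 is the value implied by the figures quoted above (11.26 minus 9.37), not an independently verified number.

```python
# Component OPR for autoGamePieceCount (implied by the figures above).
auto_estimate = 1.89
# Component OPR for teleOpGamePieceCount, computed earlier.
tele_estimate = 9.37
# "Total Game Piece Count" shown on the Insights tab for frc2337.
insights_total = 11.26

# auto + tele reproduces the Insights figure, suggesting the tele-op
# field already includes the auto pieces.
assert abs((auto_estimate + tele_estimate) - insights_total) < 0.01
```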
What I believe is happening is that somewhere in the Insights calculation, teleOpGrid and its associated fields are assumed to represent the scoring done on the grid during tele-op, when in fact they represent the final state of the grid (tele + auto). The simplest solution would be to go back in, check that this is the case, and if so, subtract the auto scoring from the tele (total) scoring to get the actual tele-op scoring, which should provide more accurate cOPR stats for those who are interested.
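The proposed fix could be a small per-alliance adjustment applied before the regression. A minimal sketch, assuming the 2023 score_breakdown field names as quoted in this issue (the exact keys would need to be confirmed against the API response):

```python
def tele_only_count(alliance_breakdown):
    """Derive a tele-op-only piece count for one alliance.

    Assumes teleOpGamePieceCount reflects the final grid state
    (auto + tele), as this issue suggests, so subtracting the
    auto count leaves only the pieces scored during tele-op.
    """
    total = alliance_breakdown["teleOpGamePieceCount"]
    auto = alliance_breakdown["autoGamePieceCount"]
    return total - auto
```

Running the regression on these adjusted values instead of the raw teleOpGamePieceCount would keep the auto contribution from being counted twice in the total.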
Context
I just want people who look at Insights not to be confused or misled about a team's actual capabilities, and to get a fairer assessment of its scoring capability (at least as much as linear regression allows).