
How to Label Real-World Data of Category Using Argoverse 2 Prediction Challenge Techniques? #264

Open
SunHaoOne opened this issue May 9, 2024 · 0 comments


Hi,
I am currently using an API to train models for submission to the leaderboard in the Argoverse 2 prediction challenge. In addition to this, I'm interested in applying these techniques to real-world data. I'm seeking clarification on how tracks are labeled within the dataset:

  • Track Fragment: These are lower quality tracks that may contain only a few timestamps of observations.

  • Unscored Track: These tracks are used for contextual input and are not scored.

  • Scored Track: These are high-quality tracks relevant to the Autonomous Vehicle (AV) and are scored in the multi-agent prediction challenge.

  • Focal Track: This is the primary track of interest in a given scenario and is scored in the single-agent prediction challenge.

My main question revolves around the criteria for selecting these labels, especially:

  • Does a focal track represent a track with a longer duration of observed data?

  • Besides duration, are 'scored tracks' also selected based on their proximity to the AV or other factors?

I'm considering using a similar methodology to label and utilize real-world data to achieve comparable performance. Thank you for any insights or advice on this approach.
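To make the question concrete, here is a minimal sketch of the kind of labeling heuristic I have in mind for my own data. The enum mirrors the four categories above (ordered as in the av2 data schema, though the values should be verified against the installed package); the two thresholds (`MIN_OBSERVED_FRACTION`, `SCORED_RADIUS_M`) are purely my own assumptions, not the official Argoverse 2 mining criteria:

```python
from dataclasses import dataclass
from enum import IntEnum

# Mirrors the four track categories described above; ordering follows
# the av2 data schema, but verify the numeric values against the package.
class TrackCategory(IntEnum):
    TRACK_FRAGMENT = 0
    UNSCORED_TRACK = 1
    SCORED_TRACK = 2
    FOCAL_TRACK = 3

@dataclass
class Track:
    track_id: str
    num_observed: int        # timestamps at which this track was observed
    total_timesteps: int     # full scenario length (e.g. 110 steps at 10 Hz)
    min_dist_to_av_m: float  # closest approach to the AV, in meters

# NOTE: these thresholds are illustrative assumptions for this sketch,
# not the dataset's actual selection criteria.
MIN_OBSERVED_FRACTION = 0.5  # assumed: fragments have sparse coverage
SCORED_RADIUS_M = 30.0       # assumed: scored tracks stay near the AV

def label_track(track: Track, is_focal: bool = False) -> TrackCategory:
    """Assign a category to one track using the heuristic thresholds above."""
    if is_focal:
        return TrackCategory.FOCAL_TRACK
    observed_fraction = track.num_observed / track.total_timesteps
    if observed_fraction < MIN_OBSERVED_FRACTION:
        return TrackCategory.TRACK_FRAGMENT
    if track.min_dist_to_av_m <= SCORED_RADIUS_M:
        return TrackCategory.SCORED_TRACK
    return TrackCategory.UNSCORED_TRACK
```

Essentially I would like to know whether the real mining pipeline uses rules of this shape (observation coverage plus proximity to the AV), and if so, what the actual thresholds and any additional factors are.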
