export ALLENACT_VAL_METRICS = /path/to/metrics__val_*.json #265

Open
98ming opened this issue Mar 27, 2021 · 2 comments

Comments


98ming commented Mar 27, 2021

Problem / Question

How do I generate metrics__val_*.json files?

[screenshot]

Why does the code only generate metrics__test_*.json files?

@jordis-ai2 (Collaborator)

The currently supported way to get metrics for validation tasks is to run test mode with those tasks, but we'll consider this for future releases.
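
For concreteness, here is a minimal sketch (not from this issue) of inspecting whichever metrics__test_*.json files such a test-mode run writes; the "experiment_output" directory name is an assumption about a typical AllenAct output layout, and the JSON is simply pretty-printed rather than assuming any particular structure:

    # Minimal sketch: locate and inspect metrics files written by a test-mode run
    # over the validation tasks. "experiment_output" is an assumed output directory.
    import glob
    import json
    from pprint import pprint

    for path in sorted(glob.glob("experiment_output/**/metrics__test_*.json", recursive=True)):
        with open(path) as f:
            metrics = json.load(f)
        print(path)
        pprint(metrics)  # inspect the structure rather than assuming specific keys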

@Lucaweihs (Collaborator)

@98ming Does the above make sense and work for you? I agree that this is a bit confusing; really it should say "_inference_" or something similar, as you can change the tasks you're "testing" to be anything you'd like by modifying the

     def test_task_sampler_args(...) -> Dict[str, Any]

function in your experiment config. Here's an example of this being done for the AI2-THOR rearrangement task; notice that changing which lines are commented out will change which tasks are evaluated. You can better organize these by adding an --extra_tag when running the tests from the command line.
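
As a hedged illustration of that pattern (a sketch, not the linked rearrangement example), an experiment config can redirect its test task sampler arguments to the validation ones. The class and method names other than test_task_sampler_args, and the exact signatures, are assumptions about a typical AllenAct experiment config:

    from typing import Any, Dict, List, Optional


    class MyBaseExperimentConfig:
        # Hypothetical stand-in for your existing experiment config class;
        # in practice you would subclass your project's ExperimentConfig.
        def valid_task_sampler_args(self, **kwargs) -> Dict[str, Any]:
            raise NotImplementedError


    class EvalOnValConfig(MyBaseExperimentConfig):
        # Make "test" mode sample validation tasks so metrics are computed on val.

        def test_task_sampler_args(
            self,
            process_ind: int,
            total_processes: int,
            devices: Optional[List[int]] = None,
            seeds: Optional[List[int]] = None,
            deterministic_cudnn: bool = False,
        ) -> Dict[str, Any]:
            # Delegate to the validation task sampler arguments (signature assumed),
            # so the "test" phase evaluates on the validation split.
            return self.valid_task_sampler_args(
                process_ind=process_ind,
                total_processes=total_processes,
                devices=devices,
                seeds=seeds,
                deterministic_cudnn=deterministic_cudnn,
            )

Combined with the --extra_tag mentioned above, the metrics files produced for different task sets are then easier to tell apart in the output directory.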
