
Logging losses on the validation / test split #203

Open
prithv1 opened this issue Sep 8, 2020 · 3 comments
Labels
enhancement New feature or request

Comments


prithv1 commented Sep 8, 2020

Problem

I think right now logging losses (PPO or otherwise) as defined under core/algorithms/onpolicy_sync/losses/ is only supported on the train split. If I understand correctly, on the train split it's possible to log losses (and other quantities defined in the loss definitions -- policy entropy, etc.) as well as metrics and rewards defined in the task definitions (success, SPL, etc., e.g. under plugins/robothor_plugin/robothor_tasks.py). On val, however, only the latter is supported. Logging loss values (and other quantities defined in the loss definitions) on val may not be very useful in the case of PPO, but it would be worthwhile for debugging experiments that define additional custom losses (action prediction, etc.).

Desired solution

To be able to observe losses (and other quantities defined under the loss definitions) on the val split in the tensorboard logs.

prithv1 added the enhancement label on Sep 8, 2020
Lucaweihs (Collaborator) commented

👍

I've implemented this at some point on an old branch. I think the main roadblock to having this on by default is that we cannot expect that certain sensor readings are available in the validation / test sets. For instance, if we train a model with imitation learning, it is very possible that we do not have expert actions for the test set. Would it be sufficient to be given the option to define new, custom, loggable values in the experiment config?
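To make that concrete, here is a minimal sketch of what such a hook could look like. The hook name `extra_eval_values` and its signature are assumptions for illustration only, not existing AllenAct API:

```python
# Hypothetical sketch: `extra_eval_values` is an assumed hook name, not part
# of the current AllenAct API. It returns extra scalars computed purely from
# task-level information, so it needs no additional sensors on val/test.
from typing import Any, Dict


class MyExperimentConfig:  # would extend the experiment config base class
    def extra_eval_values(self, task_info: Dict[str, Any]) -> Dict[str, float]:
        # Example: derive a scalar from the path the agent took.
        path = task_info.get("taken_path", [])
        return {"path_length_steps": float(len(path))}
```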

prithv1 (Author) commented Sep 9, 2020

I think the main roadblock to having this on by default is that we cannot expect that certain sensor readings are available in the validation / test sets. For instance, if we train a model with imitation learning, it is very possible that we do not have expert actions for the test set.

I see. Makes sense.

Would it be sufficient to be given the option to define new, custom, loggable values in the experiment config?

Yes, I think this would be useful. However, if it's restricted to metrics that only use task-level information (path taken, etc.) to compute new metrics, then it would only make things slightly easier for the user, in the sense that they wouldn't have to modify the task definition themselves. If it also encompasses loss logging (under the constraint that the sensor values required by the specified loss are available), that would be quite useful. But I do see your point that this might not generalize to all settings -- it's likely easier for auxiliary losses that don't rely on the specific task definition (CPC|A, inverse / forward dynamics) than for others.
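As an example of that last point, an inverse-dynamics auxiliary loss only consumes the agent's own observations and executed actions, so it could in principle be evaluated on val/test rollouts without any expert sensor. A minimal sketch (names and shapes are illustrative, not AllenAct code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InverseDynamicsLoss(nn.Module):
    """Predict the executed action from consecutive observation embeddings.

    Illustrative sketch: uses only (obs_t, obs_{t+1}, action_t), so no
    expert-action sensor is required on the validation/test splits.
    """

    def __init__(self, embed_dim: int, num_actions: int):
        super().__init__()
        self.head = nn.Linear(2 * embed_dim, num_actions)

    def forward(self, obs_t, obs_tp1, actions):
        # obs_t, obs_tp1: [T, B, embed_dim]; actions: [T, B] action indices
        logits = self.head(torch.cat([obs_t, obs_tp1], dim=-1))
        return F.cross_entropy(logits.flatten(0, 1), actions.flatten())
```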

Lucaweihs (Collaborator) commented

Great!

I think if this encompasses loss logging (under the constraints that sensor values are available for the specified loss), that will be quite useful.

Sounds good. In principle we could just allow people to pass in a list of losses that they'd like to have recorded during testing; perhaps that's the most robust solution, since the user could then also record losses that weren't included during training.
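A minimal sketch of what that could look like on the evaluation side (function and argument names are assumptions, not AllenAct internals): the user names the losses to evaluate on a split, and any loss whose required sensor readings are missing from the batch is skipped.

```python
# Hypothetical sketch, not AllenAct internals: evaluate user-selected losses
# on a val/test batch, skipping any loss whose required sensor readings
# (e.g. expert actions) are absent from that split.
from typing import Callable, Dict, Mapping, Sequence, Tuple

# Each entry maps a loss name to (required_observation_keys, loss_fn).
LossSpec = Tuple[Sequence[str], Callable[[Mapping[str, object]], float]]


def eval_split_losses(
    batch: Mapping[str, object],
    named_losses: Dict[str, LossSpec],
    selected: Sequence[str],
    split: str = "valid",
) -> Dict[str, float]:
    logged: Dict[str, float] = {}
    for name in selected:
        required_keys, loss_fn = named_losses[name]
        if any(key not in batch for key in required_keys):
            continue  # e.g. no "expert_action" sensor on this split
        logged[f"{split}/losses/{name}"] = loss_fn(batch)
    return logged
```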
