
[Feature] Log metrics in test mode #1482

Open
mmeendez8 opened this issue Jan 28, 2024 · 8 comments

@mmeendez8 (Contributor)

What is the feature?

I just noticed that, when running mmseg with a pretrained model on a test set to evaluate its performance, the final metrics are not logged to my vis backend (MLflow).

I was exploring the source and noticed that the LoggerHook class is the one in charge of dumping metrics during training and eval.

I was wondering whether there is any reason why runner.visualizer.add_scalars() is not called in after_test_epoch.
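For reference, a minimal sketch of the behaviour being requested, assuming MMEngine's Hook API; the hook name TestMetricsVisHook and the test/ tag prefix are purely illustrative, not existing code:

```python
# Illustrative sketch only: forward test metrics to the configured vis backends.
from typing import Dict, Optional

from mmengine.hooks import Hook
from mmengine.registry import HOOKS


@HOOKS.register_module()
class TestMetricsVisHook(Hook):
    """Push the metrics computed after the test epoch to the vis backends."""

    def after_test_epoch(self, runner, metrics: Optional[Dict[str, float]] = None) -> None:
        if not metrics:
            return
        # Keep only scalar values; some evaluators also return strings or tables.
        scalars = {f'test/{k}': v for k, v in metrics.items()
                   if isinstance(v, (int, float))}
        if scalars:
            runner.visualizer.add_scalars(scalars, step=runner.iter)
```

Enabling it from the config would then be a matter of custom_hooks = [dict(type='TestMetricsVisHook')], assuming the module is importable so the registration runs.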

Any other context?

No response

@mmeendez8 (Contributor, Author)

@HAOCHENYE any news? Just trying to figure out if I should patch this locally or send a PR here

@mmeendez8 (Contributor, Author)

@HAOCHENYE would it be possible to get some feedback on this?

@fsbarros98

Also need an update on this!

@HAOCHENYE (Collaborator)

Sorry for the late response. The reason for not calling add_scalars in after_test_epoch is that the test set typically does not have the ground truth, and we usually only calculate various metrics and statistics on the validation set.

@fsbarros98

That is a somewhat valid point, but if I'm running a test.py script I would expect the test metrics to be logged.

@mmeendez8 (Contributor, Author) commented Apr 22, 2024

I see... So is the plan to assume that the test set never has ground truth, or should we find a way to compute and log the metrics when ground truth is present?

@fsbarros98

If ground truth is not present, do we even have metrics? I'm using this with MMagic, where generation also has no ground truth, but we mainly compute metrics by comparing test features (generated samples) against train features... I believe that whenever metrics are calculated during testing they should also be added to the visualizer.

@HAOCHENYE (Collaborator)

Visualizer is globally accessible: you can get the visualizer anywhere with visualizer = Visualizer.get_current_instance() and then call an interface such as visualizer.add_scalar() to record whatever information you want. You can implement this in a custom hook, or in any other place you like (maybe model.xxx, metric.xxx ...).
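A minimal sketch of this suggestion, assuming MMEngine's Visualizer API; the helper name log_test_metrics and the test/ tag prefix are illustrative:

```python
from mmengine.visualization import Visualizer


def log_test_metrics(metrics: dict, step: int = 0) -> None:
    """Hypothetical helper: push numeric test metrics to every configured vis backend."""
    # The runner builds a single global Visualizer; grab that instance.
    visualizer = Visualizer.get_current_instance()
    for name, value in metrics.items():
        if isinstance(value, (int, float)):
            visualizer.add_scalar(f'test/{name}', value, step=step)
```

Calling such a helper from a custom hook's after_test_epoch, or from a metric's compute_metrics, sends the test metrics to whatever vis backends are configured (MLflow, TensorBoard, ...) without modifying LoggerHook.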
