
Validation loss always equals zero for Instance/Semantic Segmentation with Model Garden #11117

Open
luiz-felipe-moreira opened this issue Nov 22, 2023 · 3 comments
Assignees
Labels
models:official (models that come under official repository), type:bug (Bug in the code)

Comments

@luiz-felipe-moreira

My instance segmentation experiments output 'validation_loss': 0.0 every time the metrics are summarized. The same happens in the official instance/semantic segmentation tutorials.

I'm running the Instance Segmentation with Model Garden notebook from the TensorFlow Model Garden documentation. During training, it periodically displays the metrics, but the validation loss is always zero. This can be seen in the output of the 'Train and Evaluate' section of the notebook.
For example:

...
eval | step:   1200 | steps/sec:    3.3 | eval time:   60.0 sec | output: 
{'AP': 0.0814979,
 'AP50': 0.16509584,
 'AP75': 0.07145937,
  ...
 'mask_ARs': 0.0015765766,
 'steps_per_second': 3.3340747415649394,
 'validation_loss': 0.0}
...

I noticed the same behavior in the official Semantic Segmentation tutorial, where the outputs are shown directly on the tutorial page: Semantic Segmentation with Model Garden.

@luiz-felipe-moreira luiz-felipe-moreira added the models:official (models that come under official repository) and type:bug (Bug in the code) labels Nov 22, 2023
@laxmareddyp
Collaborator

Hi @luiz-felipe-moreira,

Sorry for the delayed response. We are looking into this issue and will let you know once there is an update from our side.

Thanks.

@laxmareddyp
Collaborator

laxmareddyp commented Jan 23, 2024

Hi @luiz-felipe-moreira ,

Could you please check this gist? For semantic segmentation, setting exp_config.task.validation_data.resize_eval_groundtruth = True enables validation on the validation data. Also pass eval_summary_manager=summary_manager.maybe_build_eval_summary_manager(params=exp_config, model_dir=model_dir) while training, since the evaluation results contain image data. For instance segmentation, the validation loss is intentionally kept at zero.
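The two changes above might look like the following sketch, assuming the variables from the Semantic Segmentation with Model Garden tutorial notebook (exp_config, task, distribution_strategy, model_dir) are already defined. The summary_manager import path and the run_experiment keyword arguments may differ between Model Garden releases, so treat this as an illustration rather than a verified fix:

```python
# Sketch of the suggested fix; assumes the tutorial notebook's variables
# (exp_config, task, distribution_strategy, model_dir) already exist.
from official.core import train_lib
# Import path for summary_manager may vary by Model Garden release.
from official.vision.utils import summary_manager

# 1) Resize the eval ground truth so the validation loss can be computed.
exp_config.task.validation_data.resize_eval_groundtruth = True

# 2) Pass an eval summary manager, since the evaluation results include
#    image data that the default summary writer does not handle.
model, eval_logs = train_lib.run_experiment(
    distribution_strategy=distribution_strategy,
    task=task,
    mode='train_and_eval',
    params=exp_config,
    model_dir=model_dir,
    eval_summary_manager=summary_manager.maybe_build_eval_summary_manager(
        params=exp_config, model_dir=model_dir),
)
```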

Thanks.

@laxmareddyp laxmareddyp added the stat:awaiting response Waiting on input from the contributor label Jan 23, 2024
@luiz-felipe-moreira
Author

Hi @laxmareddyp,

Thanks for your response.
I don't understand why the validation loss is intentionally kept at zero for instance segmentation. Tracking the validation loss is necessary: for example, comparing the validation-loss curve with the training-loss curve is how you recognize model overfitting. When you check the graphs in TensorBoard, all of them are shown correctly except for the validation-loss graph.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting on input from the contributor label Jan 24, 2024