My experiments for instance segmentation output 'validation_loss': 0.0 every time the metrics are summarized. The same happens in the official instance/semantic segmentation tutorials.
I'm running the Instance Segmentation with Model Garden notebook from the TensorFlow Model Garden documentation. During training it periodically prints the metrics, but the validation loss is always zero. This can be seen in the output of the 'Train and Evaluate' section of the notebook.
For example:
I noticed the same behavior in the official Semantic Segmentation tutorial, whose outputs are shown on the tutorial page: Semantic Segmentation with Model Garden.
Could you please check this gist? For semantic segmentation, set exp_config.task.validation_data.resize_eval_groundtruth = True to enable validation on the validation data, and also pass eval_summary_manager=summary_manager.maybe_build_eval_summary_manager(params=exp_config, model_dir=model_dir) while training, since the evaluation results contain image data. For instance segmentation, the validation loss is intentionally kept at zero.
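For context, here is a minimal sketch of how those two changes might fit into the training cell of the semantic segmentation notebook. It assumes the usual Model Garden tutorial setup (exp_config, model_dir, and distribution_strategy already defined); the import path for summary_manager and the exact run_experiment signature may differ between tensorflow-models releases.

```python
# Sketch of the suggested fix, assuming the standard Model Garden tutorial setup.
from official.core import train_lib, task_factory
from official.vision.utils import summary_manager  # import path is an assumption

# 1) Resize the eval ground truth so the validation loss can actually be computed.
exp_config.task.validation_data.resize_eval_groundtruth = True

task = task_factory.get_task(exp_config.task, logging_dir=model_dir)

# 2) Pass an eval summary manager, since the evaluation results contain image data.
model, eval_logs = train_lib.run_experiment(
    distribution_strategy=distribution_strategy,
    task=task,
    mode='train_and_eval',
    params=exp_config,
    model_dir=model_dir,
    eval_summary_manager=summary_manager.maybe_build_eval_summary_manager(
        params=exp_config, model_dir=model_dir),
)
```

With resize_eval_groundtruth enabled, the eval loop compares predictions against ground truth at a consistent resolution, so 'validation_loss' should report a non-zero value in the periodic metric summaries.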
Thanks for your response.
I can't understand why the validation loss is intentionally kept at zero for instance segmentation. I think that tracking the validation loss is necessary.
For example, it's important to compare the validation loss curve against the training loss curve in order to recognize overfitting. When you check the graphs in TensorBoard, all of them are shown correctly except for the validation loss graph.