Using the MLflow logger produces inconsistent metric plots #19874
Comments
Hi @gboeer, I am "happy" to see I am not the only one having issues logging with MLflow. I am fine-tuning a pretrained transformer model on roughly 2000 images, so not an insane amount of data. As you can see, some of my metrics are plotted inconsistently. Also, I tell my trainer to log every 50 steps, but in my epoch-step plot I see points at the following steps only: 49, 199, 349, 499, ... not every 50. Here is my logger:
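(The original snippet wasn't captured in this thread; a minimal sketch of such a logger setup, where the experiment name and tracking URI are assumptions, might look like this:)

```python
from lightning.pytorch.loggers import MLFlowLogger

# Hypothetical logger setup; experiment name and tracking URI are placeholders.
mlflow_logger = MLFlowLogger(
    experiment_name="transformer-finetuning",
    tracking_uri="http://localhost:5000",
)
```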
Passed to my trainer:
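(Again only a sketch, since the original arguments weren't captured; the logger is passed together with the 50-step logging interval described above:)

```python
from lightning.pytorch import Trainer

# Hypothetical Trainer configuration; max_epochs is a placeholder.
trainer = Trainer(
    logger=mlflow_logger,
    log_every_n_steps=50,
    max_epochs=10,
)
```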
My metrics are logged in the following way in the training_step and validation_step functions:
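(The exact calls weren't captured either; a typical pattern, where `compute_loss` and the metric names are assumptions, would be:)

```python
def training_step(self, batch, batch_idx):
    # Hypothetical step; compute_loss is a placeholder helper.
    loss = self.compute_loss(batch)
    self.log("train_loss", loss, on_step=True, on_epoch=True)
    return loss

def validation_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)
    self.log("val_loss", loss, on_step=False, on_epoch=True)
```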
I guess it's a problem on Lightning's side, but I'm not 100% sure. I hope we'll get support soon. I serve my ML models with MLflow and it works fine, so I don't want to go back to TensorBoard just for my DL models. EDIT: My bad, it seems to do that only while training is still running. When the training is finished, the plots display correctly. But still, I thought we were supposed to be able to follow the evolution of metrics as training progresses, and in this case that's hardly possible.
@Antoine101 I guess what you observed about the step size may just have to do with zero-indexing (i.e. the global step starts at 0, so the first interval of 50 steps ends at step 49, not 50).
Bug description
When using the MLFlowLogger I have noticed that in some cases the produced plots in the "Model metrics" overview section of the MLflow web app are messed up. Additionally, the plots for the same metric are displayed correctly when viewed in the detail view, and hence are very different from the plots in the overview tab.
I am not absolutely sure, but I think this may have to do with how the `step` parameter is propagated to MLflow or how the `global_step` is calculated. In my current experiment I use a large training set and a smaller validation set, and I have set the Trainer to `log_every_n_steps=20`. For the training steps this seems to work fine (the plots all look good), but I suspect that during validation this logging interval is larger than the total number of batches in a validation run. If so, however, I still wonder why the plots in the detailed view of the validation metrics all look fine, while only the plots in the "Model metrics" overview are messed up. During validation I tried the normal Lightning `self.log`, as well as `self.logger.log_metrics`, `self.logger.experiment.log_metric`, and the direct API `mlflow.log_metric`, all of which lead to similar results (though not identical plots); a sketch of the four variants follows. The images below illustrate the plots for each of those calls:
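(For reference, a sketch of the four logging variants named above; the metric value `acc` and the step bookkeeping are assumptions, not the exact code from the experiment:)

```python
# Inside validation_step; acc is an assumed accuracy value.

# 1. Lightning's built-in logging
self.log("val_accuracy", acc)

# 2. The logger's log_metrics API
self.logger.log_metrics({"val_accuracy": acc}, step=self.global_step)

# 3. The underlying MlflowClient exposed via logger.experiment
self.logger.experiment.log_metric(
    self.logger.run_id, "val_accuracy", acc, step=self.global_step
)

# 4. The direct mlflow API
import mlflow
mlflow.log_metric("val_accuracy", acc, step=self.global_step)
```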
For comparison, the plot for the detailed metric view from the same experiment:
I would like to point out that I don't see this behavior with other experiments, usually ones with smaller datasets where I also used a smaller `log_every_n_steps`, and so far I have not been able to reproduce this issue with those smaller setups.
Edit: Another side note: I also use the same metric `val_accuracy` (the one I log with the simple `self.log()`) as monitor for the ModelCheckpoint, which also works as expected; a sketch of that setup is shown below. So internally the metric is calculated and handled correctly, and the detailed metric plot also reflects this. Only the overview pane for all metrics shows this strange behavior for some reason.
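(A sketch of that checkpoint setup, assuming the defaults otherwise; `mode` and `save_top_k` are assumptions:)

```python
from lightning.pytorch.callbacks import ModelCheckpoint

# Hypothetical checkpoint config monitoring the same metric as self.log().
checkpoint_cb = ModelCheckpoint(
    monitor="val_accuracy",
    mode="max",
    save_top_k=1,
)
```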
What version are you seeing the problem on?
v2.2
How to reproduce the bug
No response
Error messages and logs
No response
Environment
More info
No response