Support log_every_n_steps with validate|test #18895

Closed
wants to merge 4 commits from the carmocca/log-interval-eval branch

Conversation

carmocca (Member) commented Oct 30, 2023

What does this PR do?

Fixes #10436

Logging per step during fit's validation, regular validation, testing, or predicting is not generally useful when the logged values depend on the optimization process.
However, sometimes you want to log values that do not depend on the optimization process. Examples of this are throughput-related metrics (as in #18848) or batch-specific metrics.

This PR adds support for tuning the logging interval under these circumstances.

This is only a breaking change if the user was calling self.log(..., on_step=True) during evaluation.
The previous behavior can be recovered by setting log_every_n_steps=1; before this PR, that value was ignored outside of training.
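
For context, a rough sketch of the kind of usage this enables; the module and metric names here are illustrative, not taken from this PR:

import time

import torch
from lightning.pytorch import LightningModule, Trainer


class ThroughputModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def test_step(self, batch, batch_idx):
        start = time.perf_counter()
        self.layer(batch)
        # a batch-specific value that does not depend on the optimization process
        self.log("test_batch_seconds", time.perf_counter() - start, on_step=True)


# with this PR, log_every_n_steps also gates logging during trainer.validate()
# and trainer.test(); setting it to 1 recovers the previous per-step behavior
trainer = Trainer(log_every_n_steps=10)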


📚 Documentation preview 📚: https://pytorch-lightning--18895.org.readthedocs.build/en/18895/

cc @Borda @carmocca @justusschock

@carmocca added labels on Oct 30, 2023: feature (Is an improvement or enhancement), logging (Related to the `LoggerConnector` and `log()`), breaking change (Includes a breaking change), pl (Generic label for PyTorch Lightning package)
@carmocca carmocca added this to the 2.2 milestone Oct 30, 2023
@carmocca carmocca self-assigned this Oct 30, 2023
@carmocca carmocca changed the title from "Support log_every_n_steps with validate|test|predict" to "Support log_every_n_steps with validate|test" on Oct 30, 2023
Comment on lines +117 to 120
# this is for backwards compatibility
if add_epoch:  # only enabled for training metrics
    step -= 1
    scalar_metrics.setdefault("epoch", self.trainer.current_epoch)
carmocca (Member, Author): IMO this bit of logic is not well designed, but I'm keeping it to limit the PR's scope.
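
A minimal sketch of what this compatibility shim does, with hypothetical values (not the real connector code):

# hypothetical inputs, for illustration only
scalar_metrics = {"train_loss": 0.42}
step = 10              # the step the connector is about to log at
current_epoch = 0
add_epoch = True       # only True for training metrics

if add_epoch:
    step -= 1          # shift back to the last completed training step
    # keep a user-logged "epoch" value if one is already present
    scalar_metrics.setdefault("epoch", current_epoch)

# scalar_metrics == {"train_loss": 0.42, "epoch": 0}, logged at step 9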

@@ -548,11 +548,11 @@ def test_step(self, batch, batch_idx):
call(metrics={"train_loss": ANY, "epoch": 0}, step=0),
call(metrics={"valid_loss_0_step": ANY, "valid_loss_2": ANY}, step=0),
call(metrics={"valid_loss_0_step": ANY, "valid_loss_2": ANY}, step=1),
call(metrics={"valid_loss_0_epoch": ANY, "valid_loss_1": ANY, "epoch": 0}, step=0),

carmocca (Member, Author): The step value here corresponded to the training step; now it corresponds to the validation step.

@carmocca carmocca marked this pull request as ready for review November 18, 2023 23:21
github-actions bot (Contributor) commented Nov 18, 2023

⛈️ Required checks status: Has failure 🔴

Warning
This job will need to be re-run to merge your PR. If you do not have write access to the repository, you can ask Lightning-AI/lai-frameworks to re-run it. If you push a new commit, all of CI will re-trigger.

Groups summary

🔴 pytorch_lightning: Tests workflow
Check ID Status
pl-cpu (macOS-11, lightning, 3.8, 1.12, oldest) failure
pl-cpu (macOS-11, lightning, 3.9, 1.12) failure
pl-cpu (macOS-11, lightning, 3.10, 1.13) failure
pl-cpu (macOS-11, lightning, 3.10, 2.0) failure
pl-cpu (macOS-11, lightning, 3.10, 2.1) failure
pl-cpu (ubuntu-20.04, lightning, 3.8, 1.12, oldest) failure
pl-cpu (ubuntu-20.04, lightning, 3.9, 1.12) failure
pl-cpu (ubuntu-20.04, lightning, 3.10, 1.13) failure
pl-cpu (ubuntu-20.04, lightning, 3.10, 2.0) failure
pl-cpu (ubuntu-20.04, lightning, 3.10, 2.1) failure
pl-cpu (windows-2022, lightning, 3.8, 1.12, oldest) failure
pl-cpu (windows-2022, lightning, 3.9, 1.12) failure
pl-cpu (windows-2022, lightning, 3.10, 1.13) failure
pl-cpu (windows-2022, lightning, 3.10, 2.0) failure
pl-cpu (windows-2022, lightning, 3.10, 2.1) failure
pl-cpu (macOS-11, pytorch, 3.8, 1.13) failure
pl-cpu (ubuntu-20.04, pytorch, 3.8, 1.13) failure
pl-cpu (windows-2022, pytorch, 3.8, 1.13) failure
pl-cpu (macOS-12, pytorch, 3.11, 2.0) failure
pl-cpu (macOS-12, pytorch, 3.11, 2.1) failure
pl-cpu (ubuntu-22.04, pytorch, 3.11, 2.0) failure
pl-cpu (ubuntu-22.04, pytorch, 3.11, 2.1) failure
pl-cpu (windows-2022, pytorch, 3.11, 2.0) failure
pl-cpu (windows-2022, pytorch, 3.11, 2.1) failure

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py, tests/tests_pytorch/trainer/logging_/test_eval_loop_logging.py.

🟢 pytorch_lightning: Azure GPU
Check ID Status
pytorch-lightning (GPUs) (testing Lightning | latest) success
pytorch-lightning (GPUs) (testing PyTorch | latest) success


🟢 pytorch_lightning: Benchmarks
Check ID Status
lightning.Benchmarks success

These checks are required after the changes to src/lightning/pytorch/callbacks/device_stats_monitor.py, src/lightning/pytorch/callbacks/lr_monitor.py, src/lightning/pytorch/callbacks/throughput_monitor.py, src/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py, src/lightning/pytorch/trainer/trainer.py.

🟢 pytorch_lightning: Docs
Check ID Status
docs-make (pytorch, doctest) success
docs-make (pytorch, html) success


🟢 mypy
Check ID Status
mypy success


🟢 install
Check ID Status
install-pkg (ubuntu-22.04, app, 3.8) success
install-pkg (ubuntu-22.04, app, 3.11) success
install-pkg (ubuntu-22.04, fabric, 3.8) success
install-pkg (ubuntu-22.04, fabric, 3.11) success
install-pkg (ubuntu-22.04, pytorch, 3.8) success
install-pkg (ubuntu-22.04, pytorch, 3.11) success
install-pkg (ubuntu-22.04, lightning, 3.8) success
install-pkg (ubuntu-22.04, lightning, 3.11) success
install-pkg (ubuntu-22.04, notset, 3.8) success
install-pkg (ubuntu-22.04, notset, 3.11) success
install-pkg (macOS-12, app, 3.8) success
install-pkg (macOS-12, app, 3.11) success
install-pkg (macOS-12, fabric, 3.8) success
install-pkg (macOS-12, fabric, 3.11) success
install-pkg (macOS-12, pytorch, 3.8) success
install-pkg (macOS-12, pytorch, 3.11) success
install-pkg (macOS-12, lightning, 3.8) success
install-pkg (macOS-12, lightning, 3.11) success
install-pkg (macOS-12, notset, 3.8) success
install-pkg (macOS-12, notset, 3.11) success
install-pkg (windows-2022, app, 3.8) success
install-pkg (windows-2022, app, 3.11) success
install-pkg (windows-2022, fabric, 3.8) success
install-pkg (windows-2022, fabric, 3.11) success
install-pkg (windows-2022, pytorch, 3.8) success
install-pkg (windows-2022, pytorch, 3.11) success
install-pkg (windows-2022, lightning, 3.8) success
install-pkg (windows-2022, lightning, 3.11) success
install-pkg (windows-2022, notset, 3.8) success
install-pkg (windows-2022, notset, 3.11) success



Thank you for your contribution! 💜

Note
This comment is automatically generated and updates every 180 seconds for 60 minutes. If you have any other questions, contact carmocca for help.

awaelchli (Member) commented Nov 21, 2023

Hey @carmocca
There are some significant differences here that should be discussed. Here is my simple example:

import os

import torch

from lightning import seed_everything
from lightning.pytorch import LightningModule, Trainer
from torch.utils.data import DataLoader, Dataset

from lightning.pytorch.loggers import TensorBoardLogger


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    seed_everything(0)
    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    test_data = DataLoader(RandomDataset(32, 64), batch_size=2)

    logger = TensorBoardLogger("tb_logs", name="my_model")

    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        logger=logger,
        limit_train_batches=10,
        limit_val_batches=10,
        limit_test_batches=10,
        num_sanity_val_steps=4,
        max_epochs=3,
        enable_model_summary=False,
        log_every_n_steps=2,  # with this PR, this interval also applies to validate/test
    )
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
    trainer.validate(model, val_data)
    trainer.test(model, dataloaders=test_data)


if __name__ == "__main__":
    run()

Running tensorboard --logdir tb_logs and comparing master with this branch.

Red: master
Blue: this PR

Issue 1:
Here we see that the new behavior no longer logs the epoch value at the epoch-end step (blue stops at step 29).

[image: TensorBoard chart of the "epoch" metric, master (red) vs. this PR (blue)]

Since it now falls short, there would be a gap when resuming:

[image: TensorBoard chart showing the gap after resuming]

For this reason, I believe the previous behavior should be kept.
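
To make the epoch-end point concrete, here is a rough sketch, assuming logging is gated as (step + 1) % log_every_n_steps == 0 (the exact condition in Lightning may differ). With an interval that does not divide the epoch length, the final batches of the epoch fall outside the interval:

log_every_n_steps = 4
batches_per_epoch = 10

logged = [s for s in range(batches_per_epoch) if (s + 1) % log_every_n_steps == 0]
print(logged)  # [3, 7] -> steps 8 and 9 are never logged
# per issue 1 above, master emits an extra point at the epoch boundary, so
# the chart always reaches the last step; without it, a run resumed from an
# epoch-end checkpoint leaves a gap between the last logged step and the
# first step of the new run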

Issue 2:

Here the test loss is no longer logged against the global step; the step is local to the test stage.

[image: TensorBoard chart of test_loss, master (red) vs. this PR (blue)]
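
A rough sketch of the step-axis change (numbers assumed from the script's config, not read off the screenshots):

# on master, metrics from trainer.test() are logged against the training
# global step, which is frozen at its final value after fit
global_step_after_fit = 30          # 3 epochs x 10 limited train batches
master_x = [global_step_after_fit]  # every test point lands at x=30

# on this branch, the x-axis is the test stage's own batch counter,
# gated by log_every_n_steps (same assumed condition as above)
log_every_n_steps = 2
branch_x = [s for s in range(10) if (s + 1) % log_every_n_steps == 0]
# [1, 3, 5, 7, 9]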

Issue 3:

Finally, a few things are going on for validation. 1) The fit-validation values are no longer logged, because they don't fall into the logging interval (they are epoch-end). 2) Without the global step for fit-validation logging, you can no longer track the validation loss across your training: it all gets logged at the same step values.

[image: TensorBoard chart of valid_loss during fit]
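
And a rough sketch of why the fit-validation points collide across epochs (same assumed gating as above):

log_every_n_steps = 2
num_val_batches = 10

for epoch in range(3):
    steps = [s for s in range(num_val_batches) if (s + 1) % log_every_n_steps == 0]
    print(epoch, steps)
# 0 [1, 3, 5, 7, 9]
# 1 [1, 3, 5, 7, 9]  <- same x-values every epoch: the points overwrite each
# 2 [1, 3, 5, 7, 9]     other instead of tracing a curve across training
# previously the x-axis was trainer.global_step, which increases
# monotonically, so each validation run produced distinct points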

carmocca (Member, Author):
I'll look at Issues 1 and 3, thanks.
Issue 2 is expected and is the point of this PR: testing uses its own step index instead of the training global step.

@carmocca carmocca marked this pull request as draft November 21, 2023 15:07
gitguardian bot commented Jan 16, 2024

⚠️ GitGuardian has uncovered 2 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secrets in your pull request
GitGuardian id Secret Commit Filename
- Generic High Entropy Secret 78fa3af tests/tests_app/utilities/test_login.py
- Base64 Basic Authentication 78fa3af tests/tests_app/utilities/test_login.py
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely, following best practices.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act: you might completely break other contributing developers' workflows, and you risk accidentally deleting legitimate data.


@carmocca carmocca removed this from the 2.2 milestone Jan 31, 2024
@carmocca carmocca closed this Apr 30, 2024
@carmocca carmocca deleted the carmocca/log-interval-eval branch April 30, 2024 00:59
Development

Successfully merging this pull request may close these issues.

log_every_n_steps is broken for Trainer.{validate,test,predict}