
Scalar plots are empty, log_scalar values only appear on experiment overview #11

Open
stocyr opened this issue Apr 22, 2020 · 3 comments

stocyr commented Apr 22, 2020

  • Plots of any log_metric are empty, though the latest logged scalars do appear in the experiment overview table and the compare panel as single float values (see the minimal sketch after the setup list below).

Setup:

  • Windows 10
  • Deepkit Release 2020.1.5
  • Deepkit SDK 1.0.1
  • Python 3.7.x (running locally or remotely)
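
For reference, a minimal standalone sketch of the logging pattern in question. It uses only the SDK calls that also appear in the full snippet further down (deepkit.experiment() and log_metric()); metric name and values are just examples:

import time
import deepkit

experiment = deepkit.experiment()
for step in range(100):
    # log a simple scalar series; the latest value shows up in the
    # experiment overview, but the plot for this metric stays empty
    experiment.log_metric('loss/train', 1.0 / (step + 1), x=step)
    time.sleep(0.1)
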
marcj (Member) commented Apr 24, 2020

@stocyr What do you mean by log_scalar? We don't have such a method. Can you please provide the relevant source lines? Could you also provide screenshots (of the experiment overview list vs. the experiment detail view)?

stocyr (Author) commented Apr 24, 2020

I updated the post above. Here's the snippet of code:

import deepkit
import torch.nn.functional as F

def train(args, model, device, train_loader, optimizer, epoch):
    # ...
    total, correct = 0, 0
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()

        _, predicted = output.max(1)
        total += target.size(0)
        correct += predicted.eq(target).sum().item()

        # log the training loss once per batch; x is the fractional epoch
        args.experiment.log_metric('loss/train', loss.item(), x=epoch + (batch_idx / len(train_loader)))

def main():
    # ...
    args = parser.parse_args()
    experiment = deepkit.experiment()
    args.experiment = experiment
    # ...
    for epoch in range(1, args.epochs + 1):
        args.experiment.epoch(epoch, args.epochs)
        train(args, model, device, train_loader, optimizer, epoch)

if __name__ == '__main__':
    main()

The effect can be seen here on the running experiment (topmost):
[screenshot: 2020-04-24 19_52_14-Deepkit]
[screenshot: 2020-04-24 19_50_54-Deepkit]
[screenshot: 2020-04-24 19_52_25-Deepkit]

And here's what happens if I pull the "smoothing" slider a little:
[screenshot: 2020-04-24 19_52_36-Deepkit]

Update: I recently noticed that the plots do show up when the Deepkit App is connected to a remote-hosted Team Server (e.g. a non-localhost account). So the bug only seems to happen when the app is connected to localhost.

marcj (Member) commented May 19, 2020

That's interesting. It seems to be a Windows issue. Can you open the devtools to check whether there are any errors? CTRL+SHIFT+I, then the "Console" tab.

marcj added the Windows label on May 19, 2020