In PyTorch, SimCLR and follow-up papers use a simple class to apply transforms to a batch (see here).
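For context, here is a minimal sketch of that pattern, adapted from the common SimCLR/MoCo implementations (the class name is illustrative, not a Braindecode API):

```python
class TwoViewTransform:
    """Apply the same base transform twice to produce two augmented views,
    as done in SimCLR-style contrastive learning pipelines."""

    def __init__(self, base_transform):
        self.base_transform = base_transform

    def __call__(self, x):
        # Two independent random draws of the augmentation yield two views.
        return [self.base_transform(x), self.base_transform(x)]
```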
Here in Braindecode, doing the same raises an error in the AugmentedDataLoader, because the transforms are not a list or another accepted format. If we instead create a new dataset with the transform baked in, it is extremely slow: it seems to apply the augmentations on the CPU, one sample at a time and serially. Any suggestions for speeding this up? In my implementation, disabling augmentation makes training roughly 1000x faster!
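To illustrate what I mean by batch-level augmentation, this is the kind of thing that is fast in my setup (purely a sketch; `gaussian_noise_batch` is a hypothetical stand-in for any tensorized transform):

```python
import torch


def gaussian_noise_batch(x: torch.Tensor, std: float = 0.1) -> torch.Tensor:
    """Hypothetical tensorized transform: augments a whole batch in one call,
    on whatever device the batch already lives on (e.g. the GPU)."""
    return x + std * torch.randn_like(x)


device = "cuda" if torch.cuda.is_available() else "cpu"
X = torch.randn(64, 22, 1000, device=device)  # dummy (batch, channels, time) EEG batch
X_aug = gaussian_noise_batch(X)  # one vectorized call per batch,
                                 # not one CPU call per sample
```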
We have recently improved dataset creation, and we studied in depth how to generate datasets more efficiently. We came to the conclusion that it wasn't as slow as we thought; we were just using it wrong.
To make the discussion more fruitful, could you provide a minimal code example that demonstrates the slowness? We did not run this study with an augmentation module, so maybe there is something to improve there.
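Something along these lines would already help isolate per-sample versus batched application (an illustrative sketch only, not tied to any specific Braindecode API):

```python
import time

import torch


def noise(x: torch.Tensor, std: float = 0.1) -> torch.Tensor:
    # Stand-in augmentation; any tensorized transform behaves similarly.
    return x + std * torch.randn_like(x)


X = torch.randn(256, 22, 1000)  # dummy batch of EEG windows

# Per-sample, serial application (what a map-style dataset typically does).
t0 = time.perf_counter()
out = torch.stack([noise(x) for x in X])
t_serial = time.perf_counter() - t0

# Single batched call over the whole tensor.
t0 = time.perf_counter()
out = noise(X)
t_batched = time.perf_counter() - t0

print(f"serial: {t_serial:.4f}s  batched: {t_batched:.4f}s")
```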