
PyTorch implementation of SampleRNN: An Unconditional End-to-End Neural Audio Generation Model


Fork of deepsound-project/samplernn-pytorch, "A PyTorch implementation of SampleRNN: An Unconditional End-to-End Neural Audio Generation Model".

  • Using pytorch==1.5.1 torchvision==0.6.1 cudatoolkit=10.2
  • Docker ready

Training

Prepare a dataset yourself: it should be a directory in datasets/ filled with equal-length wav files. Alternatively, you can create your own dataset format by subclassing torch.utils.data.Dataset; dataset.FolderDataset in this repo is a simple example.
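As a rough illustration of the subclassing approach, here is a minimal sketch of a Dataset over a folder of equal-length wav files, assuming 16-bit mono PCM audio. The class name WavFolderDataset is hypothetical; the repo's actual implementation is dataset.FolderDataset, which handles more (e.g. quantization and overlap) than this sketch does.

```python
import os
import wave

import numpy as np
import torch
from torch.utils.data import Dataset


class WavFolderDataset(Dataset):
    """Hypothetical sketch: load each equal-length wav file as one example.

    Assumes 16-bit mono PCM wav files; see dataset.FolderDataset in this
    repo for the real implementation.
    """

    def __init__(self, path):
        # Collect wav files in a deterministic order.
        self.paths = sorted(
            os.path.join(path, name)
            for name in os.listdir(path)
            if name.endswith(".wav")
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        # Read raw 16-bit PCM frames and scale to [-1, 1].
        with wave.open(self.paths[index], "rb") as w:
            frames = w.readframes(w.getnframes())
        samples = np.frombuffer(frames, dtype="<i2").astype(np.float32) / 32768.0
        return torch.from_numpy(samples)
```

A dataset like this can then be wrapped in a torch.utils.data.DataLoader for batched training, which is why the files must all have the same length.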

The results (training log, loss plots, model checkpoints, and generated samples) will be saved in results/.

Special thanks to

This fork continues the work of:
