TensorFlow implementation of a Long Short-Term Memory model for audio synthesis, used for my bachelor thesis

nnyase/ThesisMusicGeneration

Bachelor Thesis

Comparing Deep Learning Models in Music Composition

You can read my thesis on the 🎯 thesis website: Comparing Deep Learning Models in Music Composition

The following explains how to use my LSTM to generate music. This project lets you train a neural network to generate MIDI music files that use a single instrument.

Requirements

  • Python 3.x
  • The following packages, installed with pip (see the example command after this list):
    • Music21
    • Keras
    • Tensorflow
    • h5py
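
As a rough guide (assuming the package names match their PyPI distributions), the dependencies can be installed with:

pip install music21 keras tensorflow h5py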

Training

To train the network, run lstm.py.

E.g.

python lstm.py
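
For orientation, here is a minimal sketch of the kind of stacked-LSTM next-note model that lstm.py could build with Keras. The layer sizes, sequence length, and vocabulary size below are illustrative assumptions, not the exact values used in this repository:

```python
# Hypothetical sketch of a stacked-LSTM next-note model; layer sizes,
# sequence length, and vocabulary size are illustrative assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dropout, Dense

def build_model(sequence_length=100, n_vocab=300):
    model = Sequential([
        Input(shape=(sequence_length, 1)),        # a window of encoded notes
        LSTM(256, return_sequences=True),
        Dropout(0.3),
        LSTM(256),
        Dropout(0.3),
        Dense(n_vocab, activation="softmax"),     # distribution over known notes/chords
    ])
    model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
    return model
```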

Training uses every MIDI file in ./midi_songs. To get the most out of training, each MIDI file should contain only a single instrument.
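
The preprocessing step is not shown in this README; as a rough sketch, the MIDI files could be parsed into a flat sequence of note/chord tokens with Music21 like this (function and variable names are illustrative, not taken from lstm.py):

```python
# Illustrative sketch of turning ./midi_songs into a flat token list with
# Music21; names and structure are assumptions, not lstm.py itself.
import glob
from music21 import converter, instrument, note, chord

def load_notes(pattern="midi_songs/*.mid"):
    notes = []
    for path in glob.glob(pattern):
        score = converter.parse(path)
        parts = instrument.partitionByInstrument(score)
        # Single-instrument files yield one part; otherwise fall back to the
        # flattened score.
        elements = parts.parts[0].recurse() if parts else score.flat.notes
        for el in elements:
            if isinstance(el, note.Note):
                notes.append(str(el.pitch))                              # e.g. "C4"
            elif isinstance(el, chord.Chord):
                notes.append(".".join(str(n) for n in el.normalOrder))   # e.g. "0.4.7"
    return notes
```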

NOTE: You can stop training at any point; the weights from the latest completed epoch will still be available for music generation.
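
This works when weights are written out after each epoch with a Keras checkpoint callback. A minimal sketch, assuming the hypothetical build_model and placeholder data below stand in for the real preprocessing (the filename pattern and hyperparameters are assumptions, not the exact ones in lstm.py):

```python
# Sketch of saving weights after each epoch so interrupted training still
# leaves usable weights; filenames and hyperparameters are assumptions.
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint

sequence_length, n_vocab = 100, 300
model = build_model(sequence_length, n_vocab)      # hypothetical helper sketched above

# Placeholder arrays standing in for the sequences built from ./midi_songs.
network_input = np.random.rand(512, sequence_length, 1)
network_output = np.eye(n_vocab)[np.random.randint(0, n_vocab, 512)]

checkpoint = ModelCheckpoint(
    "weights-{epoch:02d}.weights.h5",              # one file per improving epoch
    monitor="loss",
    save_weights_only=True,
    save_best_only=True,
)
model.fit(network_input, network_output, epochs=200, batch_size=64,
          callbacks=[checkpoint])
```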

Generating music

Once you have trained the network, you can generate music using predict.py.

E.g.

python predict.py
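
predict.py's internals are not documented here; as a rough sketch, generation could repeatedly sample the next note from the trained model and write the result back to MIDI with Music21. The names, the seed-and-sample loop, and the output filename below are all assumptions:

```python
# Illustrative sketch of sampling notes from a trained model and writing
# them to a MIDI file with Music21; names and details are assumptions.
import numpy as np
from music21 import stream, note, chord

def generate(model, seed, index_to_note, n_notes=200):
    """Repeatedly predict the next note index and slide the seed window."""
    pattern = list(seed)
    output = []
    for _ in range(n_notes):
        x = np.reshape(pattern, (1, len(pattern), 1)) / float(len(index_to_note))
        index = int(np.argmax(model.predict(x, verbose=0)))
        output.append(index_to_note[index])
        pattern = pattern[1:] + [index]
    return output

def write_midi(tokens, path="output.mid"):
    """Convert note/chord tokens (e.g. "C4" or "0.4.7") into a MIDI file."""
    elements = []
    offset = 0.0
    for token in tokens:
        if "." in token or token.isdigit():             # chord encoded as pitch classes
            pitches = [note.Note(int(p)) for p in token.split(".")]
            element = chord.Chord(pitches)
        else:                                           # single note like "C4"
            element = note.Note(token)
        element.offset = offset
        elements.append(element)
        offset += 0.5                                   # fixed step between notes
    stream.Stream(elements).write("midi", fp=path)
```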
