Lab Materials for MIT 6.S191: Introduction to Deep Learning
Updated Feb 26, 2024 - Jupyter Notebook
An AI for Music Generation
Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.
Resources on Music Generation with Deep Learning
Train an LSTM to generate piano or violin/piano music.
🧠+🎧 Build your music algorithms and AI models with the next-gen DAW 🔥
"Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions", ACM Multimedia 2020
Experiment with diverse deep learning models for music generation using TensorFlow
MIDI / symbolic music tokenizers for Deep Learning models 🎶
Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images.
Implementation of MusicLM, a text to music model published by Google Research, with a few modifications.
Projects from the deeplearning.ai Deep Learning Specialization on Coursera
The "Hands-On Music Generation with Magenta" book code repository and info resource
A toolkit for symbolic music generation
MusicTransformer written for MaestroV2 using the PyTorch framework for music generation
Generates music (MIDI files) using a TensorFlow RNN
PyTorch implementation of the WaveGAN model to generate audio
Music generation with Keras and LSTM
A list of demo websites for automatic music generation research
This is the dataset repository for the paper: POP909: A Pop-song Dataset for Music Arrangement Generation
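Several of the projects above (the "Pop Music Transformer" paper and the MIDI tokenizers) rest on the same idea: symbolic music is first converted into a token sequence before a language model sees it. The sketch below is a deliberately simplified, hypothetical REMI-style tokenizer, not the API of any listed library: each note becomes Position, Pitch, and Duration tokens, with a Bar token emitted at each new bar. The 4-beats-per-bar and 4-positions-per-beat grid is an assumption for illustration.

```python
def tokenize(notes, beats_per_bar=4, positions_per_beat=4):
    """Toy REMI-style tokenizer (illustrative scheme, not a library API).

    notes: iterable of (pitch, start_beat, duration_beats) tuples.
    Returns a flat list of string tokens.
    """
    tokens, current_bar = [], -1
    for pitch, start, dur in sorted(notes, key=lambda n: n[1]):
        bar = int(start // beats_per_bar)
        while current_bar < bar:  # emit a Bar token for each bar boundary crossed
            current_bar += 1
            tokens.append("Bar")
        pos = round((start % beats_per_bar) * positions_per_beat)
        tokens += [
            f"Position_{pos}",
            f"Pitch_{pitch}",
            f"Duration_{round(dur * positions_per_beat)}",
        ]
    return tokens

# A C-major arpeggio fragment: C4 and E4 in bar 1, G4 held in bar 2.
tokens = tokenize([(60, 0.0, 1.0), (64, 1.0, 1.0), (67, 4.0, 2.0)])
# → ['Bar', 'Position_0', 'Pitch_60', 'Duration_4', 'Position_4',
#    'Pitch_64', 'Duration_4', 'Bar', 'Position_0', 'Pitch_67', 'Duration_8']
```

Real tokenizers in the repositories above add velocity, tempo, and chord tokens on top of this grid, but the beat-based layout is the same.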
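Several entries in the list train an LSTM to predict the next note and then sample from it (the piano/violin LSTM project and the Keras LSTM project). A minimal sketch of that loop, assuming notes are already encoded as integer MIDI pitch IDs (0-127); the random corpus here is a placeholder for real training data:

```python
import numpy as np
import tensorflow as tf

# Placeholder corpus: in the listed projects this would come from parsed MIDI files.
rng = np.random.default_rng(0)
notes = rng.integers(0, 128, size=200)

SEQ_LEN = 16
# Sliding windows: each 16-note context predicts the following note.
X = np.stack([notes[i : i + SEQ_LEN] for i in range(len(notes) - SEQ_LEN)])
y = notes[SEQ_LEN:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(128, 32),          # note IDs -> dense vectors
    tf.keras.layers.LSTM(64),                    # summarize the context window
    tf.keras.layers.Dense(128, activation="softmax"),  # next-note distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, verbose=0)  # one epoch just to exercise the loop

# Autoregressive sampling: feed the last SEQ_LEN notes back in, draw the next.
seed = notes[:SEQ_LEN].tolist()
for _ in range(8):
    probs = model.predict(np.array([seed[-SEQ_LEN:]]), verbose=0)[0]
    seed.append(int(rng.choice(128, p=probs / probs.sum())))
```

The generated `seed` tail can then be written back out as a MIDI file; real projects add duration/velocity streams and temperature-controlled sampling on top of this skeleton.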