Large-Vocabulary Chord Transcription via Chord Structure Decomposition

This is the official repo for the ISMIR 2019 paper Large-Vocabulary Chord Transcription via Chord Structure Decomposition.

Screenshot of audio chord recognition output, visualized in Sonic Visualiser:

Pretrained models

This repo contains the pretrained model that does not incorporate label reweighting; this model provides the best overall accuracy. If you want the pretrained models with label reweighting, you can download them here:

https://drive.google.com/drive/u/1/folders/1y5-zTFaBliymPe7uY2MZfUAsvPzwmGBL

Chord recognition with pretrained model

After installing all the dependencies, run the following code:

python3 chord_recognition.py path_to_audio_file path_to_output_file [chord_dict]

For example,

python3 chord_recognition.py example.mp3 example_chord.lab

Here, chord_dict is an optional parameter that tells the HMM which chord dictionary to use for decoding. The options are:

  • submission: the default value (recommended). This is the chord dictionary we used for the ISMIR 2019 submission.
  • ismir2017: the chord dictionary used for the MIREX competition.
  • full: the list of all chords from the MARL dataset. It is untested and not recommended.

You may also adjust the chord dictionaries manually by editing the files data/*_chord_list.txt.
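The output .lab file follows the common MIREX-style chord annotation format: one segment per line, with start time, end time, and chord label separated by whitespace. A minimal parsing sketch (the helper name parse_lab is ours, not part of this repo):

```python
def parse_lab(text):
    """Parse MIREX-style .lab chord annotations.

    Each non-empty line has: start_time end_time chord_label.
    Returns a list of (start, end, label) tuples.
    """
    segments = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        start, end, label = line.split(None, 2)
        segments.append((float(start), float(end), label))
    return segments

example = """0.000 2.612 N
2.612 5.224 C:maj
5.224 7.836 G:7
"""
for start, end, label in parse_lab(example):
    print(f"{label}: {start:.3f}-{end:.3f}")
```

Here "N" denotes a no-chord segment, and labels such as C:maj use the standard root:quality chord syntax.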

Training

First, prepare the jams dataset in the following format:

    chord_data_1217/
        audio/
            TR6R91L11C8A40D710.mp3
            ...
        chordlab/
            TR6R91L11C8A40D710.lab
            ...
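Each audio file is expected to have a chord annotation with the same basename in the chordlab folder. A quick sanity check for this pairing might look like the following sketch (the function name and extension defaults are our assumptions, based on the layout above):

```python
import os

def check_dataset(root, audio_ext=".mp3", lab_ext=".lab"):
    """Return (audio files missing a .lab, .lab files missing audio),
    comparing basenames under root/audio and root/chordlab."""
    audio = {os.path.splitext(f)[0]
             for f in os.listdir(os.path.join(root, "audio"))
             if f.endswith(audio_ext)}
    labs = {os.path.splitext(f)[0]
            for f in os.listdir(os.path.join(root, "chordlab"))
            if f.endswith(lab_ext)}
    return sorted(audio - labs), sorted(labs - audio)
```

If both returned lists are empty, every track has a matching annotation.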

Then modify the value of JAM_DATASET_PATH in settings.py to the path of the dataset (e.g., some_path/chord_data_1217).

Then run storage_creation.py to generate the H5 data files jams_xchord.h5d and jams_cqt.h5d.

Then run chordnet_ismir_naive.py 0 to train/test on data split #0.

If you encounter errors involving torch.bool(), follow the error message and add bool() casts to the index tensors.

Testing

Run chordnet_ismir_naive_eval.py for testing.
