
Differentiable All-pole Filters for Time-varying Audio Systems

Chin-Yun Yu, Christopher Mitcheltree, Alistair Carson, Stefan Bilbao, Joshua D. Reiss, and György Fazekas


Feed-forward Compressor (LA-2A) Experiments

Getting started

First, please install the required packages, including our differentiable compressor torchcomp, by running:

pip install -r requirements.txt
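
As a quick sanity check of the install, the toy sketch below pushes a gradient through torchcomp's compressor/expander gain. This snippet is not part of the repository; the function name compexp_gain and its argument conventions (thresholds in dB, attack/release given as smoothing coefficients in (0, 1), batch-shaped parameters) follow the torchcomp documentation and should be treated as assumptions, and all values are arbitrary.

import torch
import torchcomp

# Toy sanity check (not part of this repo): verify that gradients flow through
# torchcomp's compressor/expander gain. Function name and argument conventions
# are assumptions based on the torchcomp documentation.
B, T = 1, 44100
x = torch.rand(B, T) * 0.5 + 1e-4                  # positive, envelope-like signal
ratio = torch.full((B,), 4.0, requires_grad=True)  # differentiable compression ratio

gain = torchcomp.compexp_gain(
    x,
    comp_thresh=torch.full((B,), -18.0),   # compressor threshold (dB)
    comp_ratio=ratio,
    exp_thresh=torch.full((B,), -120.0),   # expander pushed far below the signal
    exp_ratio=torch.full((B,), 0.999),
    at=torch.full((B,), 0.1),              # attack smoothing coefficient
    rt=torch.full((B,), 0.01),             # release smoothing coefficient
)
(gain * x).sum().backward()
print(ratio.grad)                          # non-None gradient => backprop works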

Training

First, download the SignalTrain dataset from here. The training configurations are listed under cfg/. Each configuration under cfg/data corresponds to a dataset. Please modify the input and target paths in cfg/data/la2a*.yaml so they point to the files of the dataset you downloaded.
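
If you are unsure which entries hold those paths, the small helper below (not part of the repo, and it assumes PyYAML is available) walks a data config and prints every string-valued field with its dotted key, so the input/target entries are easy to spot and edit by hand. No particular key names in the shipped configs are assumed.

import yaml

# Inspection helper (hypothetical): print every string leaf of a config file
# together with its dotted key path.
def print_string_leaves(node, prefix=""):
    if isinstance(node, dict):
        for key, value in node.items():
            print_string_leaves(value, f"{prefix}{key}.")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            print_string_leaves(value, f"{prefix}{i}.")
    elif isinstance(node, str):
        print(f"{prefix.rstrip('.')}: {node}")

with open("cfg/data/la2a_50.yaml") as f:
    print_string_leaves(yaml.safe_load(f))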

To train the proposed differentiable feed-forward compressor, run:

python train_comp.py data=la2a_50

The training logs will be uploaded to your wandb account under the project dafx24. In this example, the model is trained with a peak reduction of 50. Change the data argument to la2a_75 or la2a_25 to train the model with a peak reduction of 75 or 25, respectively.
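
If you want all three peak-reduction settings, a simple driver like the hypothetical one below just invokes the same training command once per data configuration.

import subprocess

# Hypothetical convenience script (not part of the repo): run the training
# command above for each peak-reduction dataset in turn.
for pr in (25, 50, 75):
    subprocess.run(["python", "train_comp.py", f"data=la2a_{pr}"], check=True)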

To train the frequency-sampling compressor (similar to DASP), run:

python train_comp.py data=la2a_50 compressor.simple=true compressor.freq_sampling=true

After training, a ckpt.yaml is created under the logging folder (outputs/ by default); it contains the parameters of the model with the lowest training loss. We also provide our trained parameters under learned_params/, with filenames of the form [method]_[peak_reduction].yaml.
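
The internal layout of these YAML files is not documented here; if you just want to look at the stored values, a plain dump like the sketch below (not part of the repo, PyYAML assumed) is enough.

import sys
import pprint
import yaml

# Minimal inspection sketch: dump the raw contents of a checkpoint or
# learned-parameter YAML file given on the command line, e.g. a ckpt.yaml
# from outputs/ or a file from learned_params/.
with open(sys.argv[1]) as f:
    pprint.pprint(yaml.safe_load(f))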

Evaluation

You can use your own checkpoint ckpt.yaml or our provided learned parameters to evaluate the compressor. Given a WAV file, you can compress it with the following command:

python test_comp.py ckpt.yaml input.wav output.wav
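
To compress a whole folder of WAV files rather than a single one, you can wrap the same command in a short driver such as the hypothetical one below; the input and output directory names are placeholders.

import subprocess
from pathlib import Path

# Hypothetical batch driver (not part of the repo): run test_comp.py on every
# .wav file in in_dir and write the compressed results to out_dir.
ckpt, in_dir, out_dir = "ckpt.yaml", Path("in_dir"), Path("out_dir")
out_dir.mkdir(exist_ok=True)
for wav in sorted(in_dir.glob("*.wav")):
    subprocess.run(
        ["python", "test_comp.py", ckpt, str(wav), str(out_dir / wav.name)],
        check=True,
    )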

Additional notes

  • cfg/data/ff_*.yaml are the configurations for the feed-forward compressor experiments (FF-A/B/C in the paper). Please use digital_compressor.py to generate the targets if you want to reproduce these experiments.

Links

  • torchcomp: Differentiable compressor implementation.
  • training logs: All training logs of the compressor experiments in the paper.

Citation

@misc{ycy2024diffapf,
  title={Differentiable All-pole Filters for Time-varying Audio Systems},
  author={Chin-Yun Yu and Christopher Mitcheltree and Alistair Carson and Stefan Bilbao and Joshua D. Reiss and György Fazekas},
  year={2024},
  eprint={2404.07970},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}
