Target length #11
Hi Karim,

The main reason is that AudioSet, the primary dataset we used to pretrain the SSAST model, mostly consists of 10-second audio clips. Using longer or shorter audio lengths is perfectly fine. In my opinion, when the downstream task is unknown, the longer the pretraining audio length the better, because we cut or interpolate the positional embedding to bridge the audio-length mismatch between the pretraining and fine-tuning stages, and cutting should work better than interpolation. However, Transformer attention is O(n^2) in the input length, so longer inputs are more computationally expensive.

This is the code for the positional embedding for different input lengths: ssast/src/models/ast_models.py, lines 192 to 201 in bfc5c1a

-Yuan
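To make the cut/interpolate idea concrete, here is a minimal sketch of adapting a pretrained positional embedding to a new input length. This is not the actual SSAST code (which resizes a 2D time-by-frequency patch grid); `adapt_pos_embed` is a hypothetical 1D illustration: if the fine-tuning input is shorter, the embedding is cut (sliced), and if it is longer, it is linearly interpolated.

```python
import torch
import torch.nn.functional as F


def adapt_pos_embed(pos_embed: torch.Tensor, new_len: int) -> torch.Tensor:
    """Adjust a (1, old_len, dim) positional embedding to new_len positions.

    Illustrative sketch only: the real SSAST implementation works on a
    2D time x frequency patch grid; this 1D version shows the principle.
    """
    old_len = pos_embed.shape[1]
    if new_len <= old_len:
        # Shorter fine-tuning input: cut, i.e. keep the first new_len positions.
        return pos_embed[:, :new_len, :]
    # Longer fine-tuning input: linearly interpolate along the time axis.
    resized = F.interpolate(
        pos_embed.transpose(1, 2),  # (1, dim, old_len) for F.interpolate
        size=new_len,
        mode="linear",
        align_corners=False,
    )
    return resized.transpose(1, 2)  # back to (1, new_len, dim)


# e.g. 100 time positions from 10 s pretraining, ViT-Base embedding dim 768
pe = torch.randn(1, 100, 768)
print(adapt_pos_embed(pe, 50).shape)   # cut to 50 positions
print(adapt_pos_embed(pe, 150).shape)  # interpolated up to 150 positions
```

Cutting preserves the exact pretrained embeddings for the positions that remain, which is one intuition for why Yuan suggests it should work better than interpolation, which smooths every embedding.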
Thank you!!
Hi Yuan,
Thanks again for this great work; I have been using both this and the original AST model for some downstream tasks. I am currently looking into some other time-series data, and was wondering whether there was a particular reason you chose 10 seconds for the audio length during AudioSet pretraining. Why not 5 or 15 seconds? Did you consult any specific resources to reach this, or was it more arbitrary?
Thanks,
Karim