On Evaluation Validity in Music Autotagging

Fabien Gouyon, Bob L. Sturm, João Lobato Oliveira, Nuno Hespanhol, Thibault Langlois

Music autotagging, an established problem in Music Information Retrieval, aims to alleviate the human cost required to manually annotate collections of recorded music with textual labels by automating the process. Many autotagging systems have been proposed and evaluated with procedures and datasets that are now standard (used in MIREX, for instance). Very little work, however, has been dedicated to determining what these evaluations really mean about an autotagging system, or about a comparison of two systems, for the problem of annotating music in the real world. In this article, we are concerned with explaining the figure of merit of an autotagging system evaluated with a standard approach. Specifically, does the figure of merit, or a comparison of figures of merit, warrant a conclusion about how well autotagging systems have learned to describe music with a specific vocabulary? The main contributions of this paper are a formalization of the notion of validity in autotagging evaluation, and a method to test it in general. We demonstrate the practical use of our method in experiments with three specific state-of-the-art autotagging systems, all of which are reproducible using the linked code and data. Our experiments show that, for these specific systems on a simple and objective two-class task, the standard evaluation approach does not provide valid indicators of their performance.
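
As a point of reference, the standard evaluation approach the abstract questions typically scores a system with per-tag figures of merit such as the F-measure. The following is a minimal sketch of that computation for a hypothetical two-class vocabulary; it is not the authors' code (which is linked below), and the tag names and data are invented for illustration.

```python
# Minimal sketch of a standard per-tag figure of merit (F-measure) for a
# two-class autotagging task. Tags and annotations below are hypothetical.

def f_measure(true_tags, predicted_tags, tag):
    """Per-tag F-measure: harmonic mean of precision and recall for `tag`."""
    tp = sum(1 for t, p in zip(true_tags, predicted_tags) if t == tag and p == tag)
    fp = sum(1 for t, p in zip(true_tags, predicted_tags) if t != tag and p == tag)
    fn = sum(1 for t, p in zip(true_tags, predicted_tags) if t == tag and p != tag)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Hypothetical ground truth and system output over a two-class vocabulary.
truth = ["vocal", "vocal", "instrumental", "instrumental", "vocal"]
output = ["vocal", "instrumental", "instrumental", "vocal", "vocal"]

for tag in ("vocal", "instrumental"):
    print(tag, round(f_measure(truth, output, tag), 3))
```

The paper's argument is precisely that a high value of such a figure of merit, measured with the standard datasets and procedures, need not indicate that a system has learned to describe music with the intended vocabulary.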

Source Code

Dataset