MusicCaps is a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts. For each 10-second music clip, MusicCaps provides:
1) A free-text caption, consisting of four sentences on average, that describes the music, and
2) A list of music aspects, covering genre, mood, tempo, singer voices, instrumentation, dissonances, rhythm, etc. (see the loading sketch below).
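The annotations can be inspected with the Hugging Face `datasets` library. This is a minimal sketch, assuming the Hub dataset `google/MusicCaps` and its column names (`ytid`, `start_s`, `end_s`, `aspect_list`, `caption`); the audio itself is not bundled and must be fetched from YouTube using the clip boundaries.

```python
# Minimal sketch: inspect MusicCaps annotations via the Hugging Face Hub.
# Assumes the Hub dataset "google/MusicCaps" and its column names; the
# official release ships the same fields as a CSV keyed by YouTube ID.
from datasets import load_dataset

ds = load_dataset("google/MusicCaps", split="train")

row = ds[0]
print(row["caption"])      # free-text caption, four sentences on average
print(row["aspect_list"])  # music aspects: genre, mood, tempo, voices, ...
# Each pair refers to a 10-second clip of a YouTube video; the audio is
# downloaded separately using these clip boundaries:
print(row["ytid"], row["start_s"], row["end_s"])
```

The two printed annotation fields correspond directly to the free-text caption and the aspect list described above.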