P7-13: Perceptual Errors in Music Source Separation: looking beyond SDR averages
Saurjya Sarkar, Victoria Moomijan, Basil Woods, Emmanouil Benetos, Mark Sandler
Subjects: Evaluation methodology; Machine learning/artificial intelligence for music; Human-centered MIR; Sound source separation; Evaluation, datasets, and reproducibility; User-centered evaluation; MIR tasks; Knowledge-driven approaches to MIR
Presented In-person
4-minute short-format presentation
Music source separation extracts individual instrument or performer stems from mixed musical recordings. Performance is typically evaluated with metrics such as the source-to-distortion ratio (SDR), with higher values indicating better separation. However, relying on global SDR averages across test datasets provides limited insight into model performance: while a higher average SDR suggests superior performance, it reveals little about specific strengths and weaknesses. Moreover, averaged metrics fail to account for SDR variance, which depends heavily on the musical characteristics of the test set. These limitations make cross-task and cross-stem comparisons potentially misleading. To address these issues, we conducted a listening study evaluating source separation models across three tasks: 6-stem separation, lead vs. backing vocal separation, and duet separation. Participants assessed diverse examples, particularly those with poor objective or subjective performance. We categorized failure cases into three error types and found that while SDR generally correlates with perceptual ratings, significant deviations occur: some errors substantially impact human perception but are not well captured by SDR, while in other cases listeners perceive better quality than SDR suggests. Our findings reveal nuances missed by current evaluation paradigms and highlight the need to report error categorization and performance distributions alongside averaged metrics.
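To make the evaluation concrete, below is a minimal Python/NumPy sketch of the simplified energy-ratio SDR, 10*log10(||s||^2 / ||s - s_hat||^2), as used in recent separation challenges (not the full BSS Eval decomposition), together with the kind of distributional summary the abstract argues should accompany averages. The function names, epsilon value, and percentile choices are illustrative assumptions, not the authors' evaluation code.

import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-9) -> float:
    """Simplified energy-ratio SDR in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    num = np.sum(reference ** 2)
    den = np.sum((reference - estimate) ** 2)
    return float(10 * np.log10((num + eps) / (den + eps)))

def summarize(sdrs) -> dict:
    """Report distributional statistics rather than a single mean,
    so outlier failure cases remain visible alongside the average."""
    sdrs = np.asarray(sdrs, dtype=float)
    return {
        "mean": sdrs.mean(),
        "std": sdrs.std(),
        "median": float(np.median(sdrs)),
        "p10": float(np.percentile(sdrs, 10)),  # worst-decile performance
        "p90": float(np.percentile(sdrs, 90)),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.standard_normal(44100)
    est = ref + 0.1 * rng.standard_normal(44100)   # error energy ~1% of reference
    print(sdr(ref, est))                           # roughly 20 dB
    print(summarize([sdr(ref, ref + s * rng.standard_normal(44100))
                     for s in (0.05, 0.1, 0.2, 0.5)]))

Reporting percentiles such as p10 in this way surfaces the poorly separated examples that a dataset-level mean SDR would mask, which is the gap the listening study targets.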