The lecture will take place in Tanna Schulich Hall, followed by a wine and cheese reception in the lobby of the Elizabeth Wirth Music Building. This event is free and open to the general public.
Abstract
Automatic music transcription is the task of creating a score representation (for example, in Western common music notation) from an audio recording or live audio stream. Although research on this topic spans almost 50 years, progress in the last few years has been quite remarkable. The field has moved from a situation where data was scarce, methods were ad hoc, and there were no standard methodologies or datasets for comparing competing approaches, to the current state where data-rich models are trained and tested on standard benchmark datasets. A variety of transcription tasks are addressed, such as transcription of a single instrument or of multiple simultaneous instruments, and transcription of the main melody, the bass line, the chords, or the lyrics. After discussing some of the methods we have developed and used for music transcription, I will give examples of how this technology can be applied to understanding human music-making, such as the analysis of melodic patterns in jazz improvisation and of expressive timing in classical and jazz performance.