CIRMMT Student Symposium 2017 - Abstracts
9:30-11:00: ORAL PRESENTATIONS / PRÉSENTATIONS ORALES
9:30 - Jason Noble, Eddy Kazazis: Towards a perceptual chordal space: an empirical study on auditory preference rules
Hasegawa has suggested that complex harmonies, such as those used in “atonal” music by Schoenberg and later in “spectral” music by Grisey, can be analyzed as upper partials of a hypothetical virtual fundamental. Kazazis and Hasegawa built upon this idea and suggested that any complex chord can be mapped to a unique position in a three-dimensional space whose axes represent chordal qualities: “chordal hue” (virtual or actual fundamental), “chordal saturation” (roughly corresponding to inharmonicity), and “chordal brightness” (chordal centroid). We evaluate the perceptual relevance of this three-dimensional chordal space and its ability to predict listeners’ responses to complex harmonies drawn from the 20th- and 21st-century repertoire. Participants rate a presented chord according to: the extent of its “rootedness” for a set of pre-estimated virtual fundamentals; the amount of coherence or incoherence between its constituent notes; and its pitch height in a presumably holistic mode of listening. The resulting data are analyzed to estimate correlations along the dimensions of chordal hue, chordal saturation, and chordal centroid. These quantifications also provide a different analytical insight into Noble and McAdams’s previous studies on the perceptual difference between “chords” and “sound masses,” which showed that density alone is not a sufficient predictor of perceptual fusion. Finally, a brief introduction to compositional applications of the chordal space is given.
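The hue and brightness axes lend themselves to a simple illustration. The following minimal Python sketch (not the authors' implementation; the function names and the subharmonic-matching heuristic are our assumptions) computes a chord's centroid and a crude virtual-fundamental estimate from its component frequencies:

```python
# Minimal sketch (not the authors' implementation): two of the three
# proposed axes for a chord given as fundamental frequencies in Hz.
import numpy as np

def chordal_brightness(freqs):
    """Chordal centroid, here approximated as the mean log-frequency."""
    return float(np.mean(np.log2(freqs)))

def virtual_fundamental(freqs, candidates=np.arange(20.0, 200.0, 0.5)):
    """Crude subharmonic-matching estimate of a virtual fundamental.

    Scores each candidate f0 by how close every chord tone lies to an
    integer multiple of f0, and returns the best-scoring candidate.
    """
    freqs = np.asarray(freqs)
    best_f0, best_err = None, np.inf
    for f0 in candidates:
        ratios = freqs / f0
        err = np.sum((ratios - np.round(ratios)) ** 2)
        if err < best_err:
            best_f0, best_err = f0, err
    return best_f0

# Example: a C-major triad (C4, E4, G4) yields an f0 near a C subharmonic.
print(virtual_fundamental([261.6, 329.6, 392.0]))
```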
10:00 - Zored Ahmer: Activity-based music recommendations
For this project, we create a music recommendation system that considers the user's current activity. More specifically, we aim to use the address (URL) of the web page the user is visiting to guide their music playlist. We plan to do this by creating a configurable browser extension.
We start by having a user create playlists for browser-based activities they partake in. A user might make one for ‘Work’, one for ‘Play’, and one for ‘Reading’. The user then customizes each playlist with the websites they consider to fall into that activity, seed information, and tunable parameters (such as danceability). We then use the Spotify Recommendation API, which accepts a seed (artist, track, genre) and, optionally, tunable parameters (such as danceability), and returns a list of songs. We play one of these songs and, if the user likes it (denoted by how long they listen to it before skipping), the song is used as part of the seed. If the user switches their active website and the new website is part of a different playlist, then we switch the recommendation seed and tunable parameters accordingly. This way, the extension effectively maintains several different playlists, and the user’s browser activity switches between them. The extension will include a UI that allows for the configuration of all tunable parameters per playlist. This will facilitate fine-grained tuning of playlists.
We hope that this browser extension will better account for users’ day-to-day activity and give them more enjoyable music recommendations.
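As a rough illustration of the mechanism described above, here is a minimal Python sketch of the playlist-switching and recommendation logic. The real project is a browser extension, and the per-activity configuration shown here is hypothetical; the Spotify Recommendations endpoint and its seed/tuning parameters are as documented by Spotify:

```python
# Minimal sketch, assuming a valid OAuth token; the real project is a
# browser extension, but the Spotify Recommendations endpoint works the
# same way from any HTTP client.
import requests

SPOTIFY_RECS = "https://api.spotify.com/v1/recommendations"

# Hypothetical per-activity configuration, keyed by website domain.
PLAYLISTS = {
    "Work": {"sites": {"github.com", "docs.google.com"},
             "seed_genres": "ambient", "target_danceability": 0.2},
    "Play": {"sites": {"youtube.com", "twitch.tv"},
             "seed_genres": "dance", "target_danceability": 0.9},
}

def playlist_for(domain):
    """Return the activity playlist whose site list contains this domain."""
    for name, cfg in PLAYLISTS.items():
        if domain in cfg["sites"]:
            return name, cfg
    return None, None

def recommend(cfg, token, limit=10):
    """Fetch recommended track names for one activity's seed and tuning."""
    params = {"seed_genres": cfg["seed_genres"],
              "target_danceability": cfg["target_danceability"],
              "limit": limit}
    resp = requests.get(SPOTIFY_RECS, params=params,
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return [t["name"] for t in resp.json()["tracks"]]
```

When the active tab's domain maps to a different playlist than the current one, the extension would simply call `recommend` again with the new configuration.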
10:30 - Juan Sebastian Delgado, Alex Nieva: Multimodal visual augmentation for cello performance
Visual observation of performance gesture has been shown to play a key role in audience reception of musical works, particularly in experimental and newly created works. As author Luke Windsor points out, “the gestures that ‘accompany’ music are potentially a primary manner in which an audience has direct contact with the performer.” This project proposes to augment audience perception through the creation of a visual display that is responsive to performance gesture. Our main goals are: A) to control the interactive lighting (visual display) by mapping the gestural information collected; B) to use the interactive lighting to augment interpretation of the musical text; and C) to make performance-practice decisions in parallel to designing the lighting/gestural interface.
For this, we have used motion and relative-position tracking sensors to gather information from the performer’s gestures and map it to a visual display consisting of an array of addressable light-emitting diodes on the surface of a cello. We investigated meaningful gestures with the optical motion capture system at CIRMMT and analyzed the performance techniques to choose the most suitable sensors. In addition, we performed audio feature extraction with a piezoelectric sensor to fuse the data and to convey the final information to be mapped. The system is based on Wi-Fi-capable microcontroller boards programmed with mapping algorithms that maintain full-duplex communication among three of them and create visuals that display a variety of lighting effects controlled by information derived from the sensor system.
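As an illustration of the kind of gesture-to-light mapping described, here is a hypothetical Python sketch (the actual system runs on Wi-Fi microcontrollers, and this specific mapping is our assumption): bow-arm acceleration drives LED brightness while the level of the piezo signal drives hue:

```python
# Hypothetical mapping sketch, not the project's firmware: accelerometer
# motion energy -> brightness, audio RMS from the piezo pickup -> hue.
import colorsys
import math

def map_gesture_to_leds(accel_xyz, audio_rms, n_leds=60):
    """Return a list of (r, g, b) tuples, one per LED on the strip."""
    # Motion energy: deviation of acceleration magnitude from gravity (1 g).
    motion = abs(math.sqrt(sum(a * a for a in accel_xyz)) - 1.0)
    brightness = min(1.0, motion)    # clip to the valid range
    hue = min(1.0, audio_rms)        # louder playing shifts the hue
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    return [(int(r * 255), int(g * 255), int(b * 255))] * n_leds

# Example: a vigorous bow stroke with a moderately loud signal.
print(map_gesture_to_leds((0.4, 1.2, 0.1), 0.5, n_leds=3))
```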
In order to put this project into practice, as well as to disseminate and promote the use of new technologies in the performing arts, an original piece was written for this project by composer Luis Naon (Paris Conservatory/IRCAM). “Pájaro contra el borde de la noche,” for solo cello, ensemble, and electronics, also adapted for solo cello and electronics, explores an array of performing techniques. These different techniques gave us more material for extracting meaningful gestural information and thus for developing a comprehensive visual display to amplify the musical experience.
11:15-12:30: CIRMMT STUDENT AWARD LIGHTNING ROUND
GAUS, Groupe d’Acoustique de l’Université de Sherbrooke, Sherbrooke, Canada.
CIRMMT, Centre for Interdisciplinary Research in Music Media and Technology, McGill University, Montréal, Canada.
Spatial audio in multimedia and digital arts, as well as in industrial applications, is booming. After half a century dominated by a single principle, the stereophonic illusion between two loudspeakers, many alternative solutions have appeared and been improved to reproduce, or synthesize, sound fields spatially. Thus, 2D audio systems such as 5.1 or 7.1 have been conquering our living rooms and theatres. In addition, 2D and 3D loudspeaker arrays based on different principles, such as Wave Field Synthesis (WFS), Higher Order Ambisonics (HOA), or Directional Audio Coding (DirAC), are sufficiently advanced to reach a wider public. Reproducing sound fields in 3D requires content that is either created through digital audio processing or recorded in 3D. In the latter case, recording solutions developed for 5.1 or 7.1 are not accurate enough to render elevation. Moreover, commercial ambisonic microphones that can record 3D sound fields do not exceed order 3, while research on HOA goes up to order 5. Building on Pierre Lecomte's PhD work, this project deals with prototyping an ambisonic microphone. A 50-node discretization of the sphere, according to a Lebedev grid, enables the microphone array to record at ambisonic order 5. In order to compensate for frequency limitations, the combination of two concentric 50-node microphone arrays, rigid and open, respectively, is investigated. An introduction to the uses and applications of 3D sound field recording will be provided. Ambisonic theory is then briefly presented and applied to the specific cases of rigid and open spherical microphone arrays. The advantages of Lebedev grid discretization and of the double-layer combination are described. Finally, the steps to achieve this project and preliminary results are shown.
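A quick back-of-the-envelope check (ours, not from the abstract) of why 50 nodes suffice for order 5: an order-N 3D ambisonic representation has (N+1)² spherical-harmonic components, so the array needs at least that many capsules:

```python
# Number of spherical-harmonic components up to a given ambisonic order.
def ambisonic_channels(order):
    return (order + 1) ** 2

for n in range(1, 6):
    print(f"order {n}: {ambisonic_channels(n)} channels")
# order 5 -> 36 channels, comfortably below the 50 Lebedev nodes,
# which leaves margin for the encoding.
```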
Spatialization as a compositional parameter in multi-speaker environments opens many doorways to new possibilities in music research, while simultaneously highlighting several important questions. How can it be used as an effective compositional tool? On a technical level, in which manner will the spatialization be implemented in a space? How would one capture such an acoustic space in a recording that would emulate these environments accurately? It is clear that spatialization as a compositional parameter reveals a deeper problem, not just for the composer, but also for the sound engineer. It is the task of this research project to explore these questions in a compositional piece.
This project supports the conception, implementation, and use of a system that interfaces with MIDI-enabled pipe organs in churches. In essence, the sound-producing system of the organ becomes the "synthesizer" component of a digital musical instrument, where the input and mapping system can be arbitrarily chosen to correspond to any performance gesture. The system is intended to be used by artists and composers realizing new works of interactive performance that involve gestural control of the instrument using alternative controllers.
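A minimal sketch of the idea, assuming the organ is visible as a standard MIDI output port and using the mido Python library; the gesture-to-pitch mapping below is a hypothetical example, since the system leaves the mapping arbitrary:

```python
# Hypothetical mapping sketch: a normalized gesture value drives which
# organ pipe sounds, via standard MIDI note messages.
import mido

def play_from_gesture(port_name, gesture_value):
    """Map a gesture value in [0.0, 1.0] to an organ pitch (C2..C6)."""
    note = 36 + int(gesture_value * 48)
    with mido.open_output(port_name) as organ:
        organ.send(mido.Message('note_on', note=note, velocity=64))
        # ... hold for the desired duration, then release the pipe.
        organ.send(mido.Message('note_off', note=note))
```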
Though there has been recent growth in the field of musicians' wellness, researchers lack clear kinetic, kinematic, or physiological (KKP) data collection methods and an easily accessible body of data with which to compare their results. The goal of the Kinetic-Kinematic-Physiological-Musician (KKPM) Database Project is twofold. In the short term, it aims to design and implement a database containing a compilation of key measurements of musicians, including a wide variety of KKP data, particularly those related to performance and musculoskeletal disorders. Collected data will be standardized through the establishment of measurement parameters. Once collected, data will be allocated to a pre-set, protected file sharable between researchers. The long-term goal is to keep the database operational and to continuously add information so that it becomes a reliable source and reference for future research.
Though measurement parameters will eventually encompass a wide variety of KKP data, the focus during the first year will be on sEMG collection from a set group of muscles per instrument (flute and violin), as well as on postural assessment. In addition to facilitating research that can support the prevention of musculoskeletal disorders in musicians, the KKPM Database Project can potentially help to improve musical performance as well. With the aid of technology, knowledge developed in other fields, such as kinesiology, can be transferred to music research; this work will hopefully lead to the implementation of scientifically supported training methods.
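To make the standardization concrete, here is a hypothetical sketch of what one measurement record in such a database could look like; this schema is our illustration, not the project's actual design:

```python
# Hypothetical schema sketch: each row ties one KKP measurement to a
# musician, instrument, body site, and standardized protocol.
import sqlite3

conn = sqlite3.connect("kkpm.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS measurements (
    id          INTEGER PRIMARY KEY,
    musician_id INTEGER NOT NULL,  -- anonymized participant code
    instrument  TEXT NOT NULL,     -- e.g. 'flute', 'violin'
    modality    TEXT NOT NULL,     -- e.g. 'sEMG', 'posture'
    site        TEXT,              -- muscle or body landmark
    protocol    TEXT NOT NULL,     -- standardized parameter set used
    unit        TEXT NOT NULL,     -- e.g. 'mV'
    sample_rate REAL,              -- Hz, if a time series
    data_path   TEXT NOT NULL      -- pointer to the raw recording
)""")
conn.commit()
```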
12:15 - Matthew Boerum, Jack Kelly, Diego Quiroz: How do virtual environments affect localization and timing accuracy when panning audio sources three-dimensionally?
This research investigates the dependencies, variables, and possible inaccuracies in sound source localization when audio is presented through virtual reality (VR) using 3D control devices. Using the Oculus Rift, a high-quality virtual reality head-mounted display (HMD), we will evaluate how virtual environments affect a simple audio task: the accuracy and duration of 3D panning. Our experimental method requires the use of the CIRMMT A816 (semi-anechoic) room and 17 Genelec 8030 loudspeakers to create a 3D audio playback system. With an accurate 3D architectural model of A816 and photorealistic models of the Genelec 8030s placed to match the real A816 setup, we can present a highly accurate visual representation of the real-world environment in virtual reality through the Oculus Rift HMD. Using both hardware rotary controls and the Leap Motion hand-gesture controller, we can give listeners the ability to pan audio spatially in three dimensions within the 3D audio playback system. Localization accuracy will be tracked via the 3D panning software used to pan the audio sources within the 3D audio playback system. The test interface will also track the duration of each localization-panning task from start to finish.
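For illustration, localization accuracy in such a task is naturally expressed as the great-circle angle between the target loudspeaker direction and the direction the listener panned to; the following Python sketch (hypothetical analysis code, not the authors') computes it:

```python
# Angular error between a target direction and a panned response.
import numpy as np

def angular_error_deg(target_dir, response_dir):
    """Angle in degrees between two 3D direction vectors."""
    t = np.asarray(target_dir, float) / np.linalg.norm(target_dir)
    r = np.asarray(response_dir, float) / np.linalg.norm(response_dir)
    return float(np.degrees(np.arccos(np.clip(np.dot(t, r), -1.0, 1.0))))

# Example: target straight ahead, response panned 10 degrees to the side.
resp = [np.cos(np.radians(10)), np.sin(np.radians(10)), 0.0]
print(angular_error_deg([1, 0, 0], resp))  # -> ~10.0
```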
12:30-1:45: LUNCH & POSTER AND AUDIO DEMOS / 12H30 - 13H45 : DÎNER & AFFICHES ET DÉMONSTRATIONS AUDIO
As part of a mobile remote implicit communication system, we use vibrotactile patterns to convey background information between two people on an ongoing basis. Unlike systems that use memorized tactons (haptic icons), we focus on methods for translating parameters of a user's state (e.g., activity level, distance, physiological state) into dynamically created patterns that summarize the state over a brief time interval. We describe the vibration pattern used in our current user study to summarize a partner's activity, as well as preliminary findings. Further, we propose additional possibilities for enriching the information content.
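As an illustration of dynamically created patterns, here is a hypothetical Python sketch mapping a summarized activity level to a pulse train; the actual patterns used in the study are not specified here:

```python
# Hypothetical sketch: summarize a partner's recent activity level
# (0.0-1.0) as a vibration pattern, expressed as a list of
# (amplitude, duration_ms) segments for an actuator.
def activity_to_pattern(activity, n_pulses=4):
    """Higher activity -> stronger, faster pulses; lower -> slow and soft."""
    amplitude = 0.3 + 0.7 * activity          # never fully silent
    on_ms = 80
    gap_ms = int(400 * (1.0 - activity)) + 50
    pattern = []
    for _ in range(n_pulses):
        pattern.append((amplitude, on_ms))    # vibrate
        pattern.append((0.0, gap_ms))         # pause
    return pattern

print(activity_to_pattern(0.8))
```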
Cynthia Tarlao: Mind the moving music: auditory motion in experimental music
Arun Duraisamy: Understanding the acoustic behaviour of natural fibre composites and the effects of temperature and humidity
1:45-3:15: ORAL PRESENTATIONS / 13H45 - 15H15 : PRÉSENTATIONS ORALES
Hospitals are overwhelmingly filled with sounds produced by alarms and patient monitoring devices. Consequently, these sounds create a fatiguing and stressful environment for both patients and clinicians. In an attempt to attenuate this auditory sensory overload, we propose the use of a multimodal alarm system in operating rooms and intensive care units. Specifically, the system would exploit multisensory integration of the haptic and auditory channels. We hypothesize that by combining these two channels in a synchronized fashion, subjects' auditory perception threshold will be lowered, thus allowing for an overall reduction of volume in hospitals. The results obtained from pilot testing support this hypothesis. We conclude that further investigation of this method can prove useful in reducing sound exposure levels in hospitals, as well as in personalizing the perception and type of alarm for clinicians.
3:30-4:30: KEYNOTE ADDRESS / 15H30-16H30 CONFÉRENCE INVITÉE
15:30 - Tim Crawford: Learning to live with error? Some reflections on trying to use computers to do musicology
In the mid-1980s it looked as though the new "Personal Computer" could offer exciting new ways to do musicology. With experience, the realisation dawned that there was no part of that process that did not contain unsolved problems, and it didn't take long to come to an understanding that only by collaboration with others could anything at all be achieved. Getting interested people from different disciplines together in the same room to talk about music and how to tackle it was easier than I expected, though it's still unclear whether we were in fact always talking about the same thing.
Arising in some degree from such conversations, with the addition of a certain amount of good fortune in grant funding, and coinciding with the nascent revolution in digital music distribution at the turn of the new century, the ISMIR conferences essentially catalysed a new hybrid discipline, Music Information Retrieval. But all the time, the question continues to nag: "What can all this do for a musicologist?"
The essential problem seems to be that music - at any rate, the stuff studied in musicology - resists the rigid categorisation and analysis axiomatic to MIR. While this is hardly a new observation, I would like to think that accepting this fact, rather than regarding it as a "failure", is one of the main keys to success in getting computers to work their magic.