Description
Each presenter will give a 15-minute talk on their research, followed by a Q&A. A catered reception will follow.
Schedule
4:00–4:30pm: Gabriel Vigliensoni – Weaving memory matter: Data- and interaction-driven approaches for sustained musical practice
4:30–5:00pm: Andrea Gozzi – Audio augmented reality through bone conduction: Challenges and perspectives
5:00–6:00pm: Catered reception
Presenters
Gabriel Vigliensoni
Dr. Gabriel Vigliensoni is an Assistant Professor in Creative Artificial Intelligence in the Department of Design and Computation Arts at Concordia University. He is a regular member of the Applied AI Institute and the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). His research and creative practice explore the creative affordances of the machine learning paradigm in the context of sound- and music-making. Vigliensoni obtained his PhD in Music from McGill University in 2017, and later carried out postdoctoral research projects at Goldsmiths, University of London, and the Creative Computing Institute of the University of the Arts London. Vigliensoni's creative work and research have been showcased internationally at venues and conferences such as CCA (QC), CMMAS (MX), MUTEK (QC, CL), IKLECTIK (UK), IRCAM (FR), ISEA (CA), NMF (UK), NIME (US), ICMC (US), and ISMIR (US, FR, BR, CN, NL).
Weaving memory matter: Data- and interaction-driven approaches for sustained musical practice
In this talk, I will share my ongoing research on the control and steerability of neural audio synthesis models through data- and interaction-driven approaches. I will critique the limitations of large “foundation” models for expressing musical intention and agency, and emphasize how training on small datasets affords performers greater creative agency over these machine learning models. By adopting interactive machine learning techniques that map lower- to higher-dimensional feature spaces, we can enhance expressivity and introduce the long-term coherence often lost in state-of-the-art models. These alternative methods treat training data as a vehicle for communicating intention rather than as a representation of “ground truth,” and employ interactive mechanisms to better capture human intention and agency. Finally, I will illustrate these concepts with examples from my own creative practice with sound and music, demonstrating how these approaches can enable richer, sustained musical engagement.
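For readers unfamiliar with interaction-driven mapping, the sketch below illustrates the general idea in its simplest form: a small regression model, trained on a handful of demonstration pairs, maps a low-dimensional control space to a higher-dimensional latent space of a synthesis model. This is a minimal, hypothetical illustration, not the presenter's actual system; all dimensions, data, and names here are invented for the example.

```python
# Illustrative sketch only: mapping a 2-D control space to a 16-D
# synthesis latent space with a small regression model, in the spirit
# of interactive machine learning (all data and sizes are made up).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# A performer provides a handful of demonstration pairs:
# (controller position) -> (latent vector that sounds right there).
control_points = rng.uniform(0.0, 1.0, size=(12, 2))   # 12 demos, 2-D control
latent_targets = rng.normal(0.0, 1.0, size=(12, 16))   # matching 16-D latents

# Small dataset, small model: it retrains in moments, so the performer
# can iterate on the mapping interactively rather than curate big corpora.
mapper = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
mapper.fit(control_points, latent_targets)

# At performance time, a live 2-D gesture becomes a latent vector,
# which would then drive the decoder of a neural audio synthesis model.
gesture = np.array([[0.4, 0.7]])
z = mapper.predict(gesture)   # shape (1, 16)
print(z.shape)
```

The point of the small-data framing is visible in the sketch: the twelve demonstration pairs are not “ground truth” but a statement of intent, and the cheap retraining loop is what makes the mapping steerable in performance.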
Andrea Gozzi
Andrea Gozzi is an Assistant Professor in Electroacoustics at the Université de Sherbrooke. He is a member of the Tempo Reale research centre (Florence, Italy) and co-founder of Mezzo Forte (Meudon, France), a company specializing in audio augmented reality. He is an associate researcher at the Laboratoire Formes Ondes (LFO) of the Université de Montréal, and a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), the International Laboratory for Brain, Music and Sound Research (BRAMS), the Centre de recherche interuniversitaire sur les humanités numériques (CRIHN), and the Observatoire interdisciplinaire de création et de recherche en musique (OICRM) in Montréal.
As a professor, he has taught sound design and the history of rock music (popular music) at the University of Florence, as well as sound design at the LABA Academy in Florence. As a composer and musician, he has collaborated with Italian and international artists both on stage and in the studio, performing at events such as Live 8 (2005) in Rome and touring in France, Germany, England, and Canada. As a sound designer, he has developed projects for theatre, museums, and multimedia books. He is the author of numerous essays and books on the history of rock and musical biographies, published in Italy and Canada.
Audio augmented reality through bone conduction: Challenges and perspectives
Bone Conduction Headphones (BCHs) enable the seamless blending of real and mediated sound layers, enhancing the realism of virtual sonic objects in Audio Augmented Reality (AAR) experiences. A small device attached to the temporal region of the skull transmits the audio signal directly to the cochlea, bypassing the external and middle ear. Because the auditory canal remains unobstructed, the headphones allow real and virtual sound layers to mix, reducing our cognitive ability to distinguish between the two. My interdisciplinary research-creation, merging acoustics, sound design, musical composition, and perception studies, explores the latest experiences in this field. I will present use cases for augmented concerts and museum visits with immersive audio, as well as studies and tests conducted at CIRMMT (with Agile Seed funding) and at the International Laboratory for Brain, Music and Sound Research (BRAMS) in Montréal.
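To give a rough sense of what an AAR renderer has to do when the ear canal is left open, here is a hedged toy sketch, not Mezzo Forte's or the presenter's implementation: the real acoustic scene needs no processing at all, so the code only spatializes the virtual layer, here reduced to distance attenuation and constant-power panning from the listener's pose. Real systems use HRTFs and head tracking; every function and parameter name below is invented for illustration.

```python
# Hedged sketch: spatializing one virtual sound object for an open-ear
# (bone-conduction) AAR scene. Only the virtual layer is rendered; the
# real scene reaches the unoccluded ear canal acoustically.
import numpy as np

def render_virtual_source(mono, src_pos, listener_pos, listener_yaw):
    """mono: 1-D signal; positions are (x, y) in metres; yaw in radians."""
    dx = src_pos[0] - listener_pos[0]
    dy = src_pos[1] - listener_pos[1]
    distance = max(np.hypot(dx, dy), 1.0)   # clamp to avoid gain blow-up
    gain = 1.0 / distance                   # simple inverse-distance law

    # Azimuth of the source relative to the listener's facing direction.
    azimuth = np.arctan2(dy, dx) - listener_yaw
    # Map azimuth to constant-power stereo gains (a crude stand-in for
    # HRTF rendering; sign convention: positive pan -> right channel).
    pan = np.clip(np.sin(azimuth), -1.0, 1.0)
    theta = (pan + 1.0) * np.pi / 4         # 0 .. pi/2
    left, right = np.cos(theta), np.sin(theta)

    return np.stack([mono * gain * left, mono * gain * right])

# Example: a 440 Hz virtual source 2 m ahead and slightly off-axis.
sr = 48000
signal = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
out = render_virtual_source(signal, (2.0, 0.5), (0.0, 0.0), 0.0)
```

Even in this reduced form, the design point of BCH-based AAR is visible: the renderer never touches the real scene, so the plausibility of the blend rests entirely on how convincingly the virtual layer's spatial cues match the room the listener is standing in.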