The lecture will take place in TANNA SCHULICH HALL, followed by a wine and cheese reception in room A833 (8th floor of the New Music Building).
ABSTRACT:
Music technologies will open up new ways of enjoying music in the future, both in music creation and in music appreciation. In this lecture, I will introduce the frontiers of music technologies through practical examples that demonstrate how end users can benefit from singing synthesis technologies, music understanding technologies, and music interfaces.
From the viewpoint of music creation, I will demonstrate a singing synthesis system, VocaListener, that can synthesize natural singing voices by analyzing and imitating human singing, and a robot singer system, VocaWatcher, that can generate realistic facial motions for a humanoid robot singer. I will also introduce the world's first culture in which people actively enjoy songs whose main vocals are synthesized singing voices: it has emerged in Japan since 2007, when singing synthesis software such as Hatsune Miku, based on VOCALOID, began attracting attention. Singing synthesis thus breaks down the long-held view that a non-human singing voice is not worth listening to. In fact, live concerts featuring Hatsune Miku, with vocals produced by singing synthesis, have been successful not only in several cities in Japan but also in Taipei, Los Angeles, New York, Singapore, Hong Kong, Jakarta, Shanghai, and elsewhere. This is a feat that could not have been imagined before.
As for music appreciation, I will demonstrate a web service for active music listening, "Songle", that has analyzed more than 880,000 songs on music- or video-sharing services and facilitates a deeper understanding of music. Songle enables users to control computer-graphics animation and physical devices, such as lighting systems and robot dancers, in synchronization with music available on the web. I will then demonstrate a web service for large-scale music browsing, "Songrium", that allows users to explore music while seeing and using various relations among more than 700,000 music video clips on video-sharing services. Songrium has a three-dimensional visualization function that shows music-synchronized animation, which has already been used as background video at a Hatsune Miku live concert.
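To give a concrete sense of how this kind of music-synchronized control can work, here is a minimal illustrative sketch in Python. It is not Songle's actual API: it simply assumes that a music-analysis service provides beat timestamps for a track and fires a callback at each beat during playback. The timestamps, function names, and the printed action are all hypothetical placeholders.

import time

# Hypothetical beat timestamps in seconds, as a music-analysis service
# such as Songle might provide for a song (real services expose richer
# data, e.g. chords and chorus sections, through their own APIs).
beat_times = [0.52, 1.04, 1.56, 2.08, 2.60, 3.12]

def on_beat(index):
    # Placeholder action: flash a light, step a robot dancer, or
    # advance a computer-graphics animation frame.
    print(f"beat {index}: trigger light / animation frame")

def run_synchronized(beats, callback):
    # Fire callback(i) at each beat time, measured from playback start.
    start = time.monotonic()
    for i, t in enumerate(beats):
        time.sleep(max(0.0, t - (time.monotonic() - start)))
        callback(i)

run_synchronized(beat_times, on_beat)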
ABOUT MASATAKA GOTO:
Masataka Goto received the Doctor of Engineering degree from Waseda University in 1998. He is currently a Prime Senior Researcher and the Leader of the Media Interaction Group at the National Institute of Advanced Industrial Science and Technology (AIST), Japan. In 1992 he was one of the first to start working on automatic music understanding, and he has since been at the forefront of research in music technologies and music interfaces based on them. Over the past 23 years he has published more than 200 papers in refereed journals and at international conferences and has received 41 awards, including several best paper and best presentation awards, the Tenth Japan Academy Medal, the Tenth JSPS PRIZE, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology (Young Scientists' Prize). He has served on the committees of over 90 scientific societies and conferences, including as General Chair of the 10th and 15th International Society for Music Information Retrieval Conferences (ISMIR 2009 and 2014). In 2011 he began, as Research Director, a five-year research project on music technologies (the OngaCREST Project), funded by the Japan Science and Technology Agency (CREST, JST).
VIDEO ARCHIVE - MASATAKA GOTO
APA video citation:
Goto, M. (2016, January 4). Frontiers of music technologies: Singing synthesis and active music listening [Video file]. CIRMMT Distinguished Lectures in the Science and Technology of Music. Retrieved from https://www.youtube.com/watch?v=qASbgvulDSc&feature=youtu.be