Abstract: For the past 60 years, machines have been involved in all aspects of music: playing, recording, processing, editing, mixing, composing, analyzing, and synthesizing. However, in software terms, music is nothing but a sequence of numbers and functions describing waveforms (what to play) and scores (when to play). Software has no notion of what music sounds like, or of how it is perceived and received by listeners in its context, time, and space. The Echo Nest is a music intelligence company that provides deep, granular musical information at scale, on both content and context. By listening to every song (tempo, rhythm, timbre, harmony) and reading every piece of music text online (blog posts, news, reviews), the "musical brain" constantly learns to reverse engineer music. Its knowledge of 35 million unique songs and 2 million artists was generated automatically and dynamically over the past 6 years. Through many examples and live demos, we demonstrate the power of big-data-driven software in the context of personalized listening experiences and music creation.
Bio: Tristan earned a doctorate in Media Arts and Sciences from MIT in 2005. His academic work combined machine listening and machine learning technologies in teaching computers how to hear and make music. He first earned an MS in Electrical Engineering and Computer Science from the University of Rennes in France, later working on music signal parameter extraction at the Center for New Music and Audio Technologies at U.C. Berkeley. He has worked with leading research and development labs in the U.S. and France as a software and hardware engineer in areas of machine listening and audio analysis. He is a co-founder and the Chief Science Officer of the music intelligence company The Echo Nest, which powers smarter music applications for a wide range of customers including MTV, Spotify, the BBC, MOG, eMusic, Clear Channel, Rdio, EMI, and a community of more than 12,000 independent application developers.
June 2, 2012 16 min
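The content side of this description (tempo, rhythm, timbre, harmony) can be illustrated with a few lines of analysis code. The sketch below is not The Echo Nest's pipeline; it is a minimal example using the open-source librosa library, with a hypothetical input file "song.mp3".

```python
"""Minimal content analysis sketch: rhythm, timbre, and harmony summaries.
Illustrative only -- not The Echo Nest's actual system; "song.mp3" is a
hypothetical input file."""
import numpy as np
import librosa

# Decode the audio to a mono waveform at librosa's default sample rate.
y, sr = librosa.load("song.mp3", sr=22050)

# Rhythm: global tempo estimate and beat positions.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Timbre: mean and spread of MFCCs over the whole track.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
timbre = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Harmony: average chroma vector (energy per pitch class).
chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)

print(f"tempo: {float(np.atleast_1d(tempo)[0]):.1f} BPM, "
      f"{len(beat_times)} beats, "
      f"timbre vector shape: {timbre.shape}, chroma shape: {chroma.shape}")
```

Summaries like these, computed once per track and stored alongside text-derived context, are the kind of features a large-scale catalogue can be indexed on.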
Abstract: One of the most forward-looking aspects of the instrument/machine relationship lies in the development of tools for real-time acoustic analysis of instrumental and vocal sounds, known as audio descriptors. The number of these…
June 2, 2012 56 min
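As an illustration of what such audio descriptors look like in code, the following sketch computes two common ones, RMS energy and spectral centroid, block by block over a synthetic signal. It is a toy stand-in for a real-time analyser, not the system discussed in the talk.

```python
"""Frame-wise audio descriptors: per-block RMS energy and spectral centroid,
computed over a synthetic tone standing in for a live instrument input."""
import numpy as np

SR = 44100          # sample rate (Hz)
BLOCK = 1024        # analysis block size, as in a real-time audio callback

def descriptors(block, sr=SR):
    """Return (rms, spectral_centroid_hz) for one block of samples."""
    rms = np.sqrt(np.mean(block ** 2))
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return rms, centroid

# Stand-in input: one second of a 440 Hz tone with a touch of noise.
t = np.arange(SR) / SR
signal = 0.5 * np.sin(2 * np.pi * 440.0 * t) + 0.01 * np.random.randn(SR)

# Process block by block, as a real-time analyser would in its audio callback.
for start in range(0, len(signal) - BLOCK, BLOCK * 10):
    block = signal[start:start + BLOCK]
    rms, centroid = descriptors(block)
    print(f"t={start / SR:5.2f}s  rms={rms:.3f}  centroid={centroid:7.1f} Hz")
```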
Abstract: This talk introduces VirtualBand, a new music interaction system based on style modeling in a MIR-oriented perspective. VirtualBand aims at combining musical realism and quality with real-time interaction by capturing esse…
June 2, 2012 54 min
June 2, 2012 44 min
Abstract: Interactive Improvisation Systems involve at least three cooperating and concurrent expert agents: machine listening, machine learning, and model-based generation. Machine listening may occur during the initial learning stage (off-lin…
June 2, 2012 59 min
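To make the three-agent architecture concrete, here is a toy sketch, not any of the systems discussed in the talk: a listening agent feeds a learning agent through a queue, the learner builds a first-order Markov model, and a generator performs a random walk over what was learned. For simplicity the generation step runs after the off-line learning stage rather than concurrently with it.

```python
"""Toy three-agent improvisation sketch: machine listening, machine learning,
and model-based generation cooperating through a queue (illustrative only)."""
import queue
import random
import threading
from collections import Counter, defaultdict

events = queue.Queue()               # listening -> learning channel
transitions = defaultdict(Counter)   # learned first-order Markov model
lock = threading.Lock()

def listening_agent(phrase, repeats=20):
    """Simulate machine listening: emit a stream of 'heard' pitches."""
    for _ in range(repeats):
        for pitch in phrase:
            # occasional variation, as a stand-in for a live performer
            events.put(pitch if random.random() > 0.1 else random.choice(phrase))
    events.put(None)  # sentinel: the input stream has ended

def learning_agent():
    """Consume heard pitches and update a Markov transition table."""
    prev = None
    while True:
        pitch = events.get()
        if pitch is None:
            break
        if prev is not None:
            with lock:
                transitions[prev][pitch] += 1
        prev = pitch

def generate(start, length=16):
    """Model-based generation: random walk over the learned transitions."""
    out, current = [start], start
    for _ in range(length - 1):
        with lock:
            choices = transitions.get(current)
        if not choices:
            break
        pitches, weights = zip(*choices.items())
        current = random.choices(pitches, weights=weights)[0]
        out.append(current)
    return out

if __name__ == "__main__":
    phrase = ["C4", "E4", "G4", "E4", "D4", "F4", "A4", "F4"]
    listener = threading.Thread(target=listening_agent, args=(phrase,))
    learner = threading.Thread(target=learning_agent)
    listener.start(); learner.start()
    listener.join(); learner.join()   # off-line learning stage
    print("generated response:", " ".join(generate("C4")))
```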
Abstract: Over the past few years, music information research has developed powerful tools for creating a new generation of applications that redefine the boundaries of music listening and music making. The recent availability of affordable mo…
June 2, 2012 28 min
Abstract: The wealth of tools developed in music information retrieval (MIR) for the description, indexation, and retrieval of music and sound can be easily (ab)used for the creation of new musical material and sound design. Based on automa…
June 2, 2012 44 min
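One way to picture this (ab)use of retrieval tools for creation: segment a sound into short grains, index them by a descriptor, and re-assemble grains by querying against a target curve. The sketch below does exactly that with NumPy only; it is an illustration of the idea, not the method presented in the talk.

```python
"""Toy descriptor-driven re-sequencing: describe grains of a sound by
spectral centroid, then 'retrieve' them along a rising target curve."""
import numpy as np

SR, GRAIN = 22050, 1024

def centroid(grain, sr=SR):
    """Spectral centroid (Hz) of one grain."""
    spec = np.abs(np.fft.rfft(grain * np.hanning(len(grain))))
    freqs = np.fft.rfftfreq(len(grain), d=1.0 / sr)
    return np.sum(freqs * spec) / (np.sum(spec) + 1e-12)

# Source material: a noisy chirp standing in for any recorded sound.
t = np.arange(SR * 2) / SR
source = np.sin(2 * np.pi * (200 + 600 * t) * t) + 0.05 * np.random.randn(len(t))

# "Description and indexation": cut into grains and describe each one.
grains = [source[i:i + GRAIN] for i in range(0, len(source) - GRAIN, GRAIN)]
index = np.array([centroid(g) for g in grains])

# "Retrieval as creation": for each step of a target curve, pick the grain
# whose descriptor is closest, and concatenate the selections.
targets = np.linspace(index.min(), index.max(), 40)
picked = [grains[int(np.argmin(np.abs(index - x)))] for x in targets]
result = np.concatenate(picked)
print(f"{len(grains)} grains indexed; resynthesised {len(result) / SR:.2f}s of audio")
```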
Abstract: Although MIR arguably did not start as a research discipline for promoting creativity and music performance, this trend has begun to gain importance in recent years. The possibilities of MIR for supporting musical creation and mus…
June 2, 2012 57 min