The first generation to enter the database in its own right is therefore the one formed by John Cage, Olivier Messiaen, and Elliott Carter.

### Contents

The data have been progressively updated since July 2007, replacing those of the earlier version of the database, developed between 1996 and 2001 by Marc Texier. Information may therefore be incomplete for composers not yet processed: in that case the notice "! Information prior to 2002" appears at the top of the page. For all other documents, the date of the last update is shown at the top of the page.

### Updates and new entries

Updates are made composer by composer. For a given composer, the following documents are systematically revised or created:

- the biography;
- the complete catalogue of works (including, where possible, lost, withdrawn, or posthumous works);
- a list of bibliographic, discographic, and online resources;
- any attached documents ("Parcours de l'œuvre" overviews, interviews, analyses, program notes, etc.).

Priorities for updates and new composer entries are set using a methodology based on observation of European musical life (a sketch of the scoring scheme follows this list):

- Before each season, we survey the upcoming programs of the main European festivals, institutions, and ensembles active in contemporary music creation. This survey proceeds in concentric circles, starting from IRCAM's own activity (year n-2), then that of its privileged partners (year n-1), and finally the major European institutions and festivals for new music (year n);
- each composer is credited with points according to the importance and intensity of the musical activity concerning him or her, and this ranking defines the priorities for each quarter;
- if a composer has not earned enough points to appear among the priorities, the points are carried over to the following quarter, so the composer gradually moves up the priority list;
- once updated, the documents attached to a composer remain valid for three years, after which the process described above starts again.
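Read operationally, this scheme is a small accumulate-and-threshold loop. A minimal sketch in Python, with invented point weights and an invented quarterly threshold (the text specifies neither):

```python
from collections import defaultdict

# Assumed weights for the three concentric circles and an assumed quarterly
# threshold; the article gives neither, so these values are illustrative only.
ACTIVITY_POINTS = {"ircam": 3, "partner": 2, "european": 1}
THRESHOLD = 4

scores = defaultdict(int)   # points carry over from one quarter to the next

def plan_quarter(observations):
    """observations: (composer, circle) pairs gathered before the season."""
    for composer, circle in observations:
        scores[composer] += ACTIVITY_POINTS[circle]
    selected = sorted(c for c, pts in scores.items() if pts >= THRESHOLD)
    for c in selected:      # selected composers are updated; their counter resets
        del scores[c]
    return selected

print(plan_quarter([("A", "ircam"), ("A", "partner"), ("B", "european")]))
# -> ['A']   ('B' keeps 1 point and moves up in later quarters)
print(plan_quarter([("B", "partner"), ("B", "european")]))
# -> ['B']   (1 carried point + 3 new points = 4)
```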
### Errors or omissions

If the update has already been made (date later than June 2007), we invite musicologists and composers (or their publishers) to report any significant error or omission; it will be corrected, as far as possible, during the following quarter. Likewise, we invite them to tell us about their new works, providing all the information needed to create a new work entry.

If the update has not yet been made (notice: "update forthcoming"), composers may report significant errors or omissions. These will be taken into account when the forthcoming update is made. A composer may also ask for his or her biography to be withdrawn pending the update.

To do so, write to the publication administrator: brahms-contenu[at]ircam[dot]fr

## Legal notice

Reproduction of the contents of this website, in whole or in part, is strictly forbidden without written permission from IRCAM. The texts, images, logos, and source code are the property of IRCAM or of rights holders with whom IRCAM has negotiated reproduction rights solely for use on the Brahms site. Offenders are liable to prosecution.

## IRCAM Répertoire Project

The IRCAM Répertoire Project is a collection of online musical analyses of about 70 works premiered at IRCAM and considered representative of the institute's culture, both artistically and technologically.

The project began in 2006-2008 with the creation of authoring tools implemented by the Research/Creation Interfaces department in collaboration with the institute's research division. The first analyses went online at the end of 2010, and the collection is expected to grow by two or three new analyses per year.

The project pursues several objectives:

- make the works produced at IRCAM known to a wider audience,
- show the relationship between the musical idea and the technologies used,
- identify the new elements of musical vocabulary that emerge through these works,
- provide performers with supporting documentation.

Each analysis is structured in three parts:

1. a general description of the work,
2. an analysis of excerpts from the work, relating the musical idea to the electronic writing,
3. a list of specific resources (the type of musical problem addressed, the technologies used, works addressing the same kind of problem) and general resources (biographical, historical, technical).

The analyses are also cross-referenced with:

- Brahms: an online encyclopedic database of contemporary-music composers of all nationalities whose works were premiered after 1945. The database currently holds about 600 entries. For each composer there is a biographical section with its information sources, and a second section situating the aesthetic orientation, main phases, and historical context of the œuvre.
- Images d'une œuvre: a collection of filmed interviews with composers.
- Sidney: a database containing the technical elements (computer programs, sounds, etc.) needed to perform the works.
In the longer term, analyses of new works created at IRCAM will be added to the corpus given in the annex cited above.

## Workshop MIR (Music Information Research) and Creation

Date: 2012-06-02. Program booklet (PDF): https://storage.ressources.ircam.fr/ressources/media/5c479f74-03f3-4900-a2da-1bc9daa42b84.pdf

### Introduction - Geoffroy Peeters (video, 16 min)

### Audio descriptors: a major challenge for real-time composition - Philippe Manoury (video, 56 min)

Abstract: One of the most forward-looking aspects of the instrument/machine relationship lies in the development of means for analyzing instrumental and vocal sounds acoustically in real time, known as audio descriptors. The number of these descriptors keeps growing: centroid, spread, skewness, kurtosis, roll-off, fundamental frequency, noisiness, inharmonicity, odd-to-even energy ratio, deviation, loudness, roughness, … These terms designate attributes of sound that have only recently been identified; our knowledge of sound is becoming more refined. Extracting the constitutive parameters of a sound for use as compositional elements makes it possible to create musical coherence between acoustic and electronic sounds. However, only a small number of these descriptors are as yet truly suited to real musical demands. Improving them is an indispensable line of research if we are to further narrow the gap separating the instrumental world from that of electronic sounds.

Bio: Allergic to academic studies and a sworn autodidact, Philippe Manoury showed his first compositions to Gérard Condé, who introduced him to Max Deutsch, a former student of Arnold Schoenberg. He initially studied composition at the École normale de musique de Paris, where he also worked on harmony and counterpoint, before going on to study at the CNSMDP under Ivo Malec and Michel Philippot (composition) and Claude Ballif (analysis). It was the premiere of Cryptophonos, performed by the pianist Claude Helffer at the Metz Festival in 1974, that introduced the public to Manoury's works. In 1978, he moved to Brazil, where he gave classes and lectures on contemporary music at a number of universities (São Paulo, Brasília, Rio de Janeiro, Salvador). In 1981, Manoury returned to France as a guest researcher at IRCAM, and he has remained involved in the institute's activities ever since, as a composer and as a professor. At IRCAM, in collaboration with the mathematician Miller Puckette, Manoury carried out a range of research on real-time interaction between acoustic instruments and new computer-music technologies. From this work his series of interactive pieces for various instruments was born: Sonus ex machina, comprising Jupiter, Pluton, La Partition du Ciel et de l'Enfer, and Neptune. From 1983 to 1987, Philippe Manoury was head of education at the Ensemble intercontemporain. He was a professor of composition and electronic music at the CNSMD de Lyon from 1987 to 1997. After numerous residencies at various institutions in France and abroad, he decided in 2004 to divide his time between Europe and the United States, where he teaches composition at the University of California, San Diego.
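Several of the descriptors Manoury lists are low-order statistics of the short-time spectrum. As a rough illustration (a toy sketch, not IRCAM's implementation; the frame length and normalization choices are arbitrary), here is a Python/NumPy version of a few of them:

```python
import numpy as np

def spectral_descriptors(frame, sr):
    """A few of the descriptors named in the abstract, for one audio frame.

    Toy sketch: a real-time analyzer would add overlapping windows and
    perceptual models (e.g. for loudness and roughness), omitted here.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = spectrum / (spectrum.sum() + 1e-12)          # spectrum as a distribution

    centroid = np.sum(freqs * p)                     # spectral centroid (Hz)
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * p))
    skewness = np.sum((freqs - centroid) ** 3 * p) / (spread ** 3 + 1e-12)
    kurtosis = np.sum((freqs - centroid) ** 4 * p) / (spread ** 4 + 1e-12)
    rolloff = freqs[np.searchsorted(np.cumsum(p), 0.95)]  # 95%-energy frequency

    return {"centroid": centroid, "spread": spread,
            "skewness": skewness, "kurtosis": kurtosis, "rolloff": rolloff}

# Two harmonics of A440: the centroid lands between 440 and 880 Hz.
sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
print(spectral_descriptors(frame, sr))
```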
### VirtualBand, a MIR approach to interactive improvisation - François Pachet (video, 54 min)

Abstract: This talk introduces VirtualBand, a new music interaction system based on style modeling, in a MIR-oriented perspective. VirtualBand aims at combining musical realism and quality with real-time interaction, by capturing essential elements of the style of a musician and reusing them during the user's improvisation, so that an interactive, real-time musical engagement takes place just as it does with a real band of responsive musicians. To enable this, we address style modeling from the new perspective of combinatorial statistical modeling. Markov chains provide a definition of style, though a rudimentary one, as the set of local patterns of a given fixed length. However, Markov-chain approaches suffer from a latent "control problem": control constraints are not compatible with Markov models, as they induce long-range dependencies that violate the Markov hypothesis of limited memory. To overcome this problem, we have reformulated Markov generation in the framework of constraint satisfaction, demonstrated that this approach solves the control problem, and opened the door to fully malleable representations of style. VirtualBand uses this technology to provide interactive jazz accompaniment. It proceeds in two phases: recording and playing. First, recordings of professional musicians are analyzed to extract musical metadata (such as harmony, energy, or rhythm) and build a style database. When the musician plays, VirtualBand explores the style database, for each virtual musician, to produce music that matches the player's own performance features (e.g., volume, density of notes, pitch). Thanks to this adaptive behavior, the playing experience is unique: every time the user plays with the system, the rhythm section adapts to the performance and generates a new accompaniment.

Bio: François Pachet received his Ph.D. and Habilitation degrees from Paris 6 University (UPMC). He is a civil engineer (École des Ponts et Chaussées) and was an assistant professor in artificial intelligence and computer science at Paris 6 University until 1997. He then set up the music research team at Sony Computer Science Laboratory Paris, where he developed the vision that metadata can greatly enhance the musical experience in all its dimensions, from listening to performance. His team conducts research in interactive music listening and performance and in musical metadata, and has developed several innovative technologies (constraint-based spatialization, intelligent music scheduling using metadata) and award-winning systems (MusicSpace, PathBuilder, the Continuator for interactive music improvisation, etc.). He is the author of over 80 scientific publications in the fields of musical metadata and interactive instruments. His current research focuses on creativity and content generation; he was recently awarded an ERC Advanced Grant to develop the concepts and technologies of "flow machines": a new generation of content-generation tools that help users find and develop their own "style".
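The abstract's definition of style as "the set of local patterns of a given fixed length" can be made concrete with a toy order-1 Markov generator over symbolic events. This sketch deliberately omits what makes VirtualBand distinctive (audio metadata extraction and the constraint-satisfaction layer) and only illustrates the underlying Markov idea; the chord data are invented:

```python
import random
from collections import defaultdict

def learn_markov(sequence):
    """Order-1 Markov model of 'style': each symbol's observed successors."""
    successors = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        successors[a].append(b)
    return successors

def generate(successors, start, length):
    """Random walk through the learned local patterns."""
    out = [start]
    for _ in range(length - 1):
        choices = successors.get(out[-1]) or list(successors)  # restart on dead end
        out.append(random.choice(choices))
    return out

# Invented 'style database': a chord sequence recorded from a fictitious musician.
recording = ["Dm7", "G7", "Cmaj7", "Cmaj7", "Dm7", "G7", "Em7", "A7", "Dm7"]
model = learn_markov(recording)
print(generate(model, "Dm7", 8))   # e.g. ['Dm7', 'G7', 'Em7', 'A7', 'Dm7', ...]
```

The "control problem" appears as soon as one constrains such a walk, for instance by requiring it to end on Cmaj7: an order-1 model has no mechanism to steer toward a distant target, which is exactly what the constraint-satisfaction reformulation addresses.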
### Round table (video, 44 min)

### Information retrieval and deployment in interactive improvisation systems - Gérard Assayag (video, 59 min)

Abstract: Interactive improvisation systems involve at least three cooperating, concurrent expert agents: machine listening, machine learning, and model-based generation. Machine listening may occur during the initial learning stage (offline, or in real time in live situations) and during the generation stage as well, in order to align the computer's production with the current live input. Machine learning can be based on any statistical model capturing a significant stream of signal or symbolic features that can be exploited in the generation stage. In particular, the OMax interactive computational improvisation environment will be presented.

Bio: Gérard Assayag is head of the Music Representation research group at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, and head of the STMS (Sciences and Technologies of Music and Sound) IRCAM/CNRS lab. Born in 1960, he studied computer science, music, and linguistics. In 1980, while still a student, he won research awards in "Art and the Computer", a national software contest launched that year by the French Ministry of Research, and in the "Concours Micro", a contest in computing in the arts using early microcomputers. In the mid-eighties, he wrote the first IRCAM environment for score-oriented computer-assisted composition. In the mid-nineties he created, with Carlos Agon, the OpenMusic environment, currently the standard for computational composition and musicology. The concept behind OpenMusic is to provide a visual counterpart of major programming paradigms (functional, object-oriented, and logic programming) along with an extensive set of musical classes and methods, plus an original metaphor for representing musical time in its logical as well as chronological aspects. More recently, Gérard Assayag created with colleagues the OMax computational improvisation system, based on machine listening and machine learning, which has become a widely recognized reference in the field. His research interests center on music representation issues and include computer-language paradigms, machine learning, constraint and visual programming, computational musicology, music modeling, and computer-assisted composition. His research results are regularly published in proceedings, books, and journals.
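In published descriptions, OMax learns its model as a factor oracle: an automaton built incrementally in linear time that indexes the repeated patterns of the input stream. A compact sketch of oracle construction (the standard algorithm of Allauzen, Crochemore, and Raffinot) plus a naive improvising walk, assuming an already-segmented symbol stream:

```python
import random

def build_oracle(seq):
    """Factor oracle: forward transitions and suffix links."""
    n = len(seq)
    trans = [{} for _ in range(n + 1)]   # trans[i][symbol] -> next state
    sfx = [-1] * (n + 1)                 # suffix links; state 0 has none
    for i in range(1, n + 1):
        sigma = seq[i - 1]
        trans[i - 1][sigma] = i
        k = sfx[i - 1]
        while k > -1 and sigma not in trans[k]:
            trans[k][sigma] = i
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][sigma]
    return trans, sfx

def improvise(seq, sfx, length, p_jump=0.3):
    """Mostly replay the learned material linearly; occasionally jump back
    along a suffix link to recombine repeated contexts (a crude stand-in
    for OMax's navigation heuristics)."""
    state, out = 0, []
    for _ in range(length):
        if state < len(seq) and random.random() > p_jump:
            state += 1                       # follow the original continuation
        else:
            state = max(sfx[state], 0) + 1   # recombine via a repeated context
        out.append(seq[state - 1])
    return out

notes = list("abaababacabab")
_, sfx = build_oracle(notes)
print("".join(improvise(notes, sfx, 20)))
```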
### Gestural Re-Embodiment of Digitized Sound and Music - Norbert Schnell (video, 28 min)

Abstract: Over the past years, music information research has produced powerful tools for creating a new generation of applications that redefine the boundaries of music listening and music making. The recent availability of affordable motion-capture technology has allowed not only for creating novel musical instruments, but also for integrating the study of bodily motion and gesture into the mainstream of music information research. We will present a variety of playful real-time interactive applications based on analysis techniques and models combining digitized sounds, movements, and symbolic representations.

Bio: Norbert Schnell is a researcher and developer in the Real-Time Musical Interactions team at IRCAM, focusing on real-time digital audio processing techniques for interactive music applications. He studied telecommunications and music in Graz, Austria, and worked as a studio assistant at the IEM. At IRCAM he has initiated and participated in numerous international research and development projects as well as artistic works in the fields of interactive audio-visual installation, music pedagogy, and sound simulation. He chaired the 6th International Conference on New Interfaces for Musical Expression (NIME) in 2006 and held the DAAD Edgard Varèse Guest Professorship for Electronic Music at the Technische Universität Berlin in 2007. He is currently completing a PhD on the animation of digitized sounds and their re-embodiment through bodily movements and gestures.

### Playing with Music - Tristan Jehan (video, 59 min)

Abstract: For the past 60 years, machines have been involved in all aspects of music: playing, recording, processing, editing, mixing, composing, analyzing, and synthesizing. However, in software terms, music is nothing but a sequence of numbers and functions describing waveforms (what to play) and scores (when to play). Software has no notion of what music sounds like, or of how it is perceived and received by listeners in its context, time, and space. The Echo Nest is a music intelligence company that provides a deep and granular level of musical information at scale, on both content and context. By listening to every song (tempo, rhythm, timbre, harmony) and reading every piece of music text online (blog posts, news, reviews), this "musical brain" constantly learns to reverse-engineer music. Its knowledge of 35 million unique songs and 2 million artists was generated automatically and dynamically over the past 6 years. Through many examples and live demos, we demonstrate the power of big-data-driven software in the context of personalized listening experiences and music creation.

Bio: Tristan Jehan earned a doctorate in Media Arts and Sciences from MIT in 2005. His academic work combined machine listening and machine learning technologies, teaching computers how to hear and make music. He first earned an MS in electrical engineering and computer science from the University of Rennes in France, later working on music signal parameter extraction at the Center for New Music and Audio Technologies at U.C. Berkeley. He has worked with leading research and development labs in the U.S. and France as a software and hardware engineer in the areas of machine listening and audio analysis. He is a co-founder and the Chief Science Officer of the music intelligence company The Echo Nest, which powers smarter music applications for a wide range of customers including MTV, Spotify, the BBC, MOG, eMusic, Clear Channel, Rdio, EMI, and a community of more than 12,000 independent application developers.
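To make "listening to every song" for tempo concrete, here is one generic textbook approach (not The Echo Nest's proprietary analysis): compute an onset-strength envelope via spectral flux, then pick the autocorrelation peak within a plausible tempo range.

```python
import numpy as np

def estimate_tempo(x, sr, n_fft=2048, hop=441):
    """Crude tempo estimate: spectral-flux onset envelope + autocorrelation."""
    frames = np.array([x[i:i + n_fft] * np.hanning(n_fft)
                       for i in range(0, len(x) - n_fft, hop)])
    mags = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.maximum(np.diff(mags, axis=0), 0).sum(axis=1)  # onset strength
    flux = flux - flux.mean()
    ac = np.correlate(flux, flux, mode="full")[len(flux) - 1:]
    fps = sr / hop                                   # envelope frames per second
    lags = np.arange(len(ac))
    valid = (lags >= fps * 60 / 200) & (lags <= fps * 60 / 60)  # 60-200 BPM
    best_lag = lags[valid][np.argmax(ac[valid])]
    return 60.0 * fps / best_lag

# A 120-BPM click track should come out at (or very near) 120.
sr, bpm = 22050, 120
x = np.zeros(sr * 10)
x[::int(sr * 60 / bpm)] = 1.0
print(estimate_tempo(x, sr))
```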
### Interactive Exploration of Sound Corpora for Music Performance and Composition - Diemo Schwarz (video, 44 min)

Abstract: The wealth of tools developed in music information retrieval (MIR) for the description, indexation, and retrieval of music and sound can easily be (ab)used for the creation of new musical material and for sound design. Based on automated audio description and selection, corpus-based concatenative synthesis makes it possible to exploit large collections of sound to compose novel timbral and harmonic structures. The metaphor for musical creation here is an exploratory navigation through the sonic landscape of the corpus. We will present examples and applications of real-time interactive corpus-based concatenative synthesis for music composition, sound design, installations, and interactive performance.

Bio: Diemo Schwarz is a researcher-developer in the Real-Time Music Interaction (IMTR) team at IRCAM, working on sound analysis and interactive corpus-based concatenative synthesis in multiple research and musical projects at the intersection of computer science, music technology, and audio-visual creation. He holds a PhD in computer science applied to music from the University of Paris, awarded in 2004 for the development of a new method of concatenative musical sound synthesis by unit selection from a large database.
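The core operation of unit selection, picking the corpus unit whose descriptors best match a target, reduces to nearest-neighbour search in a normalized descriptor space. A minimal sketch (the two descriptors and their values are invented for illustration; a real system would use many more descriptors and also handle the concatenation of the selected grains):

```python
import numpy as np

# Toy corpus: each row describes one sound grain by (pitch in Hz, noisiness).
corpus = np.array([
    [220.0, 0.1],   # unit 0
    [440.0, 0.2],   # unit 1
    [880.0, 0.8],   # unit 2
    [330.0, 0.4],   # unit 3
])

# Standardize each descriptor so distances are comparable across
# heterogeneous units (Hz vs. a 0-1 ratio).
MEAN, STD = corpus.mean(axis=0), corpus.std(axis=0) + 1e-12
NORM = (corpus - MEAN) / STD

def select_unit(target):
    """Return the index of the corpus unit closest to the descriptor target."""
    t = (np.asarray(target) - MEAN) / STD
    return int(np.argmin(np.linalg.norm(NORM - t, axis=1)))

# Navigating the 'sonic landscape': a trajectory through descriptor space is
# rendered by concatenating the nearest unit at each point.
trajectory = [(200.0, 0.1), (350.0, 0.4), (900.0, 0.7)]
print([select_unit(t) for t in trajectory])   # -> [0, 3, 2]
```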
### MIR beyond retrieval: Music Performance, Multimodality and Education - Sergi Jordà (video, 57 min)

Abstract: Although MIR arguably did not start as a research discipline for promoting creativity and music performance, this trend has gained importance in recent years. The possibilities of MIR for supporting musical creation and musical education are indeed manifold. While the use of MIR techniques for real-time music creation may help both experts and complete novices to explore new creative musical universes lying somewhere between micro-level synthesis control and macro-level remixing, the application of MIR tools to music education and to children's musical performance seems another unexplored area with radically new possibilities. In this talk we will describe some recent applications of MIR techniques to music and multimodal creation developed at the MTG and at Reactable Systems, and we will explore the potential of MIR for children's music education and performance.

Bio: Sergi Jordà (1961) holds a B.S. in fundamental physics and a Ph.D. in computer science and digital communication. He is a researcher in the Music Technology Group of Universitat Pompeu Fabra in Barcelona, where he specializes in real-time interaction and tabletop interfaces, and an associate professor at the same university, where he teaches computer music, HCI, and interactive media arts. He has written many articles and books and given workshops and lectures throughout Europe, Asia, and America, always trying to bridge HCI, music performance, and interactive media arts. He has received several international awards, including the prestigious Ars Electronica Golden Nica in 2008. He is currently best known as one of the inventors of the Reactable, a tabletop musical instrument that achieved mass popularity in 2007 after being integrated into the Icelandic artist Björk's Volta world tour. He is also one of the founding partners of Reactable Systems, a spin-off company created in 2009 (www.reactable.com).