
IRCAM

Institut de Recherche et Coordination Acoustique/Musique
63 projects in total; the first 5 are shown below.
  • Funder: French National Research Agency (ANR) Project Code: ANR-13-CORD-0011
    Funder Contribution: 980,872 EUR

    The goal of the project is to develop a high-quality singing voice synthesis system that can be used by musicians from the general public. The system should not be limited to singing vowels but should be able to generate complete songs with arbitrary lyrics; no such system currently exists for the French language. The synthesizer will operate in two modes: "text to singing", in which the user enters the text and the notes of the score (durations and pitches) that the machine will then sing, and "virtual singer", in which the user operates a real-time control interface to play the synthesizer as a musical instrument. To build the synthesizer, the project combines advanced voice transformation techniques, including analysis and processing of the parameters of the vocal tract and the glottal source, with state-of-the-art expertise in unit selection for concatenative speech synthesis, rule-based singing synthesis systems, and innovative gesture control interfaces. A central objective is the ability to capture and reproduce a variety of singing styles (opera/classical, popular/song). Beyond the evaluation techniques commonly used for speech synthesis systems, usability will be assessed with particular attention to the creative possibilities the system opens (evaluations in the form of mini-concerts and compositional mini-projects using the developed control interface, virtual choirs and/or virtual soloists). The prototype singing synthesis system developed in the project will be used by the partners to offer products featuring singing voice synthesis as well as virtual-singer instruments; such functions are currently missing or exist only in very limited form. The project will thus give performing musicians, composers and the general public a new artistic singing synthesis tool and new means of creating interactive vocal experiences.
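
    To make the "text to singing" input concrete, here is a minimal sketch of what a score of syllables, pitches and durations might look like; all names (Note, pitch_to_hz, the toy score) are hypothetical illustrations for this summary, not the project's actual data format:

        from dataclasses import dataclass

        # Hypothetical input for a "text to singing" mode: each note carries a
        # syllable of the lyrics, a pitch, and a duration (assumed names, not
        # the project's actual format).
        @dataclass
        class Note:
            syllable: str      # text fragment sung on this note
            midi_pitch: int    # pitch as a MIDI note number (69 = A4 = 440 Hz)
            duration_s: float  # note length in seconds

        def pitch_to_hz(midi_pitch: int) -> float:
            """Convert a MIDI note number to its frequency in hertz."""
            return 440.0 * 2.0 ** ((midi_pitch - 69) / 12.0)

        # Opening of "Frère Jacques" as a toy score.
        score = [
            Note("Frè", 60, 0.5), Note("re", 62, 0.5),
            Note("Jac", 64, 0.5), Note("ques", 60, 0.5),
        ]

        for note in score:
            print(f"{note.syllable:>4s}  {pitch_to_hz(note.midi_pitch):7.2f} Hz  {note.duration_s:.2f} s")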

  • Funder: French National Research Agency (ANR) Project Code: ANR-16-CE23-0017
    Funder Contribution: 734,706 EUR

    Streaming services such as Deezer, Spotify, Pandora and Apple Music enrich music listening with data such as artist biographies or albums by the same artist, and suggest "similar" artists or songs (without the similarity being explicitly defined). Journalists and radio presenters routinely draw on Web and media data to prepare their programmes; a master's-level engineering professor uses analysis tools to explain production techniques to students. All of them rely on background knowledge ranging from the most empirical sources (the press, Google) to more formalized ones that are also machine-readable (Spotify uses LastFM, MusicBrainz, DBpedia and the audio extractors of the startup The Echo Nest, acquired by Spotify in 2014). There is therefore a strong need for richer musical knowledge bases and for tools to exploit them. WASABI's originality lies in combining several approaches and offering methods to enrich their results; it is this joint implementation that aims to produce a richer and better-equipped knowledge base: 1) By leveraging Semantic Web databases (e.g. DBpedia, MusicBrainz, LastFM), structured data can be extracted, linking a song to elements such as its producer, the studio where it was recorded, the composer, the year, the lyrics, and a description from the song's Wikipedia page. 2) By analyzing free-text data (the song's lyrics, text pages related to the song), non-explicit data can be extracted (the song's themes, places, people, events, dates, and the emotions it conveys). The data obtained by these methods can be linked, cross-checked, and confirmed or refuted on the basis of hypotheses; for example, the descriptions of a rock band and of a producer can be used to set the initial parameters of the audio analysis and ease source separation. 3) By jointly using this Semantic Web information and the lyrics analysis together with the information contained in the audio signal, automatic music information extraction can be improved (temporal structure, presence and characterization of the voice, musical emotion, presence of plagiarism). 4) When a song is available as separate tracks, a more precise analysis can be performed and richer audio data extracted (notes, instruments, type of reverberation, etc.); the project will study how such source separation (unmixing) can be achieved and how its results, even when imperfect, can be used in the browser context. 5) Serendipity can also be encouraged, surfacing non-trivial connections with a tool such as Discovery Hub (answering questions like: what connects Radiohead to Pink Floyd?). Starting from use cases specified by the project and co-designed with user collaborators, WASABI will offer a suite of open-source software components and open-data online services for: 1) visualizing audio metadata and Music Information Retrieval results and listening to songs with separated tracks, with tools that run in a Web context; 2) automatically processing song lyrics, recognizing linked named entities, and supporting collaborative annotation and correction; 3) accessing a Web service whose API offers an environment for studying musical similarities, based on audio analysis on the one hand and on semantic and textual analysis on the other. These software modules will be used to build demonstrators designed with external collaborators: composers, musicologists, journalists (Radio France), and engineers from a leading online streaming service (Deezer).
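
    As an illustration of step 1, the sketch below queries the public DBpedia SPARQL endpoint for a song's producer and release date using the standard requests library. The chosen resource URI and properties are assumptions made for the example (they may return no rows for a given song), and this is not WASABI's actual extraction pipeline:

        import requests

        # Illustrative SPARQL query against DBpedia: link a song to its producer
        # and release date (step 1 above). The resource URI and properties are
        # example choices, not WASABI's actual schema.
        query = """
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?producer ?released WHERE {
          <http://dbpedia.org/resource/Paranoid_Android>
              dbo:producer ?producer ;
              dbo:releaseDate ?released .
        }
        """

        resp = requests.get(
            "https://dbpedia.org/sparql",
            params={"query": query, "format": "application/sparql-results+json"},
            timeout=30,
        )
        resp.raise_for_status()

        # Standard SPARQL JSON results: one binding per matching row.
        for row in resp.json()["results"]["bindings"]:
            print(row["producer"]["value"], row["released"]["value"])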

  • Funder: European Commission Project Code: 644862
    Overall Budget: 2,682,590 EUR
    Funder Contribution: 2,330,000 EUR

    RAPID-MIX brings together 3 leading research institutions with 4 dynamic creative-industries SMEs and 1 leading wearable-technology SME in a technology transfer consortium to bring to market innovative interface products for music, gaming, and e-Health applications. RAPID-MIX uses an intensely user-centric development process to gauge industry pull and end-user desire for new modes of interaction that integrate physiological human sensing, gesture and body language, and smart information analysis and adaptation. Physiological biosignals (EEG, EMG) are used in multimodal hardware configurations with motion sensors and haptic actuators. Advanced machine learning software adapts to expressive human variation, allowing fluid interaction and personalized experiences. An iterative, rapid development cycle of hardware prototyping, software development, and application integration accelerates the availability of advanced interface technologies to industry partners. An equally user-centric evaluation phase assures market validation and end-user relevance and usability, feeding back into subsequent design cycles and informing final market deployment. The RAPID-MIX consortium leverages contemporary dissemination channels such as crowdfunding, industry trade shows, and contributions to the DIY community to raise awareness of novel interface technologies across the professional and consumer landscapes. Project output is encapsulated in an open-source RAPID-API exposing application-level access to software libraries, hardware designs, and middleware layers. This will enable the creative partner SMEs to build a new range of products called Multimodal Interactive eXpressive systems (MIX). It also allows broader industries, such as the quantified-self and DIY communities, to use the API in their own products in cost-effective ways. This assures the legacy of RAPID-MIX and marks its contribution to European competitiveness in rapidly evolving markets for embodied interaction technologies.
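
    As a toy illustration of machine learning that adapts to expressive human variation, the sketch below trains a nearest-centroid classifier on labelled sensor feature vectors. It is a stand-in written for this summary, not the RAPID-API or the consortium's actual software:

        import numpy as np

        # Toy nearest-centroid gesture classifier over sensor feature vectors
        # (e.g., summary statistics of EMG or motion frames). Purely
        # illustrative; not the RAPID-API.

        def train(examples: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
            """Map each gesture label to the centroid of its training vectors."""
            return {label: vecs.mean(axis=0) for label, vecs in examples.items()}

        def classify(model: dict[str, np.ndarray], x: np.ndarray) -> str:
            """Return the label whose centroid is closest to feature vector x."""
            return min(model, key=lambda label: np.linalg.norm(model[label] - x))

        # Synthetic 2-D features standing in for per-user calibration samples.
        rng = np.random.default_rng(0)
        examples = {
            "swipe": rng.normal([1.0, 0.0], 0.1, size=(20, 2)),
            "shake": rng.normal([0.0, 1.0], 0.1, size=(20, 2)),
        }
        model = train(examples)
        print(classify(model, np.array([0.9, 0.1])))  # -> "swipe"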

  • Funder: European Commission Project Code: 951962
    Overall Budget: 5,183,000 EUR
    Funder Contribution: 4,994,710 EUR

    MediaFutures will set up a virtual, European data innovation hub, including funding, mentoring and support for entrepreneurial and creative projects to reshape the media value chain through responsible, innovative uses of data. We will:
    • explore the critical factors that impact how people engage with bottom-up quality journalism, science education and digital citizenship;
    • define a participatory, inclusive innovation programme, leveraging impulses from multiple disciplines, as well as synergies between entrepreneurs and creatives;
    • organise a competition addressing pressing technical, economic and societal challenges in the media value chain to identify promising digital entrepreneurs, creatives and data-empowered solutions;
    • provide data and experimentation facilities for the winners of this competition to test and nurture their ideas;
    • support 51 businesses and 43 artists by solving common concerns around funding and access to mentoring in technical, legal, business, media and sustainability matters; and
    • create toolkits and best practices for innovators, creatives, and other stakeholders to achieve greater traction for their citizen-centric initiatives, and empower them to communicate through data in inspiring, informative and engaging ways.
    Drawing on the experience of the consortium: ZABALA, ODI and SOTON (instrumental to delivering several flagship Horizon 2020 data incubators); IRCAM (leading the way in publicly funded art-tech-science residency programmes); EUT and LUH (two accomplished DIHs and BDVA i-Spaces); NMA (Europe's largest media accelerator); LUISS (a renowned school of journalism and digital startup accelerator); KU Leuven (legal and ethical expert); and DEN (one of Europe's social innovation pioneers), we will establish a Europe-wide, virtual data-driven innovation ecosystem, supported and promoted by an international network of 28 organisations that have confirmed their intention to join MediaFutures as members of our stakeholder cluster.

  • Funder: European Commission Project Code: 761634
    Overall Budget: 2,898,880 EUR
    Funder Contribution: 2,249,150 EUR

    Music is one of the fastest-evolving media industries, currently undergoing a transformation at the nexus of music streaming, social media and convergence technologies. As a result, the music industry has become a mixed economy of diverse consumer channels and revenue streams, as well as disruptive innovations based on new services and content distribution models. In this setting, music companies face daunting challenges in managing the transition to a new field shaped by streaming music, social media and media convergence. The availability of huge music catalogues and choices has made recommendation and discovery key problems in the competition for audience, while continuous access to multiple sources of music consumption has produced a dynamic audience, characterized by highly diverse tastes and volatile preferences that also depend on the context of music consumption. To serve the increasingly complex needs of the music ecosystem, FuturePulse will develop and pilot-test a novel, close-to-market music platform in three high-impact use cases: a) Record Labels, b) Live Music, c) Online Music Platforms. The project will help music companies leverage a variety of music data and content, ranging from broadcast (TV, radio) and music streaming data to sales statistics and streams of music-focused social media discussions, interactions and content, through sophisticated analytics and predictive modelling services, so that they can make highly informed business decisions, better understand their audience and the music trends of the future, and ultimately make music distribution more effective and profitable. FuturePulse will offer these capabilities through a user-friendly, highly intuitive and visual web solution that immerses music professionals in the realm of music data and supports highly informed and effective business decisions.
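
    As a minimal, assumed example of the kind of predictive modelling described above (not FuturePulse's actual models), the sketch below produces a one-step-ahead forecast of weekly stream counts with simple exponential smoothing:

        # Illustrative single-exponential-smoothing forecast of weekly stream
        # counts; a stand-in for the predictive modelling services described
        # above, not FuturePulse's actual models. Data values are made up.

        def smooth_forecast(history: list[float], alpha: float = 0.4) -> float:
            """One-step-ahead forecast: exponentially weighted average of history."""
            level = history[0]
            for observed in history[1:]:
                # Blend the new observation with the running level.
                level = alpha * observed + (1 - alpha) * level
            return level

        weekly_streams = [12_000, 13_500, 12_800, 15_200, 16_100, 15_900]
        print(f"next-week forecast: {smooth_forecast(weekly_streams):.0f} streams")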
