
LSP

Laboratoire des Systèmes Perceptifs
21 Projects, page 1 of 5
  • Funder: French National Research Agency (ANR) Project Code: ANR-22-FRAL-0003
    Funder Contribution: 300,128 EUR

    When listening to speech sounds, for example during a conversation, our brain first tries to isolate the attended speech from other sound sources and to track its dynamics. This is not an easy task, as the rhythms of speech are multiple and intricately linked (e.g. the rate of phonemes, syllables or words in an utterance). How do we successfully track and comprehend speech in naturalistic situations? The current hypothesis is that the human brain entrains to slow temporal modulations of speech running at about 3-4 Hz (cycles per second). However, this approach is not sufficient to capture the dynamics of spontaneous speech, as speech rhythms are highly irregular due to silences, breaks and restarts, and highly variable across speakers, speaking styles and language characteristics. The overarching research hypothesis of this proposal is that speech rhythm perception by the human brain is far from a one-to-one association between a particular modulation frequency in the speech signal and linguistic units: the rhythmic patterns that give rise to the perception of a sequence of phonemes, syllables or words in fact show very large variability across stimuli (the lack-of-invariance problem). Hence, in this project we challenge the assumption that there exists a special rhythm within a narrow frequency range for entraining to spoken human languages, and propose instead a novel theoretical and experimental framework to capture: 1) the mechanisms by which slow temporal modulations in speech change at the individual participant level; 2) the effect of language-specific temporal characteristics on speech dynamics, using French as an example of a syllable-timed language and German as an example of a stress-timed language; 3) how the human brain tracks and encodes such dynamics, including interruptions. The project's three main lines of research feed into one another, providing a cohesive, clearly scheduled workflow.
At the border between speech signal processing, psycholinguistics and neurolinguistics, the output of the DRhyaDS project has the potential to profoundly change both the theoretical tenets of speech perception and best practices in spontaneous speech analysis. For this to be possible, the high complementarity of expertise within the Franco-German consortium (cutting-edge psychophysics on the French side, novel neural data analyses on the German side), together with the possibility of directly testing native speakers of both French and German at each stage of the project, is essential.
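    As a purely illustrative sketch (not the project's own analysis, with all signal parameters invented), the "slow temporal modulations" referred to above can be estimated by taking the amplitude envelope of a signal and computing its spectrum; for a noise carrier modulated at a syllable-like 4 Hz, the modulation spectrum peaks at that rate:

    ```python
    import numpy as np

    fs = 16000                      # sampling rate (Hz), illustrative
    t = np.arange(0, 5, 1 / fs)     # 5 s of signal
    # Noise carrier amplitude-modulated at a syllable-like 4 Hz rate
    carrier = np.random.default_rng(0).standard_normal(t.size)
    signal = (1 + np.cos(2 * np.pi * 4 * t)) * carrier

    # Amplitude envelope via rectification and low-pass smoothing
    width = int(fs * 0.02)                     # 20 ms moving average
    env = np.convolve(np.abs(signal), np.ones(width) / width, mode="same")

    # Modulation spectrum: FFT of the mean-removed envelope
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    peak = freqs[np.argmax(spec[freqs < 20])]  # dominant slow modulation
    print(f"dominant modulation frequency: {peak:.1f} Hz")
    ```

    Real spontaneous speech, as the abstract stresses, does not show a single clean peak of this kind, which is precisely the project's point of departure.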

  • Funder: French National Research Agency (ANR) Project Code: ANR-11-BSH2-0004
    Funder Contribution: 372,264 EUR

    An estimated 5 to 8 million people suffer from cochlear hearing loss in European countries such as France, Great Britain and Germany. Most of these people complain of strong difficulties in understanding speech in adverse listening conditions, even when clinical audiometry indicates a mild form of hearing loss. Unfortunately, current rehabilitation devices such as conventional hearing aids and cochlear implants cannot restore normal perception of speech in these conditions, although recent electroacoustical (E-A) devices combining amplified acoustic hearing and electrical stimulation show promising results. The HEARFIN project aims to investigate whether these difficulties in understanding speech in adverse listening conditions originate from an abnormal representation of “temporal fine structure” (TFS) information at central stages of the auditory system, resulting from acute loss of auditory nerve fibers and cochlear nucleus neurons. This project will use a multidisciplinary approach (psychoacoustics, electrophysiology and computer modelling) to demonstrate central deficits in TFS processing in regions of mild hearing loss. Part of this research, conducted in collaboration with an industrial partner, will lead to the development of a novel clinical test for auditory screening and a novel method for quantifying the efficacy of hearing aids and E-A systems.
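    The envelope/TFS distinction the abstract relies on is conventionally defined via the Hilbert analytic signal: the envelope is its magnitude and the TFS its unit-amplitude phase carrier, and their product reconstructs the original waveform. A self-contained illustration (ours, not the project's code; the test signal is invented):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 16000
    t = np.arange(0, 1, 1 / fs)
    # 1 kHz carrier (the TFS) modulated by a 4 Hz envelope
    x = (1 + 0.8 * np.cos(2 * np.pi * 4 * t)) * np.cos(2 * np.pi * 1000 * t)

    analytic = hilbert(x)
    envelope = np.abs(analytic)            # slow amplitude envelope
    tfs = np.cos(np.angle(analytic))       # unit-amplitude fine structure

    # The envelope times the TFS reconstructs the original signal
    err = np.max(np.abs(envelope * tfs - x))
    print(f"max reconstruction error: {err:.2e}")
    ```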

  • Funder: French National Research Agency (ANR) Project Code: ANR-15-CE37-0009
    Funder Contribution: 231,608 EUR

    Human language has long been viewed as an abstract, discrete and symbolic mental system divorced from its physical implementations. While fruitful and productive when describing the mature language faculty, this view left open the question of how such a system might be acquired from a limited, concrete and continuous physical input, such as speech: a logical conundrum known as the ‘linking problem’. The current project proposes to break new ground by linking the earliest language acquisition mechanisms to basic auditory perception. Recent advances in the understanding of the neural coding and information processing properties of the mammalian auditory system make the time ripe for such a rethinking of the logical problem of language acquisition. Indeed, the speech signal encoded by the auditory system serves as the input for language learning. Importantly, auditory processing transforms this signal by organizing it into different representational patterns. The project investigates the general hypothesis that these transformations have a direct impact on language learning. The general objective of the project is thus to understand how the developing auditory system encodes the speech signal, providing representations that contribute to language acquisition. The project is thus organized around two closely related specific objectives: (i) to analyze and characterize speech and other speech-like signals in terms of computational and mathematical principles of neural coding and information processing in the auditory system; and (ii) to identify and describe early perceptual abilities present at the onset of language development allowing human infants to recognize speech as a relevant signal for language acquisition. To achieve these objectives, the project is grounded in an integrative view of the mind and the brain, synthesizing hitherto rarely combined disciplines, such as language acquisition research, psychoacoustics and the study of neural coding.
It provides a novel approach to foundational questions such as “Why is language special?” through the cross-fertilization of developmental cognitive neuropsychology, psychophysics and information theory. The project, which will run for a duration of 36 months and involves three leading research laboratories, the LPP, the LSP and the LPS, is broken down into two tasks. The first, corresponding to the first objective, involves the computational modeling of speech and speech-like signals, such as the native language, an unfamiliar language, monkey calls and sine-wave speech. The second, corresponding to the second objective, comprises electrophysiological (EEG) and metabolic (near-infrared spectroscopy) measures of newborn infants’ brain responses to these sound categories, thereby assessing the role of prenatal experience as well as the specificity of the early neural specialization for speech and language processing. The expected result is a theoretical and empirical breakthrough in the understanding of how our auditory and cognitive systems develop to sustain speech and language. By identifying the physical and acoustic properties of speech that trigger language-related processing and the neural mechanisms underlying these, the current project opens up the way for the future development of new applications supporting individuals with speech processing and language impairments.

  • Funder: French National Research Agency (ANR) Project Code: ANR-18-CE28-0015
    Funder Contribution: 509,097 EUR

    Visual confidence refers to our ability to estimate the correctness of our visual perceptual decisions. Compared with other forms of metacognition, meta-perception has recently attracted a burst of studies, no doubt because perception already benefits from strong theoretical frameworks. We have recently refined these existing frameworks by proposing to clearly distinguish sensory evidence from some “confidence evidence” that drives the confidence decision. The problem now is to characterize the properties and consequences of this confidence evidence, and this is the aim of the present proposal. As the number of studies grows, it becomes clear that visual confidence is not simply a noisy estimate of the perceptual decision, but instead depends on a large number of factors. We have identified four axes that we believe will contribute to shaping confidence evidence: (1) individual variability, (2) task accessibility, (3) global confidence, and (4) perceptual learning. The purpose of the first axis is to understand which cues are used for confidence, and for this purpose, we will study confidence variability across individuals. Some of the idiosyncratic variability in confidence judgment efficiency might come from a variable temptation to exaggerate the impact of stimulus noise on the estimation of one’s own performance. In the second axis, we will try to understand what in a task determines the accessibility of visual confidence. In particular, we will test the hypothesis that higher-level tasks, such as face identification, lead to better confidence efficiency than low-level tasks, such as detecting whether two line segments are aligned. The aim of the third axis is to understand how individuals construct a sense of confidence for a task as a whole, not for a single isolated judgment. We will start by carefully studying how confidence builds up within a set of stimuli and then examine how such global confidence relates to the confidence in a single decision.
Finally, in the fourth axis, we will study how perceptual learning benefits from visual confidence. In particular, we will test the extent to which confidence evidence can be seen as an internal error signal that can act as a proxy for external feedback. We believe that a better understanding of these four fundamental aspects of confidence evidence will help us derive an accurate and useful model of visual confidence, and ultimately of metacognition.
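    The sensory-evidence versus confidence-evidence distinction above can be made concrete with a toy signal-detection simulation (our illustration, not the project's model; all parameters are invented): the decision reads the sensory evidence directly, while confidence reads a degraded copy of it, yet confidence still tracks accuracy, being higher on average for correct than for incorrect trials.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    stim = rng.choice([-1.0, 1.0], size=n)       # stimulus category
    sensory = stim + rng.standard_normal(n)      # sensory evidence
    choice = np.sign(sensory)                    # perceptual decision
    correct = choice == stim

    # Confidence is driven by a noisier copy of the sensory evidence
    conf_evidence = sensory + 0.5 * rng.standard_normal(n)
    confidence = np.abs(conf_evidence)           # graded confidence rating

    print(f"mean confidence (correct):   {confidence[correct].mean():.2f}")
    print(f"mean confidence (incorrect): {confidence[~correct].mean():.2f}")
    ```

    The extra noise term is what makes confidence less than perfectly diagnostic of accuracy, which is one way of formalizing the "confidence efficiency" the abstract discusses.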

  • Funder: French National Research Agency (ANR) Project Code: ANR-24-CE19-6197
    Funder Contribution: 675,816 EUR

    Many neurodevelopmental disorders (NDDs) are associated with perturbations of functional interactions between cerebral areas. This makes the quantitative and non-invasive assessment of those interactions a potentially efficient biomarker for early detection and better prognosis of these pathologies. However, limitations of currently available neuroimaging techniques in terms of spatiotemporal resolution and clinical applicability have hindered the wide clinical deployment of such functional connectivity-based diagnosis. Our project tackles this challenge by developing spatiotemporal functional connectivity (STFC), an innovative characterisation of multiscale (mesoscopic to large-scale) brain activity expected to offer unprecedented sensitivity and specificity. It leverages functional UltraSound (fUS) neuroimaging, a cutting-edge brain imaging modality with high translational potential, and innovative post-processing techniques focusing on propagative brain activity. The ultimate goal is to identify potential fUS-based STFC biomarkers of NDDs suitable for both preclinical and clinical neonatal imaging. First, we will develop a set of descriptor tools tailored to specifically capture propagative activity in neural (optical and fUS) data and further derive an STFC processing technique, relying on these descriptors, to estimate repetitive propagative spatiotemporal patterns. Secondly, we will benchmark the performance of these methods in scoring spontaneous cortical dynamics associated with distinct animal states (e.g., active, quiet, sleep phases, sedation), using calcium and intrinsic optical imaging data recorded synchronously in GCaMP6 mice. This will be a crucial step in retaining the most pertinent spatiotemporal descriptors of cortical propagative activity (in terms of robustness and reproducibility) extracted from both neuronal and hemodynamic-related signals.
It will also make it possible to better characterize how hemodynamics, which can be non-invasively observed through functional ultrasound (fUS), is tied to underlying neuronal activity (calcium imaging). Furthermore, the technique will also be used to score animal states in ferrets using fUS, as ferrets have a mid-sized brain and a folded cortex with closer homology to humans for frontal and sensory regions. Our final aim is to validate that fUS-based STFC can efficiently score pathological brain states. We will assess the potential of STFC applied to fUS data as an early biomarker for NDDs in a relevant cross-species (mice and ferrets) preclinical model of NDDs. We will implement the maternal immune activation (MIA) model in those two species and then track specific STFC signatures evidencing alterations in spontaneous cerebral dynamics in the young adult (mice and ferrets) and at perinatal stages (ferret pups at P5-P12). Building from the biomarkers defined in ferrets, we will then search for comparable STFC signatures in fUS data acquired in human neonates (via a clinical trial external to this ANR proposal). This project will bridge the gap between animal models and human neonates to reveal early indicators of NDDs with a new type of analytical biomarker, offering the potential for timely intervention and improved long-term outcomes for affected individuals.
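    A minimal sketch of what "capturing propagative activity" can mean in practice (our illustration only, not the project's STFC method; frame rate, pixel count and delays are invented): a shared slow fluctuation sweeps across a strip of pixels with a fixed per-pixel delay, and the lag of the peak cross-correlation between two pixel time series recovers the propagation delay between them.

    ```python
    import numpy as np

    fs = 10.0                        # imaging frame rate (frames/s), assumed
    n_t = 600                        # 60 s recording
    n_pix = 8
    delay = 2                        # samples of lag between neighboring pixels
    rng = np.random.default_rng(2)

    # Shared slow fluctuation: white noise smoothed with a 1 s moving average
    common = np.convolve(rng.standard_normal(n_t + n_pix * delay),
                         np.ones(10) / 10, mode="same")
    # Pixel i sees the common signal with a progressively larger lag, plus noise
    offsets = [(n_pix - 1 - i) * delay for i in range(n_pix)]
    data = np.stack([common[o : o + n_t] + 0.1 * rng.standard_normal(n_t)
                     for o in offsets])

    def peak_lag(a, b, fs):
        """Lag (in s) of series b relative to a at the cross-correlation peak."""
        a = a - a.mean()
        b = b - b.mean()
        xc = np.correlate(b, a, mode="full")
        return (np.argmax(xc) - (a.size - 1)) / fs

    lag = peak_lag(data[0], data[4], fs)   # pixel 4 lags pixel 0
    print(f"estimated lag: {lag:.2f} s (expected {4 * delay / fs:.2f} s)")
    ```

    Repeating such lag estimates over all pixel pairs yields a map of propagation direction and speed, which is the kind of information a spatiotemporal (rather than purely correlational) connectivity measure is meant to retain.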

3 Organizations, page 1 of 1
