
This project aims to redefine how position is coded in the brain, to propose that position is a product of the attention and eye-movement systems, and to show that this proposal opens a new window onto attention research. How does the brain code location? At first glance, the answer seems straightforward: we see things where they are because of where they fall on our retina, with corrections added every time we move our eyes or head. However, previous and ongoing work shows that position is not determined solely by retinal coordinates: large shifts in perceived location are created either by movements of the eyes or by movements of the target, even when the eyes are steady. We propose instead that the eye-movement and attention system provides the explicit location code for attended targets: this code is computed for the purpose of guiding eye movements, but we claim that it also specifies where targets will be perceived, whether or not an eye movement is executed. If the predictions of our proposal hold up, these results will show that perception of position is a core function of visual attention, overthrowing the view that action and perception have independent representations of the world and offering new measures of the functions and architecture of attention.
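For concreteness, the textbook account that this proposal challenges can be written as a one-line coordinate transform: perceived (spatiotopic) location equals retinal location plus current eye position. The Python sketch below makes this explicit; the values are purely illustrative and the sketch is not part of the project's methods.

```python
import numpy as np

def spatiotopic_position(retinal_pos, eye_pos):
    """World-centered location = retinal location + eye-in-head position.

    Both arguments are (x, y) offsets in degrees of visual angle.
    This is the simple corrective account the project argues is
    insufficient to explain perceived location.
    """
    return np.asarray(retinal_pos) + np.asarray(eye_pos)

# A target 5 deg right of fixation, viewed while the eyes are deviated
# 10 deg left, should (on this account) be perceived 5 deg left of
# straight ahead.
print(spatiotopic_position([5.0, 0.0], [-10.0, 0.0]))  # -> [-5.  0.]
```

The shifts in perceived location described above are precisely the cases where this additive account breaks down.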
Human language has long been viewed as an abstract, discrete and symbolic mental system divorced from its physical implementations. While fruitful and productive when describing the mature language faculty, this view left open the question of how such a system might be acquired from a limited, concrete and continuous physical input such as speech, a logical conundrum known as the ‘linking problem’. The current project proposes to break new ground by linking the earliest language acquisition mechanisms to basic auditory perception. Recent advances in the understanding of the neural coding and information-processing properties of the mammalian auditory system make the time ripe for such a rethinking of the logical problem of language acquisition. Indeed, the speech signal encoded by the auditory system serves as the input for language learning. Importantly, auditory processing transforms this signal by organizing it into different representational patterns. The project investigates the general hypothesis that these transformations have a direct impact on language learning. The general objective of the project is thus to understand how the developing auditory system encodes the speech signal, providing representations that contribute to language acquisition. The project is organized around two closely related specific objectives: (i) to analyze and characterize speech and other speech-like signals in terms of computational and mathematical principles of neural coding and information processing in the auditory system; and (ii) to identify and describe early perceptual abilities, present at the onset of language development, that allow human infants to recognize speech as a relevant signal for language acquisition. To achieve these objectives, the project is grounded in an integrative view of the mind and the brain, synthesizing hitherto rarely combined disciplines such as language acquisition research, psychoacoustics and the study of neural coding. It provides a novel approach to foundational questions such as “Why is language special?” through the cross-fertilization of developmental cognitive neuropsychology, psychophysics and information theory. The project, which will run for 36 months and involves three leading research laboratories, the LPP, the LSP and the LPS, is broken down into two tasks. The first, corresponding to the first objective, involves the computational modeling of speech and speech-like signals, such as the native language, an unfamiliar language, monkey calls and sine-wave speech. The second, corresponding to the second objective, comprises electrophysiological (EEG) and metabolic (near-infrared spectroscopy) measures of newborn infants’ brain responses to these sound categories, thereby assessing the role of prenatal experience as well as the specificity of the early neural specialization for speech and language processing. The expected result is a theoretical and empirical breakthrough in the understanding of how our auditory and cognitive systems develop to sustain speech and language. By identifying the physical and acoustic properties of speech that trigger language-related processing, together with the neural mechanisms underlying them, the current project paves the way for the future development of new applications supporting individuals with speech processing and language impairments.
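As an illustration of the kind of auditory-inspired signal analysis the first task envisages, one classic representation is the modulation spectrum of the temporal envelope, which captures the slow amplitude fluctuations thought to be central to how the auditory system encodes speech. The sketch below is our own illustrative example, not the project's actual analysis pipeline, and uses a synthetic amplitude-modulated tone standing in for a real speech recording.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)                  # 1 s of signal
carrier = np.sin(2 * np.pi * 500 * t)            # 500 Hz carrier ("fine structure")
envelope = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)   # 4 Hz modulation, a syllable-like rate
signal = envelope * carrier                      # synthetic stand-in for speech

# Envelope extraction via the analytic signal (Hilbert transform).
extracted_env = np.abs(hilbert(signal))

# Modulation spectrum: Fourier transform of the mean-removed envelope.
mod_spectrum = np.abs(np.fft.rfft(extracted_env - extracted_env.mean()))
mod_freqs = np.fft.rfftfreq(len(extracted_env), d=1.0 / fs)

# The peak recovers the 4 Hz modulation rate imposed above.
print(f"dominant modulation frequency: {mod_freqs[np.argmax(mod_spectrum)]:.1f} Hz")
```

Applied to the project's four sound categories (native language, unfamiliar language, monkey calls, sine-wave speech), representations of this kind would allow the signals to be compared on a common, auditory-system-motivated footing.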
An estimated 5 to 8 million people suffer from cochlear hearing loss in European countries such as France, Great Britain and Germany. Most of these people report severe difficulties in understanding speech in adverse listening conditions, even when clinical audiometry indicates only a mild form of hearing loss. Unfortunately, current rehabilitation devices such as conventional hearing aids and cochlear implants cannot restore normal perception of speech in these conditions, although recent electroacoustical (E-A) devices combining amplified acoustic hearing and electrical stimulation show promising results. The HEARFIN project aims to investigate whether these difficulties in understanding speech in adverse listening conditions originate from an abnormal representation of “temporal fine structure” (TFS) information at central stages of the auditory system, resulting from an acute loss of auditory nerve fibers and cochlear nucleus neurons. The project will use a multidisciplinary approach (psychoacoustics, electrophysiology and computer modelling) to demonstrate central deficits in TFS processing in regions of mild hearing loss. Part of this research, conducted in collaboration with an industrial partner, will lead to the development of a novel clinical test for auditory screening and a novel method for quantifying the efficacy of hearing aids and E-A systems.
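For readers unfamiliar with the envelope/TFS distinction, the standard decomposition is obtained from the analytic signal via the Hilbert transform: the envelope is its magnitude and the TFS is the cosine of its instantaneous phase. The sketch below is a conceptual illustration on a synthetic tone, not the clinical test developed in the project.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
# A 1 kHz tone amplitude-modulated at 8 Hz, standing in for a speech band.
x = (1 + 0.5 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)

analytic = hilbert(x)
envelope = np.abs(analytic)        # slow amplitude fluctuations (ENV)
tfs = np.cos(np.angle(analytic))   # rapid phase variation (TFS)

# By construction, the signal is the product of the two components:
print(np.max(np.abs(x - envelope * tfs)))  # reconstruction error ~ machine precision
```

Psychoacoustic work on TFS deficits typically manipulates these two components independently (e.g., presenting envelope-only or TFS-only stimuli) to isolate which cue listeners with cochlear hearing loss fail to exploit.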
A wealth of research has demonstrated that humans possess intuitions about number from infancy on; later, these intuitions guide children’s learning of formal arithmetic. Much less is known, however, about the intuitive foundations of another branch of mathematics: geometry. In my past research, I began investigating this question with a focus on Euclidean geometry (the so-called “natural geometry”), and in particular on angle, a central Euclidean concept. Results, however, showed little to no sensitivity to angle in young children and infants, and systematic errors persisting even in adults. The present project addresses the question of the foundations of geometry further, along four axes of research. First, we will conduct an in-depth analysis of the mechanisms underlying angle perception in adults and children: with a description of the mechanisms involved, we will be in a better position to understand the failures we have observed, especially in children. In the second axis of research, we take a broader perspective to probe intuitions beyond Euclidean geometry: if Euclidean geometry, and angles, are not intuitive, is there a ‘natural’ geometry? We will test infants’, children’s and adults’ perception of geometric properties at different levels of invariance, from the low-level property of shape orientation to properties belonging to affine or projective geometry. In the third axis, we will depart from perception and assess the functional properties of children’s geometric representations, namely whether infants can engage in geometric reasoning. Lastly, the fourth axis of research will probe processes of conceptual learning, using a test case known to be counter-intuitive: spherical geometry. We will ask whether conceptual learning involves processes of “incubation” that operate subconsciously, with learning experienced instead as a sudden flash of insight (“I get it!”).
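The levels of invariance probed in the second axis can be made concrete as nested groups of transformations acting on the plane: Euclidean transforms preserve angles and lengths, affine transforms preserve parallelism but not angles, and projective transforms preserve only incidence properties such as the straightness of lines. The sketch below is an illustrative aside, not a project deliverable; it applies one transform of each type to a unit square in homogeneous coordinates.

```python
import numpy as np

def apply(T, points):
    """Apply a 3x3 homogeneous transform to an array of 2D points."""
    pts = np.column_stack([points, np.ones(len(points))]) @ T.T
    return pts[:, :2] / pts[:, 2:3]

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

theta = np.pi / 6
euclidean = np.array([[np.cos(theta), -np.sin(theta), 2],
                      [np.sin(theta),  np.cos(theta), 1],
                      [0,              0,             1]])   # rotation + translation
affine = np.array([[2.0, 0.5, 0],
                   [0.0, 1.0, 0],
                   [0,   0,   1]])                           # shear + scale: angles change
projective = np.array([[1,   0,   0],
                       [0,   1,   0],
                       [0.3, 0.2, 1]])                       # perspective: parallelism lost

for name, T in [("euclidean", euclidean), ("affine", affine),
                ("projective", projective)]:
    print(name, np.round(apply(T, square), 2))
```

A geometric property is intuitive at a given level if observers still recognize a shape after the corresponding class of transformations, which is the logic behind testing perception across these levels of invariance.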