
GIPSA

Grenoble Images Parole Signal Automatique
62 Projects, page 1 of 13
  • Funder: French National Research Agency (ANR) Project Code: ANR-20-CE33-0009
    Funder Contribution: 518,862 EUR

    While electrolocation and echolocation have been the subject of robotic studies, photolocation has never been studied. Based on intermittent, aperiodic light-flash illumination, the Dark-NAV project proposes to develop an active photolocation sensor and the associated event-based SLAM and navigation algorithms dedicated to UAVs. The intermittency of the illumination will make it possible to increase the on-board light power. Dark-NAV aims to develop a complete navigation chain, from the sensors to fully aperiodic control. The project relies on a consortium of laboratories (GIPSA-lab, ICube and ISM) with recognized experience in vision and control for robotics. It also involves SUEZ, an industrial company in the field of water treatment, which aims to use autonomous or semi-autonomous drones for the inspection and maintenance of its water pipelines and tanks.

  • Funder: French National Research Agency (ANR) Project Code: ANR-17-CE19-0015
    Funder Contribution: 547,867 EUR

    MicroVoice’s objective is to gain an in-depth understanding of the link between the micromechanics of vocal-fold tissues and their unique vibratory performance, and to take the next step towards the development of new biomimetic oscillators. The strategy is: (i) to investigate the vocal-fold 3D fibrous architecture and micromechanics using unprecedented synchrotron X-ray in situ microtomography; (ii) to use these data to mimic and process fibrous biomaterials with tailored structural and biomechanical properties; (iii) to characterise the vibro-mechanical properties of these biomaterials at different scales (macro/micro) and frequencies (low/high), using Dynamic Mechanical Analysis and Laser Doppler Vibrometry; and (iv) to validate their oscillating properties under “realistic” aero-acoustical conditions using in vitro and ex vivo testbeds. MicroVoice will provide a solid framework for the innovative design of fibrous phonatory implants.

  • Funder: French National Research Agency (ANR) Project Code: ANR-19-CE45-0015
    Funder Contribution: 422,388 EUR

    The TOPACS project aims at the large-scale analysis of 3D medical images stored in hospitals. The primary goal is a large-scale study of human anatomy (computational anatomy). One major challenge is the size of the data to analyse while respecting the anonymity of the individuals. Keypoint extraction offers a solution to this problem: keypoints provide a compact summary of an image, storing only its salient features. Each keypoint is associated with a feature vector describing its local neighborhood, and keypoints can be compared efficiently by measuring the distance between their feature vectors. During this project, we plan to analyse more than 10,000 individuals. The TOPACS project falls into four parts. The first part will address keypoint extraction from 3D medical images. For 2D images, many keypoint approaches have been proposed, such as SIFT, SURF and KAZE, and recent advances in machine learning have produced better keypoint algorithms, such as LIFT. However, few works have proposed keypoint techniques for 3D medical image processing. In this first task, we will propose new keypoint algorithms tailored to medical images, studying both hand-crafted and machine-learning approaches. The proposed methods should exhibit specific characteristics: robustness to large inter-patient variability and the ability to compare data extracted from different imaging modalities. We plan to extract keypoints from three hospitals, in Lyon, Saint-Etienne and Geneva. The second part consists of devising new keypoint-based approaches for registration and segmentation. A major difficulty is large-scale groupwise registration. In this context, groupwise registration is a better means of registering a large set of images, as choosing or building a single reference model would introduce a severe bias. Current approaches can register on the order of hundreds of images together.
Our goal is therefore to propose approaches with much larger capacity. This task will also address keypoint-based segmentation, for which few works exist. The third part will deal with statistical representations of large populations as well as inference at the single-subject level. Manifold-learning techniques will be considered to capture both geometric and textural normal/pathological variability. Classification and regression methods on manifolds will then be developed to infer predictions for a given individual. The fourth part will link the theoretical work of the first three parts to medical applications. A first task will consist of extracting data from the three hospitals: a computer will be installed in each hospital to extract large databases of keypoints, mainly from 3D CT and MRI images. A second task is the application of population analysis to anthropology, mainly for forensic science: estimating the profile of an unknown individual (gender, age, ...) or the date of death. A third task will be an online tool providing access to the new general-purpose algorithms: segmentation and registration. More generally, the TOPACS project aims to contribute to open science by publishing algorithms and databases while keeping the data in those databases anonymous.
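The comparison step described above — matching keypoints by the distance between their feature vectors — can be sketched as follows. This is a hypothetical illustration, not the project's code: `match_keypoints`, the ratio threshold, and the descriptor shapes are assumptions, following the common nearest-neighbour matching scheme with Lowe's ratio test used for SIFT-style descriptors.

```python
import numpy as np

def match_keypoints(desc_a, desc_b, ratio=0.8):
    """Match two sets of keypoint descriptors by nearest-neighbour
    search on Euclidean distance, keeping only unambiguous matches.

    desc_a: (n, d) array of descriptors from image A.
    desc_b: (m, d) array of descriptors from image B.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    # Pairwise Euclidean distances between every descriptor pair.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Lowe's ratio test: accept only if the best match is
        # clearly closer than the runner-up.
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches
```

The ratio test discards keypoints whose two closest candidates are nearly equidistant, which matters at this scale: with thousands of images, ambiguous matches would otherwise dominate.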

  • Funder: French National Research Agency (ANR) Project Code: ANR-16-CE38-0011
    Funder Contribution: 679,991 EUR

    An@tomy2020 aims at developing an innovative educational platform to facilitate the learning of functional anatomy. The platform will integrate recent advances in computer graphics and human-computer interaction with recent insights from the educational and cognitive sciences to design and test optimal scenarios for anatomy learning, and will also yield new advances in these areas of research. The approach is based on evidence that body movements can improve learning by “augmenting” or “enriching” traces in long-term memory. This “embodied” perspective is particularly relevant to functional anatomy, as the knowledge to be acquired can be related directly to the learner’s body in motion. An@tomy2020 will connect the learner’s body to anatomical knowledge by tackling technical challenges alongside pedagogical ones. Real-time animation of an anatomically realistic model of the user, built from data acquired with commodity depth sensors, together with suitable pieces of knowledge extracted from an anatomical ontology, will be associated with interaction techniques. These associations will favor the construction of 3D spatial representations of the body in motion and the embodiment of knowledge. The educational tool will thus boost learners’ spatial abilities, helping them build a better spatial representation of the anatomical structures. Optimal interaction techniques will be evaluated. In parallel, the platform will be tested with medical and kinesiology students under real learning conditions. The project is organized along three main scientific axes. The first axis will provide scientific approaches and technical solutions to capture and animate an accurate model of the user’s body in real time; high-quality anatomy transfer is required to generate augmented-reality views of the user from a deformed generic model.
The second axis is dedicated to techniques for precise and seamless interaction in the mixed environment. The aim is to design optimal interaction techniques based on augmented-reality methods. User evaluations will be run to measure their effects on performance and usage convenience, but also on the way knowledge is acquired and represented in memory. The third axis concerns the educational content and the metrics used to evaluate trainee abilities. The aim will be to design and test learning scenarios that integrate the new tools and allow their adaptation to real needs; this will outline how the new tool can be integrated into university courses. The project also includes technical challenges related to integrating the different tools, results and resources into the platform. Six partners participate in this interdisciplinary project. The coordinator, TIMC (Computer-Assisted Medical Intervention team), specializes in computer science and applied mathematics for healthcare applications; it will coordinate the project and be mostly involved in user body modeling. Anatoscope is a start-up specializing in anatomy transfer and real-time animation; the company will contribute to user body modeling and coordinate platform integration. Gipsa-Lab (speech and cognition dept.) studies the behavioral and cognitive processes underlying communicative interactions; it will evaluate the cognitive processes of embodied learning when using new interactive devices. LIBM will bring its expertise in the cognitive processes involved in anatomy learning; its members will also coordinate platform evaluation. LIG (Engineering Human-Computer Interaction team) has extensive experience in designing, developing and evaluating interaction techniques and will coordinate the tasks related to interaction techniques in augmented reality. LJK (Applied Mathematics and Computer Science laboratory) will coordinate the formatting and accessibility of the anatomical knowledge and educational content.

  • Funder: French National Research Agency (ANR) Project Code: ANR-16-CE19-0005
    Funder Contribution: 614,673 EUR

    In France, 300,000 people suffer from a severe speech disorder, and more than 5 million do worldwide, often following stroke but also in cases of severe tetraplegia, locked-in syndrome, neurodegenerative diseases such as amyotrophic lateral sclerosis or Parkinson’s disease, myopathies, or coma. The BrainSpeak project aims to develop the proof of concept of a complete paradigm for restoring speech using a brain-computer interface (BCI). This speech BCI will be developed in patients undergoing presurgical evaluation of pharmaco-resistant epilepsy with intracranial recordings. Cortical signals will be recorded from motor speech areas using high-density electrocorticographic and/or intracortical microelectrode arrays, and decoded to control an artificial speech synthesizer in real time. The INSERM partner will implement a speech BCI clinical trial (for which CPP approval and ANSM authorization have recently been obtained) with CHU Grenoble in collaboration with GIPSA-Lab. To this end, we will characterize the cortical dynamics underlying speech production and imagination and decode these signals to predict overt and covert speech. Decoding will first be tested offline and then implemented online, allowing subjects to control a speech synthesizer in real time. In parallel, we will carry out methodological developments to improve the BCI experimental chain. Novel machine-learning algorithms will be developed by GIPSA-Lab in collaboration with INSERM for better speech synthesis and cortical-signal decoding. These algorithms will provide more intelligible speech synthesis and more efficient decoding methods adapted to transcoding neural recordings into speech features (articulatory or spectral trajectories).
The speech BCI approach will be developed here as a proof of concept in patients with preserved brain areas who are able to speak, in order to pave new routes for future speech rehabilitation in patients who cannot communicate verbally (e.g. locked-in patients).
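The transcoding step — mapping neural recordings to speech-feature trajectories — can be illustrated with a minimal linear decoder. This is a sketch under stated assumptions, not the project's actual decoder: the function names and the choice of ridge regression are mine; the abstract only says machine-learning algorithms will map neural signals to articulatory or spectral trajectories.

```python
import numpy as np

def fit_ridge_decoder(X, Y, alpha=1.0):
    """Fit a linear decoder W mapping neural features X (time, channels)
    to speech features Y (time, params) by ridge regression.
    Closed-form solution: W = (X'X + alpha*I)^-1 X'Y."""
    n_channels = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_channels), X.T @ Y)

def decode(X, W):
    """Predict speech-feature trajectories from neural features."""
    return X @ W
```

In an offline-then-online workflow like the one described, a decoder of this kind would be fit on recorded sessions first, then applied sample-by-sample to drive a synthesizer; real systems typically add temporal context windows and nonlinear models on top of this baseline.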
