
LATIM

Laboratory of Medical Information Processing
21 Projects, page 1 of 5
  • Funder: CHIST-ERA Project Code: CHIST-ERA-19-XAI-007

    Deep neural networks (DNNs) have achieved outstanding performance and broad adoption in computer vision tasks such as classification, denoising, segmentation and image synthesis. However, DNN-based models and algorithms have seen limited adaptation and development within radiomics, which aims to improve the diagnosis and prognosis of cancer. Traditionally, medical practitioners have used expert-derived features such as intensity, shape and texture. We hypothesize that, despite the potential of DNNs to improve oncological classification performance in radiomics, the lack of interpretability of such models limits their broad adoption, performance and generalizability. The INFORM consortium therefore proposes to investigate explainable artificial intelligence (XAI) with the dual aim of building high-performance DNN-based classifiers and developing novel interpretability techniques for radiomics.

    First, to overcome the limited data typically available in radiomic studies, we will investigate Monte Carlo methods and generative adversarial networks (GANs) for realistic simulation that can aid the building and training of DNN architectures. Second, we will tackle the interpretability of DNN-based feature engineering and latent-variable modeling with innovative developments of saliency maps and related visualization techniques. Both supervised and unsupervised learning will be used to generate features, which can be interpreted in terms of input pixels and expert-derived features. Third, we propose to build explainable AI models that incorporate both expert-derived and DNN-based features. By quantitatively understanding the interplay between expert-derived and DNN-based features, our models will be readily understood and translated into medical applications. Fourth, evaluation will be carried out by clinical collaborators with a focus on lung, cervical and rectal cancer. These DNN models, specifically developed to reveal their inner workings, will leverage the robustness and trustworthiness of expert-derived features that medical practitioners are familiar with, while providing quantitative and visual feedback.

    Overall, our methodological research will advance the interpretability of feature engineering, generative models and DNN classifiers, with applications in radiomics and medical imaging more broadly. With this project we aim to maximize the impact of ML and DL techniques on patient management by developing novel methods that facilitate the training of decision-aid systems for optimizing clinical treatment strategies. The methodological approaches we propose will play a major role in facilitating the acceptance of DL-based decision-aid systems relying on medical imaging in oncology. The predictive models validated for various cancer types within this project might subsequently drive prospective clinical studies in which patients could be offered alternative treatment strategies based on the models' results. This clinical and social potential is further enhanced by the public-private collaboration proposed in this project, through which the developed methodologies will find their way into products. The multidisciplinarity of INFORM is key to meeting the targeted challenges and achieving the proposed goals. All partners bring world-leading qualifications and complementary scientific expertise, providing all the prerequisites for the efficient implementation of INFORM's approach.

    The successful implementation of this project will have a large and lasting impact on predictive radiomics modelling in both the medical/oncology and computing/artificial intelligence fields, and the same methodology could be extended to other diagnostic and therapeutic medical applications.
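    As a minimal illustration of the saliency-map techniques mentioned above, the sketch below computes a vanilla gradient saliency map for a toy CNN classifier in PyTorch. The network, input size and class labels are illustrative assumptions, not the INFORM architecture.

        # Minimal sketch, assuming a toy CNN stand-in (not the INFORM models).
        import torch
        import torch.nn as nn

        model = nn.Sequential(                      # placeholder radiomics classifier
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),                        # e.g. benign vs. malignant logits
        )
        model.eval()

        image = torch.randn(1, 1, 64, 64, requires_grad=True)  # placeholder image patch
        score = model(image)[0, 1]                  # logit of the class of interest
        score.backward()                            # d(score)/d(pixel) via autograd
        saliency = image.grad.abs().squeeze()       # |gradient| = per-pixel importance map
        print(saliency.shape)                       # torch.Size([64, 64])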

  • Funder: French National Research Agency (ANR) Project Code: ANR-24-CE19-5968
    Funder Contribution: 560,336 EUR

    Computer-navigated surgery has become widely used in orthopedics in recent decades, thanks to its proven effectiveness in knee, hip and shoulder arthroplasty. This solution provides real-time assistance to the surgeon in the operating room, optimizing implant sizing and positioning. During surgery, a dedicated module localizes the patient's anatomical structures using markers attached to the bone structures and to the surgeon's instruments, and navigation software displays relevant clinical information to assist the surgeon. Existing solutions are mainly based on binocular optical cameras that localize markers fixed to the areas of interest. The size and weight of these markers make such solutions unsuitable for extremity surgery, in particular trapeziometacarpal arthroplasty, where the incision and the bones are very small (on the order of a centimetre). In this context, the XtremLoc project will offer the first integrated solution to assist the surgeon and accurately guide the fitting of a metacarpal prosthesis. The targeted pathology is rhizarthrosis, that is, osteoarthritis affecting the trapeziometacarpal joint at the base of the thumb. It is the most common form of osteoarthritis in the hand and has increased significantly in recent years due to the more regular use of keyboards and smartphones. The main objective of the XtremLoc project is to develop a complete navigation system for repairing these small joints. This device will guide the surgeon during the implantation of a trapeziometacarpal prosthesis and help optimize its positioning, enabling the patient to regain better mobility and limiting post-operative complications.

    The proposed solution rests on three innovative components. The first is the design of a non-invasive three-dimensional optical localization system with micrometric accuracy that is compatible with the operating-room environment. It incorporates miniature optical retroreflectors (with volumes on the order of a mm³) attached to bones and surgical instruments, which are localized using high-speed (kHz) laser-beam scanning performed by MEMS mirrors rotating around two orthogonal axes. By exploiting the reflections of the laser beams off these retroreflectors, the structures that carry them (the patient's anatomical features and the surgeon's probing instruments) can be localized in real time, in both position and orientation. The second component is a software suite orchestrating planning and guidance. It is based on automatic segmentation and modelling of bone structures from CT images, enabling detection of patient-specific anatomical features and computation of the optimal implant position; a precise registration method between the intraoperative information from the optical sensors and the planning data then provides intraoperative guidance to the surgeon. The third component is the integration of a navigated-surgery prototype combining the hardware and software building blocks above. The physical implementation of the localization module will be brought into line with the rules governing housing watertightness, instrument sterilization and ocular safety, so that it can be integrated into the surgical workflow. Navigation tests will be carried out on the PLaTIMed surgical platform, enabling a complete surgery to be performed on anatomical specimens and the final demonstrator to be validated in a realistic environment.
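    To make the registration step concrete, here is a minimal sketch of rigid point-set registration (the Kabsch/SVD method) between landmark positions taken from the planning CT and the same landmarks reported by the optical localizer. The variable names, landmark count and synthetic data are illustrative assumptions, not the XtremLoc implementation.

        # Minimal sketch, assuming synthetic fiducial positions (not project data).
        import numpy as np

        def rigid_register(planning_pts, measured_pts):
            """Return rotation R and translation t mapping planning_pts onto measured_pts."""
            cp, cm = planning_pts.mean(axis=0), measured_pts.mean(axis=0)
            H = (planning_pts - cp).T @ (measured_pts - cm)               # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
            R = Vt.T @ D @ U.T
            return R, cm - R @ cp

        planning = np.random.rand(4, 3) * 30.0            # four fiducials (mm) from the CT plan
        t_true = np.array([5.0, -2.0, 1.0])
        measured = planning + t_true                      # what the localizer might report
        R, t = rigid_register(planning, measured)
        print(np.allclose(planning @ R.T + t, measured))  # True: transform recovered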

  • Funder: French National Research Agency (ANR) Project Code: ANR-21-CE19-0036
    Funder Contribution: 650,157 EUR

    The innovative opaque technology (OTech) derived from the LiquidO detection concept opens an unprecedented synergy between leading experts in medical imaging and neutrino physics, who here propose a new paradigm for medical imaging based on high-precision detection of antimatter (β+). We propose to construct the first opaque-liquid positron emission tomography (LPET) system in order to demonstrate and quantify its ability to fully characterise the annihilation pattern of both β+ and γ-β+ sources, exploiting the latest machine learning techniques for maximal performance. The additional prompt γ will further improve the reconstruction of the annihilation origin while opening the potential for direct tissue probing, through accurate study of the positronium formation rate and lifetime, which depend on how the β+ interacts with the tissue structure, including the development of metabolic disorders. Thus, our LPET prototype will explore the limits of today's PET imaging while offering a unique view of the innermost tissue structure.
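    As a toy illustration of the lifetime measurement mentioned above, the sketch below estimates a mean positronium-like lifetime from simulated time differences between a prompt γ and the annihilation photons. The single-exponential decay model and all numbers are illustrative assumptions, not LPET data.

        # Minimal sketch with simulated delays (not LPET measurements).
        import numpy as np

        rng = np.random.default_rng(0)
        true_lifetime_ns = 2.0                               # assumed mean lifetime, in ns
        delays = rng.exponential(true_lifetime_ns, 10_000)   # prompt-gamma -> annihilation delays

        # For a single-exponential decay, the maximum-likelihood lifetime is the sample mean.
        tau_hat = delays.mean()
        tau_err = tau_hat / np.sqrt(delays.size)             # 1-sigma statistical uncertainty
        print(f"estimated lifetime = {tau_hat:.3f} +/- {tau_err:.3f} ns")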

  • Funder: French National Research Agency (ANR) Project Code: ANR-23-CE19-0017
    Funder Contribution: 437,300 EUR

    Smart orthopedic implants open up very interesting prospects, particularly for improving post-surgical follow-up. However, the technologies available today are not suited to powering the fully metallic prostheses used in orthopedics. This project aims to exploit an acoustic-wave power transmission solution to deliver power to a knee implant. A knee-joint model will be developed using new statistical modeling methods that integrate acoustic parameters. In addition, the admissible input power levels will be studied so as to limit adverse physical mechanisms (heating, cavitation) and remain below the values set by standards and used by commercial ultrasound equipment. This model and the input data will then be used to design and optimize the power transmission solution, with both analytical and multi-physics finite-element modeling methods, for a tibial knee implant embedding piezoelectric transducers. We expect the acoustically powered system to receive 1 mW to 10 mW of electrical power at the receiver side while remaining compliant with medical standards and using commercial ultrasound probes at the transmitter side. Prototypes will be assembled and tested, first on five knee phantoms developed within the project and then on three cadaveric specimens at the anatomical laboratory of the Brest CHRU. The resulting proofs of concept (PoCs) will make it possible to power a new generation of smart orthopaedic implants embedding sensors that are more robust and more reliable, facilitating industrialization and ultimately enabling better clinical management.
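    As a back-of-the-envelope check of the 1 mW to 10 mW target, the sketch below multiplies an assumed incident acoustic intensity by an assumed receiver area, tissue transmission factor and conversion efficiency. All numbers are illustrative assumptions, not measured project values or actual regulatory limits.

        # Minimal sketch; every figure below is an assumption for illustration only.
        incident_intensity_mw_per_cm2 = 720.0   # assumed diagnostic-ultrasound-style intensity ceiling
        receiver_area_cm2 = 0.5                 # assumed piezo receiver area inside the tibial implant
        tissue_transmission = 0.25              # assumed fraction surviving soft-tissue/bone attenuation
        conversion_efficiency = 0.05            # assumed acoustic-to-electric conversion efficiency

        received_mw = (incident_intensity_mw_per_cm2 * receiver_area_cm2
                       * tissue_transmission * conversion_efficiency)
        print(f"received electrical power ~ {received_mw:.1f} mW")  # ~4.5 mW, within the 1-10 mW target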

  • Funder: French National Research Agency (ANR) Project Code: ANR-20-CE45-0020
    Funder Contribution: 498,351 EUR

    Multimodal medical imaging (e.g., PET/CT, PET/MRI) plays an important role in diagnosis and research. Multimodal machine learning (MML) aims to develop methods that can process information from multimodal imaging and thereby learn the dependencies between the modalities. Image reconstruction with MML can take advantage of the dependencies between the images to reduce image noise, allowing the patient dose to be reduced while preserving image quality. However, conventional dictionary learning techniques are memory-intensive and cannot be applied to multimodal image reconstruction. The objectives of the MultiRecon project are: (i) to develop new, less memory-intensive convolutional dictionary learning (CDL) techniques for joint multimodal image reconstruction in PET/CT and PET/MRI; (ii) to extend these methodologies to dynamic imaging (motion estimation/compensation and kinetics); and (iii) to disseminate our research by implementing the developed methodologies in the open-source reconstruction platform CASToR.
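    To illustrate the convolutional dictionary learning building block, the sketch below runs a few ISTA iterations of convolutional sparse coding (the inner problem of CDL) on a toy image with a fixed random dictionary. The filters, image, step size and penalty are illustrative assumptions, not the MultiRecon or CASToR implementation.

        # Minimal sketch of convolutional sparse coding via ISTA (toy data, fixed dictionary).
        import numpy as np
        from scipy.signal import convolve2d, correlate2d

        rng = np.random.default_rng(0)
        image = rng.random((32, 32))                     # placeholder reconstructed slice
        filters = 0.1 * rng.standard_normal((4, 5, 5))   # four small convolutional atoms
        codes = np.zeros((4, 32, 32))                    # sparse coefficient maps
        lam, step = 0.05, 0.1                            # l1 penalty and gradient step size

        def synthesize(codes, filters):
            return sum(convolve2d(z, d, mode="same") for z, d in zip(codes, filters))

        for _ in range(50):                              # ISTA iterations
            residual = image - synthesize(codes, filters)
            for k in range(len(filters)):                # gradient step on each coefficient map
                codes[k] += step * correlate2d(residual, filters[k], mode="same")
            codes = np.sign(codes) * np.maximum(np.abs(codes) - step * lam, 0.0)  # soft-threshold
        print(np.linalg.norm(image - synthesize(codes, filters)))  # remaining reconstruction residual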

1 Organization, page 1 of 1
