ZENODO
Other research product, 2024
License: CC BY
Data sources: ZENODO; Datacite
Versions: 2

Fast, Low-resource, Accurate, and Robust Organ and Pan-cancer Segmentation

Authors: Ma, Jun; Wang, Bo


Abstract

Organ and cancer segmentation in medical images, especially from 3D CT and MR scans, is fundamentally important for accurate diagnosis, treatment planning, and monitoring disease progression. Precise segmentation of organs and pathological lesions can aid clinicians in formulating personalized treatment strategies, which are essential for optimal patient outcomes. Deep learning-based methods have revolutionized these tasks, achieving unprecedented levels of accuracy and automation compared to traditional model-based methods. However, a notable limitation is that most deep learning models are tailored to specific cancer types, such as brain, lung, or liver cancer. As a result, the generalizability of these algorithms across cancer types remains a challenge. Another main barrier to real-world deployment of existing methods is efficiency, because deep learning models usually require considerable computing resources (GPU, CPU, and RAM) to run. During the past three years, we have organized three challenges to address these limitations with community efforts:

FLARE 2021: segmentation of four abdominal organs in CT scans; data: 511 CT scans
FLARE 2022: segmentation of 13 abdominal organs in CT scans; data: 2,300 CT scans
FLARE 2023: segmentation of 13 abdominal organs and pan-cancer lesions in CT scans; data: 4,500 CT scans

The winning solution can now simultaneously segment 13 organs and various abdominal lesions within 10 seconds for a 3D CT scan with over 1,000,000 voxels, significantly improving segmentation accuracy, efficiency, and generalization ability.
In FLARE 2024, we aim to further promote the development of pan-cancer segmentation and model deployment in low-resource settings (e.g., no GPU available) by extending the challenge to the following three tasks:

Subtask 1: Pan-cancer segmentation in CT scans
Subtask 2: Abdominal organ segmentation in CT scans on a laptop
Subtask 3: Unsupervised domain adaptation for abdominal organ segmentation in MRI scans

We will provide a comprehensive, large-scale dataset with more than 10,000 CT scans and 1,000 abdominal MR scans that is multi-racial, multi-center, multi-disease, multi-phase, multi-manufacturer, and multi-modality. This challenge marks a significant stride toward universal cancer segmentation models and practical toolsets for CT and MR image analysis.
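For the low-resource deployment setting, evaluating a method means measuring its wall-clock runtime and memory footprint on CPU-only hardware, alongside accuracy. The sketch below is a minimal, generic way to time a segmentation call and track peak Python-side memory with the standard library; the threshold-based `segment` function is a placeholder standing in for a real model, not part of the challenge.

```python
import time
import tracemalloc
import numpy as np

def segment(volume: np.ndarray) -> np.ndarray:
    """Placeholder segmenter: simple intensity threshold standing in for a model."""
    return (volume > volume.mean()).astype(np.uint8)

# Synthetic 3D "scan" (real FLARE cases are far larger, e.g. >1,000,000 voxels)
volume = np.random.rand(64, 64, 64).astype(np.float32)

tracemalloc.start()
t0 = time.perf_counter()
mask = segment(volume)
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"runtime: {elapsed:.4f} s, peak traced memory: {peak / 1e6:.2f} MB")
```

Note that `tracemalloc` only sees allocations made through Python; a full evaluation harness would also sample process-level RAM (and GPU memory, where applicable) from the operating system.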

Keywords

Segmentation, Efficient, MICCAI 2024 challenges, Cross-modality, Organs, Pan-cancer

  • BIP! impact indicators:
    Selected citations: 0 (derived from selected sources; an alternative to the "Influence" indicator)
    Popularity: Average (the "current" impact/attention of the article in the research community at large, based on the underlying citation network)
    Influence: Average (the overall/total impact of the article in the research community at large, based on the underlying citation network, diachronically)
    Impulse: Average (the initial momentum of the article directly after its publication, based on the underlying citation network)
Related to Research communities: Cancer Research