ZENODO
Other ORP type · 2024
License: CC BY
Data sources: ZENODO

Self-supervised learning for 3D light-sheet microscopy image segmentation

Authors: Erturk, Ali; Höher, Luciano; Al-Maskari, Rami; Horvath, Izabela; Ali, Mayar; Paetzold, Johannes C.; Chen, Ying; +6 Authors

Abstract

In modern biological research, the ability to visualize and understand complex structures within tissues and organisms is crucial. Traditional imaging methods often struggle to provide detailed 3D views without compromising sample integrity. Light-sheet microscopy (LSM), applied after tissue clearing and specific structure staining, overcomes these limitations, making it an efficient, high-contrast, ultra-high-resolution method for visualizing a wide array of biological structures in diverse samples, such as cellular and subcellular structures, organelles, and processes. In the structure staining step, various dyes, fluorophores, or antibodies selectively label specific biological structures within samples and enhance their contrast under the microscope. In the tissue clearing step, inherently opaque biological samples are rendered transparent while preserving sample integrity and the fluorescence of labeled structures, allowing light to penetrate deeper into the tissue. Combined with structure staining and tissue clearing, LSM gives researchers unprecedented capabilities to visualize intricate biological structures with high spatial resolution, offering new insights into biomedical research fields such as neuroscience, immunology, oncology, and cardiology.

To analyze LSM images across these fields, segmentation plays a pivotal role in identifying and distinguishing different biological structures. For very small LSM images, segmentation can be done manually. In whole-organ or whole-body LSM, however, manual segmentation is prohibitively time-intensive — single images can contain 10,000^3 voxels — so automatic segmentation methods are in high demand. Recent strides in deep learning-based segmentation offer promising solutions for automated segmentation of LSM images.

Although these methods have reached segmentation performance comparable to expert human annotators, their success largely relies on supervised learning from extensive training sets of manually annotated images that are specific to one kind of structure staining. Large-scale annotation for diverse LSM image segmentation tasks, however, poses a great challenge. Self-supervised learning is advantageous in this context: it allows deep learning models to pretrain on large-scale, unannotated datasets, learning useful and general representations of LSM image data, after which the model can be fine-tuned on a smaller labeled dataset for a specific segmentation task. Notably, self-supervised learning has not been extensively explored within the LSM field, despite the presence of vast sets of LSM data covering different biological structures. Some properties of LSM images, e.g. their high signal-to-noise ratio, make the data particularly well suited for self-supervised learning.

In this challenge, we aim to host an inaugural MICCAI challenge on self-supervised learning for 3D LSM image segmentation, encouraging the community to develop self-supervised learning methods for general segmentation of various structures in 3D LSM images. With an effective self-supervised learning method, extensive unannotated 3D LSM images can be leveraged to pretrain segmentation models, encouraging the models to capture high-level representations that generalize across different biological structures. The pretrained models can then be fine-tuned on substantially smaller annotated datasets, significantly reducing annotation effort in various 3D LSM segmentation applications.

Each participant will receive a training dataset comprising two sets. The first set includes a large collection (> 6x10^11 voxels, equivalent to > 35,000 images of 256x256x256 voxels) of whole-brain 3D LSM images of both mouse and human samples without annotations, facilitating model pretraining through self-supervised learning. This will be one of the largest datasets ever provided to a MICCAI challenge. The second set consists of cropped patches from whole-brain (human and mouse) 3D LSM images with precise annotations, enabling fine-tuning of the model for segmentation tasks.
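One common self-supervised objective for this kind of pretraining is masked-volume modeling: random sub-volumes of an unannotated 3D image are hidden, and a network is trained to reconstruct them. The sketch below shows only the mask-construction step in plain NumPy, not a full training loop; the function name and parameters are illustrative and not part of the challenge's API.

```python
import numpy as np

def mask_random_cubes(volume, cube=8, mask_ratio=0.5, seed=None):
    """Hide a random subset of non-overlapping cubes in a 3D volume.

    Returns the masked copy and a boolean voxel mask; a pretraining
    model would be asked to reconstruct the voxels under the mask.
    """
    rng = np.random.default_rng(seed)
    d, h, w = volume.shape
    assert d % cube == 0 and h % cube == 0 and w % cube == 0
    grid = (d // cube, h // cube, w // cube)
    n_cubes = grid[0] * grid[1] * grid[2]
    n_hidden = int(round(mask_ratio * n_cubes))
    hidden = rng.choice(n_cubes, size=n_hidden, replace=False)
    cube_mask = np.zeros(n_cubes, dtype=bool)
    cube_mask[hidden] = True
    cube_mask = cube_mask.reshape(grid)
    # Upsample the cube-level mask to voxel resolution.
    voxel_mask = (cube_mask
                  .repeat(cube, axis=0)
                  .repeat(cube, axis=1)
                  .repeat(cube, axis=2))
    masked = volume.copy()
    masked[voxel_mask] = 0.0  # zero-fill; noise-fill is another option
    return masked, voxel_mask

# Toy volume standing in for a 3D LSM patch.
vol = np.random.default_rng(0).random((32, 32, 32), dtype=np.float32)
masked, voxel_mask = mask_random_cubes(vol, cube=8, mask_ratio=0.5, seed=1)
```

A reconstruction loss evaluated only on the hidden voxels (e.g. mean squared error between the network's output and `vol[voxel_mask]`) would complete the objective; the cube size and mask ratio here are arbitrary choices, not values prescribed by the challenge.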

Keywords

MICCAI 2024 challenges, Light-sheet microscopy, self-supervised learning, deep learning, pretraining, image segmentation, 3D image
