ZENODO
Dataset, 2023
License: CC BY
Data sources: Datacite; ZENODO

M4Raw: A multi-contrast, multi-repetition, multi-channel MRI k-space dataset for low-field MRI research

Authors: Lyu, Mengye; Mei, Lifeng; Huang, Shoujin; Liu, Sixing; Li, Yi; Yang, Kexin; Liu, Yilong; +3 Authors


Abstract

Recently, low-field magnetic resonance imaging (MRI) has gained renewed interest as a way to promote MRI accessibility and affordability worldwide. The M4Raw dataset presented here aims to facilitate methodology development and reproducible research in this field. The dataset comprises multi-channel brain k-space data collected from 183 healthy volunteers using a 0.3 Tesla whole-body MRI system, and includes T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images with an in-plane resolution of ~1.2 mm and a through-plane resolution of 5 mm. Importantly, each contrast contains multiple repetitions, which can be used individually or combined into multi-repetition averaged images. After excluding motion-corrupted data, the partitioned training and validation subsets contain 1024 and 240 volumes, respectively. To demonstrate the potential utility of this dataset, we trained deep learning models for image denoising and parallel imaging tasks and compared their performance with traditional reconstruction methods. The M4Raw dataset will be valuable for the development of advanced data-driven methods specifically for low-field MRI. It can also serve as a benchmark dataset for general MRI reconstruction algorithms.

Imaging protocol

A total of 183 healthy volunteers were enrolled in the study. The majority of the subjects were college students (aged 18 to 32, mean = 20.1, standard deviation = 1.5; 116 males, 67 females). Axial brain MRI data were obtained from each subject using a clinical 0.3 T scanner (Oper-0.3, Ningbo Xingaoyi) equipped with a four-channel head coil. This scanner is a classical open-type, permanent-magnet-based whole-body system. Three common sequences were used: T1w, T2w, and FLAIR, each acquiring 18 slices with a slice thickness of 5 mm and an in-plane resolution of 0.94 x 1.23 mm². To facilitate flexible research applications, T1w and T2w data were acquired with three individual repetitions each, and FLAIR with two repetitions.

Data processing

The k-space data from individual repetitions were exported from the scanner console without averaging. The corresponding raw images are in scanner coordinate space and may be off-centered due to patient positioning. To correct this, an off-center distance was estimated along the left-right direction for each subject using the vendor DICOM images, and the k-space data were multiplied by a corresponding linear phase modulation (a sketch of this correction is given after the Data Records section below). The k-space matrices were then converted to Hierarchical Data Format version 5 (H5), with imaging parameters stored in the H5 file header in an ISMRMRD-compatible format.

Subset partition

The dataset was divided into three subsets: training, validation, and motion-corrupted. After motion estimation, 26 subjects were placed in the motion-corrupted subset, and the remaining data were randomly split into a training subset of 128 subjects (1024 volumes) and a validation subset of 30 subjects (240 volumes).

Data Records

The training, validation, and motion-corrupted subsets are compressed into three separate zip files, containing 1024, 240, and 200 H5 files, respectively. Among the 200 files in the motion-corrupted subset, 64 files are placed in the "inter-scan_motion" sub-directory and 136 files in the "intra-scan_motion" sub-directory. All H5 files are named in the format "study-id_contrast_repetition-id.h5" (e.g., "2022061003_FLAIR01.h5"). In each file, the imaging parameters, the multi-channel k-space, and the single-repetition images can be accessed via the dictionary keys "ismrmrd_header", "kspace", and "reconstruction_rss", respectively. The k-space dimensions are arranged in the order of slice, coil channel, frequency encoding, and phase encoding, following the convention of the fastMRI dataset.
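As a concrete illustration of the off-center correction described under Data processing, below is a minimal sketch based on the Fourier shift theorem: multiplying k-space by a linear phase ramp shifts the reconstructed image. The function name, the assumption that the left-right direction maps to the frequency-encoding axis, and the sign of the exponent are illustrative only and depend on the scanner and FFT conventions; this is not the authors' exact implementation.

import numpy as np

def recenter_kspace(kspace, shift_mm, fov_mm):
    # kspace: complex array of shape (slice, coil, frequency encoding, phase encoding)
    # shift_mm: estimated off-center distance along the chosen axis (assumed known)
    # fov_mm: field of view along that axis (assumed known)
    n_fe = kspace.shape[-2]                              # samples along frequency encoding
    k = np.arange(n_fe) - n_fe // 2                      # centered k-space sample indices
    ramp = np.exp(-2j * np.pi * k * shift_mm / fov_mm)   # linear phase = image-domain shift
    return kspace * ramp[:, None]                        # broadcast over the phase-encoding axis

The sign of the exponent determines the shift direction and must be chosen to match the inverse FFT convention used for reconstruction.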
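Given the file layout described above, a single volume can be loaded and reconstructed roughly as follows. This is a sketch assuming the stated keys and dimension order; the filename is only an example of the naming scheme, and the root-sum-of-squares result is expected to match "reconstruction_rss" only up to normalization and any cropping applied when the reference images were generated.

import h5py
import numpy as np

with h5py.File("2022061003_FLAIR01.h5", "r") as f:
    header = f["ismrmrd_header"][()]         # ISMRMRD-compatible header
    kspace = f["kspace"][()]                 # (slice, coil, freq. enc., phase enc.)
    rss_ref = f["reconstruction_rss"][()]    # provided single-repetition images

# Centered 2D inverse FFT per slice and coil, then root-sum-of-squares coil combination.
coil_imgs = np.fft.fftshift(
    np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
    axes=(-2, -1),
)
rss = np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=1))      # combine over the coil axis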

{"references": ["Lyu, M., Mei, L., Huang, S. et al. M4Raw: A multi-contrast, multi-repetition, multi-channel MRI k-space dataset for low-field MRI research. Sci Data 10, 264 (2023). https://doi.org/10.1038/s41597-023-02181-4"]}

Keywords

k-space, deep learning, low-field MRI

Metrics

Citations (BIP!): 2
Popularity (BIP!): Top 10%
Influence (BIP!): Average
Impulse (BIP!): Average
Views (OpenAIRE UsageCounts): 290
Downloads (OpenAIRE UsageCounts): 256