A neurodegenerative disease (ND) involves progressive damage to brain neurons that the human body cannot repair or replace. Well-known examples of such conditions are dementia and Alzheimer’s Disease (AD), which affect millions of lives each year. Despite extensive research, there are no effective treatments for these diseases today; however, early diagnosis is crucial for disease management. Diagnosing NDs is challenging for neurologists and requires years of training and experience. Consequently, there has been a trend to harness the power of deep learning, including state-of-the-art Convolutional Neural Networks (CNNs), to assist doctors in diagnosing such conditions from brain scans. CNN models have produced promising results comparable to the diagnoses of experienced neurologists. More recently, the advent of transformers in the Natural Language Processing (NLP) domain and their outstanding performance have persuaded Computer Vision (CV) researchers to adapt them to various CV tasks in multiple areas, including the medical field. This research develops Vision Transformer (ViT) models on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset to classify NDs. More specifically, the models classify three categories (Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Alzheimer’s Disease (AD)) using brain Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) scans. We also take advantage of the Automated Anatomical Labeling (AAL) brain atlas and attention maps to develop explainable models. We propose three ViTs, the best of which attains 82% accuracy on the test dataset with the help of transfer learning. In addition, we encode the AAL brain atlas information into the best-performing ViT, so that the model outputs the predicted label, the region most critical to its prediction, and an attention map overlaid on the input scan with the crucial areas highlighted.
Furthermore, we develop two CNN models with 2D and 3D convolutional kernels as baselines for classifying NDs, which achieve test accuracies of 77% and 73%, respectively. We also conduct a study of the importance of individual brain regions and their combinations in classifying NDs using ViTs and the AAL brain atlas. This thesis was awarded a 50,000 SEK prize by Getinge Sterilization for projects within Health Innovation.
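For readers unfamiliar with attention maps, one common way to turn per-layer ViT attention into a single image-level saliency map is attention rollout. The sketch below is a generic illustration on synthetic attention matrices; the layer count, token grid, and function names are assumptions for illustration, not the thesis's actual configuration.

```python
import numpy as np

def attention_rollout(attentions):
    """Combine per-layer attention matrices into one token-level map.

    attentions: list of (tokens, tokens) row-stochastic matrices, one per
    transformer layer. Residual connections are modeled by averaging each
    matrix with the identity, as in the attention-rollout technique.
    """
    tokens = attentions[0].shape[0]
    rollout = np.eye(tokens)
    for a in attentions:
        a = 0.5 * (a + np.eye(tokens))          # account for residual path
        a = a / a.sum(axis=-1, keepdims=True)   # re-normalize rows
        rollout = a @ rollout                   # propagate through layers
    return rollout

# Toy example: 3 layers, 5 tokens (1 [CLS] token + 4 image patches)
rng = np.random.default_rng(0)
layers = [rng.random((5, 5)) for _ in range(3)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]

rollout = attention_rollout(layers)
cls_to_patches = rollout[0, 1:]  # [CLS] attention over the image patches
print(cls_to_patches.shape)      # (4,)
```

Reshaping `cls_to_patches` to the patch grid and upsampling it to the scan resolution yields the kind of overlay described above.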
ABSTRACT. INTRODUCTION: Computational brain network modeling using The Virtual Brain (TVB) simulation platform acts synergistically with machine learning and multi-modal neuroimaging to reveal mechanisms and improve diagnostics in Alzheimer’s disease. METHODS: We enhance large-scale whole-brain simulation in TVB with a cause-and-effect model linking local amyloid β PET with altered excitability. We use PET and MRI data from 33 participants of the Alzheimer’s Disease Neuroimaging Initiative (ADNI3), combined with frequency compositions of TVB-simulated local field potentials (LFPs), for machine-learning classification. RESULTS: The combination of empirical neuroimaging features and simulated LFPs significantly outperformed the classification accuracy of empirical data alone by about 10% (weighted F1-score: empirical 64.34% vs. combined 74.28%). Informative features showed high biological plausibility with respect to the Alzheimer’s-typical spatial distribution. DISCUSSION: The cause-and-effect implementation of local hyperexcitation caused by amyloid β can improve the machine-learning-driven classification of Alzheimer’s and demonstrates TVB’s ability to decode information in empirical data by employing connectivity-based brain simulation. RESEARCH IN CONTEXT. SYSTEMATIC REVIEW: Machine learning has been shown to augment the diagnostics of dementia in several ways, and imaging-based approaches enable early diagnostic predictions. However, individual projections of long-term outcome as well as differential diagnosis remain difficult, as the mechanisms behind the classifying features used often remain unclear. Mechanistic whole-brain models in synergy with powerful machine learning aim to close this gap. INTERPRETATION: Our work demonstrates that multi-scale brain simulations considering amyloid β distributions and cause-and-effect regulatory cascades reveal hidden electrophysiological processes that are not readily accessible through measurements in humans.
We demonstrate that these simulation-inferred features hold the potential to improve the diagnostic classification of Alzheimer’s disease. FUTURE DIRECTIONS: The simulation-based classification model needs to be tested for clinical usability in a larger cohort with an independent test set, either from another imaging database or from a prospective study, to assess its capability for predicting long-term disease trajectories.
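As a point of reference for the scores reported above, the weighted F1-score averages the per-class F1 values using class supports as weights. A minimal plain-Python sketch, using hypothetical labels rather than the study's data:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 averaged with class-support weights."""
    classes = sorted(set(y_true))
    support = Counter(y_true)
    total = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += support[c] / len(y_true) * f1   # weight by class frequency
    return total

# Hypothetical three-class labels for illustration
y_true = ["AD", "AD", "CN", "CN", "CN", "MCI"]
y_pred = ["AD", "CN", "CN", "CN", "MCI", "MCI"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.667
```

Unlike plain accuracy, this metric remains informative when class sizes are imbalanced, which is typical of clinical cohorts.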
Recent advances in multi-atlas based algorithms address many of the previous limitations in model-based and probabilistic segmentation methods. However, at the label fusion stage, most algorithms focus primarily on optimizing the weight maps associated with the atlas library based on a theoretical objective function that approximates the segmentation error. In contrast, we propose a novel method, Autocorrecting Walks over Localized Markov Random Fields (AWoL-MRF), that aims to mimic the sequential process of manual segmentation, the gold standard for virtually all segmentation methods. AWoL-MRF begins with a set of candidate labels generated by a multi-atlas segmentation pipeline as an initial label distribution and refines low-confidence regions based on a localized Markov random field (L-MRF) model using a novel sequential inference process (walks). We show that AWoL-MRF produces state-of-the-art results, with superior accuracy and robustness using a small atlas library, compared with existing methods. We validate the proposed approach by performing hippocampal segmentations on three independent datasets: (1) the Alzheimer's Disease Neuroimaging Initiative (ADNI); (2) a First Episode Psychosis patient cohort; and (3) a cohort of preterm neonates scanned early in life and at term-equivalent age. We assess the improvement in performance qualitatively as well as quantitatively by comparing AWoL-MRF with the majority vote, STAPLE, and Joint Label Fusion methods. AWoL-MRF reaches a maximum accuracy of 0.881 (dataset 1), 0.897 (dataset 2), and 0.807 (dataset 3) based on the Dice similarity coefficient, offering significant performance improvements with a smaller atlas library (<10) over the compared methods. We also evaluate the diagnostic utility of AWoL-MRF by analyzing volume differences per disease category in the ADNI1: Complete Screening dataset. The source code for AWoL-MRF is publicly available at: https://github.com/CobraLab/AWoL-MRF.
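For context, the simplest of the compared fusion baselines, majority vote, together with the Dice similarity coefficient used for evaluation, can be sketched as follows. This is an illustrative toy example on flat label lists, not code from the AWoL-MRF repository:

```python
from collections import Counter

def majority_vote(candidate_labels):
    """Fuse per-voxel candidate labels from multiple atlases.

    candidate_labels: one label volume per atlas (flattened here for
    simplicity); returns the per-voxel modal label. AWoL-MRF starts from
    such a fused labeling and refines low-confidence voxels with a
    localized MRF.
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*candidate_labels)]

def dice(seg, ref):
    """Dice similarity coefficient for binary label lists."""
    intersection = sum(1 for s, r in zip(seg, ref) if s == r == 1)
    return 2 * intersection / (sum(seg) + sum(ref))

# Three hypothetical atlas candidates over four voxels (1 = hippocampus)
atlas_a = [1, 1, 0, 0]
atlas_b = [1, 0, 0, 0]
atlas_c = [1, 1, 1, 0]

fused = majority_vote([atlas_a, atlas_b, atlas_c])
print(fused)                      # [1, 1, 0, 0]
print(dice(fused, [1, 1, 1, 0]))  # 0.8
```

The confidence signal AWoL-MRF exploits is implicit here: voxels where the vote is split (e.g. voxels 2 and 3 above) are exactly the low-confidence regions its sequential L-MRF walks revisit.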
In healthy individuals, behavioral outcomes are highly associated with variability in regional brain structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel across various imaging modalities. VoxelStats is a voxel-wise computational framework that overcomes these computational limitations to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in Matlab® and supports imaging formats such as Nifti-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed-effects models with multiple volumetric covariates. Importantly, VoxelStats can treat scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, the package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. VoxelStats was validated by comparing its linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to those of the existing methods, and the additional functionality was demonstrated by generating feature case assessments (t-statistic, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the estimation of advanced regional association metrics at the voxel level.
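The core voxel-wise operation, fitting the same linear model independently at every voxel, can be vectorized so all voxels are solved in a single call rather than in a per-voxel loop. The numpy sketch below illustrates the general idea on synthetic data; it is not VoxelStats's Matlab implementation, and all names and dimensions are assumptions:

```python
import numpy as np

# Synthetic data: n subjects, v voxels; one imaging modality as response
rng = np.random.default_rng(1)
n, v = 20, 100
age = rng.uniform(60, 90, n)             # scalar covariate per subject
X = np.column_stack([np.ones(n), age])   # design matrix: intercept + age
Y = rng.standard_normal((n, v))          # voxel-wise response volumes

# Solve X @ beta ≈ Y for all voxels at once; beta has shape (2, v)
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Voxel-wise t-statistics for the age coefficient
resid = Y - X @ beta
dof = n - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof        # residual variance per voxel
XtX_inv = np.linalg.inv(X.T @ X)
se = np.sqrt(sigma2 * XtX_inv[1, 1])           # standard error of beta_age
t_age = beta[1] / se                           # one t-value per voxel
print(t_age.shape)  # (100,)
```

Reshaping `t_age` back to the image grid yields the kind of t-statistic map the toolbox produces; the same batching trick extends to multiple volumetric covariates by widening `X` per voxel.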
handle: 10067/1242260151162165141
Abstract: APOE ε4, the most significant genetic risk factor for Alzheimer's disease (AD), may mask the effects of other loci. We re-analyzed genome-wide association study (GWAS) data from the International Genomics of Alzheimer's Project (IGAP) Consortium in APOE ε4+ (10,352 cases and 9,207 controls) and APOE ε4− (7,184 cases and 26,968 controls) subgroups, as well as in the total sample, testing for interaction between a single-nucleotide polymorphism (SNP) and APOE ε4 status. Suggestive associations (P < 1 × 10−4) in stage 1 were evaluated in an independent sample (stage 2) containing 4,203 subjects (APOE ε4+: 1,250 cases and 536 controls; APOE ε4−: 718 cases and 1,699 controls). Among APOE ε4− subjects, novel genome-wide significant (GWS) association was observed with 17 SNPs (all between KANSL1 and LRRC37A on chromosome 17 near MAPT) in a meta-analysis of the stage 1 and stage 2 datasets (best SNP, rs2732703, P = 5.8 × 10−9). Conditional analysis revealed that rs2732703 accounted for the association signals in the entire 100-kilobase region that includes MAPT. Except for previously identified AD loci showing stronger association in APOE ε4+ subjects (CR1 and CLU) or APOE ε4− subjects (MS4A6A/MS4A4A/MS4A6E), no other SNPs were significantly associated with AD in a specific APOE genotype subgroup. In addition, the finding in the stage 1 sample that AD risk is significantly influenced by the interaction of APOE with rs1595014 in TMEM106B (P = 1.6 × 10−7) is noteworthy, because TMEM106B variants have previously been associated with risk of frontotemporal dementia. Expression quantitative trait locus analysis revealed that rs113986870, one of the GWS SNPs near rs2732703, is significantly associated with four KANSL1 probes that target transcription of the first translated exon and an untranslated exon in hippocampus (P ≤ 1.3 × 10−8), frontal cortex (P ≤ 1.3 × 10−9), and temporal cortex (P ≤ 1.2 × 10−11). Rs113986870 is also strongly associated with a MAPT probe that targets transcription of the alternatively spliced exon 3 in frontal cortex (P = 9.2 × 10−6) and temporal cortex (P = 2.6 × 10−6). Our APOE-stratified GWAS is the first to show GWS association for AD with SNPs in the chromosome 17q21.31 region. Replication of this finding in independent samples is needed to verify that SNPs in this region have significantly stronger effects on AD risk in persons lacking APOE ε4 compared with persons carrying this allele; if this holds, further examination of this region and studies aimed at deciphering the mechanism(s) are warranted.
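To illustrate the kind of stratified comparison performed here: an APOE-stratified analysis asks whether a SNP's effect on case/control status differs between ε4 carriers and non-carriers. The sketch below computes stratum-specific allelic odds ratios with Wald confidence intervals from hypothetical counts; all numbers are invented for illustration, and the study itself used GWAS regression models rather than simple 2×2 tables:

```python
from math import exp, log, sqrt

def odds_ratio(case_carrier, case_noncarrier, ctrl_carrier, ctrl_noncarrier):
    """Odds ratio from a 2x2 case/control vs. carrier/non-carrier table,
    with a 95% Wald confidence interval (Woolf's method)."""
    or_ = (case_carrier * ctrl_noncarrier) / (case_noncarrier * ctrl_carrier)
    se = sqrt(1 / case_carrier + 1 / case_noncarrier
              + 1 / ctrl_carrier + 1 / ctrl_noncarrier)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts for one SNP in each APOE stratum
or_e4neg, ci_neg = odds_ratio(300, 700, 200, 800)  # ε4− stratum
or_e4pos, ci_pos = odds_ratio(250, 750, 240, 760)  # ε4+ stratum

print(round(or_e4neg, 2), round(or_e4pos, 2))  # → 1.71 1.06
```

A marked difference between the stratum-specific odds ratios, as in this toy example, is the pattern a SNP × APOE interaction test formalizes.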