Publication · Conference object · Preprint · 2018

Feature Learning and Classification in Neuroimaging: Predicting Cognitive Impairment from Magnetic Resonance Imaging

Shi, Shan; Nathoo, Farouk
Open Access
  • Published: 17 Jun 2018
  • Publisher: IEEE
Abstract
Due to the rapid innovation of technology and the desire to find and employ biomarkers for neurodegenerative disease, high-dimensional data classification problems are routinely encountered in neuroimaging studies. To avoid over-fitting and to explore relationships between disease and potential biomarkers, feature learning and selection play an important role in classifier construction and are an active area of machine learning research. In this article, we review several important feature learning and selection techniques, including lasso-based methods, PCA, the two-sample t-test, and stacked auto-encoders. We compare these approaches using a numerical study involving...
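As a minimal illustration (not the authors' code), two of the techniques named in the abstract can be sketched on synthetic high-dimensional data: lasso-based selection via an L1-penalized logistic regression, and a two-sample t-test filter that ranks features by group difference. All data and parameter choices below are assumptions for demonstration only.

```python
# Sketch of two feature-selection approaches from the abstract:
# (1) lasso-based selection, (2) a two-sample t-test filter.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, k = 500, 1000, 10              # samples, features, truly informative features
X = rng.standard_normal((n, p))
# Binary label driven by the first k features plus noise (synthetic "disease" status).
y = (X[:, :k].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Lasso-based selection: the L1 penalty drives most coefficients exactly
# to zero; the features with nonzero coefficients are the selected set.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
lasso_selected = np.flatnonzero(lasso.coef_[0])

# Two-sample t-test filter: compute a t-statistic per feature between the
# two groups and keep the k features with the largest absolute statistic.
t_stats, _ = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
ttest_selected = np.argsort(-np.abs(t_stats))[:k]

print("lasso kept", lasso_selected.size, "features")
print("t-test top-k:", sorted(ttest_selected.tolist()))
```

Both methods should recover most of the first k informative features here; the lasso performs selection jointly through the fitted model, while the t-test ranks each feature marginally.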
Subjects
free text keywords: Data classification, Machine learning, Feature learning, Dimensionality reduction, Neuroimaging, Feature extraction, Lasso (statistics), Artificial intelligence, Magnetic resonance imaging, Classifier, Computer science, Statistics - Machine Learning, Computer Science - Learning
Funded by
  • NSERC
      Funder: Natural Sciences and Engineering Research Council of Canada (NSERC)
  • NIH | Alzheimer's Disease Neuroimaging Initiative
      Funder: National Institutes of Health (NIH)
      Project Code: 1U01AG024904-01
      Funding stream: National Institute on Aging
Communities
Neuroinformatics
35 references, page 1 of 3

[1] An L, Adeli E, Liu M, Zhang J, Lee SW, Shen D. A Hierarchical Feature and Sample Selection Framework and Its Application for Alzheimer's Disease Diagnosis. Scientific reports. 2017 Mar 30;7:45269.

[2] Bair E, Hastie T, Paul D, Tibshirani R. Prediction by supervised principal components. Journal of the American Statistical Association. 2006 Mar 1;101(473):119-37.

[3] Barshan E, Ghodsi A, Azimifar Z, Jahromi MZ. Supervised principal component analysis: Visualization, classification and regression on subspaces and submanifolds. Pattern Recognition. 2011 Jul 1;44(7):1357-71.

[4] Bengio Y, Lamblin P, Popovici D, Larochelle H. Greedy layer-wise training of deep networks. In Advances in neural information processing systems 2007 (pp. 153-160).

[5] Bengio Y, Courville A, Vincent P. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence. 2013 Aug;35(8):1798-828.

[6] Boureau YL, LeCun Y. Sparse feature learning for deep belief networks. In Advances in neural information processing systems 2008 (pp. 1185-1192).

[7] Chandrashekar G, Sahin F. A survey on feature selection methods. Computers & Electrical Engineering. 2014 Jan 1;40(1):16-28.

[8] Dash M, Liu H. Feature selection for classification. Intelligent data analysis. 1997 Jan 1;1(3):131-56.

[9] Fan J, Li R. Statistical challenges with high dimensionality: Feature selection in knowledge discovery. arXiv preprint math/0602133. 2006 Feb 7.

[10] Fan J, Fan Y. High dimensional classification using features annealed independence rules. Annals of statistics. 2008;36(6):2605.

[11] Fan J, Samworth R, Wu Y. Ultrahigh dimensional feature selection: beyond the linear model. Journal of machine learning research. 2009;10(Sep):2013-38.

[12] Fan J, Lv J. A selective overview of variable selection in high dimensional feature space. Statistica Sinica. 2010 Jan;20(1):101.

[13] Fan J, Fan Y, Wu Y. High-dimensional classification. In High-dimensional data analysis 2011 (pp. 3-37).

[14] Fan J. Features of big data and sparsest solution in high confidence set. Past, present, and future of statistical science. 2013:507-23.

[15] Fan J, Liao Y. Endogeneity in high dimensions. Annals of statistics. 2014 Jun 1;42(3):872.
