
Sliced inverse regression (SIR) is an innovative and effective method for dimension reduction and data visualization of high-dimensional problems. It replaces the original variables with low-dimensional linear combinations of predictors without any loss of regression information and without the need to prespecify a model or an error distribution. However, it suffers from the fact that each SIR component is a linear combination of all the original predictors; thus, it is often difficult to interpret the extracted components. By representing SIR as a regression-type optimization problem, we propose in this article a new method, called sparse SIR, that combines the shrinkage idea of the lasso with SIR to produce both accurate and sparse solutions. The efficacy of the proposed method is verified by simulation, and a real data example is given.
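To make the idea concrete, the following is a minimal sketch (not the authors' exact algorithm) of how SIR directions can be estimated by slicing the response and eigendecomposing the between-slice covariance, and how a lasso-type penalty can then sparsify the leading direction. The regression-of-the-fitted-index formulation and the coordinate-descent lasso below are illustrative assumptions; function names such as `sparse_sir` are hypothetical.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Basic SIR: leading eigenvectors of the between-slice
    covariance of the standardized predictors."""
    n, p = X.shape
    Z = (X - X.mean(0)) / X.std(0)
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(0)                      # slice mean of Z
        M += (len(idx) / n) * np.outer(m, m)    # weighted outer product
    vals, vecs = np.linalg.eigh(M)              # ascending eigenvalues
    return vecs[:, ::-1][:, :n_dirs]            # leading directions

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_sir(X, y, lam=0.1, n_slices=10, n_iter=200):
    """Hypothetical sparse-SIR sketch: treat the fitted SIR index
    Z @ b0 as a response and refit it with a lasso penalty via
    coordinate descent, zeroing out small loadings."""
    Z = (X - X.mean(0)) / X.std(0)
    b0 = sir_directions(X, y, n_slices, 1)[:, 0]
    target = Z @ b0
    n, p = Z.shape
    beta = np.zeros(p)
    col_ss = (Z ** 2).sum(0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = target - Z @ beta + Z[:, j] * beta[j]
            beta[j] = soft_threshold(Z[:, j] @ r / n, lam) / (col_ss[j] / n)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1]) ** 3 + 0.1 * rng.normal(size=300)
beta = sparse_sir(X, y, lam=0.1)
```

In this toy example only the first two predictors drive the response, so the sparse direction should load on them while the penalty shrinks the irrelevant coordinates toward zero, which is exactly the interpretability gain the abstract describes.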
