
Abstract
Reciprocal LASSO (rLASSO) regularization employs a decreasing penalty function, in contrast to conventional penalization approaches that place increasing penalties on the coefficients; this leads to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, based on the observation that the rLASSO estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters are assigned independent inverse Laplace priors. Bayesian inference from this posterior is possible using an expanded hierarchy motivated by a scale mixture of double Pareto or truncated normal distributions. On simulated and real datasets, we show that the Bayesian formulation outperforms its classical cousin in estimation, prediction, and variable selection across a wide range of scenarios, while offering the advantage of posterior inference. Finally, we discuss other variants of this new approach and provide a unified framework for variable selection using flexible reciprocal penalties. All methods described in this article are publicly available as an R package at https://github.com/himelmallick/BayesRecipe.
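The "decreasing penalty" and "inverse Laplace prior" described in the abstract can be sketched as follows (a hedged reconstruction from the standard rLASSO formulation; the tuning parameter λ > 0 and the indicator convention are assumptions, not stated in the abstract):

```latex
% rLASSO penalized least-squares estimate: the penalty lambda/|beta_j|
% *decreases* as |beta_j| grows, unlike the ordinary LASSO penalty lambda*|beta_j|.
\hat{\beta}_{\mathrm{rLASSO}}
  = \arg\min_{\beta}
    \left\{
      \lVert y - X\beta \rVert_2^2
      + \lambda \sum_{j=1}^{p} \frac{1}{\lvert \beta_j \rvert}\,
        \mathbb{1}\!\left(\beta_j \neq 0\right)
    \right\}

% Bayesian posterior-mode interpretation: independent inverse Laplace priors
\pi(\beta_j) \propto \exp\!\left(-\frac{\lambda}{\lvert \beta_j \rvert}\right),
\qquad j = 1, \dots, p,
% so that the posterior mode under a Gaussian likelihood coincides with
% the penalized estimate above.
```

Under this reading, maximizing the posterior density of β given Gaussian errors reproduces the penalized objective, which is what motivates the scale-mixture hierarchy mentioned in the abstract.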
FOS: Computer and information sciences, MCMC, Bayes Theorem, Machine Learning (stat.ML), Statistics - Computation, Applications of statistics to biology and medical sciences; meta analysis, Methodology (stat.ME), Statistics - Machine Learning, reciprocal Lasso, Bayesian regularization, Linear Models, Humans, penalized regression, nonlocal priors, Statistics - Methodology, Computation (stat.CO), variable selection
| Indicator | Description | Value |
| selected citations | Citations derived from selected sources | 12 |
| popularity | "Current" impact/attention of the article in the research community, based on the underlying citation network | Top 10% |
| influence | Overall/total impact of the article in the research community, based on the underlying citation network (diachronically) | Average |
| impulse | Initial momentum of the article directly after publication, based on the underlying citation network | Top 10% |
