
Big Data is one of the major challenges of statistical science, with numerous consequences from algorithmic and theoretical viewpoints. Big Data always involves massive amounts of data, but it also often includes online data and data heterogeneity. Recently, some statistical methods have been adapted to process Big Data, including linear regression models, clustering methods and bootstrapping schemes. Based on decision trees combined with aggregation and bootstrap ideas, random forests were introduced by Breiman in 2001. They are a powerful nonparametric statistical method that handles, within a single and versatile framework, regression problems as well as two-class and multi-class classification problems. Focusing on classification problems, this paper proposes a selective review of available proposals for scaling random forests to Big Data. These proposals rely either on parallel environments or on online adaptations of random forests. We also describe how related quantities, such as the out-of-bag error and variable importance, are addressed in these methods. We then formulate various remarks on random forests in the Big Data context. Finally, we experiment with five variants on two massive datasets (15 and 120 million observations), one simulated and one from real-world data. One variant relies on subsampling, while three others are parallel implementations of random forests involving either adaptations of the bootstrap to Big Data or "divide-and-conquer" approaches. The fifth variant relies on online learning of random forests. These numerical experiments highlight the relative performance of the different variants, as well as some of their limitations.
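To make the "divide-and-conquer" variant mentioned above concrete, here is a minimal sketch in R (not the authors' implementation): grow one sub-forest on each disjoint chunk of the data, in parallel, then merge the sub-forests into a single ensemble. The toy data, the number of chunks and the per-chunk tree counts are illustrative assumptions.

```r
# Divide-and-conquer random forest sketch: one sub-forest per data chunk,
# merged with randomForest::combine. Toy data and parameters are assumptions.
library(randomForest)
library(parallel)

set.seed(1)
n <- 10000; p <- 10
x <- as.data.frame(matrix(rnorm(n * p), n, p))
y <- factor(x[[1]] + 0.5 * rnorm(n) > 0)   # two-class toy response

# Split the observations into 4 disjoint chunks
chunks <- split(seq_len(n), rep(1:4, length.out = n))

# Grow a 25-tree sub-forest on each chunk
# (fork-based parallelism, so mc.cores > 1 needs a Unix-like OS)
sub_forests <- mclapply(chunks, function(idx) {
  randomForest(x[idx, ], y[idx], ntree = 25)
}, mc.cores = 4)

# Concatenate the trees of the 4 sub-forests into one 100-tree forest
big_forest <- do.call(randomForest::combine, unname(sub_forests))
big_forest$ntree  # 100
```

Predictions from the merged forest are obtained as usual with `predict(big_forest, newdata)`. Note that `combine()` sets the out-of-bag error components of the merged object to `NULL`, which echoes the abstract's point that quantities such as out-of-bag error and variable importance require specific treatment in these distributed variants.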
Keywords: Random forests, Big Data, Parallel computing, Bag of Little Bootstraps, On-line learning, Machine learning, Statistics, R
