
We propose a notion of a central mean dimension reduction subspace for a time series {x_t} that does not require specification of a model but instead seeks a p×d matrix Φ_d, d ≤ p, such that the d×1 vector Φ_d^T X_{t−1}, where X_{t−1} = (x_{t−1}, …, x_{t−p})^T for some p ≥ 1, contains all the information about x_t that is available from E(x_t | X_{t−1}). For known p and d, we estimate the central mean subspace via the Nadaraya–Watson kernel smoother and establish the strong consistency of our estimator. When either d or p is unknown, we propose estimating it with a modified Schwarz Bayesian criterion. Finally, we examine the performance of all the estimators extensively through a variety of simulations and provide a new analysis of the well-known Canadian lynx data. Supplemental materials for this article are available online.
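To illustrate the type of computation involved, the following is a minimal Python sketch of a Nadaraya–Watson kernel estimate of E(x_t | Φ_d^T X_{t−1}) for a single candidate direction Φ_d. The function names, the Gaussian product kernel, the fixed bandwidth, the in-sample fit criterion, and the synthetic nonlinear AR(2) series are all illustrative assumptions; this is not the paper's actual estimator of the central mean subspace or its selection criterion for d and p.

```python
import numpy as np

def lag_matrix(x, p):
    """Stack lag vectors X_{t-1} = (x_{t-1}, ..., x_{t-p})^T as rows, paired with x_t."""
    n = len(x)
    X = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
    y = x[p:]
    return X, y

def nw_estimate(z_train, y_train, z_eval, h):
    """Nadaraya-Watson estimate of E(y | z) at z_eval using a Gaussian product kernel."""
    diff = (z_eval[:, None, :] - z_train[None, :, :]) / h      # (n_eval, n_train, d)
    w = np.exp(-0.5 * (diff ** 2).sum(axis=2))                 # kernel weights
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def projected_fit_error(x, p, Phi, h):
    """Mean squared error of the kernel smoother applied to the projected lags Phi^T X_{t-1}."""
    X, y = lag_matrix(x, p)
    Z = X @ Phi                                                # d-dimensional reduced predictors
    y_hat = nw_estimate(Z, y, Z, h)
    return np.mean((y - y_hat) ** 2)

# Illustrative usage on a synthetic nonlinear AR(2) series (hypothetical example data).
rng = np.random.default_rng(0)
n, p, d = 500, 2, 1
x = np.zeros(n)
for t in range(2, n):
    x[t] = (0.6 * x[t - 1]
            - 0.3 * x[t - 2] ** 2 / (1 + x[t - 2] ** 2)
            + 0.1 * rng.standard_normal())

Phi = rng.standard_normal((p, d))
Phi /= np.linalg.norm(Phi)                                     # candidate direction, not estimated here
print("in-sample kernel fit error:", projected_fit_error(x, p, Phi, h=0.3))
```

In a sketch like this, one would search over directions Phi (and, when unknown, over d and p) by optimizing a criterion of this kind; the bandwidth h and the in-sample error are stand-ins for whatever smoothing parameters and objective the paper's procedure actually uses.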
