
Consideration is given to minimum variance unbiased estimation when the choice of estimators is restricted to a finite-dimensional linear space. The discussion gives generalizations and minor extensions of known results in linear model theory, utilizing both the coordinate-free approach of Kruskal and the usual parametric representations. Included are (i) a restatement of a theorem on minimum variance unbiased estimation by Lehmann and Scheffé; (ii) a minor extension of a theorem by Zyskind on best linear unbiased estimation; (iii) a generalization of the covariance adjustment procedure described by Rao; (iv) a generalization of the normal equations; and (v) criteria for existence of minimum variance unbiased estimators by means of invariant subspaces. Illustrative examples are included.
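For orientation, the classical results that items (ii) and (iv) extend can be stated in the usual parametric representation $y = X\beta + e$, $E(e) = 0$; the notation and the particular forms below are standard background, not the paper's generalizations.

$$
X^{\top} X \hat{\beta} = X^{\top} y
\qquad \text{(ordinary normal equations, } \operatorname{Cov}(y) = \sigma^{2} I\text{)}
$$

$$
X^{\top} V^{-1} X \hat{\beta} = X^{\top} V^{-1} y
\qquad \text{(Aitken normal equations, } \operatorname{Cov}(y) = \sigma^{2} V,\ V \text{ known and nonsingular)}
$$

One standard statement of Zyskind's theorem: with $\operatorname{Cov}(y) = \sigma^{2} V$, the ordinary least squares estimator of every estimable function is best linear unbiased if and only if the column space of $X$ is an invariant subspace of $V$, that is, $V\,\mathcal{C}(X) \subseteq \mathcal{C}(X)$.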
Linear regression; mixed models; point estimation
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | 54 |
| Popularity | The "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | The overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Top 1% |
| Impulse | The initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
