
Among linear DR methods, principal component analysis (PCA) is perhaps the most important one. In linear DR, the dissimilarity of two points in a data set is defined by the Euclidean distance between them, and correspondingly, their similarity is described by their inner product. Linear DR methods adopt the global neighborhood system: the neighbors of a point in the data set consist of all of the other points. Let the original data set be H = {x_1, …, x_n} ⊂ ℝ^D and let the DR data set of H be a d-dimensional set Y. Under the Euclidean measure, PCA finds a linear projection T: ℝ^D → ℝ^d so that the DR data Y = T(H) maximize the data energy. PCA is widely used in many applications. The present chapter is organized as follows. In Section 5.1, we give the description of PCA. In Section 5.2, we present PCA algorithms. Some real-world applications of PCA are introduced in Section 5.3.
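As a concrete illustration of the projection T described above, the following Python sketch computes PCA by eigendecomposition of the sample covariance matrix of the centered data; the function name pca_project, its interface, and the random example data are illustrative assumptions, not material from the chapter.

```python
import numpy as np

def pca_project(X, d):
    """Project n x D data X (one sample per row) onto the top-d principal directions."""
    # Center the data so the linear projection maximizes variance (data energy).
    X_centered = X - X.mean(axis=0)
    # Eigen-decompose the D x D sample covariance matrix.
    cov = X_centered.T @ X_centered / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the d leading eigenvectors.
    T = eigvecs[:, ::-1][:, :d]   # D x d projection matrix
    Y = X_centered @ T            # n x d reduced data, Y = T(H)
    return Y, T

# Example usage: reduce synthetic data from D = 10 to d = 2 dimensions.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))
    Y, T = pca_project(X, d=2)
    print(Y.shape)  # (100, 2)
```

Equivalently, the projection can be obtained from the singular value decomposition of the centered data matrix, which is often preferred numerically when D is large.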
