
doi: 10.1007/11499145_53
A number of manifold learning algorithms have recently been proposed, including locally linear embedding (LLE). These algorithms do not merely reduce data dimensionality; they also attempt to discover the true low-dimensional structure of the data. A common feature of most of these algorithms is that they operate in a batch, or offline, mode. Hence, when new data arrive, one needs to rerun them on the old data augmented by the new data. A solution to this problem is to make the algorithm online, or incremental, so that sequentially arriving data do not trigger time-consuming recalculations. In this paper, we propose an incremental version of LLE and experimentally demonstrate its advantages in terms of topology preservation. Moreover, compared to the original (batch) LLE, incremental LLE needs to solve a much smaller optimization problem.
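To make concrete what the abstract contrasts against, the following is a minimal NumPy sketch of the batch LLE baseline (Roweis & Saul's algorithm): reconstruction weights are fit from each point's neighbours, and the embedding comes from the bottom eigenvectors of (I − W)ᵀ(I − W). The incremental variant proposed in the paper is not reproduced here; the function name `lle` and parameters such as `reg` are illustrative choices, not the authors' code.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal batch LLE sketch (not the paper's incremental variant)."""
    n = X.shape[0]
    # Pairwise squared distances -> k nearest neighbours (excluding self).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    # Step 1: reconstruction weights, one small linear system per point.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                 # neighbours centred on x_i
        C = Z @ Z.T                           # local covariance
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularisation
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()           # weights sum to 1
    # Step 2: embedding from the bottom eigenvectors of M = (I-W)^T (I-W).
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)            # eigenvalues in ascending order
    return vecs[:, 1:n_components + 1]        # drop the constant eigenvector

# Toy run on a noisy 1-D curve embedded in 3-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 200))
X = np.c_[np.cos(t), np.sin(t), 0.1 * rng.standard_normal(200)]
Y = lle(X)
print(Y.shape)  # (200, 2)
```

The point of contrast in the abstract: in batch mode this whole pipeline, including the n-by-n eigenproblem in step 2, must be rerun from scratch every time a new point arrives, which is what the incremental version avoids.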
