
doi: 10.1049/cit2.12367
Abstract Network embedding (NE) aims to learn the latent properties of complex networks in a low‐dimensional feature space. However, existing deep learning‐based NE methods are time‐consuming because they must train dense deep neural network architectures with a large number of unknown weight parameters. A sparse deep autoencoder for dynamic NE (called SPDNE) is proposed, which aims to learn network structures while preserving node evolution at low computational cost. SPDNE replaces the fully connected architecture of the deep autoencoder with an optimal sparse architecture while maintaining the performance of the underlying dynamic NE models. An adaptive simulated annealing algorithm is then proposed to find the optimal sparse architecture for the deep autoencoder. The performance of SPDNE, integrated with three dynamic NE models (i.e. a sparse architecture‐based deep autoencoder method, DynGEM, and ElvDNE), is evaluated on three well‐known benchmark networks and five real‐world networks. The experimental results demonstrate that SPDNE reduces about 70% of the weight parameters of the deep autoencoder architecture during training while preserving the performance of these dynamic NE models. The results also show that SPDNE achieves the highest accuracy on 72 out of 96 edge‐prediction and network‐reconstruction tasks compared with state‐of‐the‐art dynamic NE algorithms.
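The core idea of replacing a fully connected autoencoder with a sparse one can be illustrated with masked weight matrices: each layer keeps only a fixed fraction of its connections, so most weight parameters never need to be trained. The sketch below is a minimal illustration only, not the paper's implementation; the layer sizes, the sigmoid activation, and the 30% connection density (matching the roughly 70% parameter reduction reported for SPDNE) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_layer(n_in, n_out, density=0.3):
    """Build a weight matrix with a fixed random sparse mask.

    Only a `density` fraction of connections is kept, so roughly
    70% of the weight parameters are removed up front.
    """
    mask = rng.random((n_in, n_out)) < density
    weights = rng.standard_normal((n_in, n_out)) * 0.1
    return weights * mask, mask

def forward(x, layers):
    """Encode the input by applying each masked layer with a sigmoid."""
    for w, _ in layers:
        x = 1.0 / (1.0 + np.exp(-(x @ w)))
    return x

# Encoder mapping an n-node adjacency row down to a d-dimensional embedding.
n_nodes, hidden, dim = 100, 32, 8
encoder = [sparse_layer(n_nodes, hidden), sparse_layer(hidden, dim)]

adj_row = (rng.random(n_nodes) < 0.05).astype(float)  # toy adjacency row
embedding = forward(adj_row, encoder)

# Compare parameter counts of the dense vs. masked architecture.
dense_params = n_nodes * hidden + hidden * dim
sparse_params = sum(int(m.sum()) for _, m in encoder)
print(embedding.shape, 1 - sparse_params / dense_params)
```

During training, only the unmasked weights would be updated; the paper's adaptive search would then decide which connections to keep, rather than fixing the mask at random as done here.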
Keywords: network embedding, dynamic networks, low‐dimensional feature space, deep autoencoder, sparse structure
