
The variational autoencoder (VAE) is a popular latent variable model for data generation. In natural language applications, however, VAE training suffers from posterior collapse, where the approximate posterior degenerates to the standard Gaussian prior and discards the latent semantics of the sequence data; the recurrent decoder then generates duplicate or noninformative sequences. To tackle this issue, this paper adopts a Gaussian mixture prior for the latent variable and simultaneously applies amortized regularization in the encoder and a skip connection in the decoder. The noise-robust prior, learned through the amortized encoder, becomes semantically meaningful, and the skip connection makes the prediction of sequence samples contextually precise at each time step. The resulting amortized mixture prior (AMP) is formulated in the construction of a variational recurrent autoencoder (VRAE) for sequence generation. Experiments on different tasks show that AMP-VRAE avoids posterior collapse, learns meaningful latent features, and improves inference and generation for semantic representation.
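To make the architecture concrete, below is a minimal sketch in PyTorch of a recurrent VAE with a learnable Gaussian mixture prior and a latent-to-decoder skip connection, the two ingredients the abstract names. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; all names (`AMPVRAE`, `n_components`, etc.) are hypothetical, and the KL term is estimated by Monte Carlo since the mixture KL has no closed form.

```python
# Illustrative sketch (not the paper's code): recurrent VAE with a learnable
# Gaussian mixture prior and a skip connection feeding z into the decoder
# at every time step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMPVRAE(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 latent_dim=32, n_components=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Learnable mixture-of-Gaussians prior parameters.
        self.prior_logits = nn.Parameter(torch.zeros(n_components))
        self.prior_mu = nn.Parameter(torch.randn(n_components, latent_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_components, latent_dim))
        # Skip connection: the decoder sees z concatenated with the token
        # embedding at every step, so it cannot ignore the latent code.
        self.decoder = nn.GRU(embed_dim + latent_dim, hidden_dim,
                              batch_first=True)
        self.out = nn.Linear(hidden_dim + latent_dim, vocab_size)

    def mixture_log_prob(self, z):
        # log p(z) under the mixture via log-sum-exp over components.
        z = z.unsqueeze(1)                                  # (B, 1, D)
        var = self.prior_logvar.exp()                       # (K, D)
        log_2pi = torch.log(torch.tensor(2 * torch.pi))
        log_comp = -0.5 * (self.prior_logvar
                           + (z - self.prior_mu) ** 2 / var
                           + log_2pi).sum(-1)               # (B, K)
        log_w = F.log_softmax(self.prior_logits, dim=0)     # (K,)
        return torch.logsumexp(log_w + log_comp, dim=1)     # (B,)

    def forward(self, tokens):
        x = self.embed(tokens)                              # (B, T, E)
        _, h = self.encoder(x)                              # (1, B, H)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + logvar.mul(0.5).exp() * torch.randn_like(mu)
        # Monte Carlo KL estimate: E_q[log q(z|x) - log p(z)].
        log_2pi = torch.log(torch.tensor(2 * torch.pi))
        log_q = (-0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                         + log_2pi)).sum(-1)
        kl = (log_q - self.mixture_log_prob(z)).mean()
        # Broadcast z along time and inject it at input and output.
        z_seq = z.unsqueeze(1).expand(-1, x.size(1), -1)
        dec, _ = self.decoder(torch.cat([x, z_seq], dim=-1))
        logits = self.out(torch.cat([dec, z_seq], dim=-1))
        return logits, kl
```

In training, `logits` would be scored against shifted target tokens with cross-entropy and `kl` weighted by an annealing schedule; the mixture prior and the per-step skip connection are what counteract the posterior collapse described above.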
