
Recent years have witnessed unprecedented success in single-image synthesis by means of convolutional neural networks (CNNs). High-level facial image synthesis, such as expression translation and attribute swapping, remains a challenging task due to its high non-linearity. Previous methods are limited in that they either cannot transfer multiple facial attributes simultaneously or cannot transfer one attribute to another in a continuously varying manner. To address these problems, we propose a two-discriminator adversarial autoencoder network (TAAN). The latent discriminator is trained to disentangle the input image's representation from its original facial attributes, while the pixel discriminator is trained to make the output image match the target facial attributes. By controlling the attribute values, we can choose which attributes are perceivable in the generated image and to what degree. Quantitative and qualitative evaluations are conducted on the CelebA and KDEF datasets, and comparison with state-of-the-art methods demonstrates the competitiveness of our proposed TAAN.
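
To make the two-discriminator layout concrete, below is a minimal PyTorch sketch of the design described above. All layer sizes, the 64x64 resolution, the attribute count `N_ATTRS`, the module names, and the loss weights are illustrative assumptions rather than the paper's exact configuration; the discriminators' own update steps are only indicated in a comment.

```python
# Minimal sketch of a two-discriminator adversarial autoencoder.
# All dimensions, module names, and loss weights below are assumptions
# for illustration, not the paper's reported architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM, N_ATTRS = 128, 5  # assumed latent size and attribute count

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(), nn.Linear(128 * 16 * 16, Z_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Decodes the latent code concatenated with target attribute values."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Z_DIM + N_ATTRS, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())    # 32 -> 64
    def forward(self, z, a):
        h = self.fc(torch.cat([z, a], dim=1)).view(-1, 128, 16, 16)
        return self.net(h)

class LatentDisc(nn.Module):
    """Tries to recover the original attributes from z; the encoder is
    trained adversarially so z carries no attribute information."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM, 256), nn.ReLU(), nn.Linear(256, N_ATTRS))
    def forward(self, z):
        return self.net(z)

class PixelDisc(nn.Module):
    """Judges whether a generated image exhibits the target attributes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.Linear(128 * 16 * 16, N_ATTRS))
    def forward(self, x):
        return self.net(x)

enc, dec, d_lat, d_pix = Encoder(), Decoder(), LatentDisc(), PixelDisc()

x = torch.randn(4, 3, 64, 64)   # batch of input faces
a_src = torch.rand(4, N_ATTRS)  # original attribute values in [0, 1]
a_tgt = torch.rand(4, N_ATTRS)  # user-chosen target attribute values

z = enc(x)
x_rec = dec(z, a_src)           # reconstruction under source attributes
x_gen = dec(z, a_tgt)           # translation under target attributes

# Encoder/decoder objective. The discriminators would be updated in
# separate steps: d_lat learns to predict a_src from z, and d_pix learns
# to predict the true attributes of real images.
loss_rec = F.l1_loss(x_rec, x)               # preserve identity/content
loss_lat = -F.mse_loss(d_lat(z), a_src)      # fool the latent discriminator
loss_pix = F.mse_loss(d_pix(x_gen), a_tgt)   # match the target attributes
loss = loss_rec + 0.1 * loss_lat + 0.1 * loss_pix  # assumed weights
```

The design choice mirrored here is the division of labor between the two discriminators: the latent discriminator strips attribute information out of the code z, forcing the decoder to rely on the supplied attribute vector, while the pixel discriminator checks that the decoded image actually exhibits the requested attribute values. This is what allows an attribute's strength to be dialed continuously at generation time.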
