
handle: 2123/29858
Artificial intelligence systems seek to learn better representations. One of the most desirable properties of such representations is disentanglement. Disentangled representations offer interpretability and generalizability: through them, the world around us can be decomposed into explanatory factors of variation and thus be more easily understood, not only by machines but also by humans. Disentanglement is akin to reverse engineering a video game, where, by exploring its open world, we need to figure out which underlying controllable factors actually render/generate its dynamics. This thesis mainly discusses how such "reverse engineering" can be achieved using deep learning techniques in the computer vision domain. Although there have been plenty of works tackling this challenging problem, this thesis shows that an important ingredient, highly effective yet largely neglected by existing works, is the modeling of visual variation. We show from various perspectives that by integrating the modeling of visual variation into generative models, we can achieve unsupervised disentanglement performance beyond what has previously been reported. Specifically, this thesis covers novel methods built on technical insights such as variation consistency, variation predictability, perceptual simplicity, spatial constriction, Lie group decomposition, and the contrastive nature of semantic changes. Besides the proposed methods, this thesis also touches on topics such as variational autoencoders, generative adversarial networks, latent space examination, unsupervised disentanglement metrics, and neural network architectures. We hope the observations, analysis, and methods presented in this thesis can inspire and contribute to future work in disentanglement learning and related machine learning fields.
keywords: generative models, disentanglement learning, variation consistency, Lie Group VAE, 006, interpretable representation, disentangled representation
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
