β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Abstract

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state-of-the-art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
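Concretely, β-VAE maximises the standard VAE evidence lower bound with the KL term weighted by β: L = E_{q(z|x)}[log p(x|z)] − β · KL(q(z|x) || p(z)). As an illustration, here is a minimal sketch of this objective in PyTorch (hypothetical function and variable names; it assumes a diagonal-Gaussian encoder, a unit-Gaussian prior, and a Bernoulli decoder over pixels, one common setup for image data — not the paper's exact implementation).

```python
# Minimal sketch of the beta-VAE objective (hypothetical names).
# Assumes: diagonal-Gaussian encoder q(z|x) with outputs (mu, logvar),
# unit-Gaussian prior p(z), Bernoulli decoder producing pixel logits.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon_logits, mu, logvar, beta=4.0):
    """Negative ELBO with a beta-weighted KL term.

    beta = 1 recovers the standard VAE objective; beta > 1 strengthens
    the capacity/independence constraint on the latent channel.
    The value beta=4.0 here is purely illustrative; the paper tunes
    beta per dataset.
    """
    batch_size = x.size(0)
    # Bernoulli reconstruction term, summed over pixels, averaged over the batch.
    recon = F.binary_cross_entropy_with_logits(
        x_recon_logits, x, reduction="sum") / batch_size
    # Analytic KL( q(z|x) || N(0, I) ) for a diagonal-Gaussian encoder.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / batch_size
    return recon + beta * kl
```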

Publication
5th International Conference on Learning Representations
  • Accepted as a Conference Track paper at ICLR 2017.
  • Paper and reviews are available on OpenReview.

A modification to the VAE framework that successfully learns disentangled factors of variation in a fully unsupervised manner.

Uses dSprites as one of its ground-truth datasets.

Loic Matthey
Staff Research Scientist in Machine Learning

Ex-neuroscientist working on Artificial General Intelligence at Google DeepMind. Unsupervised learning, structured generative models, concepts, and how to make AI actually generalize are what I do.