Disentanglement using VAEs resembles distance learning and requires overlapping data

Date
2022
Abstract
Learning disentangled representations with variational autoencoders (VAEs) is often attributed to the regularisation component of the loss. In this work, we highlight the interaction between the data and the reconstruction term of the loss as the main contributor to disentanglement in VAEs. We note that standardised benchmark datasets are constructed in a way that is conducive to learning what appear to be disentangled representations. We design an intuitive adversarial dataset that exploits this mechanism to break existing state-of-the-art disentanglement frameworks. We provide solutions in the form of a modified reconstruction loss, suggesting that VAEs are distance learners, and we also show that these loss functions can be learnt. From this idea, we introduce new scores that measure whether disentangled representations have been discovered using distances. We then solve these scores by introducing a supervised metric learning framework that encourages disentanglement. Finally, we present various considerations for disentanglement research based on the subjective nature of disentanglement itself and on the results from our work, which suggest that VAE disentanglement is largely accidental.
Description
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science to the Faculty of Science, University of the Witwatersrand, Johannesburg, 2022
Keywords
Disentangled representation learning, Variational autoencoders (VAEs), Distance learning