Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/17373
Title: Discovering Influential Factors in Variational Autoencoder.
Authors: Liu, S
Liu, J
Zhao, Q
Cao, X
Li, H
Meng, H
Liu, S
Meng, D
Issue Date: 2018
Citation: CoRR, 2018, abs/1809.01804
Abstract: In the field of machine learning, it remains a critical issue to identify and supervise learned representations, without manual intervention or the assistance of intuition, in order to extract useful knowledge or serve later machine learning tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder (VAE). The VAE is proposed to learn independent low-dimensional representations, yet it faces the problem that pre-set factors are sometimes ignored. We argue that the mutual information between the input and each learned factor of the representation serves as a necessary indicator. We find that the VAE objective tends to induce mutual-information sparsity across factor dimensions beyond the intrinsic dimension of the data, resulting in some non-influential factors whose contribution to data reconstruction can be ignored. We show that mutual information also influences the lower bound of the VAE's reconstruction error and the subsequent classification task. To make this indicator applicable, we design an algorithm for calculating the mutual information for the VAE and prove its consistency. Experimental results on the MNIST, CelebA and DEAP datasets show that mutual information can help determine influential factors, some of which are interpretable and can be used for further generation and classification tasks, and can help discover the variate that connects with emotion on the DEAP dataset.
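The abstract's central indicator, the mutual information between the input and each learned factor, can be illustrated with a small sketch. The paper's own consistent estimator is not reproduced here; instead, this is a minimal Monte Carlo version based on the standard identity I(x; z_j) = E_x[KL(q(z_j|x) || q(z_j))], with the aggregate posterior q(z_j) approximated by a minibatch mixture of the Gaussian encoder distributions. The function name `mi_per_dimension` and the synthetic encoder outputs are illustrative assumptions, not code from the paper.

```python
import numpy as np

def gaussian_logpdf(z, mu, sigma):
    """Log density of N(mu, sigma^2) evaluated at z (broadcasts)."""
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((z - mu) / sigma) ** 2

def mi_per_dimension(mu, sigma, rng=None):
    """Monte Carlo estimate of I(x; z_j) = E_x[KL(q(z_j|x) || q(z_j))] for each
    latent dimension j of a Gaussian-encoder VAE.

    mu, sigma: (N, D) arrays of encoder means and std devs for a batch of N inputs.
    q(z_j) is approximated by the minibatch mixture (1/N) * sum_i q(z_j | x_i),
    so the estimate is capped near log(N) nats and biased low for large true MI.
    Returns an array of shape (D,) in nats.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N, D = mu.shape
    z = mu + sigma * rng.standard_normal((N, D))       # one sample z_i ~ q(z|x_i)
    log_q_cond = gaussian_logpdf(z, mu, sigma)         # log q(z_ij | x_i), (N, D)
    # log q(z_ij) via log-sum-exp over all N mixture components, per dimension
    comp = gaussian_logpdf(z[:, None, :], mu[None, :, :], sigma[None, :, :])  # (N, N, D)
    m = comp.max(axis=1, keepdims=True)
    log_q_marg = m.squeeze(1) + np.log(np.exp(comp - m).sum(axis=1)) - np.log(N)
    return (log_q_cond - log_q_marg).mean(axis=0)

# Toy check of the abstract's claim: an informative dimension (mean tracks the
# input) scores high MI; a collapsed, non-influential dimension (posterior
# equals the prior for every input) scores ~0 nats.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
mu = np.stack([x, np.zeros(200)], axis=1)              # dim 0 informative, dim 1 collapsed
sigma = np.stack([0.1 * np.ones(200), np.ones(200)], axis=1)
mi = mi_per_dimension(mu, sigma, rng)
print(mi)  # dim 0 well above zero, dim 1 essentially zero
```

Under this estimator, a factor whose posterior collapses to the prior contributes exactly zero estimated mutual information, which is the signature of the "non-influential factors" the abstract describes.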
URI: http://bura.brunel.ac.uk/handle/2438/17373
Appears in Collections:Dept of Electronic and Computer Engineering Research Papers

Files in This Item:
File: Fulltext.pdf (8.26 MB, Adobe PDF)


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.