Full metadata record
DC Field                  Value                           Language
dc.contributor.author     Liu, S                          -
dc.contributor.author     Liu, J                          -
dc.contributor.author     Zhao, Q                         -
dc.contributor.author     Cao, X                          -
dc.contributor.author     Li, H                           -
dc.contributor.author     Meng, H                         -
dc.contributor.author     Liu, S                          -
dc.contributor.author     Meng, D                         -
dc.identifier.citation    CoRR, 2018, abs/1809.01804      en_US
dc.description.abstract   In the field of machine learning, it remains a critical issue to identify and supervise the learned representation without manual intervention or intuition assistance, so as to extract useful knowledge or to serve later tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder (VAE). The VAE is proposed to learn independent low-dimensional representations, yet it faces the problem that pre-set factors are sometimes ignored. We argue that the mutual information between the input and each learned factor of the representation serves as a necessary indicator. We find that the VAE objective tends to induce mutual-information sparsity in the factor dimension when it exceeds the data's intrinsic dimension, resulting in some non-influential factors whose contribution to data reconstruction can be ignored. We show that mutual information also influences the lower bound of the VAE's reconstruction error and the subsequent classification task. To make this indicator applicable, we design an algorithm for calculating the mutual information for the VAE and prove its consistency. Experimental results on the MNIST, CelebA and DEAP datasets show that mutual information can help determine influential factors, some of which are interpretable and can be used for further generation and classification tasks, and can help discover the variant that connects with emotion in the DEAP dataset.   en_US
dc.title                  Discovering Influential Factors in Variational Autoencoder      en_US
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File           Size      Format
Fulltext.pdf   8.26 MB   Adobe PDF

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.