|Title:||Discovering Influential Factors in Variational Autoencoder.|
|Citation:||CoRR, 2018, abs/1809.01804|
|Abstract:||In the field of machine learning, it remains a critical issue to identify and supervise the learned representation, without manual intervention or the assistance of intuition, in order to extract useful knowledge or serve later tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder (VAE). The VAE is proposed to learn independent low-dimensional representations, but faces the problem that pre-set factors are sometimes ignored. We argue that the mutual information between the input and each learned factor of the representation serves as a necessary indicator. We find that the VAE objective tends to induce mutual-information sparsity across factor dimensions beyond the data's intrinsic dimension, resulting in some non-influential factors whose contribution to data reconstruction can be ignored. We show that mutual information also influences the lower bound of the VAE's reconstruction error and the later classification task. To make this indicator applicable, we design an algorithm for calculating the mutual information for the VAE and prove its consistency. Experimental results on the MNIST, CelebA and DEAP datasets show that mutual information can help determine influential factors, some of which are interpretable and can be used for further generation and classification tasks, and can help discover the variant that connects with emotion on the DEAP dataset.|
|Appears in Collections:||Dept of Electronic and Computer Engineering Research Papers|
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.