Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/19208
Title: Homogeneous Vector Capsules Enable Adaptive Gradient Descent in Convolutional Neural Networks
Authors: Byerly, A
Kalganova, T
Keywords: Adaptive Gradient Descent;Capsule;Convolutional Neural Network (CNN);Homogeneous Vector Capsules (HVCs);ImageNet;Inception
Issue Date: 20-Sep-2019
Publisher: Cornell University
Citation: Byerly, A. and Kalganova, T. (2019) 'Homogeneous Vector Capsules Enable Adaptive Gradient Descent in Convolutional Neural Networks', arXiv:1906.08676 [cs.CV], pp. 1-9.
Abstract: Capsules are the name given by Geoffrey Hinton to vector-valued neurons. Whereas a traditional neural network produces a scalar value for each activated neuron, a capsule produces a vector of values, which Hinton argues corresponds to a single, composite feature wherein the components of the vector indicate properties of the feature such as transformation or contrast. We present a new way of parameterizing and training capsules that we refer to as homogeneous vector capsules (HVCs). We demonstrate experimentally that altering a convolutional neural network (CNN) to use HVCs can achieve superior classification accuracy without increasing the number of parameters or operations in its architecture as compared to a CNN using a single final fully connected layer. Additionally, the introduction of HVCs enables the use of adaptive gradient descent, reducing the dependence of a model's achievable accuracy on the finely tuned hyperparameters of a non-adaptive optimizer. We demonstrate our method and results using two neural network architectures. The first is a very simple monolithic CNN that, when using HVCs, achieved a 63% improvement in top-1 classification accuracy and a 35% improvement in top-5 classification accuracy over the baseline architecture. The second is the CNN architecture referred to as Inception v3, which achieved similar accuracies both with and without HVCs. Additionally, the simple monolithic CNN showed no overfitting after more than 300 epochs when using HVCs, whereas the baseline showed overfitting after 30 epochs. We use the ImageNet ILSVRC 2012 classification challenge dataset with both networks.
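The abstract's core idea, replacing a network's final fully connected layer with vector-valued capsules and scoring classes by vector magnitude, can be illustrated with a minimal sketch. This is an illustrative guess at one plausible parameterization, not the paper's exact HVC scheme: the capsule shapes, the per-class weighting, and the norm-based scoring here are all assumptions for demonstration.

```python
import numpy as np

def hvc_head(features, weights):
    """Sketch of a vector-capsule classification head (illustrative only).

    features: (num_capsules, dim) -- final convolutional features reshaped
              into vector-valued "capsules" instead of flat scalars.
    weights:  (num_classes, num_capsules, dim) -- per-class, per-capsule
              element-wise weights (an assumed parameterization; the paper's
              actual HVC formulation may differ).

    Returns one score per class: the L2 norm of that class's summed
    capsule vector, so the vector structure survives to the output.
    """
    # Broadcast weights over capsules; each capsule keeps its vector form.
    class_vectors = (weights * features[None, :, :]).sum(axis=1)  # (num_classes, dim)
    return np.linalg.norm(class_vectors, axis=1)                  # (num_classes,)

rng = np.random.default_rng(0)
caps = rng.normal(size=(8, 4))    # 8 hypothetical capsules of dimension 4
w = rng.normal(size=(10, 8, 4))   # 10 classes
scores = hvc_head(caps, w)
print(scores.shape)
```

Because the parameters stay in one homogeneous tensor rather than a mix of routing coefficients and transformation matrices, such a head trains straightforwardly with an adaptive optimizer such as Adam, which is the property the abstract highlights.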
URI: https://bura.brunel.ac.uk/handle/2438/19208
Appears in Collections:Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: Pre-print.pdf | Size: 789.54 kB | Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.