Title: Posit and floating-point based Izhikevich neuron: A Comparison of arithmetic
Authors: Fernandez-Hart, TJ
Knight, JC
Kalganova, T
Keywords: floating-point arithmetic;posit arithmetic;spiking neural network;Izhikevich neuron model
Issue Date: 31-May-2024
Publisher: Elsevier
Citation: Fernandez-Hart, T.J., Knight, J.C. and Kalganova, T. (2024) 'Posit and floating-point based Izhikevich neuron: A Comparison of arithmetic', Neurocomputing, 597, 127903, pp. 1-15. doi: 10.1016/j.neucom.2024.127903.
Abstract: Reduced-precision number formats have become increasingly popular in various fields of computational science, as they offer the potential to enhance energy efficiency, reduce silicon area, and improve processing speed. However, this often comes at the expense of introducing arithmetic errors that can impact the accuracy of a system. An optimal balance must therefore be struck: judiciously choosing a number format that uses as few bits as possible while minimising accuracy loss. In this study, we examine one such format, posit arithmetic, as a replacement for floating-point when conducting spiking neuron simulations, specifically using the Izhikevich neuron model. This model is capable of simulating complex neural firing behaviours, 20 of which were originally identified by Izhikevich and are used in this study. We compare the accuracy, spike count, and spike timing of the two arithmetic systems at different bit-depths against a 64-bit floating-point gold standard. Additionally, we test a rescaled set of Izhikevich equations to mitigate arithmetic errors by taking advantage of posit arithmetic's tapered accuracy. Our findings indicate that there is no difference in performance between 32-bit posit, 32-bit floating-point, and our 64-bit reference for all but one of the tested firing types. However, at 16 bits, both arithmetic systems diverge from the 64-bit reference, albeit in different ways. For example, 16-bit posit demonstrates an 18× improvement in accumulated spike timing error over a 1000 ms simulation compared to 16-bit floating-point when simulating regular (tonic) spiking. This finding is particularly important given the prevalence of this firing type in specific regions of the brain. Furthermore, when we rescale the neuron equations, this error is eliminated altogether. Although current posit arithmetic units are no smaller than floating-point units of the same bit-width, our results demonstrate that 64-bit floating-point can be replaced with 16-bit posit, which could enable significant area savings in future systems.
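For readers unfamiliar with the model under test, the following is a minimal sketch of the Izhikevich neuron in standard 64-bit floating-point Python. The model equations and reset rule are from Izhikevich (2003); the Euler timestep, input current, and the regular-spiking parameter set (a=0.02, b=0.2, c=-65, d=8) are illustrative assumptions, not the paper's exact experimental configuration, which sweeps these simulations across posit and floating-point formats at several bit-depths.

    # Minimal Euler-integrated Izhikevich neuron (Izhikevich, 2003):
    #   dv/dt = 0.04*v**2 + 5*v + 140 - u + I
    #   du/dt = a*(b*v - u)
    #   if v >= 30 mV: v <- c, u <- u + d
    # Parameters give regular (tonic) spiking; the timestep, input
    # current, and initial conditions here are illustrative assumptions.

    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
    I = 10.0                              # constant input current (assumed)
    dt = 0.25                             # Euler timestep in ms (assumed)
    t_end = 1000.0                        # 1000 ms run, as in the abstract

    v, u = c, b * c                       # typical initial conditions
    spike_times = []

    t = 0.0
    while t < t_end:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                     # spike: record time, then reset
            spike_times.append(t)
            v, u = c, u + d
        t += dt

    print(f"{len(spike_times)} spikes; first few times (ms): {spike_times[:5]}")

Running the same loop with every float swapped for a 16- or 32-bit posit or floating-point value, then comparing spike counts and times against the 64-bit result above, reproduces the kind of comparison the paper describes.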
Description: Data availability: No data was used for the research described in the article.
URI: https://bura.brunel.ac.uk/handle/2438/27481
DOI: https://doi.org/10.1016/j.neucom.2024.127903
ISSN: 0925-2312
Other Identifiers: ORCiD: Tim J. Fernandez-Hart https://orcid.org/0000-0002-8515-0002
ORCiD: James C. Knight https://orcid.org/0000-0003-0577-0074
ORCiD: Tatiana Kalganova https://orcid.org/0000-0003-4859-7152
Article number: 127903
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2024 The Author(s). Published by Elsevier B.V. This is an open access article under a Creative Commons license (https://creativecommons.org/licenses/by/4.0/).
Size: 1.59 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.