Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27049
Full metadata record
DC Field | Value | Language
dc.contributor.author | Liang, Y | -
dc.contributor.author | Li, M | -
dc.contributor.author | Jiang, C | -
dc.date.accessioned | 2023-08-24T15:44:43Z | -
dc.date.available | 2023-08-24T15:44:43Z | -
dc.date.issued | 2021-11-27 | -
dc.identifier | ORCID iD: Maozhen Li https://orcid.org/0000-0002-0820-5487 | -
dc.identifier.citation | Liang, Y., Li, M. and Jiang, C. (2022) 'Generating self-attention activation maps for visual interpretations of convolutional neural networks', Neurocomputing, 490, pp. 206 - 216. doi: 10.1016/j.neucom.2021.11.084. | en_US
dc.identifier.issn | 0925-2312 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27049 | -
dc.description.abstract | In recent years, many interpretable methods based on class activation maps (CAMs) have served as an important judging basis for the predictions of convolutional neural networks (CNNs). However, these methods still suffer from the problems of gradient noise, weight distortion, and perturbation deviation. In this work, we present the self-attention class activation map (SA-CAM) and shed light on how it uses the self-attention mechanism to refine existing CAM methods. In addition to generating basic activation feature maps, SA-CAM adds an attention skip connection as a regularization term for each feature map, which further refines the focus area of the underlying CNN model. By introducing an attention branch and constructing a new attention operator, SA-CAM greatly alleviates the limitations of existing CAM methods. Experimental results on the ImageNet dataset show that SA-CAM not only generates highly accurate and intuitive interpretations but also shows robust stability in adversarial comparison with state-of-the-art CAM methods. (An illustrative sketch of this idea follows the metadata record below.) | en_US
dc.description.sponsorship | This research is supported by 2018YFB2100801, the Director Foundation Project of National Engineering Laboratory for Public Safety Risk Perception and Control by Big Data (PSRPC), and the Fundamental Research Funds for the Central Universities. | en_US
dc.format.extent | 206 - 216 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Copyright © 2021 Elsevier. All rights reserved. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1016/j.neucom.2021.11.084, made available on this repository under a Creative Commons CC BY-NC-ND attribution licence (https://creativecommons.org/licenses/by-nc-nd/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | -
dc.subject | interpretable machine learning | en_US
dc.subject | black-box models | en_US
dc.subject | transparent models | en_US
dc.subject | deep learning | en_US
dc.subject | explainable artificial intelligence | en_US
dc.title | Generating self-attention activation maps for visual interpretations of convolutional neural networks | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1016/j.neucom.2021.11.084 | -
dc.relation.isPartOf | Neurocomputing | -
pubs.publication-status | Published | -
pubs.volume | 490 | -
dc.identifier.eissn | 1872-8286 | -
dc.rights.holder | Elsevier | -
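
The abstract above describes SA-CAM only at the level of ideas: a class-activation-style saliency map whose feature maps are refined through an attention skip connection from a self-attention branch. The full method is defined in the published article, so the snippet below is just a minimal Python/PyTorch sketch of that general idea, not the authors' SA-CAM implementation; the ResNet-18 backbone, the choice of layer4 activations, and the toy spatial self-attention operator are all assumptions made for illustration.

# Minimal illustrative sketch (not the authors' SA-CAM code): a CAM-style
# saliency map whose activations are refined by a toy spatial self-attention
# term before the class-weighted channel sum.
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained backbone (downloads ImageNet weights on first use).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

feature_maps = {}
def hook(module, inputs, output):
    feature_maps["A"] = output.detach()

# Hook the last convolutional stage; the layer choice is an assumption.
model.layer4.register_forward_hook(hook)

image = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
with torch.no_grad():
    logits = model(image)
target_class = logits.argmax(dim=1)

A = feature_maps["A"]                      # (1, C, H, W) activation maps
B, C, H, W = A.shape

# Plain CAM channel weights: the classifier weights of the predicted class
# (Zhou et al., 2016), one scalar per channel.
class_weights = model.fc.weight[target_class].view(1, C, 1, 1)

# Toy self-attention over spatial positions: each location attends to all
# others and the attention response re-weights the activations.
flat = A.view(B, C, H * W)                                             # (1, C, HW)
attn = torch.softmax(flat.transpose(1, 2) @ flat / C ** 0.5, dim=-1)   # (1, HW, HW)
refined = (flat @ attn).view(B, C, H, W)

# Skip-style combination of raw and attention-refined activations, class
# weighting, channel sum, ReLU, upsampling, and normalisation to [0, 1].
cam = F.relu(((A + refined) * class_weights).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                           # torch.Size([1, 1, 224, 224])

In this sketch the attention-refined activations are simply added back onto the raw activations before the class-weighted sum; the actual attention branch, attention operator, and regularization used by SA-CAM are those defined in the paper.
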
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2021 Elsevier. All rights reserved. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1016/j.neucom.2021.11.084, made available on this repository under a Creative Commons CC BY-NC-ND attribution licence (https://creativecommons.org/licenses/by-nc-nd/4.0/). | 2.01 MB | Adobe PDF


This item is licensed under a Creative Commons License (CC BY-NC-ND 4.0).