Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/27049
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liang, Y | - |
dc.contributor.author | Li, M | - |
dc.contributor.author | Jiang, C | - |
dc.date.accessioned | 2023-08-24T15:44:43Z | - |
dc.date.available | 2023-08-24T15:44:43Z | - |
dc.date.issued | 2021-11-27 | - |
dc.identifier | ORCID iD: Maozhen Li https://orcid.org/0000-0002-0820-5487 | - |
dc.identifier.citation | Liang, Y., Li, M. and Jiang, C. (2022) 'Generating self-attention activation maps for visual interpretations of convolutional neural networks', Neurocomputing, 490, pp. 206-216. doi: 10.1016/j.neucom.2021.11.084. | en_US |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27049 | - |
dc.description.abstract | In recent years, many interpretable methods based on class activation maps (CAMs) have served as an important judging basis for the predictions of convolutional neural networks (CNNs). However, these methods still suffer from the problems of gradient noise, weight distortion, and perturbation deviation. In this work, we present the self-attention class activation map (SA-CAM) and shed light on how it uses the self-attention mechanism to refine existing CAM methods. In addition to generating basic activation feature maps, SA-CAM adds an attention skip connection as a regularization term for each feature map, which further refines the focus area of an underlying CNN model. By introducing an attention branch and constructing a new attention operator, SA-CAM greatly alleviates the limitations of existing CAM methods. Experimental results on the ImageNet dataset show that SA-CAM not only generates highly accurate and intuitive interpretations but also maintains robust stability in adversarial comparisons with state-of-the-art CAM methods. | en_US |
dc.description.sponsorship | This research is supported by grant 2018YFB2100801, the Director Foundation Project of the National Engineering Laboratory for Public Safety Risk Perception and Control by Big Data (PSRPC), and the Fundamental Research Funds for the Central Universities. | en_US |
dc.format.extent | 206 - 216 | - |
dc.format.medium | Print-Electronic | - |
dc.language | English | - |
dc.language.iso | en_US | en_US |
dc.publisher | Elsevier | en_US |
dc.rights | Copyright © 2021 Elsevier. All rights reserved. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1016/j.neucom.2021.11.084, made available on this repository under a Creative Commons CC BY-NC-ND attribution licence (https://creativecommons.org/licenses/by-nc-nd/4.0/). | - |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | - |
dc.subject | interpretable machine learning | en_US |
dc.subject | black-box models | en_US |
dc.subject | transparent models | en_US |
dc.subject | deep learning | en_US |
dc.subject | explainable artificial intelligence | en_US |
dc.title | Generating self-attention activation maps for visual interpretations of convolutional neural networks | en_US |
dc.type | Article | en_US |
dc.identifier.doi | https://doi.org/10.1016/j.neucom.2021.11.084 | - |
dc.relation.isPartOf | Neurocomputing | - |
pubs.publication-status | Published | - |
pubs.volume | 490 | - |
dc.identifier.eissn | 1872-8286 | - |
dc.rights.holder | Elsevier | - |
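The abstract above describes SA-CAM only at a high level. As a rough, hypothetical illustration of the general idea it refers to, the sketch below computes a Grad-CAM-style activation map and reweights it with a simple spatial self-attention map. This is not the authors' SA-CAM implementation: the model choice (torchvision ResNet-18 with random weights), the hooked layer, and the attention operator are all assumptions made for illustration.

```python
# Minimal sketch, assuming PyTorch/torchvision are available.
# NOT the paper's SA-CAM; an illustrative CAM refined by a toy self-attention map.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # random weights; pretrained weights would need a download

feats = {}
def hook(_, __, output):
    feats["act"] = output                      # (1, C, H, W) activations of the last conv block
model.layer4.register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed input image
logits = model(img)
cls = logits.argmax(dim=1).item()
score = logits[0, cls]                         # scalar score of the predicted class

A = feats["act"]                               # (1, C, H, W)
B, C, H, W = A.shape

# Grad-CAM-style channel weights: gradient of the class score w.r.t. the activations,
# averaged over the spatial dimensions.
grads = torch.autograd.grad(score, A)[0]       # (1, C, H, W)
w = grads.mean(dim=(2, 3), keepdim=True)       # (1, C, 1, 1)
cam = F.relu((w * A).sum(dim=1, keepdim=True)) # (1, 1, H, W) base activation map

# Toy spatial self-attention: affinity of each location with all others,
# used here as a multiplicative refinement of the base map (assumption).
X = A.flatten(2)                                               # (1, C, H*W)
attn = torch.softmax(X.transpose(1, 2) @ X / C**0.5, dim=-1)   # (1, HW, HW)
sal = attn.mean(dim=-1).reshape(B, 1, H, W)                    # per-location saliency
sa_cam = cam * sal

# Upsample to the input size and normalise to [0, 1] for visualisation.
heatmap = F.interpolate(sa_cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
print(heatmap.shape)                           # torch.Size([1, 1, 224, 224])
```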
Appears in Collections: | Dept of Electronic and Electrical Engineering Research Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FullText.pdf | Copyright © 2021 Elsevier. Accepted manuscript version, made available under the CC BY-NC-ND 4.0 licence (see rights statement above). | 2.01 MB | Adobe PDF | View/Open
This item is licensed under a Creative Commons License