Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30326
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tan, W | -
dc.contributor.author | Zhang, H | -
dc.contributor.author | Wang, Z | -
dc.contributor.author | Li, H | -
dc.contributor.author | Gao, X | -
dc.contributor.author | Zeng, N | -
dc.date.accessioned | 2024-12-06T14:08:39Z | -
dc.date.available | 2024-12-06T14:08:39Z | -
dc.date.issued | 2024-07-11 | -
dc.identifier | ORCiD: Weilong Tan https://orcid.org/0009-0000-1675-7188 | -
dc.identifier | ORCiD: Zidong Wang https://orcid.org/0000-0002-9576-7401 | -
dc.identifier | ORCiD: Han Li https://orcid.org/0000-0003-0276-9756 | -
dc.identifier | ORCiD: Xingen Gao https://orcid.org/0009-0000-7385-5825 | -
dc.identifier | ORCiD: Nianyin Zeng https://orcid.org/0000-0002-6957-2942 | -
dc.identifier | 108808 | -
dc.identifier.citation | Tan, W. et al. (2024) 'S³T-Net: A novel electroencephalogram signals-oriented emotion recognition model', Computers in Biology and Medicine, 179, 108808, pp. 1-9. doi: 10.1016/j.compbiomed.2024.108808. | en_US
dc.identifier.issn | 0010-4825 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/30326 | -
dc.description.abstract | In this paper, a novel skipping spatial–spectral–temporal network (S³T-Net) is developed to handle intra-individual differences in electroencephalogram (EEG) signals for accurate, robust, and generalized emotion recognition. In particular, for the 4D features extracted from the raw EEG signals, a multi-branch architecture is proposed to learn spatial–spectral cross-domain representations, which enhances the generalization ability of the model. Temporal dependencies among different spatial–spectral features are further captured via a bidirectional long short-term memory module, which employs an attention mechanism to integrate context information. Moreover, a skip-change unit is designed to add an auxiliary pathway for updating model parameters, which alleviates the vanishing gradient problem in complex spatial–temporal networks. Evaluation results show that the proposed S³T-Net outperforms other advanced models in terms of emotion recognition accuracy, yielding performance improvements of 0.23%, 0.13%, and 0.43% over the sub-optimal model in three test scenarios, respectively. In addition, the effectiveness and superiority of the key components of S³T-Net are demonstrated through various experiments. As a reliable and competent emotion recognition model, the proposed S³T-Net contributes to the development of intelligent sentiment analysis in the human–computer interaction (HCI) realm. (A schematic code sketch of this architecture follows the metadata record below.) | en_US
dc.description.sponsorship | This work was supported in part by the National Natural Science Foundation of China under Grant 62073271, the Fundamental Research Funds for the Central Universities of China under Grant 20720220076, the Natural Science Foundation for Distinguished Young Scholars of Fujian Province under Grant 2023J06010, and the National Science and Technology Major Project under Grant J2019-I-0013-0013. | en_US
dc.format.extent | 1 - 9 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | -
dc.subject | human–computer interaction (HCI) | en_US
dc.subject | EEG signals | en_US
dc.subject | emotion recognition | en_US
dc.subject | spatial–temporal network | en_US
dc.subject | skip-change unit | en_US
dc.title | S³T-Net: A novel electroencephalogram signals-oriented emotion recognition model | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2024-06-24 | -
dc.identifier.doi | https://doi.org/10.1016/j.compbiomed.2024.108808 | -
dc.relation.isPartOf | Computers in Biology and Medicine | -
pubs.publication-status | Published | -
pubs.volume | 179 | -
dc.identifier.eissn | 1879-0534 | -
dc.rights.license | https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.en | -
dc.rights.holder | Elsevier Ltd. | -
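
For orientation, the abstract (dc.description.abstract above) describes three ingredients: multi-branch spatial–spectral feature learning over 4D EEG features, a bidirectional LSTM with attention for temporal context, and an auxiliary skip pathway to ease gradient flow. The following is a minimal, hypothetical PyTorch sketch of those ideas only; all module names, layer sizes, and the assumed 9×9 electrode grid with 5 frequency bands are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of the S3T-Net ideas from the abstract; not the
# authors' code. Shapes and module names are illustrative assumptions.
import torch
import torch.nn as nn

class SpatialSpectralBranches(nn.Module):
    """Two parallel conv branches over (bands, H, W) feature maps."""
    def __init__(self, bands: int, hidden: int = 32):
        super().__init__()
        # 3x3 branch emphasizes spatial layout of electrodes.
        self.branch_spatial = nn.Sequential(
            nn.Conv2d(bands, hidden, kernel_size=3, padding=1), nn.ReLU())
        # 1x1 branch emphasizes per-location spectral (band) mixing.
        self.branch_spectral = nn.Sequential(
            nn.Conv2d(bands, hidden, kernel_size=1), nn.ReLU())

    def forward(self, x):  # x: (B, bands, H, W)
        return torch.cat([self.branch_spatial(x),
                          self.branch_spectral(x)], dim=1)

class S3TNetSketch(nn.Module):
    def __init__(self, bands=5, grid=9, hidden=32, lstm_hidden=64, classes=3):
        super().__init__()
        self.branches = SpatialSpectralBranches(bands, hidden)
        feat_dim = 2 * hidden * grid * grid
        self.proj = nn.Linear(feat_dim, lstm_hidden)
        self.bilstm = nn.LSTM(lstm_hidden, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * lstm_hidden, 1)   # additive attention scores
        # Residual-style projection standing in for the "skip-change" unit:
        # an auxiliary route for gradients around the recurrent layer.
        self.skip = nn.Linear(lstm_hidden, 2 * lstm_hidden)
        self.head = nn.Linear(2 * lstm_hidden, classes)

    def forward(self, x):  # x: (B, T, bands, H, W) 4D features per time step
        b, t = x.shape[:2]
        f = self.branches(x.flatten(0, 1))           # (B*T, 2*hidden, H, W)
        f = self.proj(f.flatten(1)).view(b, t, -1)   # (B, T, lstm_hidden)
        h, _ = self.bilstm(f)                        # (B, T, 2*lstm_hidden)
        h = h + self.skip(f)                         # auxiliary gradient path
        w = torch.softmax(self.attn(h), dim=1)       # (B, T, 1) attn weights
        ctx = (w * h).sum(dim=1)                     # attention-pooled context
        return self.head(ctx)

model = S3TNetSketch()
logits = model(torch.randn(2, 6, 5, 9, 9))  # 2 clips, 6 time steps
print(logits.shape)                          # torch.Size([2, 3])
```

In this sketch, the residual-style `skip` projection is only a stand-in for the paper's skip-change unit: it lets gradients reach the per-frame features without passing through the recurrent gates, which is the usual way such auxiliary pathways mitigate vanishing gradients.
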
Appears in Collections: Dept of Computer Science Embargoed Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Embargoed until 11 July 2025. Copyright © 2024 Elsevier Ltd. All rights are reserved. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/ (see: https://www.elsevier.com/about/policies/sharing). | 6.13 MB | Adobe PDF


This item is licensed under a Creative Commons License.