Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31141
Full metadata record
DC Field | Value | Language
dc.contributor.author | Mohana, M | -
dc.contributor.author | Subashini, P | -
dc.contributor.author | Ghinea, G | -
dc.date.accessioned | 2025-05-04T18:27:14Z | -
dc.date.available | 2025-05-04T18:27:14Z | -
dc.date.issued | 2025-03-10 | -
dc.identifier | ORCiD: M. Mohana https://orcid.org/0000-0002-3566-0995 | -
dc.identifier | ORCiD: P. Subashini https://orcid.org/0000-0002-5786-4497 | -
dc.identifier | ORCiD: George Ghinea https://orcid.org/0000-0003-2578-5580 | -
dc.identifier | Article number 394 | -
dc.identifier.citation | Mohana, M., Subashini, P. and Ghinea, G. (2025) 'XAI-DSCSA: explainable-AI-based deep semi-supervised convolutional sparse autoencoder for facial expression recognition', Signal, Image and Video Processing, 19 (5), 394, pp. 1-18. doi: 10.1007/s11760-025-03984-1. | en_US
dc.identifier.issn | 1863-1703 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/31141 | -
dc.description | Data availability: No datasets were generated or analysed during the current study. | en_US
dc.description | Acknowledgements: The authors sincerely thank the ISO Certified (ISO/IEC 20000-1:2018) Centre for Machine Learning and Intelligence (CMLI), funded by the Department of Science and Technology (DST-CURIE), India, for providing the facility to carry out this research study. | -
dc.description.abstract | Facial expression recognition (FER) continues to be a vibrant research field, driven by the increasing need for practical applications in areas such as e-learning, healthcare, and candidate interview analysis. Most deep learning approaches in supervised FER systems rely heavily on large labeled datasets. Implementing FER with Convolutional Neural Networks (CNNs) often requires many layers, leading to extended training times and difficulties in finding optimal parameters. This can hinder the formation of distinct facial expression patterns for classification, resulting in poor real-time emotion classification. In this paper, we propose a novel approach, the Deep Semi-supervised Convolutional Sparse Autoencoder (DSCSA), to address these issues and enhance FER performance and prediction accuracy. The approach comprises two parts: (i) first, a deep convolutional sparse autoencoder is trained with unlabeled samples of facial expressions, where sparsity is introduced in the convolutional block as a penalty so that the latent-space feature representation focuses on the most relevant features; (ii) the trained encoder and its feature maps are then connected to a fully connected layer with softmax and fine-tuned, using the learned weights and labeled facial expression samples, in a semi-supervised approach for emotion classification. The approach was evaluated on two benchmark datasets, CK+ and JAFFE, achieving accuracies of 98.98% and 93.10%, respectively, and the results were compared against established state-of-the-art techniques. Additionally, eXplainable AI (XAI) methods, namely Grad-CAM and image-LIME, were employed to interpret the performance and prediction outcomes of the DSCSA model. | en_US
dc.description.sponsorship | This research study received no external funding. | en_US
dc.format.extent | 1 - 18 | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Springer Nature | en_US
dc.rights | Copyright © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025. This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s11760-025-03984-1 (see: https://www.springernature.com/gp/open-research/policies/journal-policies). | -
dc.rights.uri | https://www.springernature.com/gp/open-research/policies/journal-policies | -
dc.subject | autoencoder | en_US
dc.subject | convolutional sparse autoencoder | en_US
dc.subject | facial expression recognition | en_US
dc.subject | feature representation | en_US
dc.subject | unsupervised learning | en_US
dc.subject | semi-supervised learning | en_US
dc.title | XAI-DSCSA: explainable-AI-based deep semi-supervised convolutional sparse autoencoder for facial expression recognition | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2025-02-20 | -
dc.identifier.doi | https://doi.org/10.1007/s11760-025-03984-1 | -
dc.relation.isPartOf | Signal, Image and Video Processing | -
pubs.issue | 5 | -
pubs.publication-status | Published | -
pubs.volume | 19 | -
dc.identifier.eissn | 1863-1711 | -
dcterms.dateAccepted | 2025-02-20 | -
dc.rights.holder | The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature | -
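
The sparsity mechanism described in the abstract — penalising latent activations so that feature representation concentrates on the most relevant features — can be illustrated with the classic KL-divergence sparsity penalty used in sparse autoencoders. This is a minimal NumPy sketch under that assumption, not the authors' actual DSCSA implementation; the target sparsity `rho` and weight `beta` are illustrative values:

```python
import numpy as np

def sparsity_penalty(activations, rho=0.05, beta=3.0):
    """KL-divergence sparsity penalty, as in classic sparse autoencoders:
    penalise latent units whose mean activation rho_hat drifts away from
    a small target activation rho. (Illustrative, not the paper's code.)"""
    # Mean activation of each latent unit across the batch, clipped for log safety
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    # KL(rho || rho_hat) per unit, summed over units and scaled by beta
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return beta * kl.sum()

# Toy batch of sigmoid-range latent activations: 4 samples, 8 latent units
rng = np.random.default_rng(0)
acts = rng.uniform(0.0, 1.0, size=(4, 8))

dense_penalty = sparsity_penalty(acts)              # units active ~50% of the time
sparse_acts = np.where(acts > 0.9, acts, 0.01)      # units mostly near zero
sparse_penalty_val = sparsity_penalty(sparse_acts)

# A sparser latent code incurs a smaller penalty, steering training
# toward representations where only a few relevant features fire.
print(dense_penalty > sparse_penalty_val)
```

During training, this term would be added to the reconstruction loss, so gradient descent trades reconstruction fidelity against keeping most latent units inactive.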
Appears in Collections: Dept of Computer Science Embargoed Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Embargoed until 10 March 2026. Copyright © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025. | 1.54 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.