Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/31141
Title: | XAI-DSCSA: explainable-AI-based deep semi-supervised convolutional sparse autoencoder for facial expression recognition |
Authors: | Mohana, M; Subashini, P; Ghinea, G |
Keywords: | autoencoder;convolutional sparse autoencoder;facial expression recognition;feature representation;unsupervised learning;semi-supervised learning |
Issue Date: | 10-Mar-2025 |
Publisher: | Springer Nature |
Citation: | Mohana, M., Subashini, P. and Ghinea, G. (2025) 'XAI-DSCSA: explainable-AI-based deep semi-supervised convolutional sparse autoencoder for facial expression recognition', Signal, Image and Video Processing, 19 (5), 394, pp. 1 - 18. doi: 10.1007/s11760-025-03984-1. |
Abstract: | Facial expression recognition (FER) continues to be a vibrant research field, driven by the increasing need for practical applications in areas such as e-learning, healthcare, candidate interview analysis, and more. Most deep learning approaches in supervised FER systems rely heavily on large, labeled datasets. Implementing FER with Convolutional Neural Networks (CNNs) often requires many layers, leading to extended training times and difficulties in finding optimal parameters. This makes it hard to learn distinct facial expression patterns for classification, resulting in poor real-time emotion classification. In this paper, we propose a novel approach, the Deep Semi-supervised Convolutional Sparse Autoencoder (DSCSA), to address these issues and enhance FER performance and prediction accuracy. The approach comprises two parts: (i) first, a deep convolutional sparse autoencoder is trained with unlabeled facial expression samples, where sparsity is introduced in the convolutional block as a penalty so that the latent-space representation focuses on the most relevant features; (ii) the trained encoder and its feature map are then connected to a fully connected layer with softmax and fine-tuned, using the learned weights and labeled facial expression samples, in a semi-supervised manner for emotion classification. The approach was evaluated on two benchmark datasets, CK+ and JAFFE, achieving 98.98% and 93.10% accuracy, respectively, and the results were compared with established state-of-the-art techniques. Additionally, eXplainable AI (XAI) methods such as Grad-CAM and image-LIME were employed to interpret the performance and prediction outcomes of the DSCSA model. |
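The two-stage scheme outlined in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the layer widths, the 48x48 greyscale input size, the sparsity weight, and the random tensors standing in for CK+/JAFFE loaders are all assumptions made for illustration only.

```python
# Illustrative sketch of a two-stage semi-supervised convolutional sparse
# autoencoder. All hyperparameters and shapes are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two strided conv blocks producing the latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 24x24 -> 12x12
        )
        # Decoder mirrors the encoder to reconstruct the input image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def pretrain_step(model, x_unlabeled, optimizer, sparsity_weight=1e-4):
    """Stage (i): unsupervised reconstruction with an L1 sparsity penalty
    on the latent feature map (penalty weight is an assumed placeholder)."""
    recon, z = model(x_unlabeled)
    loss = F.mse_loss(recon, x_unlabeled) + sparsity_weight * z.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

class EmotionClassifier(nn.Module):
    """Stage (ii): reuse the pretrained encoder and fine-tune with labeled samples."""
    def __init__(self, encoder, num_classes=7):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 12 * 12, num_classes))

    def forward(self, x):
        return self.head(self.encoder(x))  # logits; softmax is applied inside the loss

# Usage sketch with random tensors in place of real dataset loaders.
ae = ConvSparseAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
pretrain_step(ae, torch.rand(8, 1, 48, 48), opt)          # unlabeled pretraining step

clf = EmotionClassifier(ae.encoder)                        # encoder weights are reused
logits = clf(torch.rand(8, 1, 48, 48))
loss = F.cross_entropy(logits, torch.randint(0, 7, (8,)))  # supervised fine-tuning loss
```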
Description: | Data availability: No datasets were generated or analysed during the current study. Acknowledgements: The authors sincerely thank the ISO Certified (ISO/IEC 20000-1:2018) Centre for Machine Learning and Intelligence (CMLI), funded by the Department of Science and Technology (DST-CURIE), India, for providing the facility to carry out this research study. |
URI: | https://bura.brunel.ac.uk/handle/2438/31141 |
DOI: | https://doi.org/10.1007/s11760-025-03984-1 |
ISSN: | 1863-1703 |
Other Identifiers: | ORCiD: M. Mohana https://orcid.org/0000-0002-3566-0995; ORCiD: P. Subashini https://orcid.org/0000-0002-5786-4497; ORCiD: George Ghinea https://orcid.org/0000-0003-2578-5580; Article number 394 |
Appears in Collections: | Dept of Computer Science Embargoed Research Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FullText.pdf | Embargoed until 10 March 2026. Copyright © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025. This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s11760-025-03984-1 (see: https://www.springernature.com/gp/open-research/policies/journal-policies). | 1.54 MB | Adobe PDF | View/Open |
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.