Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29704
Full metadata record
DC Field | Value | Language
dc.contributor.author | Mohajelin, F | -
dc.contributor.author | Sheykhivand, S | -
dc.contributor.author | Shabani, A | -
dc.contributor.author | Danishvar, M | -
dc.contributor.author | Danishvar, S | -
dc.contributor.author | Lahijan, LZ | -
dc.date.accessioned | 2024-09-11T12:37:30Z | -
dc.date.available | 2024-09-11T12:37:30Z | -
dc.date.issued | 2024-09-10 | -
dc.identifier | ORCiD: Sobhan Sheykhivand https://orcid.org/0000-0002-2275-8133 | -
dc.identifier | ORCiD: Sebelan Danishvar https://orcid.org/0000-0002-8258-0437 | -
dc.identifier | ORCiD: Morad Danishvar https://orcid.org/0000-0002-7939-9098 | -
dc.identifier | 5883 | -
dc.identifier.citation | Mohajelin, F. et al. (2024) 'Automatic Recognition of Multiple Emotional Classes from EEG Signals through the Use of Graph Theory and Convolutional Neural Networks', Sensors, 24 (18), 5883, pp. 1 - 20. doi: 10.3390/s24185883. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/29704 | -
dc.description | Data Availability Statement: The data are private and the University Ethics Committee does not allow public access to the data. | en_US
dc.description.abstract | Emotion is a complex state arising from the functioning of the human brain in relation to various events, and it has no agreed scientific definition. Emotion recognition has traditionally been carried out by psychologists and experts on the basis of facial expressions, an approach that is limited and prone to error. This study presents a new automatic emotion recognition method that combines graph theory with convolutional networks and operates on electroencephalogram (EEG) signals. In the proposed model, a comprehensive database based on musical stimuli is first assembled to induce two- and three-class emotional states (positive, negative, and neutral). Generative adversarial networks (GANs) are used to augment the recorded data, which are then fed into the proposed deep network for feature extraction and classification. The proposed deep network, built from four graph convolutional (GConv) layers, extracts the dynamic information in the EEG data in an optimal manner. The proposed approach achieves classification accuracies of 99% for two classes and 98% for three classes. The model has been compared with recent studies and algorithms and yields promising results. The proposed method can help complete the brain-computer interface (BCI) systems puzzle. (An illustrative sketch of a GConv-based classifier follows this metadata record.) | en_US
dc.description.sponsorship | This research received no external funding. | en_US
dc.format.extent | 1 - 20 | -
dc.format.medium | Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | MDPI | en_US
dc.rights | Copyright © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | BCI | en_US
dc.subject | CNN | en_US
dc.subject | EEG | en_US
dc.subject | emotion | en_US
dc.subject | graph | en_US
dc.title | Automatic Recognition of Multiple Emotional Classes from EEG Signals through the Use of Graph Theory and Convolutional Neural Networks | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2024-09-05 | -
dc.identifier.doi | https://doi.org/10.3390/s24185883 | -
dc.relation.isPartOf | Sensors | -
pubs.issue | 18 | -
pubs.publication-status | Published online | -
pubs.volume | 24 | -
dc.identifier.eissn | 1424-8220 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dc.rights.holder | The authors | -
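
Illustrative sketch: the abstract describes a deep network of four graph convolutional (GConv) layers that classifies GAN-augmented EEG trials into two or three emotion classes. The following is a minimal, hypothetical PyTorch sketch of such a GConv stack, not the authors' implementation (their data and code are not publicly available); the channel count, feature length, hidden width, adjacency construction, and the standard symmetric-normalisation propagation rule are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code) of a 4-layer GConv
# classifier for EEG emotion recognition, as outlined in the abstract above.
import torch
import torch.nn as nn


class GConv(nn.Module):
    """One graph convolution over EEG channels: X' = ReLU(A_norm @ X @ W)."""

    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, x, a_norm):
        # x: (batch, channels, in_feats); a_norm: (channels, channels)
        return torch.relu(a_norm @ self.linear(x))


class EEGEmotionGCN(nn.Module):
    """Four GConv layers followed by a dense head, mirroring the
    four-GConv-layer architecture mentioned in the abstract."""

    def __init__(self, n_channels=32, in_feats=128, hidden=64, n_classes=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [GConv(in_feats, hidden)] + [GConv(hidden, hidden) for _ in range(3)]
        )
        self.head = nn.Linear(n_channels * hidden, n_classes)

    def forward(self, x, adjacency):
        # Symmetric normalisation with self-loops: D^-1/2 (A + I) D^-1/2
        a = adjacency + torch.eye(adjacency.size(0))
        d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a @ d_inv_sqrt
        for layer in self.layers:
            x = layer(x, a_norm)
        return self.head(x.flatten(start_dim=1))


# Usage example with random stand-in data: 8 trials, 32 EEG channels,
# 128 features per channel, 3 emotion classes (positive/negative/neutral).
model = EEGEmotionGCN()
trials = torch.randn(8, 32, 128)
adjacency = (torch.rand(32, 32) > 0.7).float()        # hypothetical channel graph
adjacency = torch.triu(adjacency, 1)
adjacency = adjacency + adjacency.T                   # make it symmetric
logits = model(trials, adjacency)                     # shape: (8, 3)
```

The paper's actual electrode-graph construction, feature extraction, and GAN-based augmentation may differ from the placeholders used here.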
Appears in Collections: Dept of Computer Science Research Papers
Dept of Civil and Environmental Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | 6.02 MB | Adobe PDF


This item is licensed under a Creative Commons License.