Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32729
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Selcuk, C
dc.contributor.author: Boulgouris, NV
dc.date.accessioned: 2026-01-26T13:13:34Z
dc.date.available: 2026-01-26T13:13:34Z
dc.date.issued: 2025-12-30
dc.identifier: ORCiD: Cengiz Selcuk https://orcid.org/0009-0007-9309-3590
dc.identifier: ORCiD: Nikolaos V. Boulgouris https://orcid.org/0000-0002-5382-6856
dc.identifier: Article number: 066043
dc.identifier.citation: Selcuk, C. and Boulgouris, N.V. (2025) 'Dynamic graph representation of EEG signals for speech imagery recognition', Journal of Neural Engineering, 22 (6), 066043, pp. 1 - 15. doi: 10.1088/1741-2552/ae2ccb. (en_US)
dc.identifier.issn: 1741-2560
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/32729
dc.description: Data availability statement: No new data were created or analysed in this study. (en_US)
dc.description.abstract: Objective. Speech imagery recognition from electroencephalography (EEG) signals is an emerging challenge in brain-computer interfaces, and has important applications, such as in the interaction with locked-in patients. In this work, we use graph signal processing for developing a more effective representation of EEG signals in speech imagery recognition. Approach. We propose a dynamic graph representation that uses multiple graphs constructed based on the time-varying correlations between EEG channels. Our methodology is particularly suitable for signals that exhibit fluctuating correlations, which cannot be adequately modeled through a static (single graph) model. The resultant representation provides graph frequency features that compactly capture the spatial patterns of the underlying multidimensional EEG signal as well as the evolution of spatial relationships over time. These dynamic graph features are fed into an attention-based long short-term memory network for speech imagery recognition. A novel EEG data augmentation method is also proposed for improving training robustness. Main results. Experimental evaluation using a range of experiments shows that the proposed dynamic graph features are more effective than conventional time-frequency features for speech imagery recognition. The overall system outperforms current state-of-the-art approaches, yielding accuracy gains of up to 10%. Significance. The dynamic graph representation captures time-varying spatial relationships in EEG signals, overcoming limitations of static graph models and conventional feature extraction. Combined with data augmentation and attention-based classification, it demonstrates substantial improvements over existing methods in speech imagery recognition. (en_US)
dc.format.extent: 1 - 15
dc.format.medium: Print-Electronic
dc.language.iso: en (en_US)
dc.publisher: IOP Publishing (en_US)
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: brain-computer interfaces (en_US)
dc.subject: electroencephalography (en_US)
dc.subject: graph signal processing (en_US)
dc.subject: speech imagery (en_US)
dc.title: Dynamic graph representation of EEG signals for speech imagery recognition (en_US)
dc.type: Article (en_US)
dc.date.dateAccepted: 2025-12-15
dc.identifier.doi: https://doi.org/10.1088/1741-2552/ae2ccb
dc.relation.isPartOf: Journal of Neural Engineering
pubs.issue: 6
pubs.publication-status: Published
pubs.volume: 22
dc.identifier.eissn: 1741-2552
dc.rights.license: https://creativecommons.org/licenses/by/4.0/legalcode.en
dcterms.dateAccepted: 2025-12-15
dc.rights.holder: The Author(s)
dc.contributor.orcid: Selcuk, Cengiz [0009-0007-9309-3590]
dc.contributor.orcid: Boulgouris, Nikolaos V. [0000-0002-5382-6856]
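Note on the approach described in the abstract above: as a rough, illustrative sketch only (not the authors' implementation; the window length, step size, and the use of absolute channel correlations as edge weights are assumptions made here for illustration), a dynamic graph representation can be built by constructing a correlation-based graph per sliding window and projecting the windowed EEG onto each graph's Fourier basis (the Laplacian eigenvectors):

    # Illustrative sketch of per-window graph-frequency features (assumptions noted above).
    import numpy as np

    def dynamic_graph_features(eeg, win_len=128, step=64):
        """eeg: array of shape (n_channels, n_samples)."""
        n_samples = eeg.shape[1]
        features = []
        for start in range(0, n_samples - win_len + 1, step):
            seg = eeg[:, start:start + win_len]
            # Time-varying adjacency from absolute inter-channel correlations
            adj = np.abs(np.corrcoef(seg))
            np.fill_diagonal(adj, 0.0)
            # Combinatorial graph Laplacian and its eigenbasis
            lap = np.diag(adj.sum(axis=1)) - adj
            _, eigvecs = np.linalg.eigh(lap)
            # Graph Fourier transform of the window-averaged signal per channel
            features.append(eigvecs.T @ seg.mean(axis=1))
        # One graph-frequency vector per window -> shape (n_windows, n_channels)
        return np.stack(features)

    # Example: 62-channel EEG, 4 s at 256 Hz
    feats = dynamic_graph_features(np.random.randn(62, 1024))

In the paper, such graph-frequency features are then fed to an attention-based long short-term memory classifier; that stage is omitted from this sketch.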
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2025 The Author(s). Published by IOP Publishing Ltd. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license (https://creativecommons.org/licenses/by/4.0/). Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Size: 2.33 MB
Format: Adobe PDF


This item is licensed under a Creative Commons Attribution 4.0 International License.