Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29080
Full metadata record
dc.contributor.author: Shuai, C
dc.contributor.author: Shi, C
dc.contributor.author: Gan, L
dc.contributor.author: Liu, H
dc.coverage.spatial: Dublin, Ireland
dc.date.accessioned: 2024-05-30T12:21:11Z
dc.date.available: 2024-05-30T12:21:11Z
dc.date.issued: 2023-08-20
dc.identifier: ORCiD: Lu Gan https://orcid.org/0000-0003-1056-7660
dc.identifier.citation: Shuai, C. et al. (2023) 'mdctGAN: Taming transformer-based GAN for speech super-resolution with Modified DCT spectra', Proceedings of INTERSPEECH 2023, Dublin, Ireland, 20-24 August, pp. 5112-5116. doi: 10.21437/interspeech.2023-113.
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/29080
dc.description: Annual Conference of the International Speech Communication Association
dc.description.abstract: Speech super-resolution (SSR) aims to recover a high-resolution (HR) speech signal from its low-resolution (LR) counterpart. Recent SSR methods focus on reconstructing the magnitude spectrogram and neglect phase reconstruction, which limits recovery quality. To address this issue, we propose mdctGAN, a novel SSR framework based on the modified discrete cosine transform (MDCT). By adversarial learning in the MDCT domain, our method reconstructs HR speech in a phase-aware manner without vocoders or additional post-processing. Furthermore, by learning frequency-consistent features with a self-attention mechanism, mdctGAN guarantees high-quality speech reconstruction. On the VCTK corpus, experimental results show that our model produces natural auditory quality with high MOS and PESQ scores. It also achieves state-of-the-art log-spectral-distance (LSD) performance at a 48 kHz target resolution from various input rates. Code is available from https://github.com/neoncloud/mdctGAN
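The abstract's claim that MDCT-domain modelling is inherently phase-aware rests on the MDCT being a real-valued, perfectly invertible lapped transform: the signed coefficients encode phase implicitly, so a waveform can be recovered by inverse MDCT plus overlap-add without a vocoder. The sketch below is a minimal NumPy illustration of that round trip only; the frame length, window, and function names are illustrative assumptions, not the configuration used in mdctGAN.

```python
import numpy as np

def sine_window(n_half):
    # Sine window of length 2N; satisfies the Princen-Bradley condition
    # w[n]^2 + w[n+N]^2 = 1, which makes overlap-add reconstruction exact.
    n = np.arange(2 * n_half)
    return np.sin(np.pi / (2 * n_half) * (n + 0.5))

def mdct(frames, window):
    # frames: (num_frames, 2N) -> real MDCT coefficients (num_frames, N).
    n_half = frames.shape[-1] // 2
    n = np.arange(2 * n_half)
    k = np.arange(n_half)
    basis = np.cos(np.pi / n_half * (n[None, :] + 0.5 + n_half / 2) * (k[:, None] + 0.5))
    return (frames * window) @ basis.T

def imdct(coeffs, window):
    # coeffs: (num_frames, N) -> windowed frames (num_frames, 2N) for overlap-add.
    n_half = coeffs.shape[-1]
    n = np.arange(2 * n_half)
    k = np.arange(n_half)
    basis = np.cos(np.pi / n_half * (n[None, :] + 0.5 + n_half / 2) * (k[:, None] + 0.5))
    return (2.0 / n_half) * (coeffs @ basis) * window

# Round-trip demo on a toy signal (frame size chosen arbitrarily for illustration).
n_half = 256                  # N MDCT bins per frame
hop = n_half                  # 50% overlap between consecutive frames
win = sine_window(n_half)
x = np.random.randn(10 * hop)

frames = np.stack([x[i:i + 2 * n_half]
                   for i in range(0, len(x) - 2 * n_half + 1, hop)])
spec = mdct(frames, win)      # real-valued; coefficient signs carry the phase
y = np.zeros_like(x)
for i, f in enumerate(imdct(spec, win)):
    y[i * hop:i * hop + 2 * n_half] += f   # time-domain aliasing cancels on overlap

# Perfect reconstruction away from the signal boundaries (error ~1e-15).
print(np.max(np.abs(x[hop:-hop] - y[hop:-hop])))
```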
dc.format.extent: 5112 - 5116
dc.format.medium: Electronic
dc.language.iso: en_US
dc.publisher: ISCA
dc.rights: Copyright © 2023 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Proceedings of INTERSPEECH 2023
dc.subject: speech super-resolution
dc.subject: phase information
dc.subject: GAN
dc.title: mdctGAN: Taming transformer-based GAN for speech super-resolution with Modified DCT spectra
dc.type: Conference Paper
dc.date.dateAccepted: 2023-05-17
dc.identifier.doi: https://doi.org/10.21437/interspeech.2023-113
dc.relation.isPartOf: INTERSPEECH 2023
pubs.finish-date: 2023-08-24
pubs.publication-status: Published online
pubs.start-date: 2023-08-20
dc.identifier.eissn: 2308-457X
dc.rights.license: https://creativecommons.org/licenses/by/4.0/legalcode.en
dc.rights.holder: The Author(s)
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2023 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
Size: 1.3 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.