Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30554
Full metadata record
dc.contributor.author: Zhang, S
dc.contributor.author: Chen, Y
dc.contributor.author: Sun, Y
dc.contributor.author: Wang, F
dc.contributor.author: Yang, J
dc.contributor.author: Bai, L
dc.contributor.author: Gao, S
dc.date.accessioned: 2025-01-23T14:48:29Z
dc.date.available: 2025-01-23T14:48:29Z
dc.date.issued: 2024-11-17
dc.identifier: ORCiD: Yaoru Sun https://orcid.org/0000-0001-6052-2781
dc.identifier: ORCiD: Fang Wang https://orcid.org/0000-0003-1987-9150
dc.identifier: ORCiD: Shangce Gao https://orcid.org/0000-0001-5042-3261
dc.identifier: 128895
dc.identifier.citation: Zhang, S. et al. (2025) 'Superpixel semantics representation and pre-training for vision-language tasks', Neurocomputing, 615, 128895, pp. 1 - 13. doi: 10.1016/j.neucom.2024.128895.
dc.identifier.issn: 0925-2312
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/30554
dc.description: Data availability: Data will be made available on request.
dc.description: A preprint version submitted to Neurocomputing, October 2, 2024, is available at arXiv:2310.13447 [v3] (https://arxiv.org/abs/2310.13447). It has not been certified by peer review.
dc.description.abstract: The key to integrating vision-language tasks is establishing a good alignment strategy. Recently, visual semantic representations have achieved fine-grained visual understanding by dividing images into grids or patches. However, such approaches ignore coarse-grained semantic interactions in image space, which hinders the extraction of complex contextual semantic relations at scene boundaries. This paper proposes superpixels as comprehensive and robust visual primitives that mine coarse-grained semantic interactions by clustering perceptually similar pixels, thereby speeding up the subsequent processing of primitives. To capture superpixel-level semantic features, we propose a Multiscale Difference Graph Convolutional Network (MDGCN), which parses the entire image as a fine-to-coarse visual hierarchy. To reason over actual semantic relations, we reduce potential noise interference by aggregating difference information between adjacent graph nodes. Finally, we propose a bottom-up multi-level fusion rule that avoids understanding deviations by mining complementary spatial information at different levels. Experiments show that the proposed method effectively promotes learning across multiple downstream tasks. Encouragingly, our method outperforms previous methods on all metrics. (An illustrative sketch of the superpixel difference-graph idea follows the metadata record below.)
dc.description.sponsorship: This work was supported by the National Natural Science Foundation of China (91748122).
dc.format.extent: 1 - 13
dc.format.medium: Print-Electronic
dc.language: English
dc.language.iso: en_US
dc.publisher: Elsevier
dc.relation.uri: https://arxiv.org/abs/2310.13447
dc.rights: Copyright © 2024 Elsevier B.V. All rights reserved. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/ (see: https://www.elsevier.com/about/policies/sharing).
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: superpixel representation
dc.subject: multiscale difference graph convolutional network (MDGCN)
dc.subject: multi-level fusion rule
dc.subject: vision and language (VL)
dc.title: Superpixel semantics representation and pre-training for vision-language tasks
dc.type: Article
dc.identifier.doi: https://doi.org/10.1016/j.neucom.2024.128895
dc.relation.isPartOf: Neurocomputing
pubs.publication-status: Published
pubs.volume: 615
dc.identifier.eissn: 1872-8286
dc.rights.license: https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.en
dcterms.dateAccepted: 2024-11-06
dc.rights.holder: Elsevier B.V.
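
Illustrative sketch: the abstract describes clustering perceptually similar pixels into superpixels and aggregating difference information between adjacent graph nodes. The minimal Python sketch below shows one plausible reading of that idea, assuming SLIC superpixels and mean-colour node features; the function and class names (superpixel_graph, DiffGraphConv) are hypothetical, not the authors' code, and the real MDGCN is multiscale and trained end-to-end within a vision-language model.

import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic

def superpixel_graph(image, n_segments=64):
    # Cluster perceptually similar pixels into superpixels (SLIC) and
    # connect superpixels that touch in the image plane.
    labels = slic(image, n_segments=n_segments, start_label=0)
    n = int(labels.max()) + 1
    adj = np.zeros((n, n), dtype=np.float32)
    h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    pairs = np.concatenate([h, v])
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]   # ignore within-superpixel pairs
    adj[pairs[:, 0], pairs[:, 1]] = 1.0
    adj[pairs[:, 1], pairs[:, 0]] = 1.0
    # Node features: mean colour per superpixel (a real model would pool
    # CNN feature maps instead).
    feats = np.stack([image[labels == i].mean(axis=0) for i in range(n)])
    return torch.from_numpy(feats.astype(np.float32)), torch.from_numpy(adj)

class DiffGraphConv(nn.Module):
    # One layer that aggregates feature *differences* between adjacent
    # nodes, so information shared across a region cancels out and
    # boundary contrasts dominate.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.proj = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbour_mean = adj @ x / deg          # mean over adjacent nodes
        return torch.relu(self.proj(neighbour_mean - x))

# Usage: a random RGB image stands in for real data.
image = np.random.rand(96, 96, 3)
feats, adj = superpixel_graph(image)
out = DiffGraphConv(3, 16)(feats, adj)          # (num_superpixels, 16)

Building such graphs at several n_segments scales and fusing the resulting features bottom-up would mirror the fine-to-coarse hierarchy and multi-level fusion rule the abstract describes.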
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Embargoed until 17 November 2025
Size: 2.98 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.