Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31661
Full metadata record
DC Field | Value | Language
dc.contributor.author | Luo, W | -
dc.contributor.author | Chen, M | -
dc.contributor.author | Gao, J | -
dc.contributor.author | Zhu, Y | -
dc.contributor.author | Wang, F | -
dc.contributor.author | Zhu, C | -
dc.date.accessioned | 2025-08-01T12:16:57Z | -
dc.date.available | 2025-08-01T12:16:57Z | -
dc.date.issued | 2025-07-01 | -
dc.identifier | ORCiD: Fang Wang https://orcid.org/0000-0003-1987-9150 | -
dc.identifier | Article number: 20452 | -
dc.identifier.citation | Luo, W. et al. (2025) ‘Multi-view affinity-based projection alignment for unsupervised domain adaptation via locality preserving optimization’, Scientific Reports, 15 (1), 20452, pp. 1 - 20. doi: 10.1038/s41598-025-05331-3. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/31661 | -
dc.description | Data availability: The datasets generated and/or analysed during the current study are available in the Office-Home repository, https://www.hemanthdv.org/officeHomeDataset.html, the VisDA-2017 repository, https://ai.bu.edu/visda-2017/, and the ImageCLEF repository, https://www.imageclef.org/datasets. | en_US
dc.description | Code availability: The code will be made available after the manuscript is accepted. | -
dc.description.abstract | Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain with differing data distributions. However, it remains difficult due to noisy pseudo-labels in the target domain, inadequate modeling of local geometric structure, and reliance on a single input view that limits representational diversity in challenging tasks. We propose a framework named Multi-view Affinity-based Projection Alignment (MAPA) that uses a teacher–student network and multi-view augmentation to stabilize pseudo-labels and enhance feature diversity. MAPA transforms each sample into multiple augmented views, constructs a unified affinity matrix that combines semantic cues from pseudo-labels with feature-based distances, and then learns a locality-preserving projection to align source and target data in a shared low-dimensional space. An iterative strategy refines pseudo-labels by discarding low-confidence samples, thereby raising label quality and strengthening supervision for the target domain. MAPA also employs a consistency-weighted fusion mechanism to merge predictions from multiple views, improving stability under domain shift. Finally, MAPA leverages class-centric and cluster-level relationships in the projected space to further refine label assignments, enhancing the overall adaptation process. Experimental results on Office-Home, ImageCLEF, and VisDA-2017 show that MAPA surpasses recent state-of-the-art methods, and it maintains robust performance across backbones including ResNet-50, ResNet-101, and Vision Transformer (ViT). | en_US
dc.description.sponsorship | CNPC Innovation Fund (No. 2024DQ02-0501), Royal Society (IEC_NSFC_233444), Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (No. 22KJB520012), and Postgraduate Research and Practice Innovation Project of Jiangsu Province (No. KYCX24_3227). | en_US
dc.format.medium | Electronic | -
dc.language | English | -
dc.publisher | Springer Nature | en_US
dc.rights | Creative Commons Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | unsupervised domain adaptation | en_US
dc.subject | multi-view learning | en_US
dc.subject | locality preserving projection | en_US
dc.subject | feature alignment | en_US
dc.subject | pseudo-labeling | en_US
dc.subject | vision transformer | en_US
dc.title | Multi-view affinity-based projection alignment for unsupervised domain adaptation via locality preserving optimization | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2025-06-02 | -
dc.identifier.doi | https://doi.org/10.1038/s41598-025-05331-3 | -
dc.relation.isPartOf | Scientific Reports | -
pubs.issue | 1 | -
pubs.publication-status | Published online | -
pubs.volume | 15 | -
dc.identifier.eissn | 2045-2322 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dcterms.dateAccepted | 2025-06-02 | -
dc.rights.holder | The Author(s) | -
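The abstract above describes MAPA's central computational step: build a unified affinity matrix that mixes pseudo-label agreement with feature-based distances, then learn a locality-preserving projection that embeds source and target samples in a shared low-dimensional space. The snippet below is a minimal NumPy/SciPy sketch of that generic idea, not the authors' code (which the record states will be released after acceptance); the function names, the Gaussian-kernel feature affinity, the mixing weight alpha, the regularization term, and the output dimensionality are illustrative assumptions.

# Minimal sketch (not the authors' released code): affinity construction and a
# locality-preserving projection, assuming rows of X are feature vectors and
# pseudo_labels holds integer class estimates for the same rows.
import numpy as np
from scipy.linalg import eigh

def build_affinity(X, pseudo_labels, sigma=1.0, alpha=0.5):
    """Blend semantic affinity (same pseudo-label) with a Gaussian feature affinity."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)       # pairwise squared distances
    feat_aff = np.exp(-d2 / (2.0 * sigma ** 2))                  # feature-based affinity
    sem_aff = (pseudo_labels[:, None] == pseudo_labels[None, :]).astype(float)
    return alpha * sem_aff + (1.0 - alpha) * feat_aff            # unified affinity matrix

def locality_preserving_projection(X, A, dim=64):
    """Solve the LPP generalized eigenproblem X^T L X w = lambda X^T D X w."""
    D = np.diag(A.sum(axis=1))
    L = D - A                                                    # graph Laplacian of the affinity
    num = X.T @ L @ X
    den = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])                # small ridge keeps den positive definite
    _, vecs = eigh(num, den)                                     # eigenvalues in ascending order
    return vecs[:, :dim]                                         # directions that preserve local structure

# Hypothetical usage on concatenated source/target features:
# X = np.vstack([Xs, Xt]); y = np.concatenate([ys, yt_pseudo])
# W = locality_preserving_projection(X, build_affinity(X, y), dim=64)
# Z = X @ W   # shared low-dimensional embedding for both domains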
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © The Author(s) 2025. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 3.89 MB | Adobe PDF


This item is licensed under a Creative Commons License.