Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31661
Title: | Multi-view affinity-based projection alignment for unsupervised domain adaptation via locality preserving optimization
Authors: | Luo, W; Chen, M; Gao, J; Zhu, Y; Wang, F; Zhu, C
Keywords: | unsupervised domain adaptation;multi-view learning;locality preserving projection;feature alignment;pseudo-labeling;vision transformer |
Issue Date: | 1-Jul-2025 |
Publisher: | Springer Nature |
Citation: | Luo, W. et al. (2025) ‘Multi-view affinity-based projection alignment for unsupervised domain adaptation via locality preserving optimization’, Scientific Reports, 15 (1), 20452, pp. 1–20. doi: 10.1038/s41598-025-05331-3.
Abstract: | Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain with differing data distributions. However, it remains difficult due to noisy pseudo-labels in the target domain, inadequate modeling of local geometric structure, and reliance on a single input view that limits representational diversity in challenging tasks. We propose a framework named Multi-view Affinity-based Projection Alignment (MAPA) that uses a teacher–student network and multi-view augmentation to stabilize pseudo-labels and enhance feature diversity. MAPA transforms each sample into multiple augmented views, constructs a unified affinity matrix that combines semantic cues from pseudo-labels with feature-based distances, and then learns a locality-preserving projection to align source and target data in a shared low-dimensional space. An iterative strategy refines pseudo-labels by discarding low-confidence samples, thereby raising label quality and strengthening supervision for the target domain. MAPA also employs a consistency-weighted fusion mechanism to merge predictions from multiple views, improving stability under domain shift. Finally, MAPA leverages class-centric and cluster-level relationships in the projected space to further refine label assignments, enhancing the overall adaptation process. Experimental results on Office-Home, ImageCLEF, and VisDA-2017 show that MAPA surpasses recent state-of-the-art methods, and it maintains robust performance across backbones including ResNet-50, ResNet-101, and Vision Transformer (ViT). |
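To make the abstract's two core ingredients concrete, the sketch below builds a unified affinity matrix that mixes pseudo-label agreement (gated by a confidence threshold, so low-confidence pairs are discarded) with a Gaussian kernel over feature distances, then solves the classical locality-preserving projection (LPP) generalized eigenproblem to obtain a shared low-dimensional embedding. This is a minimal illustration consistent with the abstract's description, not the authors' released code (which is promised only after acceptance): the function names, the mixing weight `alpha`, the bandwidth `sigma`, and the threshold `tau` are all assumptions made for the example.

```python
# Illustrative sketch only: affinity construction and LPP as described in the
# abstract. Parameter names and values (alpha, sigma, tau) are assumptions,
# not taken from the paper's implementation.
import numpy as np
from scipy.linalg import eigh

def build_affinity(feats, pseudo_labels, conf, sigma=1.0, alpha=0.5, tau=0.8):
    """Combine semantic cues (pseudo-label agreement, confidence-gated)
    with a Gaussian kernel over pairwise feature distances."""
    d2 = np.square(feats[:, None, :] - feats[None, :, :]).sum(-1)
    feat_aff = np.exp(-d2 / (2.0 * sigma ** 2))              # feature-based term
    same = (pseudo_labels[:, None] == pseudo_labels[None, :]).astype(float)
    keep = ((conf[:, None] >= tau) & (conf[None, :] >= tau)).astype(float)
    sem_aff = same * keep                                    # low-confidence pairs dropped
    return alpha * sem_aff + (1.0 - alpha) * feat_aff

def lpp(X, W, k=2):
    """Locality-preserving projection: solve X^T L X a = lam X^T D X a and
    keep the k eigenvectors with the smallest generalized eigenvalues."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                                # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])              # regularize for stability
    vals, vecs = eigh(A, B)                                  # eigenvalues in ascending order
    return vecs[:, :k]                                       # d x k projection matrix

# Toy usage: project pooled source+target features into a shared space.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 16))                                # stand-in for backbone features
labels = rng.integers(0, 3, size=40)                         # true labels (source) / pseudo-labels (target)
conf = rng.uniform(0.5, 1.0, size=40)                        # per-sample confidence scores
W = build_affinity(X, labels, conf)
P = lpp(X, W, k=2)
Z = X @ P                                                    # shared low-dimensional embedding
print(Z.shape)                                               # (40, 2)
```

Minimizing a^T X^T L X a subject to the X^T D X normalization pulls samples with high affinity close together in the projected space, which is the standard Laplacian-based reading of the "locality preserving optimization" in the title.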
Description: | Data availability: The datasets generated and/or analysed during the current study are available in the Office-Home repository (https://www.hemanthdv.org/officeHomeDataset.html), the VisDA-2017 repository (https://ai.bu.edu/visda-2017/), and the ImageCLEF repository (https://www.imageclef.org/datasets). Code availability: The code will be made available after the manuscript is accepted.
URI: | https://bura.brunel.ac.uk/handle/2438/31661 |
DOI: | https://doi.org/10.1038/s41598-025-05331-3 |
Other Identifiers: | ORCiD: Fang Wang, https://orcid.org/0000-0003-1987-9150; Article number: 20452
Appears in Collections: | Dept of Computer Science Research Papers |
Files in This Item:
File | Description | Size | Format
---|---|---|---
FullText.pdf | Copyright © The Author(s) 2025. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 3.89 MB | Adobe PDF