Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30885
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhan, Y | -
dc.contributor.author | Yang, R | -
dc.contributor.author | You, J | -
dc.contributor.author | Huang, M | -
dc.contributor.author | Liu, W | -
dc.contributor.author | Liu, X | -
dc.date.accessioned | 2025-03-09T17:10:09Z | -
dc.date.available | 2025-03-09T17:10:09Z | -
dc.date.issued | 2025-02-26 | -
dc.identifier | ORCiD: Weibo Liu https://orcid.org/0000-0002-8169-3261 | -
dc.identifier | ORCiD: Xiaohui Liu https://orcid.org/0000-0003-1589-1267 | -
dc.identifier | Article no. 2467083 | -
dc.identifier.citation | Zhan, Y. et al. (2025) 'A systematic literature review on incomplete multimodal learning: techniques and challenges', Systems Science & Control Engineering, 13 (1), 2467083, pp. 1-28. doi: 10.1080/21642583.2025.2467083. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/30885 | -
dc.description | Data availability: The data that support the findings of this study are available from the corresponding author, R.Y., upon reasonable request. | en_US
dc.description.abstract | Recently, machine learning technologies have been successfully applied across various fields. However, most existing machine learning models rely on unimodal data for information inference, which hinders their ability to generalize to complex application scenarios. This limitation has motivated the development of multimodal learning, a field that integrates information from different modalities to enhance models' capabilities. In practical applications, however, data often suffer from missing or incomplete modalities. This necessitates that models maintain robustness and effectively infer complete information in the presence of missing modalities. The emerging research direction of incomplete multimodal learning (IML) aims to facilitate effective learning from incomplete multimodal training sets, ensuring that models can dynamically and robustly address new instances with arbitrary missing modalities during the testing phase. This paper offers a comprehensive review of IML methods. It categorizes existing approaches by their information sources into two main types: internal-information-based and external-information-based methods. These categories are further subdivided into data-based, feature-based, knowledge transfer-based, graph knowledge enhancement-based, and human-in-the-loop-based methods. The paper conducts comparative analyses from two perspectives: comparisons among similar methods and comparisons among different types of methods. Finally, it offers insights into the research trends in IML. | en_US
dc.description.sponsorship | This work is supported by the National Natural Science Foundation of China (72401233), the Jiangsu Provincial Qinglan Project, the Natural Science Foundation of Jiangsu Higher Education Institutions of China (23KJB520038), the Suzhou Science and Technology Programme (SYG202106), and the Research Enhancement Fund of XJTLU (REF-23-01-008). | en_US
dc.format.extent | 1-28 | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.rights | Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | incomplete multimodal learning | en_US
dc.subject | multimodal learning | en_US
dc.subject | modality missing | en_US
dc.title | A systematic literature review on incomplete multimodal learning: techniques and challenges | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2025-02-10 | -
dc.identifier.doi | https://doi.org/10.1080/21642583.2025.2467083 | -
dc.relation.isPartOf | Systems Science & Control Engineering | -
pubs.issue | 1 | -
pubs.publication-status | Published online | -
pubs.volume | 13 | -
dc.identifier.eissn | 2164-2583 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dcterms.dateAccepted | 2025-02-10 | -
dc.rights.holder | The Author(s) | -
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2025 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent. | 5.4 MB | Adobe PDF


This item is licensed under a Creative Commons License.