Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29613
Full metadata record
DC Field | Value | Language
dc.contributor.author | Islam, T | -
dc.contributor.author | Miron, A | -
dc.contributor.author | Liu, X | -
dc.contributor.author | Li, Y | -
dc.date.accessioned | 2024-08-27T11:38:08Z | -
dc.date.available | 2024-08-27T11:38:08Z | -
dc.date.issued | 2024-08-16 | -
dc.identifier | ORCiD: Tasin Islam https://orcid.org/0000-0001-7568-9322 | -
dc.identifier | ORCiD: Alina Miron https://orcid.org/0000-0002-0068-4495 | -
dc.identifier | ORCiD: Xiaohui Liu https://orcid.org/0000-0003-1589-1267 | -
dc.identifier | ORCiD: Yongmin Li https://orcid.org/0000-0003-1668-2440 | -
dc.identifier | 117189 | -
dc.identifier.citation | Islam, T. et al. (2024) 'Image-based virtual try-on: Fidelity and simplification', Signal Processing: Image Communication, 129, 117189, pp. 1 - 15. doi: 10.1016/j.image.2024.117189. | en_US
dc.identifier.issn | 0923-5965 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/29613 | -
dc.description | Data availability: I have shared a link to my code and dataset via GitHub. | en_US
dc.description.abstract | We introduce a novel image-based virtual try-on model designed to replace a candidate’s garment with a desired target item. The proposed model comprises three modules: segmentation, garment warping, and candidate-clothing fusion. Previous methods have shown limitations in cases involving significant differences between the original and target clothing, as well as substantial overlapping of body parts. Our model addresses these limitations by employing two key strategies. Firstly, it utilises a candidate representation based on an RGB skeleton image to enhance spatial relationships among body parts, resulting in robust segmentation and improved occlusion handling. Secondly, a truncated U-Net is employed in both the segmentation and warping modules, enhancing segmentation performance and accelerating the try-on process. The warping module leverages an efficient affine transform for ease of training. Comparative evaluations against state-of-the-art models demonstrate the competitive performance of our proposed model across various scenarios, particularly excelling in handling occlusion and significant clothing-difference cases. This research presents a promising solution for image-based virtual try-on, advancing the field by overcoming key limitations and achieving superior performance. | en_US
dc.description.sponsorship | Engineering and Physical Sciences Research Council (EPSRC) grant number EP/T518116/1. | en_US
dc.format.extent | 1 - 15 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Copyright © 2024 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | virtual try-on (VTON) | en_US
dc.subject | generative adversarial network (GAN) | en_US
dc.subject | fashion synthesis | en_US
dc.subject | occlusion-handling | en_US
dc.subject | e-commerce | en_US
dc.title | Image-based virtual try-on: Fidelity and simplification | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2024-07-27 | -
dc.identifier.doi | https://doi.org/10.1016/j.image.2024.117189 | -
dc.relation.isPartOf | Signal Processing: Image Communication | -
pubs.publication-status | Published | -
pubs.volume | 129 | -
dc.identifier.eissn | 1879-2677 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dc.rights.holder | The Authors | -
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2024 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/). | 3.79 MB | Adobe PDF | View/Open


This item is licensed under a Creative Commons License.