Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29141
Full metadata record
DC Field | Value | Language
dc.contributor.author | Islam, T | -
dc.contributor.author | Miron, A | -
dc.contributor.author | Liu, X | -
dc.contributor.author | Li, Y | -
dc.date.accessioned | 2024-06-07T12:20:17Z | -
dc.date.available | 2024-06-07T12:20:17Z | -
dc.date.issued | 2024-05-17 | -
dc.identifier | ORCiD: Alina Miron https://orcid.org/0000-0002-0068-4495 | -
dc.identifier | ORCiD: Xiaohui Liu https://orcid.org/0000-0003-1589-1267 | -
dc.identifier | ORCiD: Yongmin Li https://orcid.org/0000-0003-1668-2440 | -
dc.identifier | 127887 | -
dc.identifier.citation | Islam, T. et al. (2024) 'StyleVTON: A multi-pose virtual try-on with identity and clothing detail preservation', Neurocomputing, 594, 127887, pp. 1-12. doi: 10.1016/j.neucom.2024.127887. | en_US
dc.identifier.issn | 0925-2312 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/29141 | -
dc.description | Data availability: The link to our code is shown in the manuscript. | en_US
dc.description.abstract | Virtual try-on models have been developed using deep learning techniques to transfer clothing product images onto a candidate. While previous research has primarily focused on enhancing the realism of the garment transfer, such as improving texture quality and preserving details, there is untapped potential to further improve the shopping experience for consumers. The present study outlines the development of an innovative multi-pose virtual try-on model, namely StyleVTON, to potentially enhance consumers' shopping experiences. Our method synthesises a try-on image while also allowing for changes in pose. To achieve this, StyleVTON first predicts the segmentation of the target pose based on the target garment. Next, the segmentation layout guides the warping process of the target garment. Finally, the pose of the candidate is transferred to the desired posture. Our experiments demonstrate that StyleVTON can generate satisfactory images of candidates wearing the desired clothes in a desired pose, potentially offering a promising solution for enhancing the virtual try-on experience. Our findings reveal that StyleVTON outperforms other comparable methods, particularly in preserving the facial identity of the candidate and geometrically transforming the garments. | en_US
dc.format.extent | 1-12 | -
dc.format.medium | Print-Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Crown Copyright © 2024 Published by Elsevier B.V. This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | virtual try-on (VTON) | en_US
dc.subject | pose transfer | en_US
dc.subject | deep learning | en_US
dc.subject | generative adversarial network (GAN) | en_US
dc.subject | image synthesis | en_US
dc.title | StyleVTON: A multi-pose virtual try-on with identity and clothing detail preservation | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2024-05-13 | -
dc.identifier.doi | https://doi.org/10.1016/j.neucom.2024.127887 | -
dc.relation.isPartOf | Neurocomputing | -
pubs.publication-status | Published | -
pubs.volume | 594 | -
dc.identifier.eissn | 1872-8286 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dc.rights.holder | Crown / The Authors | -
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Crown Copyright © 2024 Published by Elsevier B.V. This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/). | 3.4 MB | Adobe PDF


This item is licensed under a Creative Commons License.
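
For orientation, the following is a minimal Python sketch of the three-stage pipeline summarised in the abstract above: predict a segmentation layout for the target pose, warp the garment to that layout, then synthesise the candidate in the desired pose. Every name, shape, and function body here is a hypothetical placeholder for illustration only; this is not the authors' implementation, whose code link is given in the manuscript (see dc.description).

    # Illustrative sketch only. All names, shapes, and placeholder bodies
    # are assumptions; the real StyleVTON stages are learned networks.
    import numpy as np

    def predict_segmentation(person: np.ndarray, target_pose: np.ndarray,
                             garment: np.ndarray) -> np.ndarray:
        """Stage 1 (assumed): predict a segmentation layout for the target
        pose, conditioned on the target garment. Placeholder returns an
        empty one-channel layout the size of the person image."""
        return np.zeros(person.shape[:2], dtype=np.int64)

    def warp_garment(garment: np.ndarray, layout: np.ndarray) -> np.ndarray:
        """Stage 2 (assumed): geometrically warp the garment so it conforms
        to the predicted layout. Placeholder returns the garment as-is."""
        return garment

    def synthesise_try_on(person: np.ndarray, warped_garment: np.ndarray,
                          layout: np.ndarray,
                          target_pose: np.ndarray) -> np.ndarray:
        """Stage 3 (assumed): render the candidate in the target pose
        wearing the warped garment. In the paper this is a generative
        model; the placeholder just echoes the person image."""
        return person

    def style_vton(person, garment, target_pose):
        # Compose the three stages in the order the abstract describes.
        layout = predict_segmentation(person, target_pose, garment)
        warped = warp_garment(garment, layout)
        return synthesise_try_on(person, warped, layout, target_pose)

    if __name__ == "__main__":
        h, w = 256, 192                      # assumed try-on resolution
        person = np.zeros((h, w, 3), np.float32)
        garment = np.zeros((h, w, 3), np.float32)
        pose = np.zeros((18, 2), np.float32)  # assumed 18 2-D keypoints
        print(style_vton(person, garment, pose).shape)  # (256, 192, 3)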