StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On

Bibliographic Details
Title: StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On
Authors: Kim, Jeongho, Gu, Gyojung, Park, Minho, Park, Sunghyun, Choo, Jaegul
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: Given a clothing image and a person image, image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing image. In this work, we aim to expand the applicability of the pre-trained diffusion model so that it can be utilized independently for the virtual try-on task. The main challenge is to preserve the clothing details while effectively utilizing the robust generative capability of the pre-trained model. In order to tackle these issues, we propose StableVITON, which learns the semantic correspondence between the clothing and the human body within the latent space of the pre-trained diffusion model in an end-to-end manner. Our proposed zero cross-attention blocks not only preserve the clothing details by learning the semantic correspondence but also generate high-fidelity images by utilizing the inherent knowledge of the pre-trained model in the warping process. Through our proposed novel attention total variation loss and data augmentation, we achieve a sharp attention map, resulting in a more precise representation of clothing details. StableVITON outperforms the baselines in qualitative and quantitative evaluation, showing promising quality on arbitrary person images. Our code is available at https://github.com/rlawjdghek/StableVITON.
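The "zero" in the zero cross-attention block refers to a zero-initialized output projection: the block attends from person-feature queries to clothing-feature keys/values, but at the start of training its residual contribution is exactly zero, so the pre-trained diffusion model's behavior is preserved. A minimal single-head NumPy sketch of this idea (not the authors' implementation; the dimensions, single-head simplification, and function names are assumptions for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def zero_cross_attention(person_feat, cloth_feat, rng, d=64):
    """Cross-attention from person tokens (queries) to clothing tokens
    (keys/values). The output projection W_o is zero-initialized, so the
    residual branch contributes nothing at initialization and the block
    is an identity map over the pre-trained features."""
    c = person_feat.shape[1]
    W_q = rng.standard_normal((c, d)) / np.sqrt(c)
    W_k = rng.standard_normal((c, d)) / np.sqrt(c)
    W_v = rng.standard_normal((c, d)) / np.sqrt(c)
    W_o = np.zeros((d, c))  # zero-initialized output projection
    Q, K, V = person_feat @ W_q, cloth_feat @ W_k, cloth_feat @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # correspondence map
    out = attn @ V @ W_o                           # zero at initialization
    return person_feat + out, attn

rng = np.random.default_rng(0)
person = rng.standard_normal((16, 32))  # 16 spatial tokens, 32 channels
cloth = rng.standard_normal((24, 32))   # 24 clothing tokens
y, attn = zero_cross_attention(person, cloth, rng)
print(np.allclose(y, person))  # True: zero init leaves features unchanged
print(attn.shape)              # (16, 24): one attention row per query token
```

During training, gradients flow through the attention map even while W_o is zero, so the block gradually learns where on the clothing each body location should attend; the paper's attention total variation loss then sharpens this map.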
Comment: 17 pages
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2312.01725
Accession Number: edsarx.2312.01725
Database: arXiv