virtual try-on, conditional image synthesis, human parsing, thin plate spline, semantic adaptation
Image-based virtual try-on systems have significant commercial value in online garment shopping. However, prior methods fail to handle details appropriately, and so are defective in maintaining the original appearance of organizational items such as the arms, neck, and in-shop garment. We propose a novel high-fidelity virtual try-on network to generate realistic results. Specifically, a distributed pipeline is used to generate organizational items simultaneously. First, the in-shop garment is warped using a thin plate spline (TPS) to provide a coarse shape reference, and a corresponding target semantic map is then generated, which can adaptively respond to the distribution of different items triggered by different garments. Second, organizational items are componentized separately using our novel semantic map-based image adjustment network (SMIAN) to avoid interference between body parts. Finally, all components are integrated by SMIAN to generate the overall result. A priori dual-modal information is incorporated into the tail layers of SMIAN to improve the convergence rate of the network. Experiments demonstrate that the proposed method retains the details of conditional information better than current methods, and achieves convincing quantitative and qualitative results on existing benchmark datasets.
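The abstract's first stage warps the in-shop garment with a thin plate spline. As a minimal sketch of the standard 2D TPS interpolation underlying such warping (not the authors' implementation; function names and the control-point setup are illustrative), one fits radial-basis weights plus an affine term so that source control points map exactly onto target points:

```python
import numpy as np

def tps_kernel(r2):
    """TPS radial basis U(r) = r^2 log(r^2), with U(0) defined as 0."""
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Fit a 2D thin plate spline mapping src control points (n,2) to dst (n,2).

    Solves the standard interpolation system
        [K  P] [w]   [dst]
        [P' 0] [a] = [ 0 ]
    where K holds pairwise kernel values and P = [1, x, y] is the affine part.
    """
    n = src.shape[0]
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)
    return params[:n], params[n:]   # kernel weights w (n,2), affine a (3,2)

def apply_tps(pts, src, w, a):
    """Warp arbitrary points (m,2) with a fitted TPS."""
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = tps_kernel(d2)
    return a[0] + pts @ a[1:] + U @ w
```

In a try-on pipeline, `src` would be control points on the flat in-shop garment and `dst` their estimated positions on the target body; applying the fitted spline to every garment pixel coordinate yields the coarse warp.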
Du, Chenghu; Yu, Feng; Jiang, Minghua; Hua, Ailing; Zhao, Yaxin; Wei, Xiong; Peng, Tao; and Hu, Xinrong
"High fidelity virtual try-on network via semantic adaptation and distributed componentization,"
Computational Visual Media: Vol. 8: Iss. 4, Article 9.
Available at: https://dc.tsinghuajournals.com/computational-visual-media/vol8/iss4/9