Dressing in the wild by watching dance videos

Garment transfer, the process of transferring clothing onto the image of a query person while preserving their identity, is a task with great commercial potential. A recent paper published on arXiv.org explores the in-the-wild garment transfer problem.

Image credits: PXFuel, CC0 Public Domain


The researchers suggest a self-supervised training scheme that works on easily accessible dance videos. A novel generative network is proposed to facilitate arbitrary garment transfer under complex poses. It combines the advantages of two methods currently in use: 2D pixel flow and 3D vertex flow. An online cycle optimization is designed to further enhance the synthesis quality.

A new large-scale video dataset has also been constructed to support related human-centric research areas beyond virtual try-on. The model successfully produces results with sharp textures and intact garment shapes.

While significant progress has been made in garment transfer, one of the most applicable directions of human-centric image generation, existing works overlook in-the-wild imagery, presenting severe garment-person misalignment as well as noticeable degradation in fine texture details. This paper therefore attends to virtual try-on in real-world scenes and brings essential improvements in authenticity and naturalness, especially for loose garments (e.g., skirts, formal dresses), challenging poses (e.g., crossed arms, bent legs), and cluttered backgrounds. Specifically, we find that pixel flow excels at handling loose garments whereas vertex flow is preferred for hard poses, and by combining their advantages we propose a novel generative network called wFlow that can effectively push garment transfer into the in-the-wild context. Moreover, former approaches require paired images for training. Instead, we cut down the laboriousness by working on a newly constructed large-scale video dataset named Dance50k with self-supervised cross-frame training and an online cycle optimization. The proposed Dance50k can boost real-world virtual dressing by covering a wide variety of garments under dancing poses. Extensive experiments demonstrate the superiority of our wFlow in generating realistic garment transfer results for in-the-wild images without resorting to expensive paired datasets.
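The core idea of combining the two branches can be illustrated loosely. In wFlow the fusion is learned end-to-end by the network; the sketch below is only a minimal illustration under assumed, hypothetical names (`warp`, `fuse`, a soft `mask`): a garment image is warped once by a 2D pixel flow and once by a flow rendered from 3D vertices, and the two results are blended with a per-pixel soft mask.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp an H x W x C image with a per-pixel flow field
    (H x W x 2, offsets in pixels), using nearest-neighbour sampling."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def fuse(pixel_branch, vertex_branch, mask):
    """Blend the two warped garment images with a soft mask in [0, 1]:
    mask -> 1 favours the pixel-flow branch (loose garments),
    mask -> 0 favours the vertex-flow branch (hard poses)."""
    m = mask[..., None]
    return m * pixel_branch + (1.0 - m) * vertex_branch
```

With a zero flow field, `warp` returns the input unchanged, and `fuse` with an all-ones mask returns the pixel-flow branch only; in the actual model both the flows and the mask are predicted by learned sub-networks.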

Research article: Dong, X., "Dressing in the Wild by Watching Dance Videos", 2022. Link to paper: https://arxiv.org/abs/2203.15320
Project site: https://awesome-wflow.github.io/

