CSSNet: Image-based clothing style switch

Shao Pin Huang, Der Lor Way*, Zen Chung Shih

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose CSSNet, a framework for exchanging upper clothing between people with different poses, body shapes, and clothing. Our approach consists of three stages: (1) disentangling features such as clothing, body pose, and semantic segmentation from the source and target person; (2) synthesizing realistic, high-resolution images of the target person in the new dressing style; and (3) transferring complex logos from the source clothing onto the target's wearing. The proposed end-to-end neural network architecture generates an image of a specific person wearing the target clothing. In addition, we propose a post-processing method to recover complex logos that are missing or blurred in the network outputs. Our results are more realistic and of higher quality than those of previous methods, and our method preserves both clothing shape and texture.
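The three stages above can be sketched as a simple pipeline: disentangle both people into features, synthesize the target wearing the source's clothing, then post-process to restore logos. This is a minimal illustrative skeleton, not the paper's actual code; all function and field names (`disentangle`, `synthesize`, `transfer_logo`, the `Features` container) are assumptions, and string placeholders stand in for images and feature tensors.

```python
# Hypothetical sketch of the three-stage clothing-switch pipeline.
# Strings stand in for images / feature maps; names are illustrative only.
from dataclasses import dataclass


@dataclass
class Features:
    cloth: str          # disentangled clothing appearance
    pose: str           # body-pose representation
    segmentation: str   # semantic segmentation of the person


def disentangle(person_image: str) -> Features:
    """Stage 1: split a person image into cloth, pose, and segmentation."""
    return Features(cloth=f"cloth({person_image})",
                    pose=f"pose({person_image})",
                    segmentation=f"seg({person_image})")


def synthesize(cloth: str, pose: str, segmentation: str) -> str:
    """Stage 2: render the target person wearing the source clothing."""
    return f"render({cloth},{pose},{segmentation})"


def transfer_logo(output_image: str, source_cloth: str) -> str:
    """Stage 3: post-process to recover logos the network blurred or lost."""
    return f"logo({output_image},{source_cloth})"


def style_switch(source_image: str, target_image: str) -> str:
    """Put the source person's upper clothing onto the target person."""
    src = disentangle(source_image)
    tgt = disentangle(target_image)
    dressed = synthesize(src.cloth, tgt.pose, tgt.segmentation)
    return transfer_logo(dressed, src.cloth)


print(style_switch("personA", "personB"))
```

Note that the synthesized image combines the *source's* clothing features with the *target's* pose and segmentation, which is what lets the method handle differing poses and body shapes.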

Original language: English
Title of host publication: International Workshop on Advanced Imaging Technology, IWAIT 2020
Editors: Phooi Yee Lau, Mohammad Shobri
Publisher: SPIE
ISBN (Electronic): 9781510638358
DOIs
State: Published - 2020
Event: International Workshop on Advanced Imaging Technology, IWAIT 2020 - Yogyakarta, Indonesia
Duration: 5 Jan 2020 – 7 Jan 2020

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 11515
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: International Workshop on Advanced Imaging Technology, IWAIT 2020
Country: Indonesia
City: Yogyakarta
Period: 5/01/20 – 7/01/20

Keywords

  • Generative adversarial network
  • Human parsing
  • Pose estimation
  • Style transfer
  • Virtual try-on

