cp-vton-plus
Official implementation for "CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On", CVPRW 2020
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.7%) to scientific vocabulary
Basic Info
- Host: GitHub
- Owner: minar09
- License: MIT
- Language: Python
- Default Branch: master
- Homepage: https://minar09.github.io/cpvtonplus/
- Size: 419 KB
Statistics
- Stars: 372
- Watchers: 10
- Forks: 131
- Open Issues: 61
- Releases: 0
Metadata Files
README.md
CP-VTON+ (CVPRW 2020)
Official implementation for "CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On" from CVPRW 2020.
Project page: https://minar09.github.io/cpvtonplus/.
Saved/Pre-trained models: Checkpoints
Dataset: VITON_PLUS
The code and pre-trained models are tested with PyTorch 0.4.1, torchvision 0.2.1, opencv-python 4.1, and Pillow 5.4 in a Python 3 environment.
Project page | Paper | Dataset | Model | Video
Usage
This pipeline trains and tests GMM and TOM consecutively. GMM generates the warped clothes according to the target person; TOM then blends the warped-clothes output from GMM with the target person's properties to generate the final try-on image. The steps are as follows (a driver sketch follows the list):
1) Install the requirements
2) Download/Prepare the dataset
3) Train GMM network
4) Get warped clothes for the training set with a trained GMM network, and copy warped clothes & masks inside the data/train directory
5) Train TOM network
6) Test GMM for the testing set
7) Get warped clothes for the testing set, copy warped clothes & masks inside the data/test directory
8) Test TOM for the testing set
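The eight steps above can also be chained from a small driver script. Below is a minimal, hypothetical Python sketch using subprocess; the commands mirror the examples in the Training and Testing sections, and the training pairs-list name (train_pairs.txt) is an assumption that should match your dataset.

```python
import subprocess

def run(cmd):
    """Run one pipeline stage, echoing the command and failing fast on errors."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 3: train GMM (flags copied from the Training section below).
run(["python", "train.py", "--name", "GMM", "--stage", "GMM",
     "--workers", "4", "--save_count", "5000", "--shuffle"])

# Step 4: generate warped clothes for the *training* set with the trained GMM.
# "train_pairs.txt" is an assumed pairs-list name; adjust to your dataset.
run(["python", "test.py", "--name", "GMM", "--stage", "GMM", "--workers", "4",
     "--datamode", "train", "--data_list", "train_pairs.txt",
     "--checkpoint", "checkpoints/GMM/gmm_final.pth"])
# ...then copy result/GMM/train/warp-cloth and warp-mask into data/train
# (see the copy sketch in the Training section).

# Step 5: train TOM.
run(["python", "train.py", "--name", "TOM", "--stage", "TOM",
     "--workers", "4", "--save_count", "5000", "--shuffle"])
```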
Installation
This implementation is built and tested with PyTorch 0.4.1.
We recommend installing PyTorch and torchvision with conda: conda install pytorch=0.4.1 torchvision=0.2.1 -c pytorch
For the remaining packages, run pip install -r requirements.txt
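Because the code targets old releases, a quick version check before training can save debugging time. A minimal sketch, verifying the versions quoted in this README:

```python
# Sanity-check the pinned versions this repo is tested with.
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 0.4.1
print("torchvision:", torchvision.__version__)  # expected: 0.2.1
print("CUDA available:", torch.cuda.is_available())

assert torch.__version__.startswith("0.4"), "CP-VTON+ is tested with PyTorch 0.4.1"
assert torchvision.__version__.startswith("0.2"), "tested with torchvision 0.2.1"
```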
Data Preparation
For training/testing on the VITON dataset, our full, processed dataset is available here: https://1drv.ms/u/s!Ai8t8GAHdzVUiQRFmTPrtrAy0ZP5?e=rS1aK8. After downloading, unzip it into your data directory.
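After unzipping, you can quickly confirm the layout. The folder names below follow the inputs listed under "Testing with custom images"; the data root is an assumption, so adjust it to wherever you unzipped:

```python
import os

DATA_ROOT = "data"  # assumed data directory; adjust to your unzip location
EXPECTED = ["image", "image-parse", "cloth", "cloth-mask", "pose"]

# Report which of the expected VITON folders are present for each split.
for mode in ("train", "test"):
    for folder in EXPECTED:
        path = os.path.join(DATA_ROOT, mode, folder)
        print(f"{path}: {'ok' if os.path.isdir(path) else 'MISSING'}")
```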
Training
Run python train.py with your specific options for each of the GMM and TOM stages.
For example, GMM: python train.py --name GMM --stage GMM --workers 4 --save_count 5000 --shuffle
Then run test.py for the GMM network on the training dataset, which will generate the warped clothes and masks in the "warp-cloth" and "warp-mask" folders inside the "result/GMM/train/" directory. Copy these two folders into your data directory, for example into "data/train" (a copy sketch follows below).
Then run the TOM stage: python train.py --name TOM --stage TOM --workers 4 --save_count 5000 --shuffle
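Copying the GMM outputs into the data directory can be done manually or with a few lines of Python. A minimal sketch, assuming the result/ and data/ paths quoted above:

```python
import os
import shutil

MODE = "train"  # switch to "test" after running GMM on the testing set
SRC = os.path.join("result", "GMM", MODE)
DST = os.path.join("data", MODE)

# Copy the GMM warp outputs so the TOM stage can read them from data/.
for folder in ("warp-cloth", "warp-mask"):
    src, dst = os.path.join(SRC, folder), os.path.join(DST, folder)
    if os.path.isdir(dst):
        shutil.rmtree(dst)  # drop any stale copies first
    shutil.copytree(src, dst)
    print(f"copied {src} -> {dst}")
```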
Testing
Run python test.py with your specific usage options.
For example, GMM: python test.py --name GMM --stage GMM --workers 4 --datamode test --data_list test_pairs.txt --checkpoint checkpoints/GMM/gmm_final.pth
This generates the warped clothes and masks in the "warp-cloth" and "warp-mask" folders inside the "result/GMM/test/" directory. Copy these folders into your data directory, for example inside the "data/test" folder (the copy sketch in the Training section applies with MODE = "test").
Then run the TOM stage: python test.py --name TOM --stage TOM --workers 4 --datamode test --data_list test_pairs.txt --checkpoint checkpoints/TOM/tom_final.pth
Inference/Demo
Download the pre-trained models from here: https://1drv.ms/u/s!Ai8t8GAHdzVUiQA-o3C7cnrfGN6O?e=EaRiFP. Then follow the same steps as in Testing to run inference with our models. The code and pre-trained models are tested with PyTorch 0.4.1, torchvision 0.2.1, opencv 4.1 and Pillow 5.4.
Testing with custom images
To run the model on custom internet images, make sure you have the following:
1) image: an image of a person, cropped/resized to 192 x 256 (width x height) pixels
2) image-parse: the human parsing map, which you can generate from the person image with the CIHP_PGN or Graphonomy pretrained networks (see this comment)
3) cloth: an in-shop cloth image, cropped/resized to 192 x 256 (width x height) pixels
4) cloth-mask: a binary mask of the cloth image, which you can generate with a simple Pillow/OpenCV function (a sketch follows this list)
5) pose: pose keypoints of the person, generated with the OpenPose COCO-18 model (OpenPose from the official repository is preferred)
6) a test_pairs.txt file for your custom images; follow the VITON dataset format to keep the same arrangement, or modify the code accordingly
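For item 4, a simple threshold is usually enough when the in-shop cloth sits on a white background; for item 6, each line of the pairs file names one person image and one cloth image. A minimal OpenCV sketch (the filenames and the 240 threshold are illustrative assumptions):

```python
import os
import cv2

# 4) Binary cloth mask via thresholding (assumes a light/white background).
cloth = cv2.imread("data/test/cloth/shirt_01.jpg")  # hypothetical filename
cloth = cv2.resize(cloth, (192, 256))               # dsize is (width, height)
gray = cv2.cvtColor(cloth, cv2.COLOR_BGR2GRAY)
# Pixels darker than the background count as cloth; tune 240 for your images.
_, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
os.makedirs("data/test/cloth-mask", exist_ok=True)
cv2.imwrite("data/test/cloth-mask/shirt_01.jpg", mask)

# 6) Pairs file: one "person_image cloth_image" pair per line, VITON style.
with open("data/test_pairs.txt", "w") as f:
    f.write("person_01.jpg shirt_01.jpg\n")
```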
What to do in case of unexpected results
Many factors can cause distorted or unexpected results. Please work through the following:
1) First, try the original VITON dataset and test-pair combinations; check the intermediate results and the final output, and confirm they are as expected.
2) If the original VITON results are not as expected, check the issues raised in this GitHub repo; people have already found several issues and documented how they solved them.
3) If the original VITON test results are as expected, run your custom test sets, check the intermediate results, and debug where it is going wrong.
4) If you are testing with custom images, check this repository's README and the related issues on how to run with custom images.
It is difficult to diagnose an issue from a single image/output; as mentioned, many factors are involved. Debug step by step, visually check all the available intermediate and final inputs/outputs (a tiling sketch for this follows below), and check multiple cases to see whether the issue occurs in all of them. Good luck!
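One convenient way to eyeball the intermediate and final outputs is to tile them side by side. A small Pillow sketch; all paths here are illustrative and should point at your own result/ directories:

```python
from PIL import Image

# Tile input, intermediate, and final images for quick visual debugging.
paths = [
    "data/test/image/person_01.jpg",             # input person (illustrative)
    "result/GMM/test/warp-cloth/person_01.jpg",  # GMM warped cloth
    "result/TOM/test/try-on/person_01.jpg",      # final try-on output
]
images = [Image.open(p).resize((192, 256)) for p in paths]

sheet = Image.new("RGB", (192 * len(images), 256), "white")
for i, im in enumerate(images):
    sheet.paste(im, (i * 192, 0))
sheet.save("debug_sheet.jpg")  # inspect several cases, not just one
```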
Citation
Please cite our paper in your publications if it helps your research:
@InProceedings{Minar_CPP_2020_CVPR_Workshops,
  title = {CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On},
  author = {Minar, Matiur Rahman and Thai, Thanh Tuan and Ahn, Heejune and Rosin, Paul and Lai, Yu-Kun},
  booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {June},
  year = {2020}
}
Acknowledgements
This implementation is largely based on the PyTorch implementation of CP-VTON. We are extremely grateful for their public implementation.
Owner
- Name: Matiur Rahman Minar
- Login: minar09
- Kind: user
- Location: Dhaka, Bangladesh
- Website: https://minar09.github.io/
- Twitter: minar09
- Repositories: 72
- Profile: https://github.com/minar09
- Bio: There is no true God except Allah (SWT) and Muhammad (PBUH) is the Messenger of Allah (SWT).
Citation (CITATION.cff)
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: >-
  Official implementation for "CP-VTON+: Clothing
  Shape and Texture Preserving Image-Based Virtual
  Try-On" from CVPRW 2020.
message: >-
  If you use this code and dataset, please cite it
  using the metadata from this file. Also, please
  cite the paper separately as well.
type: software
authors:
  - given-names: Matiur Rahman
    family-names: Minar
    email: minar09.bd@gmail.com
    orcid: 'https://orcid.org/0000-0002-3128-2915'
  - given-names: Thanh Tuan
    family-names: Thai
    email: thaithanhtuan1987@gmail.com
    orcid: 'https://orcid.org/0000-0003-2748-0529'
identifiers:
  - type: url
    value: 'https://github.com/minar09/cp-vton-plus'
    description: public URL for the code and dataset
repository-code: 'https://github.com/minar09/cp-vton-plus'
url: 'https://minar09.github.io/cpvtonplus/'
repository: 'https://github.com/minar09/cp-vton-plus'
abstract: >-
  CP-VTON+ (CVPRW 2020)
  Official implementation for "CP-VTON+: Clothing
  Shape and Texture Preserving Image-Based Virtual
  Try-On" from CVPRW 2020.
  Project page: https://minar09.github.io/cpvtonplus/.
  Saved/Pre-trained models: Checkpoints
  Dataset: VITON_PLUS
  The code and pre-trained models are tested with
  pytorch 0.4.1, torchvision 0.2.1, opencv-python 4.1
  and pillow 5.4 (Python 3 env).
keywords:
  - Virtual try-on
  - Fashion
license: MIT
GitHub Events
Total
- Issues event: 4
- Watch event: 27
- Delete event: 1
- Issue comment event: 8
- Push event: 2
- Pull request review event: 2
- Pull request event: 4
- Fork event: 15
- Create event: 1
Last Year
- Issues event: 4
- Watch event: 27
- Delete event: 1
- Issue comment event: 8
- Push event: 2
- Pull request review event: 2
- Pull request event: 4
- Fork event: 15
- Create event: 1
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Matiur Rahman Minar | m****d@g****m | 75 |
| Thái Thanh Tuấn | t****7@y****m | 1 |
| SKO | 4****7 | 1 |
| av-keerthikumar | k****r@b****o | 1 |
| Ray Yuan | r****y@r****m | 1 |
Issues and Pull Requests
Last synced: 9 months ago
All Time
- Total issues: 117
- Total pull requests: 5
- Average time to close issues: 3 months
- Average time to close pull requests: about 21 hours
- Total issue authors: 95
- Total pull request authors: 5
- Average comments per issue: 4.18
- Average comments per pull request: 0.4
- Merged pull requests: 4
- Bot issues: 0
- Bot pull requests: 1
Past Year
- Issues: 5
- Pull requests: 3
- Average time to close issues: N/A
- Average time to close pull requests: about 1 hour
- Issue authors: 4
- Pull request authors: 3
- Average comments per issue: 1.4
- Average comments per pull request: 0.0
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 1
Top Authors
Issue Authors
- ikenaga530 (3)
- SarahH20 (3)
- RubReh (3)
- Bouncer51 (3)
- cv38 (2)
- koushiksb (2)
- arjuntechy (2)
- lokesh0606 (2)
- dnalexen (2)
- gamenerd457 (2)
- rafidgotit (2)
- ghost (2)
- sudip550 (2)
- tasinislam21 (2)
- AyeshaShafique (2)
Pull Request Authors
- minar09 (2)
- dependabot[bot] (2)
- av-keerthikumar (2)
- uyw4687 (1)
- abc123yuanrui (1)
Dependencies
- numpy *
- opencv-contrib-python *
- pillow *
- tensorboardX *
- torch ==0.4.1
- torchvision ==0.2.1
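Rendered as the requirements.txt that the Installation section installs from, the list above would look roughly like this (version pins reproduced from the metadata; the unpinned entries accept any version):

```text
numpy
opencv-contrib-python
pillow
tensorboardX
torch==0.4.1
torchvision==0.2.1
```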