ly015 / intrinsic_flow

Home Page: http://mmlab.ie.cuhk.edu.hk/projects/pose-transfer/

Python 99.47% Shell 0.53%

intrinsic_flow's Introduction

Dense Intrinsic Appearance Flow for Human Pose Transfer

This is a PyTorch implementation of the CVPR 2019 paper Dense Intrinsic Appearance Flow for Human Pose Transfer.

[Figure: fig_intro]

Requirements

  • python 2.7
  • pytorch (0.4.0)
  • numpy
  • opencv
  • scikit-image
  • tqdm
  • imageio

Install dependencies:

pip install -r requirements.txt
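
After installing, a quick sanity check of the environment helps catch version mismatches early. The snippet below is only an illustrative check (not part of the repository); the expected versions are the ones listed above.

# environment sanity check (illustrative; not part of the repository)
import sys
import torch
import cv2
import skimage
import numpy

print(sys.version)                # expect 2.7.x
print(torch.__version__)          # expect 0.4.0
print(cv2.__version__)
print(skimage.__version__)
print(numpy.__version__)
print(torch.cuda.is_available())  # training and testing assume a CUDA GPU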

Resources

Datasets

Download and unzip preprocessed datasets with the following scripts.

bash scripts/download_deepfashion.sh
bash scripts/download_market1501.sh

Or you can manually download them from the following links:

Pretrained Models

Download pretrained models with the following script.

bash scripts/download_models.sh

The pretrained models listed below will be downloaded into the folder ./checkpoints. You can also manually download them from here.

DeepFashion:
  • PoseTransfer_0.1 (w/o. dual encoder)
  • PoseTransfer_0.2 (w/o. flow)
  • PoseTransfer_0.3 (w/o. vis)
  • PoseTransfer_0.4 (w/o. pixel warping)
  • PoseTransfer_0.5 (full)

Market-1501:
  • PoseTransfer_m0.1 (w/o. dual encoder)
  • PoseTransfer_m0.2 (w/o. flow)
  • PoseTransfer_m0.3 (w/o. vis)
  • PoseTransfer_m0.4 (w/o. pixel warping)
  • PoseTransfer_m0.5 (full)

Others:
  • Fasion_Inception (computes FashionIS)
  • Fasion_Attr (computes AttrRec-k)

Testing

DeepFashion

  1. Run scripts/test_pose_transfer.py to generate images and compute the SSIM score (see the SSIM sketch after these steps).
python scripts/test_pose_transfer.py --gpu_ids 0 --id PoseTransfer_0.5 --which_epoch best --save_output
  2. Compute the inception score with the following script. (Note that this script is derived from improved-gan and requires TensorFlow.)
# python scripts/inception_score.py image_dir gpu_ids
python scripts/inception_score.py checkpoints/PoseTransfer_0.5/output/ 0
  3. Compute FashionIS and AttrRec-k with the following scripts.
# FashionIS
python scripts/fashion_inception_score.py --test_dir checkpoints/PoseTransfer_0.5/output/

# AttrRec-k
python scripts/fashion_attribute_score.py --test_dir checkpoints/PoseTransfer_0.5/output/
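
For reference, the SSIM metric reported by the test script can also be computed directly with scikit-image (already in the requirements). The sketch below is only an illustration with hypothetical file names, not the repository's evaluation code.

# illustrative SSIM computation; file names are hypothetical
import cv2
from skimage.measure import compare_ssim  # renamed to skimage.metrics.structural_similarity in newer scikit-image

gen = cv2.imread('generated.jpg')  # generated image
gt = cv2.imread('target.jpg')      # ground-truth target image

# multichannel=True treats the last axis as color channels
score = compare_ssim(gen, gt, multichannel=True)
print('SSIM: %.4f' % score)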

Market-1501

  1. Run scripts/test_pose_transfer.py to generate images and compute the SSIM / masked-SSIM score (an illustrative masked-SSIM sketch follows these steps).
python scripts/test_pose_transfer.py --gpu_ids 0 --id PoseTransfer_m0.5 --which_epoch best --save_output --masked
  2. Compute the inception score or the masked inception score with the following scripts.
# IS
python scripts/inception_score.py checkpoints/PoseTransfer_m0.5/output/ 0

# masked-IS (only for market-1501)
python scripts/masked_inception_score.py checkpoints/PoseTransfer_m0.5/output/ 0
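
Masked metrics on Market-1501 restrict the comparison to the foreground person. A common way to do this is to zero out background pixels with a binary mask before computing SSIM; the sketch below illustrates that idea and may differ from the repository's exact implementation (file names are hypothetical).

# illustrative masked-SSIM sketch; mask handling is an assumption
import cv2
import numpy as np
from skimage.measure import compare_ssim

gen = cv2.imread('generated.jpg')
gt = cv2.imread('target.jpg')
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)  # 0 = background, >0 = person

mask3 = (mask > 0)[..., None].astype(np.uint8)  # H x W x 1 binary mask
score = compare_ssim(gen * mask3, gt * mask3, multichannel=True)
print('masked-SSIM: %.4f' % score)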

Training

DeepFashion

  1. Train flow regression module. (See all options in ./options/flow_regression_options.py)
python scripts/train_flow_regression_module.py --id id_flow --gpu_ids 0 --which_model unet --dataset_name deepfashion

You can alternatively set --which_model unet_v2 to use an improved version of the network architecture with fewer parameters (only tested on Market-1501).
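
The flow regression module is trained to predict a dense flow field. A standard supervision signal for dense flow is the average end-point error (EPE); the sketch below only illustrates that idea under assumed tensor shapes and is not necessarily the loss used in this repository.

# illustrative end-point-error (EPE) loss for dense flow regression
import torch

def epe_loss(flow_pred, flow_gt, mask=None):
    # flow_pred, flow_gt: (N, 2, H, W) flow fields; mask: optional (N, 1, H, W) validity mask
    err = torch.norm(flow_pred - flow_gt, p=2, dim=1, keepdim=True)  # per-pixel L2 error
    if mask is not None:
        return (err * mask).sum() / mask.sum().clamp(min=1.0)
    return err.mean()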

  2. Train human pose transfer models. Set --pretrained_flow_id and --pretrained_flow_epoch to load the flow regression module. (See all options in ./options/pose_transfer_options.py.) An illustrative feature-warping sketch follows the commands below.
# w/o. dual encoder
python scripts/train_pose_transfer_model.py --id id_pose_1 --gpu_ids 1 --dataset_name deepfashion --which_model_G unet

# w/o. flow
python scripts/train_pose_transfer_model.py --id id_pose_2 --gpu_ids 2 --dataset_name deepfashion --which_model_G dual_unet --G_feat_warp 0

# w/o. visibility
python scripts/train_pose_transfer_model.py --id id_pose_3 --gpu_ids 3 --dataset_name deepfashion --which_model_G dual_unet --G_feat_warp 1 --G_vis_mode none

# w/o. pixel warping
python scripts/train_pose_transfer_model.py --id id_pose_4 --gpu_ids 4 --dataset_name deepfashion --which_model_G dual_unet --G_feat_warp 1 --G_vis_mode residual

# full (need a pretrained pose transfer model without pixel warping)
python scripts/train_pose_transfer_model.py --id id_pose_5 --gpu_ids 5 --dataset_name deepfashion --G_pix_warp 1 --which_model_G dual_unet --pretrained_G_id id_pose_4 --pretrained_G_epoch 8
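
As mentioned above, the core operation behind the flow-based models is warping features (or pixels) from the reference image to the target pose using a predicted flow field. Below is a minimal warping sketch with PyTorch's grid_sample, assuming the flow is given as per-pixel (dx, dy) offsets; it is not the repository's exact implementation.

# illustrative flow-based warping with grid_sample; the flow convention is an assumption
import torch
import torch.nn.functional as F

def warp(feat, flow):
    # feat: (N, C, H, W) feature or image tensor; flow: (N, 2, H, W) pixel offsets (dx, dy)
    n, _, h, w = feat.shape
    # base sampling grid in absolute pixel coordinates
    xs = torch.arange(w, dtype=feat.dtype, device=feat.device).view(1, 1, 1, w).expand(n, 1, h, w)
    ys = torch.arange(h, dtype=feat.dtype, device=feat.device).view(1, 1, h, 1).expand(n, 1, h, w)
    grid = torch.cat([xs, ys], dim=1) + flow  # absolute source coordinates (x, y)
    # normalize to [-1, 1] as required by grid_sample
    gx = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack([gx, gy], dim=3))

# zero flow is (approximately) an identity warp
feat = torch.randn(1, 64, 32, 32)
out = warp(feat, torch.zeros(1, 2, 32, 32))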

Market-1501

Set --dataset_name market to train models on the Market-1501 dataset. Data-related parameters will be adjusted automatically (see .auto_set() in ./options/flow_regression_options.py and ./options/pose_transfer_options.py for details).

Citation

@inproceedings{li2019dense,
  author = {Li, Yining and Huang, Chen and Loy, Chen Change},
  title = {Dense Intrinsic Appearance Flow for Human Pose Transfer},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year = {2019}}

intrinsic_flow's People

Contributors

dependabot[bot], ly015


intrinsic_flow's Issues

About pixel warping

Thanks for sharing your work.
I am interested in the pixel warping (Section 3.5) of your paper. As I understand it, pixel warping works when the pose does not change much, because the visible region of the reference pose can provide some details. When the pose changes drastically and the visible region from the reference image is almost non-existent, how can details from the reference image still be added to the final result, as Fig. 8 (rows 2, 3, 4) shows?

Training and Testing issues

Hi ly015. Thanks for keeping the repository publicly available. I am currently working on the same concept.
I have some questions:

  1. How can we test flow_regression_module.py?
  2. Can I train on my own data with some other segmentation method, such as https://github.com/Engineering-Course/CIHP_PGN?
  3. How can I test PoseTransfer_0.1 (dual)?
    With HMR, I am now able to create visibility and flow maps for my own data, but I am stuck at segmentation. So can I use the above-mentioned segmentation method?
    Your help would be much appreciated.

Thanks & Regards,
SandhyaLaxmi

How to select pretrained G epoch when training the full model

Thanks for your brilliant code.

When I try to train a full model as you suggested:
"# full (need a pretrained pose transfer model without pixel warping)"

How do I select the pre-trained G epoch? In the README.md, you selected the 8th epoch:
"--pretrained_G_id id_pose_4 --pretrained_G_epoch 8"

But according to my results after training the model without pixel warping, the best model is the 5th epoch, and the visualizations suggest that the results from the last epoch are better than those from previous epochs. So, which metric do you use to select the pre-trained model?

small batch_size causes bad result

When I run the test script with batch_size=8:

python scripts/test_pose_transfer_model.py --gpu_ids 0 --id PoseTransfer_0.5 --batch_size 8 --n_vis 8 --which_epoch best --save_output

the results look good:
[result image: test_epochbest_bs8]

but if I change batch_size to 1:

python scripts/test_pose_transfer_model.py --gpu_ids 0 --id PoseTransfer_0.5 --batch_size 1 --n_vis 8 --which_epoch best --save_output

the results are different and worse than with batch_size=8:
[result image: test_epochbest_bs1]

how to test on our own pictures?

Hello, I wonder if we can use our own pictures (176x256) instead of the DeepFashion dataset.
And how should we pre-process our dataset to use your pre-trained model?

What is the variable d?

Thanks for sharing your brilliant code.
Here is a question. What is the variable d in line 146 of 'data/pose_transfer_dataset.py'?

Python 3 Support?

Hello, thank you for your great work.
Can you provide this implementation in Python 3? Python 2 is no longer maintained and is deprecated.
Thank you

download links fail

Hi, all datasets and models cannot be downloaded anymore.
Could you update the links in README.md?
Thanks!

silhouette24, silhouette6 not giving correct segmentation

Hi. Thanks for the update.
Using the code in create_seg.py, I am getting images for silhouette24 and silhouette6 like the ones below, which are not accurate. Can you tell me how to get exact segmentation with the silhouette? Can I change the flength, near, and far values in the renderer function?
For silhouette24, I get this:
[result images: temp_orig, temp]
For silhouette6, I get this:
[result image: 1009803-1-4x_temp]

Thanks in advance.
