
HomoInterpGAN

Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation (CVPR 2019, oral)

Installation

The implementation is based on PyTorch. Our model is trained and tested with version 1.0.1.post2. Please install the relevant packages based on your own environment.

All other required packages are listed in "requirements.txt". Please run

pip install -r requirements.txt

to install these packages.

Dataset

Download the "Align&Cropped Images" of the CelebA dataset. If the original link is unavailable, you can also download it here.

Training

Firstly, cd to the project directory and run

export PYTHONPATH=./:$PYTHONPATH

before executing any script.

To train a model on CelebA, please run

python run.py train --data_dir CELEBA_ALIGNED_DIR -sp checkpoints/CelebA -bs 128 -gpu 0,1,2,3 

Key arguments

--data_dir: The path of the celeba_aligned images. 
-sp: The save path. The trained model, logs, and intermediate results are stored in this directory.
-bs: Batch size.
-gpu: The GPU index.
--attr: This specifies the target attributes. Note that we concatenate multiple attributes defined in CelebA into a grouped attribute. We use "@" to join multiple attributes into a grouped one (e.g., Mouth_Slightly_Open@Smiling forms an "expression" attribute), and "," to separate different grouped attributes. See the default argument of "run.py" for details.
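The grouping syntax can be illustrated with a short sketch (the helper below is illustrative, not code from "run.py"):

```python
def parse_attr(attr_string):
    """Split a comma-separated attribute string into groups,
    where each group is a list of CelebA attribute names joined by '@'."""
    return [group.split('@') for group in attr_string.split(',')]

# two grouped attributes: an "expression" group and an "age" group
groups = parse_attr('Mouth_Slightly_Open@Smiling,Young')
print(groups)  # [['Mouth_Slightly_Open', 'Smiling'], ['Young']]
```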

Testing

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling  --filter_target_attr Smiling -s 1 --branch_idx 0 --n_ref 5 -bs 8

This conducts attribute manipulation with reference samples selected in CelebA dataset. The reference samples are selected based on their attributes (--filter_target_attr), and the interpolation path should be chosen accordingly.

Key arguments:

-mp: the model path. The checkpoints of encoder, interpolator and decoder should be stored in this path.
-sp: the save path of the results.
--filter_target_attr: This specifies the attributes of the reference images. The attribute names can be found in "info/attribute_names.txt". We can specify one attribute (e.g., "Smiling") or several attributes (e.g., "Smiling@Mouth_Slightly_Open" will filter mouth open smiling reference images). To filter negative samples, add "NOT" as prefix to the attribute names, such as "NOTSmiling", "NOTSmiling@Mouth_Slightly_Open".
--branch_idx: This specifies the branch index of the interpolator. Each branch handles a group of attributes. Note that the physical meaning of each branch is specified by "--attr" during training. 
-s: The strength of the manipulation. A range of [0, 2] is suggested; if s > 1, the effect is exaggerated.
-bs: the batch size of the testing images. 
-n_ref: the number of images used as reference. 
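The reference-filtering rule described for --filter_target_attr can be sketched against a CelebA-style attribute table, where labels are 1 (positive) and -1 (negative). This is a hedged illustration with pandas, not the repository's actual implementation:

```python
import pandas as pd

def filter_references(frame, filter_attr):
    """Select rows matching every attribute in an '@'-joined filter string.
    A 'NOT' prefix selects negative samples for that attribute."""
    mask = pd.Series(True, index=frame.index)
    for name in filter_attr.split('@'):
        if name.startswith('NOT'):
            mask &= frame[name[3:]] <= 0   # negative samples
        else:
            mask &= frame[name] > 0        # positive samples
    return frame[mask]

# toy attribute table following the CelebA 1/-1 convention
frame = pd.DataFrame({'Smiling': [1, -1, 1],
                      'Mouth_Slightly_Open': [1, 1, -1]})
print(filter_references(frame, 'Smiling@Mouth_Slightly_Open').index.tolist())  # [0]
print(filter_references(frame, 'NOTSmiling').index.tolist())                   # [1]
```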

Testing on unaligned images

Note that performance can degrade if the testing images are not well aligned. Thus we also provide a tool for face alignment. Please place all your testing images in a folder (e.g., examples/original), then run

python facealign/align_all.py examples/original examples/aligned

to align the testing images to the samples in CelebA. Then you can run the manipulation with

python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling  --filter_target_attr Smiling -s 1 --branch_idx 0 --n_ref 5 -bs 8 --test_folder examples/aligned

Note that an additional argument "--test_folder" is specified.

Pretrained model

We have also provided a pretrained model here. It is trained with the default parameters. The meaning of each branch of the interpolator is listed below.

Branch index   Grouped attribute   Corresponding labels on CelebA
1              Expression          Mouth_Slightly_Open, Smiling
2              Gender trait        Male, No_Beard, Mustache, Goatee, Sideburns
3              Hair color          Black_Hair, Blond_Hair, Brown_Hair, Gray_Hair
4              Hair style          Bald, Receding_Hairline, Bangs
5              Age                 Young
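The table above is one-based, while the testing examples pass --branch_idx 0 for the expression branch, which suggests the CLI index is zero-based. A small lookup sketch under that assumption (the helper is illustrative, not part of the repository):

```python
# table order as listed above; --branch_idx appears to be zero-based,
# since the Smiling example uses --branch_idx 0
BRANCHES = [
    ('Expression',   ['Mouth_Slightly_Open', 'Smiling']),
    ('Gender trait', ['Male', 'No_Beard', 'Mustache', 'Goatee', 'Sideburns']),
    ('Hair color',   ['Black_Hair', 'Blond_Hair', 'Brown_Hair', 'Gray_Hair']),
    ('Hair style',   ['Bald', 'Receding_Hairline', 'Bangs']),
    ('Age',          ['Young']),
]

def branch_for(name):
    """Return the zero-based --branch_idx for a grouped-attribute name."""
    return next(i for i, (n, _) in enumerate(BRANCHES) if n == name)

print(branch_for('Expression'))  # 0
print(branch_for('Age'))         # 4
```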

Updates

  • Jun 17, 2019: It is observed that the face alignment tool is not perfect, and the results of "Testing on unaligned images" do not perform as well as results on the CelebA dataset. To make the model less sensitive to the alignment issue, we add random shifting in center_crop during training. The shifting range can be controlled by "--random_crop_bias". We have updated the pretrained model by fine-tuning it with "random_crop_bias=10", which leads to better results on unaligned images.

Reference

Ying-Cong Chen, Xiaogang Xu, Zhuotao Tian, Jiaya Jia, "Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation", Computer Vision and Pattern Recognition (CVPR), 2019. PDF

@inproceedings{chen2019Homomorphic,
  title={Homomorphic Latent Space Interpolation for Unpaired Image-to-image Translation},
  author={Chen, Ying-Cong and Xu, Xiaogang and Tian, Zhuotao and Jia, Jiaya},
  booktitle={CVPR},
  year={2019}
}

Contact

Please contact [email protected] if you have any questions or suggestions.

homointerpgan's People

Contributors

yingcong


homointerpgan's Issues

TypeError: 'NoneType' object has no attribute '__getitem__'

loading default VGG
/mnt/backup/project/ycchen/datasets/face/images/celeba_aligned/185954.jpg
Traceback (most recent call last):
  File "run.py", line 192, in <module>
    engine.run()
  File "run.py", line 187, in run
    exec ('self.{}()'.format(self.args.command))
  File "<string>", line 1, in <module>
  File "run.py", line 156, in attribute_manipulation
    img_out = [tmp for tmp in ref_loader]
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 615, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/content/HomoInterpGAN/data/attributeDataset.py", line 163, in __getitem__
    img = util.readRGB(self.files[index]).astype(np.float32)
  File "/content/HomoInterpGAN/util/util.py", line 144, in readRGB
    return img[:, :, [2, 1, 0]]
TypeError: 'NoneType' object has no attribute '__getitem__'

at
python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling --filter_target_attr Smiling -s 1 --branch_idx 0 --n_ref 5 -bs 8 --test_folder examples/aligned
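The traceback ends in readRGB indexing the result of an image read; with OpenCV, cv2.imread returns None for a missing or unreadable path, which would produce exactly this error. A defensive sketch of the idea (read_rgb and the loader parameter are stand-ins for illustration, not the repository's code):

```python
import numpy as np

def read_rgb(path, loader):
    """Read an image via `loader` (e.g. cv2.imread), which returns None
    on a missing or unreadable file -- the likely cause of the traceback."""
    img = loader(path)
    if img is None:
        raise IOError('cannot read image: {}'.format(path))
    return img[:, :, [2, 1, 0]].astype(np.float32)  # BGR -> RGB

# simulate a missing file: the loader returns None, so we fail loudly
# with the offending path instead of a bare AttributeError
try:
    read_rgb('missing.jpg', lambda p: None)
except IOError as e:
    print(e)  # cannot read image: missing.jpg
```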

Why do we need "Rigorous Training" for the attribute classifier?

First of all, congratulation for this outstanding work, but I still have two small questions:

  • Why do we need "Rigorous Training" for the attribute classifier? Can you give a more concrete example?
  • Can I consider T^k in the interpolator as a "filter" that removes variables unrelated to the corresponding attribute from the "feature code" F?

Thanks!

Interpolation between dogs and cats

Hello! I am very interested in your work, especially the interpolation between dogs and cats. I saw this result in the supplementary material. I want to interpolate without attributes, because there are no labels or attributes in my dataset. Could you please tell me how to do this?

Is it possible to generate an image without a reference image?

Hi,

I would like to thank you first for your amazing work !

As the title says, I would like to know whether there is any way to generate an image without any reference image at all.
For example, generating a woman's face from only a man's face (without a reference face).

test_selected_curve and attribute_manipulation both need a reference image to generate the interpolation (correct me if I'm wrong).

Thank you in advance !

TypeError: '>' not supported between instances of 'str' and 'int'

I encountered this error when I tried to execute the training script. The line of code that caused the exception is seen here. I am looking forward to your help.

Below is the detailed error message:

/home/shhs/anaconda3/envs/torch_1_0_py3_6/bin/python /home.bak/shhs/soft/pycharm-2019.1.1/helpers/pydev/pydevd.py --multiproc --qt-support=auto --client 127.0.0.1 --port 46783 --file /media/shhs/Peterou2/user/code/HomoInterpGAN/run.py attribute_manipulation -bs 8 -gpu 0
pydev debugger: process 6843 is connecting

Connected to pydev debugger (build 191.6605.12)
loading default VGG

  • Total Images: 162770

Traceback (most recent call last):
  File "/home.bak/shhs/soft/pycharm-2019.1.1/helpers/pydev/pydevd.py", line 1741, in <module>
    main()
  File "/home.bak/shhs/soft/pycharm-2019.1.1/helpers/pydev/pydevd.py", line 1735, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home.bak/shhs/soft/pycharm-2019.1.1/helpers/pydev/pydevd.py", line 1135, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home.bak/shhs/soft/pycharm-2019.1.1/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/media/shhs/Peterou2/user/code/HomoInterpGAN/run.py", line 205, in <module>
    engine.run()
  File "/media/shhs/Peterou2/user/code/HomoInterpGAN/run.py", line 200, in run
    exec ('self.{}()'.format(self.args.command))
  File "<string>", line 1, in <module>
  File "/media/shhs/Peterou2/user/code/HomoInterpGAN/run.py", line 161, in attribute_manipulation
    _, test_dataset = self.load_dataset()
  File "/media/shhs/Peterou2/user/code/HomoInterpGAN/run.py", line 106, in load_dataset
    csv_path='info/celeba-with-orientation.csv')
  File "/media/shhs/Peterou2/user/code/HomoInterpGAN/data/attributeDataset.py", line 220, in __init__
    f3 = self.frame.iloc[:, 1:] > 0
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/ops.py", line 2108, in f
    res = self._combine_const(other, func)
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/frame.py", line 5120, in _combine_const
    return ops.dispatch_to_series(self, other, func)
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/ops.py", line 1157, in dispatch_to_series
    new_data = expressions.evaluate(column_op, str_rep, left, right)
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 208, in evaluate
    return _evaluate(op, op_str, a, b, **eval_kwargs)
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 68, in _evaluate_standard
    return op(a, b)
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/ops.py", line 1128, in column_op
    for i in range(len(a.columns))}
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/ops.py", line 1128, in <dictcomp>
    for i in range(len(a.columns))}
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/ops.py", line 1766, in wrapper
    res = na_op(values, other)
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/ops.py", line 1625, in na_op
    result = _comp_method_OBJECT_ARRAY(op, x, y)
  File "/home/shhs/anaconda3/envs/torch_1_0_py3_6/lib/python3.6/site-packages/pandas/core/ops.py", line 1603, in _comp_method_OBJECT_ARRAY
    result = libops.scalar_compare(x, y, op)
  File "pandas/_libs/ops.pyx", line 97, in pandas._libs.ops.scalar_compare
TypeError: '>' not supported between instances of 'str' and 'int'
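The failing line is `f3 = self.frame.iloc[:, 1:] > 0`, which suggests some attribute columns of the CSV were parsed as strings rather than integers. A minimal reproduction plus a possible coercion workaround (a sketch only, not the repository's actual fix; the column names are illustrative):

```python
import pandas as pd

# toy frame where attribute columns came back as strings, as can happen
# with a malformed or differently formatted attribute CSV
frame = pd.DataFrame({'file': ['0001.jpg'], 'Smiling': ['1'], 'Male': ['-1']})

# comparing a str column with an int raises the TypeError above;
# coercing the attribute columns to numeric first avoids it
attrs = frame.iloc[:, 1:].apply(pd.to_numeric, errors='coerce')
mask = attrs > 0
print(mask.values.tolist())  # [[True, False]]
```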

Questions about the results in supplementary material

Thanks for sharing the nice work!

I have a few questions about the results of testing RaFD model on wild images:

  1. As the images in RaFD dataset are well aligned, I am wondering whether you take any alignment steps there? Or just simply crop the face -> transform the expression -> place it back to the original image?
  2. May I know whether you use all the images in RaFD to train the model or only the images under frontal camera (90 degree)?

Looking forward to your reply. Thanks.

supplementary material

Very fancy idea!
When I was reading the paper, I could not find the supplementary material. Is it available?
Thanks~

weird results

I tried
python run.py attribute_manipulation -mp checkpoints/CelebA -sp checkpoints/CelebA/test/Smiling --filter_target_attr NOTSmiling -s 1 --branch_idx 0 --n_ref 5 -bs 8 --test_folder examples/aligned

and I am getting weird results that lack the identity preservation shown in the paper.
(Two attached screenshots from Jun 11, 2019 compare the obtained result with the result from the paper.)
