
Homework 1 (Color-Transfer and Texture-Transfer)

A clean and readable PyTorch implementation of CycleGAN (https://arxiv.org/abs/1703.10593)

Assignment

  1. 20% (Training CycleGAN)
  2. 10% (CycleGAN inference on your own images)
  3. 20% (Comparison with other methods)
  4. 30% (Assistant)
  5. 20% (Mutual evaluation)

Reference: Super fast color transfer between images (PyImageSearch tutorial)
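
For the color-transfer baseline referenced above, a minimal sketch of statistics-based color transfer (matching per-channel mean and standard deviation in LAB space, in the spirit of that tutorial) could look like the following. OpenCV and NumPy are assumed to be installed; the function name and file paths are illustrative only:

import cv2
import numpy as np

def transfer_color(style_path, content_path, out_path):
    # Read both images and move them to LAB space, where the channels
    # are closer to perceptually independent.
    style = cv2.cvtColor(cv2.imread(style_path), cv2.COLOR_BGR2LAB).astype("float32")
    content = cv2.cvtColor(cv2.imread(content_path), cv2.COLOR_BGR2LAB).astype("float32")
    # Per-channel statistics of both images.
    s_mean, s_std = style.mean(axis=(0, 1)), style.std(axis=(0, 1))
    c_mean, c_std = content.mean(axis=(0, 1)), content.std(axis=(0, 1))
    # Re-center the content image and rescale it to the style statistics.
    result = (content - c_mean) / (c_std + 1e-6) * s_std + s_mean
    result = np.clip(result, 0, 255).astype("uint8")
    cv2.imwrite(out_path, cv2.cvtColor(result, cv2.COLOR_LAB2BGR))

transfer_color("style.jpg", "content.jpg", "result.jpg")

This can serve as a simple classical baseline when comparing CycleGAN against other methods.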

Getting Started

Please first install Anaconda and create an Anaconda environment using the environment.yml file.

conda env create -f environment.yml

After you create the environment, activate it.

source activate hw1

The current implementation is intended for GPU use, so you need a CUDA-capable GPU and CUDA installed on your machine.
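
To confirm that the environment actually sees your GPU before training, a quick check (not part of this repository) is:

import torch
print(torch.cuda.is_available())          # True on a working CUDA setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the detected GPU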

Training

1. Download dataset

mkdir datasets
bash ./download_dataset.sh <dataset_name>

Valid <dataset_name> values are: apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, cezanne2photo, ukiyoe2photo, vangogh2photo, maps, cityscapes, facades, iphone2dslr_flower, ae_photos

Alternatively you can build your own dataset by setting up the following directory structure:

.
├── datasets                   
|   ├── <dataset_name>         # e.g. apple2orange
|   |   ├── trainA             # Contains domain A images (e.g. apples)
|   |   ├── trainB             # Contains domain B images (e.g. oranges)
|   |   ├── testA              # Testing
|   |   └── testB              # Testing

2. Train

python train.py --dataroot datasets/<dataset_name>/ --cuda

This command starts a training session on the images under dataroot/trainA and dataroot/trainB, using the hyperparameters that the CycleGAN authors report as giving the best results.
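
For reference, each CycleGAN training step combines an adversarial loss with a cycle-consistency loss (and, by default, an identity loss). The sketch below shows only the generator-side objective and assumes generators netG_A2B, netG_B2A, discriminators netD_A, netD_B, and a batch real_A, real_B; it is an illustration of the loss, not the exact code in train.py, and the weights follow the paper's defaults:

import torch
import torch.nn.functional as F

def generator_loss(netG_A2B, netG_B2A, netD_A, netD_B, real_A, real_B,
                   lambda_cycle=10.0, lambda_identity=5.0):
    # Adversarial (LSGAN) loss: translated images should fool the discriminators.
    fake_B = netG_A2B(real_A)
    fake_A = netG_B2A(real_B)
    pred_fake_B = netD_B(fake_B)
    pred_fake_A = netD_A(fake_A)
    loss_gan = (F.mse_loss(pred_fake_B, torch.ones_like(pred_fake_B))
                + F.mse_loss(pred_fake_A, torch.ones_like(pred_fake_A)))
    # Cycle-consistency loss: A -> B -> A and B -> A -> B should reconstruct the inputs.
    loss_cycle = (F.l1_loss(netG_B2A(fake_B), real_A)
                  + F.l1_loss(netG_A2B(fake_A), real_B))
    # Identity loss: an image already in the target domain should pass through unchanged.
    loss_identity = (F.l1_loss(netG_A2B(real_B), real_B)
                     + F.l1_loss(netG_B2A(real_A), real_A))
    return loss_gan + lambda_cycle * loss_cycle + lambda_identity * loss_identity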

Both generator and discriminator weights will be saved under the output directory ./output/<dataset_name>/.

If you don't own a GPU, remove the --cuda option, although I advise you to get one!

Testing

The pre-trained weights are on Google Drive. Download the files and save them as ./output/<dataset_name>/netG_A2B.pth and ./output/<dataset_name>/netG_B2A.pth.

python test.py --dataroot datasets/<dataset_name>/ --cuda

This command takes the images under the dataroot/testA/ and dataroot/testB/ directories, runs them through the generators, and saves the output under ./output/<dataset_name>/.
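
To run a single personal image through a trained generator (assignment item 2), a minimal sketch looks like the following. It assumes the generator class is Generator from this repository's models.py and that it expects 3-channel inputs normalized to [-1, 1]; the file names are placeholders:

import torch
import torchvision.transforms as transforms
from PIL import Image

from models import Generator  # assumption: generator definition in this repo's models.py

# Load the trained A -> B generator on CPU.
netG_A2B = Generator(3, 3)
netG_A2B.load_state_dict(torch.load("output/apple2orange/netG_A2B.pth", map_location="cpu"))
netG_A2B.eval()

# Test-time preprocessing: resize, convert to tensor, normalize to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

img = preprocess(Image.open("my_photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    fake = netG_A2B(img)

# Map the output back from [-1, 1] to [0, 1] and save it.
out = (0.5 * (fake.squeeze(0) + 1.0)).clamp(0, 1)
transforms.ToPILImage()(out).save("my_photo_translated.jpg")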

Examples of the generated outputs (default parameters) on the apple2orange, summer2winter_yosemite, and horse2zebra datasets:

[example output images: apple2orange, summer2winter_yosemite, horse2zebra]

Acknowledgments

The code is modified from PyTorch-CycleGAN. All credit goes to the authors of CycleGAN: Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros.
