
Animating Landscape

+-----------------------------------------------------------
|Animating Landscape:
|Self-Supervised Learning of Decoupled Motion and Appearance for Single-Image Video Synthesis
|Project page: http://www.cgg.cs.tsukuba.ac.jp/~endo/projects/AnimatingLandscape/

+-----------------------------------------------------------

This repository contains the source code for the following paper:

Yuki Endo, Yoshihiro Kanamori, Shigeru Kuriyama:
"Animating Landscape: Self-Supervised Learning of Decoupled Motion and Appearance for Single-Image Video Synthesis,"
ACM Transactions on Graphics (Proc. of SIGGRAPH Asia 2019), 38, 6, pp.175:1-175:19, November 2019.

Dependencies

  1. Python (we used version 2.7.12)
  2. PyTorch (we used version 0.4.0)
    NOTE: The default grid_sample behavior changed to align_corners=False in PyTorch 1.3.0. If you use a newer PyTorch version, specify align_corners=True in the grid_sample calls in test.py and train.py (see the compatibility sketch after this list).
  3. OpenCV (we used version 2.4.13)
  4. scikit-learn (we used version 0.19.0)

The dependencies of these libraries are also required. The code might work with other versions as well.
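On newer PyTorch versions, a minimal compatibility wrapper (a sketch, not part of the repo) can restore the pre-1.3 grid_sample behavior that test.py and train.py assume:

import torch.nn.functional as F

def grid_sample_compat(input, grid):
    # PyTorch >= 1.3 defaults to align_corners=False; this code base
    # assumes the old behavior, so request align_corners=True.
    try:
        return F.grid_sample(input, grid, align_corners=True)
    except TypeError:
        # PyTorch < 1.3 has no align_corners argument and already
        # behaves as if align_corners=True.
        return F.grid_sample(input, grid)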

Animating a landscape image

Download the pretrained models (mirror), put them into the models directory, and run test.py, specifying an input image and an output directory, for example:

python test.py --gpu 0 -i ./inputs/1.png -o ./outputs  

Three videos (looped motion, flow field, and final result) are generated in the output directory. The outputs can differ from run to run because the latent codes are randomly sampled each time.

You can also pick latent codes manually from the pre-trained codebooks, using scalar values in [0,1] for motion (-mz) and appearance (-az), for example:

python test.py --gpu 0 -i ./inputs/1.png -o ./outputs -mz 0.9 -az 0.1  
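To compare several codes side by side, a hypothetical sweep script (not part of the repo; paths and grid values are just examples) could invoke test.py over a small grid of -mz/-az values:

import itertools
import os
import subprocess

# Sweep a 3x3 grid of motion/appearance codes, one output directory per pair.
for mz, az in itertools.product([0.0, 0.5, 1.0], repeat=2):
    outdir = "./outputs/mz{:.1f}_az{:.1f}".format(mz, az)
    if not os.path.isdir(outdir):
        os.makedirs(outdir)
    subprocess.check_call([
        "python", "test.py", "--gpu", "0",
        "-i", "./inputs/1.png", "-o", outdir,
        "-mz", str(mz), "-az", str(az),
    ])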

Training new models

Run train.py, specifying a dataset directory and a training mode (motion or appearance), for example:

python train.py --gpu 0 --indir ./training_data/motion --mode motion  

Trained models are saved in the models directory.
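The exact dataset layout is not documented here; as one assumption-laden sketch (directory names are hypothetical, and the expected format may differ), frames could be extracted from each clip with OpenCV at the 640x360 resolution reported in the paper:

import os
import cv2

SRC = "./raw_videos"            # hypothetical directory of source clips
DST = "./training_data/motion"  # directory passed to train.py via --indir

for name in sorted(os.listdir(SRC)):
    clip_dir = os.path.join(DST, os.path.splitext(name)[0])
    if not os.path.isdir(clip_dir):
        os.makedirs(clip_dir)
    cap = cv2.VideoCapture(os.path.join(SRC, name))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize to the training resolution reported in the paper.
        cv2.imwrite(os.path.join(clip_dir, "%05d.png" % idx),
                    cv2.resize(frame, (640, 360)))
        idx += 1
    cap.release()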

For more optional arguments, run each script with the --help option.

Citation

Please cite our paper if you find the code useful:

@article{endo2019animatinglandscape,
  title = {Animating Landscape: Self-Supervised Learning of Decoupled Motion and Appearance for Single-Image Video Synthesis},
  author = {Yuki Endo and Yoshihiro Kanamori and Shigeru Kuriyama},
  journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia 2019)},
  year = {2019},
  volume = 38,
  number = 6,
  pages = {175:1--175:19}
}

Acknowledgements

This code borrows the encoder code from BicycleGAN and the Instance Normalization code from fast-neural-style.


Issues

The dataset for training motion predictor

As the paper says, "The dataset was divided into 1,825 video clips for training and 224 clips for testing at the resolution of 640 × 360," and "The resultant video clips contain 227 frames on average."

Does that mean there are 1,825 training clips containing 227 frames on average, and that the frames within each clip are consecutive?

Long calculation of codebook for appearance

During training of the appearance part, I encountered rather long delays (15-20 min) in the calculation of the codebook values, evidently because of the number of frames in the training set.
In the published code the codebook is calculated for every frame (in my case, 30 videos of 3,000-4,000 frames each). By contrast, the codebook_a in the published model contains just 125 arrays, each only dozens of entries long, even though you mention ~2,000 videos with hundreds of frames each in your dataset.
Could you please clarify how this codebook should be properly constructed?
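One plausible construction, offered as an assumption rather than an official answer: cluster the per-frame latent codes into a compact codebook, e.g. with k-means from scikit-learn (already a listed dependency). Names here are hypothetical:

import numpy as np
from sklearn.cluster import KMeans

def build_codebook(latent_codes, n_entries=125):
    # latent_codes: (num_frames, dim) array of per-frame encoder outputs.
    # Returns an (n_entries, dim) array of cluster centers as the codebook.
    km = KMeans(n_clusters=n_entries, random_state=0)
    km.fit(np.asarray(latent_codes))
    return km.cluster_centers_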

NaNs during training

Thank you for this very interesting repo.
I ran into some issues with training, though: the parameters (the codebook values for motion, and total_loss for appearance) drop to NaN within a few epochs (5-20) when training with the default hyperparameters.
Increasing the learning rate to 0.5~1e-4 eliminates this behaviour, but it doesn't look like a real solution.
Did you encounter such issues in your practice, and do you have any advice on how to sort this out?
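A common mitigation for this kind of divergence (a generic sketch with hypothetical names, not the authors' fix) is to skip non-finite losses and clip the gradient norm:

import math
import torch

def safe_step(model, optimizer, loss, max_norm=1.0):
    # Skip the update entirely if the loss has already gone non-finite.
    loss_value = float(loss)
    if math.isnan(loss_value) or math.isinf(loss_value):
        return False
    optimizer.zero_grad()
    loss.backward()
    # Clip the global gradient norm so a single bad batch cannot blow up
    # the parameters (clip_grad_norm_ is available since PyTorch 0.4).
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return True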
