hwalsuklee / tensorflow-fast-style-transfer

A simple, concise tensorflow implementation of fast style transfer

License: Apache License 2.0

style-transfer tensorflow fast neural-style offline change-style

tensorflow-fast-style-transfer's Introduction

Fast Style Transfer

A TensorFlow implementation of fast style transfer as described in Perceptual Losses for Real-Time Style Transfer and Super-Resolution by Johnson et al.

I recommend checking my previous implementation of A Neural Algorithm of Artistic Style (neural style), since this implementation closely follows it.

Sample results

All style images and content images used to produce the following sample results are provided in the style and content folders.

Chicago

The following results, obtained with --max_size 1024, use the chicago image, which is commonly used by other implementations to demonstrate performance.

Click on result images to see full size images.




Female Knight

The source image is from https://www.artstation.com/artwork/4zXxW

Results were obtained with default settings except --max_size 1920.
Each image was rendered in approximately 100 ms on a GTX 980 Ti.

Click on result images to see full size images.




Usage

Prerequisites

  1. TensorFlow
  2. Python packages: numpy, scipy, PIL (or Pillow), matplotlib
  3. Pretrained VGG19 file: imagenet-vgg-verydeep-19.mat
          * Please download the file from the link above.
          * Save the file under pre_trained_model.
  4. MSCOCO train2014 DB: train2014.zip
          * Please download the file from the link above. (Note that the file size is over 12 GB!)
          * Extract the images into train2014.

Train

python run_train.py --style <style file> --output <output directory> --trainDB <trainDB directory> --vgg_model <model directory>

Example: python run_train.py --style style/wave.jpg --output model --trainDB train2014 --vgg_model pre_trained_model

Arguments

Required :

  • --style: Filename of the style image. Default: images/wave.jpg
  • --output: Directory path for the trained model. The training log is also saved here. Default: models
  • --trainDB: Relative or absolute directory path to the MSCOCO DB. Default: train2014
  • --vgg_model: Relative or absolute directory path to the pre-trained VGG model. Default: pre_trained_model

Optional :

  • --content_weight: Weight of the content loss. Default: 7.5e0
  • --style_weight: Weight of the style loss. Default: 5e2
  • --tv_weight: Weight of the total-variation loss. Default: 2e2
  • --content_layers: Space-separated VGG-19 layer names used for the content loss. Default: relu4_2
  • --style_layers: Space-separated VGG-19 layer names used for the style loss. Default: relu1_1 relu2_1 relu3_1 relu4_1 relu5_1
  • --content_layer_weights: Space-separated weights of each content layer in the content loss. Default: 1.0
  • --style_layer_weights: Space-separated weights of each style layer in the style loss. Default: 0.2 0.2 0.2 0.2 0.2
  • --num_epochs: Number of epochs to run. Default: 2
  • --batch_size: Batch size. Default: 4
  • --learn_rate: Learning rate for the Adam optimizer. Default: 1e-3
  • --checkpoint_every: Save frequency for checkpoints. Default: 1000
  • --test: Filename of a content image used for testing during training. Default: None
  • --max_size: Maximum width or height of the input images. None means the image size is not changed. Default: None
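To make the role of --style_layers and --style_layer_weights concrete, here is an illustrative pure-Python sketch of the style loss. The repo computes this with TensorFlow ops over VGG-19 feature maps, and the exact normalization constants differ; this is a simplified sketch, not the repo's code.

```python
def gram(features):
    """Gram matrix of one layer's features.

    features: list of C channel vectors, each flattened to length H*W.
    """
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]

def style_loss(gen_feats, style_feats, layer_weights):
    """Weighted sum, over the chosen style layers, of squared
    Gram-matrix distances between generated and style features."""
    loss = 0.0
    for w, gf, sf in zip(layer_weights, gen_feats, style_feats):
        G, S = gram(gf), gram(sf)
        loss += w * sum((G[i][j] - S[i][j]) ** 2
                        for i in range(len(G)) for j in range(len(G)))
    return loss
```

With the default settings, each of the five relu*_1 layers contributes one such Gram-distance term, scaled by its weight of 0.2.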

Trained models

You can download all 6 trained models from here

Test

python run_test.py --content <content file> --style_model <style-model file> --output <output file> 

Example: python run_test.py --content content/female_knight.jpg --style_model models/wave.ckpt --output result.jpg

Arguments

Required :

  • --content: Filename of the content image. Default: content/female_knight.jpg
  • --style_model: Filename of the style model. Default: models/wave.ckpt
  • --output: Filename of the output image. Default: result.jpg

Optional :

  • --max_size: Maximum width or height of the input images. None means the image size is not changed. Default: None

Train time

Training for 2 epochs with batch size 8 takes 6-8 hours, depending on which style image you use.
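For a rough sense of what those hours correspond to, the iteration count under the default settings works out as below (assuming the standard MSCOCO train2014 size of 82,783 images; verify against your extracted folder):

```python
# Rough iteration count for default training settings.
num_examples = 82783   # standard size of MSCOCO train2014
batch_size = 4         # default --batch_size
num_epochs = 2         # default --num_epochs

iters_per_epoch = num_examples // batch_size
total_iterations = num_epochs * iters_per_epoch
```

So a full default run is on the order of 40,000 iterations.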

References

The implementation is based on the projects:

[1] Torch implementation by paper author: https://github.com/jcjohnson/fast-neural-style

  • The major difference between [1] and this implementation is the use of VGG19 instead of VGG16 when computing the loss functions. I did not want to modify my previous style-transfer implementation too much.

[2] Tensorflow implementation : https://github.com/lengstrom/fast-style-transfer

  • The major difference between [2] and this implementation is the architecture of the image-transform network. I implemented it exactly as described in the paper; please see the paper's supplementary material.
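As a reading aid, here is a sketch of the transform-network layout as given in the supplementary of Johnson et al. The (kernel, stride, channels) values are recalled from the paper and have not been verified against this repo's code, so treat them as an assumption:

```python
# (layer type, kernel size, stride, output channels) - assumed from the
# supplementary of Johnson et al.; not extracted from this repository.
TRANSFORM_NET = [
    ("conv", 9, 1, 32),
    ("conv", 3, 2, 64),
    ("conv", 3, 2, 128),
    *[("residual", 3, 1, 128)] * 5,   # five residual blocks at 128 channels
    ("deconv", 3, 2, 64),
    ("deconv", 3, 2, 32),
    ("conv", 9, 1, 3),                # back to RGB
]
```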

Acknowledgements

This implementation has been tested with TensorFlow 1.0 and above on Windows 10 and Ubuntu 14.04.


tensorflow-fast-style-transfer's Issues

I have trouble with these two examples

  1. python run_train.py --style style/wave.jpg --output model --trainDB train2014 --vgg_model pre_trained_model
    • AttributeError: module 'tensorflow' has no attribute 'batch_matmul'
  2. python run_test.py --content content/female_knight.jpg --style_model models/wave.ckpt --output result.jpg
    • AttributeError: module 'tensorflow' has no attribute 'pack'
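Both errors come from running pre-1.0 code under TensorFlow 1.x, where these ops were renamed. A minimal, illustrative source migration (the mapping below reflects the TF 1.0 renames as I understand them):

```python
# TF 1.0 renamed several ops; these two account for the errors above.
RENAMED_OPS = {
    "tf.batch_matmul": "tf.matmul",  # batched matmul folded into tf.matmul
    "tf.pack": "tf.stack",           # pack/unpack became stack/unstack
}

def migrate_source(code):
    """Apply the TF 1.0 op renames to a source string."""
    for old, new in RENAMED_OPS.items():
        code = code.replace(old, new)
    return code
```

Running this replacement over the repo's .py files (or editing the two call sites by hand) should resolve both AttributeErrors.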

Testing during training.

It would be nice to automatically run a test image every X iterations during training (maybe off of CPU) to see progress as it's happening. I'll probably take a crack at it but I'm certain whatever I come up with won't be very elegant so thought I would mention it.

Number of iterations?

Hey, I'm just curious how many iterations and how long it took you to train the models behind your sample image results on your 980 Ti. Thanks!

model training poorly with default

Hi, I'm having trouble recreating the same quality of models. For example, the first image here uses your pre-trained rain_princess.ckpt file under the normal default parameters specified in this repo, while the second image uses a model I trained myself with default parameters (~6-8 hrs). I'm wondering why there is so much content loss compared to yours. I've changed many parameters from the defaults and each model gets further away from looking as nice as yours. Any suggestions? Has anyone else encountered this before?

Thanks in advance for the help!
Best,
Kelsey

from your model rain_princess.ckpt:
test_rainprincess

from default parameters:
content_weight=7.5e0, style_weight=5e2, content_layer_weights=[1.0], style_layer_weights=[.2,.2,.2,.2,.2], num_epochs=2, batch_size=4
TESTrain_default

from parameters:
content_weight=5e2, style_weight=5e2, content_layer_weights=[0.1], style_layer_weights=[.2,.2,.2,.2,.2], num_epochs=2, batch_size=4
TESTrain2

from parameters:
content_weight=5e2, style_weight=5e2, content_layer_weights=[0.4], style_layer_weights=[.2,.2,.2,.2,.2], num_epochs=2, batch_size=8
TEST3_rainprincess_still

from parameters:
content_weight=7.5e0, style_weight=5e2, content_layer_weights=[1.0], style_layer_weights=[.8,.8,.8,.8,.8], num_epochs=2, batch_size=8
TEST4_rainprincess_still

from parameters:
content_weight=7.5e0, style_weight=5e4, content_layer_weights=[1.0], style_layer_weights=[.8,.8,.8,.8,.8], num_epochs=2, batch_size=8
TEST5_rainprincess_still

Quality compared to style-transfer

Hey!

I'm trying to achieve roughly the same results with fast style transfer as with the classical one, and I can't. The best example, IMO, is style transfer from "Starry Night": with the Gatys approach (your implementation is a fine example) you can see spirals on images styled after Van Gogh. With fast style transfer there are no spirals, just some boring zigzags.

I uploaded a model trained with default parameters here. Before that I worked with Logan Engstrom's variation and experimented with different content and style weights, but could not achieve the spiral transfer either. In your code there is the possibility to play with the layers as well, but it is time-consuming to try all possible options. Can you suggest what I should try to achieve results closer to classical style transfer?

PS: It's not only about achieving spirals in "Starry Night" - it's just a very good illustration of the issue.

Package image processing

Could you please add support for batch image processing via the --content and --output arguments in run_test.py? When I try to pass a folder of images as an argument, I get errors.
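Until such support lands, a folder can be processed with a small wrapper. In this sketch, `stylize_one` is a hypothetical stand-in for whatever single-image pipeline run_test.py exposes (it is not a function in the repo):

```python
import os

def stylize_folder(content_dir, output_dir, stylize_one,
                   exts=(".jpg", ".jpeg", ".png")):
    """Run a single-image stylize function over every image in a folder.

    stylize_one(src_path, dst_path) is assumed to stylize one file;
    this helper only handles the directory iteration.
    """
    os.makedirs(output_dir, exist_ok=True)
    outputs = []
    for name in sorted(os.listdir(content_dir)):
        if name.lower().endswith(exts):
            src = os.path.join(content_dir, name)
            dst = os.path.join(output_dir, name)
            stylize_one(src, dst)
            outputs.append(dst)
    return outputs
```

Loading the model once and reusing the session across images would be much faster than invoking run_test.py per file, since the checkpoint restore dominates for small images.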

License

Is there a license on this software?

CPU

Hi, does this support CPU for stylisation?

ckpt problems

"
There is no la_muse.ckpt
Tensorflow r0.12 requires 3 files related to *.ckpt
If you want to restore any models generated from old tensorflow versions, this assert might be ignored
"

I used "la_muse.ckpt" to test the code but the error above appears. Does anyone know how to solve it?
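As the message hints, from TensorFlow r0.12 onward a "checkpoint" is a filename prefix rather than a single file: saving to models/la_muse.ckpt writes la_muse.ckpt.index, la_muse.ckpt.data-00000-of-00001, and la_muse.ckpt.meta. You still pass the prefix models/la_muse.ckpt to --style_model. The helper below is illustrative (not part of the repo) and assumes a single-shard save:

```python
import os

def checkpoint_files_present(prefix):
    """Check that a TF r0.12+ checkpoint prefix has its companion files.

    Assumes a single-shard save, i.e. one .data-00000-of-00001 file.
    """
    required = [prefix + ".index", prefix + ".data-00000-of-00001"]
    return all(os.path.exists(p) for p in required)
```

If only a bare la_muse.ckpt file exists, it was saved by a pre-r0.12 TensorFlow and needs to be converted or re-saved before a newer Saver can restore it.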

train result

Thanks for your implementation, it gives nice results.

I just finished training a model and a lot of files appeared... I'm not sure whether that's correct, or which one is the valid file.

I made a snapshot:
https://ibb.co/p09C6ZT

Thank you!

divide by zero encountered

Can anyone help, please?

TensorFlow 1.5, Python 2.7.12, CUDA 9.1, cuDNN 7, Ubuntu 17.10

style_transfer_trainer.py:219: RuntimeWarning: divide by zero encountered in long_scalars epoch = (iterations * self.batch_size) // num_examples

BTW: I replaced some of the train2014 images with my own images (of different sizes).
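The warning points at the epoch computation: if num_examples is 0 (e.g. the trainDB folder path is wrong, or the replaced files are not picked up as images), the floor division divides by zero. A defensive version of that computation, sketched against the line quoted above (the surrounding trainer code is assumed, not copied from the repo):

```python
def compute_epoch(iterations, batch_size, num_examples):
    """Current epoch as in style_transfer_trainer.py, with a guard
    against an empty training set (the cause of the warning above)."""
    if num_examples <= 0:
        raise ValueError(
            "No training images found - check that the trainDB directory "
            "contains the extracted train2014 images.")
    return (iterations * batch_size) // num_examples
```

So the first thing to verify is that the loader actually finds your replacement images in the train2014 directory.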

Android conversion

Hello,
first of all thank you for your code.

I successfully used it to train a model and style some images using a GPU.
Then, I would like to port the model to an Android device and try it there, but unfortunately I wasn't able to proceed.

I see that the pbtxt file is missing from the directory where the checkpoints are saved, so I wasn't able to freeze the graph and optimize it for mobile. I tried modifying your files to make them save the pbtxt file, but I'm not sure I did it correctly.
I added
tf.train.write_graph(self.sess.graph_def, ".", "graph.pbtxt")
just before
res = saver.save(self.sess,self.save_path+'/final.ckpt')
in style_transfer_trainer.py.

The saved graph file is probably way too big (~650 MB), and in any case, using summarize_graph from TensorFlow I get 211 possible outputs, so unfortunately I don't know how to freeze and optimize it.

Could you give me some advice? Thank you.

Train time

How long do I have to wait?
I am training with a 163 KB style image on a GPGPU server (TITAN X × 8).

I have already waited 3 days and the training iteration count is 25017.

I used python run_train.py --style style/image.jpg --output model --trainDB train2014 --vgg_model pre_trained_model

Please answer my question.
Thank you!

Reproducing demo models

Hi, kudos for an awesome project!
I managed to run this using tf 1.13.1 and it works perfectly.
However, when trying to train I get poor results, nowhere near the pre-trained models.
After trying the defaults, I also tried changing some of the hyperparameters, e.g. content/tv/style weights, learning rate, batch size.
I see that the learning rate and batch size have a significant effect on the end results, but I can't reproduce your models.
Can you please publish the parameters used for training? The style image sizes? Any other modifications to the code that help improve results?

output layer name

What's the output layer name? I opened a checkpoint with TensorBoard but I'm not able to find it.

Thanks

Video Styling

I have been using this code to style images as a hobby for a while now and want to move into video styling. Is anyone aware of an easy fix to make the checkpoint (.ckpt) files produced here usable with the following? https://github.com/lengstrom/fast-style-transfer

I would hate to have to retrain models for hours on end again.
