
seitaroshinagawa / chainer-partial_convolution_image_inpainting

113 stars · 30 forks · 1.59 MB

Reproduction of Nvidia image inpainting paper "Image Inpainting for Irregular Holes Using Partial Convolutions"

License: MIT License

Python 100.00%


chainer-partial_convolution_image_inpainting's Issues

Where should I put VGG.caffemodel?

Hi,
I downloaded VGG_ILSVRC_16_layers.caffemodel manually, but I don't know where to put it so the code can find it. Could you tell me? Thanks!
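
For context, a hedged note: chainer.links.VGG16Layers looks for a converted .npz under Chainer's dataset root and ships a converter classmethod. The paths below are the library's defaults, not something this repo documents:

    import os
    from chainer.links import VGG16Layers

    # Chainer caches converted VGG weights under its dataset root
    # (default ~/.chainer/dataset/pfnet/chainer/models/).
    npz_path = os.path.expanduser(
        "~/.chainer/dataset/pfnet/chainer/models/VGG_ILSVRC_16_layers.npz")

    # Convert the manually downloaded caffemodel once; afterwards
    # VGG16Layers() can load the cached .npz instead of downloading.
    VGG16Layers.convert_caffemodel_to_npz(
        "VGG_ILSVRC_16_layers.caffemodel", npz_path)
    vgg = VGG16Layers(pretrained_model=npz_path)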

Load the npz file and resume training

Excuse me, I have trained for 1000 iterations and obtained the npz file. How do I set things up so that training resumes from the 1000th iteration?
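
A minimal sketch of the usual Chainer pattern, assuming the snapshot was written by the trainer (file and variable names are placeholders):

    from chainer import serializers

    # ... build model, optimizer, updater, trainer as in train.py ...

    # Resume the whole training state (iteration count, optimizer, model)
    # from a trainer snapshot:
    serializers.load_npz("result/snapshot_iter_1000.npz", trainer)

    # Or, if only model weights were saved, load them into the model
    # before training starts:
    # serializers.load_npz("result/model1000.npz", model)

    trainer.run()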

How to print the mask during one batch?

I used a free-form mask generator to create masks of shape (batchsize, 1, 256, 256). I set batchsize to 1 and used batch_postprocess_images to reshape the mask to (256, 256, 1); its type is numpy.ndarray. The mask looks like this:
[mask image attachment]
How can I print the mask?

Looking forward to your reply. Thank you very much.
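
A hedged sketch for dumping such a mask to disk, assuming values in [0, 1] (the variable names are placeholders):

    import numpy as np
    from PIL import Image

    # mask: numpy.ndarray of shape (256, 256, 1) with values in [0, 1]
    mask = np.zeros((256, 256, 1), dtype=np.float32)   # stand-in for the real mask
    mask_2d = (mask[:, :, 0] * 255).astype(np.uint8)   # drop channel axis, scale to 0-255
    Image.fromarray(mask_2d, mode="L").save("mask.png")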

Obvious outline around the hole

Hi,
I produced my own irregular hole masks (bigger ones, shaped like objects).
The result is like this:
[result image]

The content inside the hole is good, but there is an obvious line around the hole's outline.

Also, all the Iout images are offset by 2 or 3 pixels from the corresponding Igt images. Maybe this offset is why the seam appears.

Thank you.

Replacement order

Excuse me: in the place2.py file, in the get_example method, you use random sampling to take out 8 pictures. If I want to take them out in order, from the first to the eighth, what should I change?
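
I have not checked the exact body of get_example, so this is only a sketch of the usual change, with assumed attribute names (self.keys for the image list):

    # hypothetical get_example excerpt -- replace the random draw, e.g.
    #   idx = np.random.choice(len(self.keys), 8)
    # with a fixed, ordered slice:
    idx = list(range(8))                      # first through eighth image, in order
    batch = [self.keys[i] for i in idx]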

The problem of the edge

Is it because the loss outside the cropping frame is ignored during training that the outermost part of the generated picture looks poor? How can this be solved?

Test error

I trained my model successfully, and it eventually generated model500000.npz.
But when I run "python generate_result.py -g 0 --load_model result/model500000.npz" in my terminal, an error occurs. I have tried many ways to solve this problem, in vain.

The output is listed below:

python generate_result.py -g 0 --load_model result/model500000.npz

Namespace(batch_size=4, crop_to=256, eval_folder='generated_results', gpu=0, load_dataset='place2_test', load_model='result/model500000.npz', resize_to=256)
loading vgg16 ...
ok
Completion model loaded
use gpu 0

Traceback (most recent call last):
File "generate_result.py", line 114, in
main()
File "generate_result.py", line 73, in main
batch = val_iter.next()
File "/home/fl/.local/lib/python3.6/site-packages/chainer/iterators/serial_iterator.py", line 48, in next
self._previous_epoch_detail = self.epoch_detail
File "/home/fl/.local/lib/python3.6/site-packages/chainer/iterators/serial_iterator.py", line 86, in epoch_detail
return self.epoch + self.current_position / len(self.dataset)
ZeroDivisionError: division by zero

What can I do to solve this problem? I would really appreciate it if you could help me, thank you!
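
For what it's worth, the traceback shows epoch_detail dividing by len(self.dataset), so the dataset behind val_iter loaded zero images. A quick sanity check, assuming common/paths.py exposes the test path under a name like val_place2 (the attribute name is a guess):

    import glob
    from common import paths

    # If this prints 0, the iterator's dataset is empty, and
    # epoch_detail's division by len(self.dataset) raises ZeroDivisionError.
    print(len(glob.glob(paths.val_place2 + "/*.jpg")))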

Where is the upsampling?

Excuse me! I'm new to the Chainer framework. In your code I found the convolution operation in the PConv class, but I can't find where upsampling is implemented. Can you point it out for me?
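
For reference (not a claim about this exact repo): the paper's decoder uses nearest-neighbor upsampling, which in Chainer is typically written as one of the following:

    import numpy as np
    import chainer.functions as F

    h = np.zeros((1, 64, 128, 128), dtype=np.float32)  # dummy feature map
    # Nearest-neighbor upsampling by a factor of 2:
    h = F.unpooling_2d(h, ksize=2, stride=2, cover_all=False)  # -> (1, 64, 256, 256)
    # Bilinear alternative:
    # h = F.resize_images(h, (h.shape[2] * 2, h.shape[3] * 2))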

CUBLAS_STATUS_NOT_SUPPORTED

Hi! Thank you very much for your excellent work! I am reading the paper and running your code. My tools are

python 3.6 + chainer 4.2.0 + CUDA 8.0

but it gives me the error "CUBLAS_STATUS_NOT_SUPPORTED". I tried to find a fix but found none (crying). Could you give me some tips?

These are the problem details:

 File "/home/cxd/code/code_python/chainer-partial_convolution_image_inpainting-master/chainer-partial_convolution_image_inpainting-master/updater.py", line 133, in update_core
    L_style = calc_loss_style(fs_I_out,fs_I_comp,fs_I_gt) #Loss style out and comp
  File "/home/cxd/code/code_python/chainer-partial_convolution_image_inpainting-master/chainer-partial_convolution_image_inpainting-master/updater.py", line 48, in calc_loss_style
    hout_gram = F.batch_matmul(hout,hout,transb=True)
  File "/root/anaconda3/lib/python3.6/site-packages/chainer/functions/math/matmul.py", line 276, in batch_matmul
    return BatchMatMul(transa=transa, transb=transb).apply((a, b))[0]
  File "/root/anaconda3/lib/python3.6/site-packages/chainer/function_node.py", line 258, in apply
    outputs = self.forward(in_data)
  File "/root/anaconda3/lib/python3.6/site-packages/chainer/functions/math/matmul.py", line 210, in forward
    return _batch_matmul(a, b, self.transa, self.transb, False),
  File "/root/anaconda3/lib/python3.6/site-packages/chainer/functions/math/matmul.py", line 178, in _batch_matmul
    return _matmul(a, b, transa, transb, transout)
  File "/root/anaconda3/lib/python3.6/site-packages/chainer/functions/math/matmul.py", line 49, in _matmul
    return xp.matmul(a, b)
  File "cupy/core/core.pyx", line 3525, in cupy.core.core.matmul
  File "cupy/core/core.pyx", line 3682, in cupy.core.core.matmul
  File "cupy/cuda/cublas.pyx", line 700, in cupy.cuda.cublas.sgemmStridedBatched
  File "cupy/cuda/cublas.pyx", line 718, in cupy.cuda.cublas.sgemmStridedBatched
  File "cupy/cuda/cublas.pyx", line 267, in cupy.cuda.cublas.check_status

The question about the mask

I am sorry to trouble you. I want to know the meaning of mask_b (https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/common/net.py#L62) in your code, and where you pixel-wise multiply the input image with the mask to simulate a broken image.
Also, the mask has the same size as the feature map x, which in turn matches the convolution sliding window, so why do you say the mask window is set to 1 (https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/common/net.py#L55)?
Besides, I cannot understand the mask-update rule.
I am sorry that I have so many questions; hoping for your response!
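
For anyone else reading: the partial-convolution and mask-update rule comes from the paper this repo reproduces (not from this repo's exact code). A minimal per-window sketch:

    import numpy as np

    def partial_conv_window(W, b, x_win, m_win):
        """One output pixel of a partial convolution (paper, Eqs. 1-2).
        W, x_win, m_win share one window shape; m_win is binary (1 = valid)."""
        if m_win.sum() > 0:
            scale = m_win.size / m_win.sum()            # sum(1) / sum(M)
            x_out = (W * x_win * m_win).sum() * scale + b
            m_out = 1.0   # mask update: window saw at least one valid pixel
        else:
            x_out, m_out = 0.0, 0.0
        return x_out, m_out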

License?

Could you please add a license to this project? Thank you.

Did you try 512 resolution?

Hi,
I'm also trying to implement this paper, in PyTorch, with 512x512 input images (CelebA-HQ) and batch size 2. But the results have many watermark-like artifacts at the hole edges, and the generated hole area is low-resolution. Did you run into this situation?
Thank you!

Probable issue in calc_loss_tv with solution

In https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/updater.py line 68, it currently reads:

P = Variable(xp.sign(canvas-0.5)*0.5+1.0) #P region (hole mask: 1 pixel dilated region from hole)

while it should probably be (I do not use Chainer, so I am not sure about this):

P = Variable(xp.sign(canvas-0.5)+1.0)*0.5 #P region (hole mask: 1 pixel dilated region from hole)

The reason: after the 1-pixel dilation, the canvas has values > 0 everywhere inside the dilated hole region, so P should be 1 where canvas > 0 and 0 where canvas == 0. The current expression instead yields 0.5 and 1.5.
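
A quick numeric check of the two expressions (plain NumPy, just to illustrate):

    import numpy as np

    canvas = np.array([0.0, 1.0])                # outside hole / inside dilated hole
    print(np.sign(canvas - 0.5) * 0.5 + 1.0)     # [0.5 1.5] -- current code
    print((np.sign(canvas - 0.5) + 1.0) * 0.5)   # [0. 1.]   -- proposed fix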

I'm sorry to bother you again, but I have some questions about fine-tuning

Thanks a lot for your excellent work. I have another question about this code: how can we fine-tune the network in Chainer? The original paper seems to use a different learning rate (0.00005) and to freeze the batch normalization in the encoder part of the network; in this way they reduce the color differences (i.e., the L1 loss outside the hole). I would appreciate it if you could help me, thank you.
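
A hedged sketch of that recipe in Chainer (how the BN links are located is an assumption about the model structure, and "model" stands for the completion network built as in train.py):

    import chainer
    from chainer import links as L

    # Paper's fine-tuning learning rate:
    optimizer = chainer.optimizers.Adam(alpha=0.00005)
    optimizer.setup(model)

    # Stop updating gamma/beta of every BatchNormalization link. Note the
    # running statistics still update unless those links are evaluated under
    # chainer.using_config('train', False).
    for link in model.links():
        if isinstance(link, L.BatchNormalization):
            link.disable_update()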

Perceptual loss issue

Hi,
It seems there is an issue with the perceptual loss.
In https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/updater.py#L14, it is :

layers = list(hout_dict.keys())
layer_name =  layers[0]
loss = F.mean_absolute_error(hout_dict[layer_name],hgt_dict[layer_name])
loss += F.mean_absolute_error(hout_dict[layer_name],hgt_dict[layer_name])
for layer_name in layers[1:]: 
    loss += F.mean_absolute_error(hcomp_dict[layer_name],hgt_dict[layer_name])
    loss += F.mean_absolute_error(hcomp_dict[layer_name],hgt_dict[layer_name])

while it should actually be:

layers = list(hout_dict.keys())
layer_name =  layers[0]
loss = F.mean_absolute_error(hout_dict[layer_name],hgt_dict[layer_name])
loss += F.mean_absolute_error(hcomp_dict[layer_name],hgt_dict[layer_name])
for layer_name in layers[1:]: 
    loss += F.mean_absolute_error(hout_dict[layer_name],hgt_dict[layer_name])
    loss += F.mean_absolute_error(hcomp_dict[layer_name],hgt_dict[layer_name])

Little help with the requirements

I'm kind of a newbie just getting started with deep learning, so please forgive my amateur question.

I tried running your project but hit a few missing-package errors. I quickly realized this and installed chainer, but still ran into other issues. I'd be really grateful if someone could tell me which packages I need to install before running train.py.

This is the output I got after running train.py:

[screenshot of the error]
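
Since the repo pins no requirements, here is a guess at the dependencies based on the imports and environments mentioned in these issues; the versions are assumptions, not something the repo specifies:

    # hypothetical requirements.txt -- versions are guesses
    chainer==4.2.0
    cupy-cuda80      # for GPU use; match the cupy build to your CUDA version
    numpy
    pillow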

I'm sorry to bother you, but I have some questions

First of all, given an arbitrary broken picture, how do we obtain its mask area?
Furthermore, how do we test on a real broken image? I retrained the model following your instructions, but in vain; the generated results still contain the broken part.
Last but not least, what is the VGG model used for: pretraining, or computing the loss function?
I would really appreciate it if you could help me, thank you!

Questions about training time and inpainting real images

Hi, according to what you mentioned in #6, I have to set the pixel values of the broken parts to 0, giving mask areas like these:
[two mask images]
How can I feed these into the completion model trained on my own dataset and obtain the completed result?
Also, I found that Chainer saves a model file (.npz) every 10 epochs, so the files take up a lot of disk space and the training process has to be terminated. I want to know how to solve this problem.
Last, could you give me some suggestions on a reasonable training time?
Thanks for your help; looking forward to your reply.
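
On the disk-space point, a minimal sketch of the standard chainer.training knobs (the trigger value is just an example, and "trainer"/"model" are built as in train.py):

    from chainer.training import extensions

    # Save weight snapshots less often, e.g. every 50 epochs instead of 10:
    trainer.extend(
        extensions.snapshot_object(model, 'model{.updater.epoch}.npz'),
        trigger=(50, 'epoch'))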

The error "divsion by zero"

Excuse me! I'm new to the chainer framework and python. When i train with the command "python train.py", it always occur the error ''divsion by zero', i am confused! Hope your responce

ZeroDivisionError: division by zero

I followed the suggestions in this thread but still ran into the same issue; since that issue was closed, I had to open a new one. I changed my path and double-checked it. I used the small dataset, the "Small images (256 x 256) with easy directory structure" 21 GB one. It only has the train and val sets, so I copied the val data and named it train_256.
Can you please help me with the dataset structure?
[three screenshots of the directory layout and the error]

UPDATE: I changed my dataset paths to a different location and am now running this:

#to check train data path

import glob
from common import paths
train_keys = glob.glob(paths.train_place2+"//*.jpg")
print(len(train_keys))
#if successful, you will get "1434892" as return

returns 1803460. I assume that's because of the different dataset, but it's better than the 0 I used to get before. However, the "ZeroDivisionError: division by zero" error persists. Can you please help?

Originally posted by @Prudvi01 in #7 (comment)

How to set the test/val_dataset and test/val_iter

When I try to train the model, the code in your train.py that sets up test/val_dataset and test/val_iter is commented out, but it doesn't seem to work when I simply remove the '#' in front of it. Could you please tell me how to set up the test/val_dataset and test/val_iter?
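
A minimal sketch of the usual Chainer setup, assuming val_dataset is built the same way as the training dataset in train.py (the batch size is just an example):

    from chainer import iterators

    # A validation iterator should not shuffle and should stop after one pass:
    val_iter = iterators.SerialIterator(
        val_dataset, batch_size=4, repeat=False, shuffle=False)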

The test of the real broken image

As you mentioned in the other issue, I input the broken image as x and set an all-ones array as the mask, but the result is poor; the output is just the broken image again.
The broken image: [generated_3_igt]
The mask: [generated_0_mask]
I_out: [generated_1_iout]
I_composition: [generated_2_icomp]
