
inpainting-partial-conv's People

Contributors

bobqywei, bobwei-intern


inpainting-partial-conv's Issues

I'm sorry to bother you, but I have some questions...

First of all, when I run python train.py, an error occurred:
(py3) [fl@ibcu05 place2]$ python train.py
Loaded training dataset with 1434892 samples and 55116 masks
Loaded model to device...
Setup Adam optimizer...
Setup loss function...

EPOCH:0 of 3 - starting training loop from iteration:0 to iteration:89680

0%| | 0/89680 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 141, in
loss_dict = loss_func(image, mask, output, gt)
File "/home/fl/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/home/fl/place2/loss.py", line 94, in forward
loss_dict["tv"] = total_variation_loss(composed_output, self.l1) * LAMBDAS["tv"]
File "/home/fl/place2/loss.py", line 50, in total_variation_loss
loss = l1(image[:, :, :, :-1] - image[:, :, :, 1:]) + l1(image[:, :, :-1, :] - image[:, :, 1:, :])
File "/home/fl/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'target'
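
A likely cause, judging from the traceback: total_variation_loss passes a single tensor to the nn.L1Loss module, but nn.L1Loss.forward requires both an input and a target. A minimal sketch of a fix, comparing the neighbour differences against zeros (it mirrors the signature in loss.py but is not the repo's exact code):

import torch

def total_variation_loss(image, l1):
    # nn.L1Loss expects (input, target); use a zero tensor as the target
    # so the loss penalizes the magnitude of neighbouring-pixel differences.
    diff_w = image[:, :, :, :-1] - image[:, :, :, 1:]
    diff_h = image[:, :, :-1, :] - image[:, :, 1:, :]
    return l1(diff_w, torch.zeros_like(diff_w)) + l1(diff_h, torch.zeros_like(diff_h))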

In addition, when I run python inpaint.py, I get:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.

Aborted (core dumped)

I would very much appreciate it if anyone could help me. Thank you!
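
For the second error: the xcb plugin needs an X display, which a remote shell usually lacks. Since the error message itself lists offscreen among the available platform plugins, a common workaround (this assumes inpaint.py only renders to files, not to an interactive window) is to select it before any Qt-backed library is imported:

import os
# Must run before importing cv2 / matplotlib / PyQt; "offscreen" is one of
# the platform plugins listed in the error message above.
os.environ["QT_QPA_PLATFORM"] = "offscreen"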

training error

Thanks for sharing your work. I downloaded the masks from issue #1 and resized them to 256×256, and I want to train on the Places2 dataset (256×256 images), but when I try to train with python3 train.py I get the following error:
Loaded training dataset with 1803460 samples and 12000 masks
Traceback (most recent call last):
File "train.py", line 73, in
assert(data_size % args.batch_size == 0)
AssertionError
Can you help? @bobqywei
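
The assertion at train.py line 73 requires the number of samples to be an exact multiple of the batch size, and 1803460 is not divisible by 16 (1803460 = 16 × 112716 + 4). You can either pick a batch size that divides the sample count (e.g. 20, since 1803460 = 20 × 90173) or, as a sketch of an alternative rather than the repo's code, drop the final partial batch in the DataLoader and remove the assertion:

from torch.utils.data import DataLoader

# drop_last=True silently discards the last incomplete batch, so the
# dataset size no longer has to divide evenly by batch_size.
loader = DataLoader(dataset, batch_size=16, shuffle=True, drop_last=True)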

mask update

Could I ask a question about how the mask is updated? I don't understand the code clearly.
In the mask, 1 means a pixel needs to be processed and 0 means it doesn't. In the updated mask, does the number of 1s become larger or smaller?
Thank you!
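
For reference, the partial-convolution paper (Liu et al., 2018) uses the opposite convention: 1 marks a valid (known) pixel and 0 marks a hole, and the mask update marks an output location valid whenever its receptive field contains at least one valid input pixel, so the number of 1s grows (the holes shrink) with every layer. A minimal sketch of that update rule, not the repo's exact code:

import torch
import torch.nn.functional as F

def update_mask(mask, kernel_size=3, stride=1, padding=1):
    # mask: (N, 1, H, W) with 1 = valid pixel, 0 = hole.
    # Convolving with an all-ones kernel counts the valid pixels in each
    # receptive field; any count > 0 makes the output location valid.
    ones = torch.ones(1, 1, kernel_size, kernel_size)
    counts = F.conv2d(mask, ones, stride=stride, padding=padding)
    return (counts > 0).float()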

Pretrained model

Hi, this is awesome work! Is there somewhere I can download the pretrained model to try it out? It would save 5 days of training :)

Thanks!

about MEAN and STDDEV of places2

I have another question: if I train with another dataset, should the MEAN and STDDEV values in places2_train differ? And if they do, how do I calculate them?
Thanks a lot.
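
The MEAN and STDDEV constants are dataset statistics, so they should be recomputed when you switch datasets. A minimal sketch of computing per-channel mean and stddev over image tensors scaled to [0, 1] (the names here are illustrative, not from the repo, and it assumes all images share the same resolution):

import torch
from torch.utils.data import DataLoader

def channel_stats(dataset):
    loader = DataLoader(dataset, batch_size=64, num_workers=4)
    n = 0
    mean = torch.zeros(3)
    sq_mean = torch.zeros(3)
    for imgs, *_ in loader:                        # imgs: (B, 3, H, W)
        b = imgs.size(0)
        mean += imgs.mean(dim=(0, 2, 3)) * b       # running sums weighted by batch size
        sq_mean += (imgs ** 2).mean(dim=(0, 2, 3)) * b
        n += b
    mean /= n
    sq_mean /= n
    return mean, (sq_mean - mean ** 2).sqrt()      # per-channel mean and stddev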

about PyTorch version

Hi,
Thank you for your work.
Which version of PyTorch is used?
I ran into a strange error when trying it.

##################
Loaded model to device...
Setup Adam optimizer...
Setup loss function...

EPOCH:0 of 3 - starting training loop from iteration:0 to iteration:10625

0%| | 0/10625 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/data/code/inpainting/inpainting-partial-conv/train.py", line 143, in
output = model(image, mask)
File "/data/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/data/code/inpainting/inpainting-partial-conv/partial_conv_net.py", line 150, in forward
encoder_dict[key], mask_dict[key] = getattr(self, encoder_key)(encoder_dict[key_prev], mask_dict[key_prev])
File "/data/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/data/code/inpainting/inpainting-partial-conv/partial_conv_net.py", line 53, in forward
output = self.input_conv((input_x * mask))
File "/data/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/data/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: expected stride to be a single integer value or a list of 3 values to match the convolution dimensions, but got stride=[2, 2]

Process finished with exit code 1
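
One possible cause, though this is a guess from the message alone: in older PyTorch versions this RuntimeError appears when Conv2d receives a 3D tensor (C, H, W) instead of the 4D (N, C, H, W) it expects, so its 2D stride cannot match the input dimensions. A quick check worth trying before the model call:

# If the tensors came straight from the dataset rather than a DataLoader,
# they may be missing the batch dimension.
if image.dim() == 3:
    image = image.unsqueeze(0)   # (C, H, W) -> (1, C, H, W)
    mask = mask.unsqueeze(0)
output = model(image, mask)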

train problem

Hello, thank you very much for your work. Now I want to try the Paris StreetView dataset. I created a paris directory under dataset, and then created the subdirectories data, val, and test. But running train.py gives an error: it can't find my training images. My masks are made by myself. Why?

Traceback (most recent call last):
  File "places2_train.py", line 54, in <module>
    img, mask, gt = zip(*[places2[i] for i in range(1)]) # returns tuple of a single batch of 3x256x256 images
  File "places2_train.py", line 54, in <listcomp>
    img, mask, gt = zip(*[places2[i] for i in range(1)]) # returns tuple of a single batch of 3x256x256 images
  File "places2_train.py", line 41, in __getitem__
    gt_img = Image.open(self.img_paths[index])
IndexError: list index out of range
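
The IndexError means self.img_paths contains fewer entries than the reported dataset length, i.e. the dataset found fewer image files than it expected, which points to a mismatch between your directory layout (or file extensions) and the pattern places2_train.py searches for. A quick hedged check, with an illustrative path pattern rather than the repo's actual one:

import glob
paths = glob.glob("dataset/paris/data/**/*.jpg", recursive=True)
print(len(paths))  # 0 means the layout or extension doesn't match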

About the loss and input

Why are the loss weights not the same as in the original paper?
Why is the input gt * mask?
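
On the second question: in partial-convolution inpainting the network input is the ground-truth image with the hole pixels zeroed out, which is exactly gt * mask under the convention that mask is 1 for valid pixels and 0 for holes. A minimal sketch of that flow (variable names are illustrative):

# mask: 1 = valid pixel, 0 = hole; multiplying zeroes out the hole regions
corrupted = gt * mask                          # network input
output = model(corrupted, mask)
composed = mask * gt + (1 - mask) * output     # keep known pixels, fill holes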

Which parameters did you set during training?

Hello,
I've trained it on places365_standard with batch_size=16, the masks you mentioned in issue #1, and the other parameters at their defaults, but I didn't get very interesting results.
Could you tell me what parameters you used to get the results shown in your video?

about the results

Hope you are well. I used the test file after adjusting it to work with one image, and used the model you provided, but I didn't get results as good as in the video. Do you think the model needs more training?
(attached test image)
Thanks a lot for your consistent help :)
