context_encoder_pytorch's Introduction

context_encoder_pytorch's People

Contributors

boyuanjiang


context_encoder_pytorch's Issues

What is your email address?

Hello, why do emails sent to your address fail to deliver? Is there another way to contact you? I'd like to ask a few questions.

Unable to change the input size

I think a larger input size would utilize the GPU better, but I failed to change it. Could you give me some guidance on which operation fixes imageSize at 128? Is this size a must?
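For context, here is a sketch (assuming a DCGAN-style encoder, not the repo's exact layer list) of why the architecture pins imageSize: each stride-2 convolution halves the spatial resolution, so the bottleneck only reaches the size the final layer expects for one particular input size:

```python
def bottleneck_size(image_size, n_stride2_convs=5):
    # Each stride-2 convolution halves the feature map's spatial size.
    size = image_size
    for _ in range(n_stride2_convs):
        size //= 2
    return size

# With 5 halvings, imageSize = 128 yields the 4x4 map that a final 4x4
# convolution would assume; other sizes need the layer count changed too.
print(bottleneck_size(128))  # → 4
```

So changing imageSize alone is not enough; the number of downsampling layers (and the matching layers in the decoder and discriminator) must change with it.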

UserWarning during execution: transforms.Scale is deprecated

Hello, following your method I ran test.py on the CPU, with ngpu set to 0, but as soon as it runs I get the error below. Do you know how to fix it?
D:\Anaconda\lib\site-packages\torchvision\transforms\transforms.py:207: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "D:\Anaconda\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "D:\Anaconda\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "D:\Anaconda\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\LZZ\Downloads\context_encoder_pytorch-master\test.py", line 78, in <module>
    dataiter = iter(dataloader)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
    return _DataLoaderIter(self)
  File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
    w.start()
  File "D:\Anaconda\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "D:\Anaconda\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
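This error comes from Windows' spawn start method: the DataLoader worker re-imports test.py, whose top level unconditionally creates another DataLoader. The usual fix is to guard the script body (or pass workers=0 to the DataLoader). A minimal stdlib sketch of the guard pattern, unrelated to the repo's actual code:

```python
import multiprocessing as mp

def square(x):
    return x * x

def main():
    # Without the __main__ guard below, the 'spawn' start method used on
    # Windows would re-execute this module's top level in every worker
    # and try to create the pool again, raising the RuntimeError above.
    with mp.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == '__main__':
    print(main())  # → [1, 4, 9]
```

Moving the body of test.py into such a main() function (or setting the DataLoader's worker count to 0 on CPU) should make the error go away.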

the defined criterionMSE is not used

In your code, I find that the defined criterionMSE is never used. errD (the BCE loss) is computed via errD_fake = criterion(output, label), but errG_l2 (the MSE loss) is computed by hand (errG_l2 = (fake - real_center).pow(2)) rather than with the defined criterionMSE.
I also find that in the Lua version, the author used Torch's built-in loss function for the MSE loss.
Does it influence the results if PyTorch's MSELoss function is not used?
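For what it's worth, the hand-written version and nn.MSELoss agree once the elementwise result is averaged (ignoring any per-pixel weighting the training script may apply afterwards); a small check:

```python
import torch
import torch.nn as nn

# Stand-ins for fake and real_center; shapes are illustrative only.
fake = torch.randn(4, 3, 64, 64)
real_center = torch.randn(4, 3, 64, 64)

criterionMSE = nn.MSELoss()           # mean reduction by default
manual = (fake - real_center).pow(2)  # elementwise, as in the repo's code

# Averaging the elementwise squared error reproduces MSELoss exactly.
assert torch.allclose(criterionMSE(fake, real_center), manual.mean())
```

So the two differ only in reduction; keeping the elementwise tensor is what lets the code weight individual pixels before reducing.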

License?

I am interested in using your code, but I would like to know what license you are using for this project.
MIT License would be nice!

torch.FloatTensor constructor received an invalid combination of arguments

I tried to run test_one.py but it failed at real_center = torch.FloatTensor(1, 3, opt.imageSize/2, opt.imageSize/2) with the error:
TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, int, float, float), but expected one of:

  • no arguments
  • (int ...)
    didn't match because some of the arguments have invalid types: (!int!, !int!, !float!, !float!)
  • (torch.FloatTensor viewed_tensor)
  • (torch.Size size)
  • (torch.FloatStorage data)
  • (Sequence data)

Is there anything wrong?
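In Python 3, / always returns a float, which the size-based FloatTensor constructor rejects; integer division fixes it. A sketch, with a local imageSize standing in for opt.imageSize:

```python
import torch

imageSize = 128
# "/" yields 64.0 (a float) in Python 3; "//" keeps it an int,
# matching the (int ...) overload of the constructor.
real_center = torch.FloatTensor(1, 3, imageSize // 2, imageSize // 2)
print(tuple(real_center.shape))  # → (1, 3, 64, 64)
```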

problem about torchvision

hi,
I installed torchvision from pip, and it worked.
But when I installed torchvision from source, the following problem occurred:

Traceback (most recent call last):
  File "test.py", line 79, in <module>
    real_cpu, _ = dataiter.next()
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 212, in __next__
    return self._process_next_batch(batch)
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 239, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
AttributeError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 41, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py", line 116, in __getitem__
    img = self.loader(path)
  File "build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py", line 63, in default_loader
    return pil_loader(path)
  File "build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py", line 45, in pil_loader
    with Image.open(f) as img:
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 528, in __getattr__
    raise AttributeError(name)
AttributeError: __exit__

Is there anything wrong?
Thanks
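The AttributeError: __exit__ suggests the source build picked up an old PIL whose Image does not support the context-manager protocol. One workaround (a sketch, not the torchvision code) is a loader that avoids `with Image.open(f)`:

```python
from PIL import Image

def pil_loader(path):
    # Older PIL builds lack __exit__ on Image, so skip the
    # "with Image.open(f) as img" idiom and close the file explicitly.
    # .convert() forces the lazy load before the handle is closed.
    f = open(path, 'rb')
    try:
        return Image.open(f).convert('RGB')
    finally:
        f.close()
```

Upgrading to a recent Pillow (rather than legacy PIL) also resolves this without code changes.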

images during training are too dark

Hi, thanks for sharing your great work!
I followed your recommendation to use the Paris Dataset to train the network.
After training, when I check the result/train/real, result/train/cropped, or result/train/recon folder, I find that all the images are too dark, such as this:
real image: [image]
cropped image: [image]
recon image: [recon_center_samples_epoch_199]

The inpainting results are effective, but why are all the images so dark?

P.S. I did not change anything in the code except the dataset/train folder.
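One common cause of dark saved samples (an assumption, not verified against this repo) is writing tensors that were normalized to [-1, 1] without mapping them back to [0, 1] first; negative values clamp to black. A sketch of the de-normalization step:

```python
import torch

# Stand-in for a sample normalized with Normalize(mean=0.5, std=0.5),
# i.e. values in [-1, 1]; saving it directly darkens the image.
img = torch.rand(3, 8, 8) * 2 - 1
rescaled = img.mul(0.5).add(0.5)  # undo the normalization: back to [0, 1]
assert 0.0 <= rescaled.min().item() and rescaled.max().item() <= 1.0
```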

parameter wtlD for Adversarial Discriminator loss is not used

Some parameters are set as follows:
parser.add_argument('--wtl2',type=float,default=0.998,help='0 means do not use else use with this weight')
parser.add_argument('--wtlD',type=float,default=0.001,help='0 means do not use else use with this weight')
but I find that wtlD is never used, so what is it for?

can't train on the pretrained model

I wanted to continue training from the pretrained model, but after running this command, the program just prints the model structure and exits. What could the problem be?
python train.py --cuda --netG model/netG_streetview.pth --wtl2 0.999 --niter 200

Pretrained model's result on test images

Hi! I notice that the pretrained model's result on the test images is about 15.79% for L1 loss and about 5.31% for L2 loss. However, the results reported in the paper are about 9.37% for L1 loss and about 1.96% for L2 loss. Could you provide some possible reasons for the difference? Thanks!
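For reference, a hedged sketch of how per-pixel L1/L2 error percentages are commonly computed for inpainting (the paper's exact evaluation protocol may differ, e.g. in masking and image range):

```python
import torch

# Images assumed in [0, 1]; errors averaged over the evaluated region.
real = torch.rand(4, 3, 64, 64)
fake = real + 0.1  # stand-in reconstruction, off by a constant 0.1

l1 = (fake - real).abs().mean().item() * 100   # percent L1 error
l2 = (fake - real).pow(2).mean().item() * 100  # percent L2 error
print(round(l1, 1), round(l2, 1))  # → 10.0 1.0
```

Discrepancies of this size can also come from evaluating over the whole image versus only the masked center, so it is worth checking which region the numbers average over.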
