zijundeng / bdrar

122 stars · 7 watchers · 29 forks · 11 KB

Code for the ECCV 2018 paper "Bidirectional Feature Pyramid Network with Recurrent Attention Residual Modules for Shadow Detection"

Python 100.00%
deeplearning computer-vision shadow-detection

bdrar's People

Contributors

zijundeng

bdrar's Issues

About the performance of the trained model

Hi @zijundeng,
I loaded your pre-trained model and ran inference, but I got BER = 5.20. I have verified that my evaluation code is correct, and I used your network, data loader, and other components unchanged.
Do you know whether there is a problem with the pre-trained model, or something else?
Thanks.

cannot load the pre-trained model: parameter names are not consistent

I cannot load the pre-trained model when running infer.py; there are many mismatched parameters.
For example, in the refine3_h2l module of the BDRAR network, the parameter names in the network are:
################
refine3_h2l.0.weight
refine3_h2l.1.weight
refine3_h2l.1.bias
refine3_h2l.3.weight
refine3_h2l.4.weight
refine3_h2l.4.bias
refine3_h2l.6.weight
refine3_h2l.7.weight
refine3_h2l.7.bias
refine3_l2h.0.weight
refine3_l2h.1.weight
refine3_l2h.1.bias
refine3_l2h.3.weight
refine3_l2h.4.weight
refine3_l2h.4.bias
refine3_l2h.6.weight
refine3_l2h.7.weight
refine3_l2h.7.bias
################

However, in the pre-trained model, the corresponding parameter names are:
################
refine3_hl.0.weight
refine3_hl.1.weight
refine3_hl.1.bias
refine3_hl.1.running_mean
refine3_hl.1.running_var
refine3_hl.3.weight
refine3_hl.4.weight
refine3_hl.4.bias
refine3_hl.4.running_mean
refine3_hl.4.running_var
refine3_hl.6.weight
refine3_hl.7.weight
refine3_hl.7.bias
refine3_hl.7.running_mean
refine3_hl.7.running_var
refine3_lh.0.weight
refine3_lh.1.weight
refine3_lh.1.bias
refine3_lh.1.running_mean
refine3_lh.1.running_var
refine3_lh.3.weight
refine3_lh.4.weight
refine3_lh.4.bias
refine3_lh.4.running_mean
refine3_lh.4.running_var
refine3_lh.6.weight
refine3_lh.7.weight
refine3_lh.7.bias
refine3_lh.7.running_mean
refine3_lh.7.running_var
################
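A hedged workaround, assuming the checkpoint was simply saved from an older version of the code in which the modules were named refineN_hl / refineN_lh rather than refineN_h2l / refineN_l2h, is to rename the checkpoint keys before calling load_state_dict. A minimal sketch (the checkpoint path and strict=False usage are illustrative, not from the repository):

```python
from collections import OrderedDict

def rename_checkpoint_keys(state_dict):
    """Map old-style names (refineN_hl / refineN_lh) to the new-style
    names (refineN_h2l / refineN_l2h) used by the current BDRAR code."""
    renamed = OrderedDict()
    for key, value in state_dict.items():
        new_key = key.replace('_hl.', '_h2l.').replace('_lh.', '_l2h.')
        renamed[new_key] = value
    return renamed

# Hypothetical usage (requires torch and the actual checkpoint file):
# ckpt = torch.load('3000.pth', map_location='cpu')
# net.load_state_dict(rename_checkpoint_keys(ckpt), strict=False)
```

The running_mean / running_var entries are BatchNorm buffers; if they still fail to match, strict=False lets load_state_dict skip them, though the loaded model may then behave differently from the published one.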

pre-trained SOD model

Thanks for sharing the full training and testing code. Could you also share the pre-trained BDRAR model for salient object detection?

Fine tune the existing network weights

Hello, first thanks for sharing the code and the trained network.
I downloaded the 3000.pth you provided and wanted to fine-tune it further on another shadow dataset.
When I run train.py, I see that the file 3000_optim.pth is missing.
Can you upload it as well?

Many thanks,
Tamir

crash caused by an in-place operation

Hello there,

I am trying to fine-tune the model you shared on a new dataset, but I can't get it running due to the following error:

lib/python2.7/site-packages/torch/nn/functional.py:1749: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
Traceback (most recent call last):
  File "train.py", line 168, in <module>
    main()
  File "train.py", line 95, in main
    train(net, optimizer)
  File "train.py", line 136, in train
    loss.backward()
  File "/home/eloyroura/anaconda3/envs/BDRAR/lib/python2.7/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/eloyroura/anaconda3/envs/BDRAR/lib/python2.7/site-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Any hint on what I am doing wrong?
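A common first step (an assumption here, since the offending operation is not shown in the traceback) is to enable autograd anomaly detection, which makes the backward error point at the forward operation that was modified in place. A minimal sketch reproducing the same error class:

```python
import torch

# With anomaly detection on, backward errors include a second traceback
# pointing at the forward op whose saved tensor was modified in place.
torch.autograd.set_detect_anomaly(True)

x = torch.ones(3, requires_grad=True)
y = torch.relu(x)   # autograd saves ReLU's output for the backward pass
y.add_(1)           # in-place modification bumps the tensor's version counter
try:
    y.sum().backward()
except RuntimeError as err:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation"
    print('caught:', type(err).__name__)
```

In this repository the usual culprit reported by other users is the loop that sets `inplace = True` on ReLU/Dropout modules; removing Dropout from that loop (or setting `inplace = False` entirely) is the workaround discussed in the next issue.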

RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58

At the beginning, I encountered this error:
"RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation"
I worked around it by changing

for m in self.modules():
    if isinstance(m, nn.ReLU) or isinstance(m, nn.Dropout):
        m.inplace = True

to

for m in self.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = True

But then a new problem occurs: "RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58".
Can't a single 1080 Ti support this model?

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I tried to run train.py and this error occurred. Can anyone help?

Traceback (most recent call last):
  File "D:\Program Files (x64)\PyCharm 2021.1.3\plugins\python\helpers\pydev\pydevd.py", line 1483, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\Program Files (x64)\PyCharm 2021.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "F:/20220410_BDRAR-master/train1.py", line 149, in <module>
    main()
  File "F:/20220410_BDRAR-master/train1.py", line 78, in main
    train(net, optimizer)
  File "F:/20220410_BDRAR-master/train1.py", line 117, in train
    loss.backward()
  File "D:\anaconda3\envs\BDRAR\lib\site-packages\torch\tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "D:\anaconda3\envs\BDRAR\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 8, 128, 128]], which is output 0 of ReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
python-BaseException

How to solve "out of memory"?

Hi,
My GPU has enough memory (11 GB), but when I run the training code I get the following errors:

/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:1890: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:1961: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:1006: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
Traceback (most recent call last):
  File "train.py", line 152, in <module>
    main()
  File "train.py", line 81, in main
    train(net, optimizer)
  File "train.py", line 112, in train
    loss4_h2l = bce_logit(predict4_h2l, labels)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/loss.py", line 573, in forward
    reduction=self.reduction)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py", line 1651, in binary_cross_entropy_with_logits
    loss = input - input * target + max_val + ((-max_val).exp() + (-input - max_val).exp()).log()
RuntimeError: CUDA error: out of memory

I want to know how to solve this problem. At the same time, I tried reducing the batch size from 8 to 4, but that does not work either and produces the following errors:

/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:1890: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:1961: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:1006: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
Traceback (most recent call last):
  File "train.py", line 153, in <module>
    main()
  File "train.py", line 82, in main
    train(net, optimizer)
  File "train.py", line 121, in train
    loss.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Can you help me solve this issue? Thank you very much.
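If smaller batches alone still exhaust GPU memory, one hedged option (not part of the original train.py; the variable names below are illustrative) is gradient accumulation: run several small micro-batches, accumulate their gradients, and step the optimizer once. This keeps the effective batch size at 8 while only ever holding a micro-batch of activations in GPU memory. A minimal sketch on CPU:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

accum_steps = 4  # effective batch = accum_steps * micro-batch size

inputs = torch.randn(8, 4)    # stands in for a batch of images
targets = torch.randn(8, 1)   # stands in for the shadow masks

optimizer.zero_grad()
for step, (x, y) in enumerate(
        zip(inputs.chunk(accum_steps), targets.chunk(accum_steps)), 1):
    # Divide by accum_steps so the summed gradients equal the gradient
    # of the mean loss over the full effective batch.
    loss = criterion(model(x), y) / accum_steps
    loss.backward()           # gradients accumulate in .grad
    if step % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

With mean-reduced losses, the scaled-and-accumulated gradients match what a single full-batch backward pass would produce, so training dynamics stay comparable.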

problem deserializing the checkpoint saved on CUDA device 1

When I run this code, it shows: "Attempting to deserialize object on CUDA device 1 but torch.cuda.device_count() is 1. Please use torch.load with map_location to map your storages to an existing device." Do you know how to solve this?
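The checkpoint was evidently saved from cuda:1, which your machine does not have. As the error message itself suggests, the fix is to pass map_location to torch.load so the saved tensors are remapped onto a device that exists. A sketch (the checkpoint filename is illustrative; the demo at the bottom uses a temporary file so it runs without a GPU):

```python
import os
import tempfile
import torch

# Option 1: load everything onto CPU, then move the model to your GPU.
# state_dict = torch.load('3000.pth', map_location='cpu')

# Option 2: remap tensors saved on cuda:1 directly onto cuda:0.
# state_dict = torch.load('3000.pth', map_location={'cuda:1': 'cuda:0'})

# Self-contained demonstration of the mechanism with a tiny checkpoint:
ckpt = {'w': torch.ones(2, 2)}
path = os.path.join(tempfile.mkdtemp(), 'demo.pth')
torch.save(ckpt, path)
loaded = torch.load(path, map_location='cpu')
print(loaded['w'].device)  # all tensors land on the requested device
```

After loading onto CPU you can still call `net.cuda()` (or `.to('cuda:0')`) as usual; map_location only affects where the deserialized storages are placed.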
