weijun88 / ldf

Code for the CVPR 2020 paper "Label Decoupling Framework for Salient Object Detection"

Home Page: https://arxiv.org/pdf/2008.11048.pdf

MATLAB 25.84% Python 74.16%
cvpr2020 saliency-detection salient-object-detection

ldf's People

Contributors: weijun88

ldf's Issues

Running test.py in train-fine raises an error

Hello author, train.py and test.py in train-coarse and train.py in train-fine all run successfully for me; only this script reports an error. I don't know how to resolve it. Could you help me see where the problem is? Thanks!

Reproducing the results

Hello! I have recently been trying to reproduce your paper. I retrained the model following the training procedure in your README, but my results differ somewhat from those reported in the paper. Did you use any special tricks or settings during training?

When will the code be released?

Hi weijun, congratulations! I'm really interested in your code and can't wait to try it out. So when are you going to release the code? Thank you! The paper was great!

train.txt

Hello,
Thanks for your contribution. Could you tell me what your training dataset looks like and what the contents of train.txt are? When I train on the DUTS dataset I downloaded, I keep getting the error "TypeError: 'NoneType' object is not callable".
Thanks

Inconsistent results on PASCAL-S

Hi,
I just found that your provided saliency maps do not reproduce the PASCAL-S evaluation results reported in the paper. I binarize the PASCAL-S ground truth for evaluation.
Did you process the PASCAL-S dataset differently?
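For context: PASCAL-S ground truth is aggregated from multiple annotators, so the masks are not strictly binary, and scores shift with the binarization threshold. A minimal sketch of the common convention (thresholding at 0.5 is an assumption here — the paper does not state which value the authors used):

```python
def binarize(mask, thresh=0.5):
    """Binarize a soft [0, 1] ground-truth mask at a fixed threshold.
    0.5 is the usual convention for PASCAL-S, but different thresholds
    yield different evaluation numbers."""
    return [[1 if v >= thresh else 0 for v in row] for row in mask]
```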

How is the IoU loss computed?

Hello. In your paper, the IoU loss cannot be applied when supervising the body and detail maps, because the IoU label must be in binary form. But as I understand it, BASNet computes the IoU loss directly on the true saliency probability. How do you view this? And is the target binary {0, 1}, or a [0, 1] saliency probability?
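For reference, the BASNet-style soft IoU loss is computed directly on [0, 1] values without binarizing, with elementwise products standing in for set intersection; whether this remains meaningful for the soft body/detail targets is exactly the question above. A framework-free sketch on flattened maps:

```python
def soft_iou_loss(pred, target, eps=1e-7):
    """Soft IoU loss on flattened [0, 1] maps: 1 - |P ∩ G| / |P ∪ G|,
    where intersection is the elementwise product. Works whether the
    target is binary {0, 1} or a soft saliency probability."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - inter
    return 1.0 - inter / (union + eps)
```

A perfect prediction drives the loss toward 0; fully disjoint maps give a loss of 1.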

Code in utils for generating the body and detail maps

As you wrote in your paper:

"In addition, we multiply the newly generated labels with the original binary image I to remove the background interference."

But I see nowhere in the published code where this is done. Is there something I missed?
Looking forward to your reply.
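The multiplication in question can be sketched as below. This is a rough reconstruction, not the paper's exact formulation: it uses a BFS (city-block) distance transform in place of the Euclidean one, and the max-normalization of the distance map is an assumption.

```python
from collections import deque

def bfs_distance(mask):
    """City-block distance from each pixel to the nearest background
    pixel (a simple stand-in for the Euclidean distance transform)."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 0:
                dist[i][j] = 0
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] is None:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist

def body_and_detail(mask):
    """Body = normalized distance map multiplied by the binary image I
    (that product is the step quoted above, zeroing the background);
    detail = I - body, so body + detail recovers the original label."""
    h, w = len(mask), len(mask[0])
    dist = bfs_distance(mask)
    m = max(max(row) for row in dist) or 1
    body = [[dist[i][j] / m * mask[i][j] for j in range(w)] for i in range(h)]
    detail = [[mask[i][j] - body[i][j] for j in range(w)] for i in range(h)]
    return body, detail
```

Without the multiplication by I, the normalized distance map would be nonzero only inside the object anyway under this construction, but the product makes the background suppression explicit for any label transformation.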

Can't train the model because of an apex error

Traceback (most recent call last):
  File "/content/drive/MyDrive/Colab Notebooks/LDF/LDF/train-coarse/apex/apex/parallel/__init__.py", line 15, in <module>
    import syncbn
ModuleNotFoundError: No module named 'syncbn'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 15, in <module>
    from apex import apex
  File "/content/drive/MyDrive/Colab Notebooks/LDF/LDF/train-coarse/apex/apex/__init__.py", line 8, in <module>
    from . import parallel
  File "/content/drive/MyDrive/Colab Notebooks/LDF/LDF/train-coarse/apex/apex/parallel/__init__.py", line 18, in <module>
    from .sync_batchnorm import SyncBatchNorm
  File "/content/drive/MyDrive/Colab Notebooks/LDF/LDF/train-coarse/apex/apex/parallel/sync_batchnorm.py", line 5, in <module>
    from .sync_batchnorm_kernel import SyncBatchnormFunction
  File "/content/drive/MyDrive/Colab Notebooks/LDF/LDF/train-coarse/apex/apex/parallel/sync_batchnorm_kernel.py", line 4, in <module>
    from apex.parallel import ReduceOp
ModuleNotFoundError: No module named 'apex.parallel'

What is wrong with this apex installation?
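"No module named 'syncbn'" usually means apex's compiled CUDA extensions were never built — copying the apex source tree into the project folder is not enough; it must be installed with its build step. One hedged workaround, since recent PyTorch ships its own synchronized BatchNorm, is to swap in torch.nn.SyncBatchNorm; whether this exactly matches apex's behavior in this codebase is untested:

```python
# Workaround sketch (assumption, not the repo's supported path): use
# PyTorch's built-in SyncBatchNorm (torch >= 1.1) instead of apex's.
# The conversion itself needs no process group; distributed training
# still requires torch.distributed.init_process_group() before forward.
import torch.nn as nn

def convert_model(model):
    """Recursively replace every BatchNorm layer with nn.SyncBatchNorm."""
    return nn.SyncBatchNorm.convert_sync_batchnorm(model)
```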

saliency maps

Your work is valuable and interesting. However, you only report results under the MAE, mean F-measure, and E-measure metrics. Could you also release the predicted saliency maps so that I can study them? Thanks a lot.

EOFError: Ran out of input

Hello,

I have been trying to get your model up and running. I want to test it with the ECSSD Dataset.

I am running Windows 10 with Cuda 10.1.

I get the following error when running: python test.py

Parameters...
datapath  : ../data/ECSSD
snapshot  : ./out/model-40.pt
mode      : test
Traceback (most recent call last):
  File "test.py", line 49, in <module>
    t.save()
  File "test.py", line 35, in save
    for image, (H, W), name in self.loader:
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 352, in __iter__
    return self._get_iterator()
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 801, in __init__
    w.start()
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: 'NoneType' object is not callable

C:\Users\Seppi\Desktop\LDF\train-fine>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\Seppi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
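This is the classic Windows DataLoader failure: Windows uses the "spawn" start method, which re-imports the main module and pickles the dataset/transform for each worker, so anything unpicklable (or top-level code without a main guard) crashes the workers. Two usual fixes — both assumptions about this codebase rather than confirmed changes — are to pass num_workers=0 to the DataLoader, or to wrap the script body in a main guard. The guard pattern, shown with the stdlib:

```python
import multiprocessing as mp

def worker(x):
    # Runs in a child process; on Windows the child re-imports this
    # module, so executable top-level code must sit behind the guard.
    return x * 2

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)  # Windows' default behavior
    with mp.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))  # prints [2, 4, 6]
```

In test.py the same idea means moving the loader construction and t.save() under `if __name__ == "__main__":`.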

Question

Firstly, thanks for your contributions. I would like to know why the training is divided into "train-coarse" and "train-fine" phases, and why the output of the "train-coarse" phase is used to train "train-fine". Could you please help me understand that? Thanks a lot!
