This is a PyTorch implementation of our proposed DIPONet for salient object detection (SOD). The paper has been accepted by Digital Signal Processing.
- PyTorch 0.4.1+
- torchvision
- scipy 1.2.1
- opencv-python 4.4.0.44
```shell
git clone https://github.com/TJUMMG/DIPONet.git
cd DIPONet/
```
Download the training dataset and unzip it into the `./data` folder.
- [DUTS-TR]
Download the testing datasets and unzip them into the `./data` folder.
- [DUTS-TE]
- [ECSSD]
- [HKU-IS]
- [PASCALS]
- [SOD]
- [DUTOMRON]
For edge labels, you can generate them by running `get_edge_data.py`. Put the edge labels together with the saliency labels.
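Since `get_edge_data.py` is not reproduced here, the following is a minimal, hypothetical sketch of how edge labels can be derived from binary saliency masks using a 3×3 morphological gradient (dilation minus erosion); the function name and implementation are ours and may differ from what the script actually does.

```python
import numpy as np

def edge_label(mask: np.ndarray) -> np.ndarray:
    """Derive a binary edge label (0/255) from a binary saliency mask
    via a 3x3 morphological gradient (dilation minus erosion).
    NOTE: illustrative sketch, not the repository's get_edge_data.py."""
    m = (mask > 0).astype(np.uint8)
    padded = np.pad(m, 1, mode="edge")
    # Collect all nine 3x3-shifted views of the mask.
    shifts = np.stack([padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
                       for dy in range(3) for dx in range(3)])
    dilated = shifts.max(axis=0)  # 3x3 neighborhood maximum
    eroded = shifts.min(axis=0)   # 3x3 neighborhood minimum
    return ((dilated - eroded) > 0).astype(np.uint8) * 255
```

Interior pixels of a salient region cancel out (gradient 0), so only boundary pixels survive as edge labels.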
If you can't find these public SOD datasets, please contact us via [email protected].
Download the following pre-trained models from [BaiDuYun](https://pan.baidu.com/s/1qc6zgWf3aDAre6Ey_KQGlw) (extraction code: 4o44) into the `./pretrained` folder.
- Set the `image_root` and `gt_root` paths in `train.py` correctly.
- We demo ResNet-50 and VGG-16 as the network backbone, training with an initial learning rate of 2e-5 for 26 epochs; the rate is divided by 2 after epochs 14 and 22. The input size is 448 × 448 and the batch size is 5.
- We demo joint training with edges, so you should put the edge labels together with the saliency labels.
- After training, the resulting model will be stored under the `./trainresults/resnet` or `./trainresults/vgg` folder.
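The step schedule above (initial learning rate 2e-5, halved after epochs 14 and 22) can be sketched as a small helper; the function name and the exact milestone boundaries are our assumptions and may differ slightly from `train.py`.

```python
def learning_rate(epoch, base_lr=2e-5, milestones=(14, 22), gamma=0.5):
    """Step learning-rate schedule: multiply base_lr by gamma once
    for each milestone the current epoch has reached.
    NOTE: illustrative sketch, not copied from train.py."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

This mirrors what `torch.optim.lr_scheduler.MultiStepLR` does with `milestones=[14, 22]` and `gamma=0.5`.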
- Set the `test_root` and `dataset` paths and the `model.load_state_dict(..)` path in `test.py` correctly.
- The input size for testing is 448 × 448.
- Since we demo joint training with edges, testing produces both edge results and saliency results.
- After testing, the result images will be stored under the `./testresults/DIPONet_ResNet` or `./testresults/DIPONet_VGG` folder.
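Before result images are written to disk, the network's raw outputs are typically squashed to [0, 1] and rescaled to 8-bit grayscale. A minimal sketch of that conversion (the helper name is ours, not from `test.py`):

```python
import numpy as np

def to_saliency_map(logits: np.ndarray) -> np.ndarray:
    """Convert raw network logits to an 8-bit saliency map:
    sigmoid to [0, 1], then scale to 0-255.
    NOTE: illustrative sketch, not copied from test.py."""
    prob = 1.0 / (1.0 + np.exp(-logits))  # elementwise sigmoid
    return (prob * 255).astype(np.uint8)
```

The resulting array can be written directly with `cv2.imwrite`.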
- The DIPONet models trained by the authors: [BaiDuYun](https://pan.baidu.com/s/1C-k2gepxcpPbHR4QEjbD6A) (extraction code: frf6).
- We provide the saliency maps computed by ourselves: [BaiDuYun](https://pan.baidu.com/s/1KLJxZzALrUflSj2NI-mcAg) (extraction code: 0jdg).
- All evaluation results are computed with https://github.com/ArcherFMY/sal_eval_toolbox.
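For reference, one of the standard SOD metrics such toolboxes report, mean absolute error (MAE), is simple to sketch; this helper is ours, not taken from the toolbox.

```python
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between an 8-bit predicted saliency map
    and ground truth, both rescaled to [0, 1] before comparison.
    NOTE: illustrative sketch, not code from sal_eval_toolbox."""
    p = pred.astype(np.float64) / 255.0
    g = gt.astype(np.float64) / 255.0
    return float(np.abs(p - g).mean())
```

Lower is better; a perfect prediction gives an MAE of 0.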
If you have any questions, feel free to contact us via [email protected] or [email protected].