Pytorch implementation for "Saliency Detection Framework Based on Deep Enhanced Attention Network" (ICONIP 2021)
- Python 3.7
- Pytorch 1.8.1
- Torchvision 0.9.1
- Cuda 11.0
This is the Pytorch implementation of DEANet. It has been trained and tested on Linux (Ubuntu 18.04 + Cuda 11.7 + Python 3.7 + Pytorch 1.8.0), and it also works on Win10.
- Download the pre-trained ImageNet backbone (resnet101; Baidu YunPan: resnet101, password: 93q7) and put it in the 'pretrained' folder
- Download the training dataset and modify 'train_root' and 'train_list' in main.py
- Set 'mode' to 'train'
- Run main.py
- Download the testing dataset and put it in the 'dataset/test/' folder
- Download the already-trained DEANet Pytorch model and set 'model' in main.py to the path where it is saved
- Modify 'test_folder' in main.py to the folder where the testing results should be saved
- Modify 'sal_mode' to select one testing dataset (NJU2K, NLPR, STERE, RGBD135, LFSD or SIP)
- Set 'mode' to 'test'
- Run main.py
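The options named in the steps above ('mode', 'train_root', 'train_list', 'model', 'test_folder', 'sal_mode') could be wired up with argparse as sketched below. The option names mirror this README, but the actual argument handling in main.py may differ (it may hard-code these values instead), so treat the defaults and flags here as assumptions:

```python
import argparse

def build_parser():
    """Sketch of a config parser for the README's options.

    All option names come from this README; the default paths are
    hypothetical examples, not the repository's real layout.
    """
    parser = argparse.ArgumentParser(description='DEANet config sketch')
    parser.add_argument('--mode', choices=['train', 'test'], default='train')
    parser.add_argument('--train_root', default='./dataset/train/')
    parser.add_argument('--train_list', default='./dataset/train/train_list.txt')
    parser.add_argument('--model', default='./checkpoints/deanet.pth')
    parser.add_argument('--test_folder', default='./results/')
    parser.add_argument('--sal_mode',
                        choices=['NJU2K', 'NLPR', 'STERE', 'RGBD135', 'LFSD', 'SIP'],
                        default='NJU2K')
    return parser

if __name__ == '__main__':
    # e.g. testing on NLPR
    args = build_parser().parse_args(['--mode', 'test', '--sal_mode', 'NLPR'])
    print(args.mode, args.sal_mode)
```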
The training log is saved in the 'log' folder. To view the learning curve, run: tensorboard --logdir your-log-path
Pre-trained backbone:
- resnet101
- vgg_conv1, password: rllb

Trained DEANet model:
- Baidu Pan: DEANet-pytorch, password: svyr
- Google Drive:

Saliency maps:
- Baidu Pan: Saliency maps, password: maft
- Google Drive:

Datasets:
- Baidu Pan: Training dataset (with horizontal flip), password: i4mi
- Baidu Pan: Testing dataset, password: 1ju8
- Google Drive: Training dataset (with horizontal flip)
- Google Drive: Testing dataset
Below is the performance of DEANet-pytorch (Pytorch implementation). Due to the randomness in the training process, the obtained results may fluctuate slightly.
Datasets | Metrics | Pytorch |
---|---|---|
NJU2K | S-measure | 0.917 |
 | maxF | 0.900 |
 | maxE | 0.919 |
 | MAE | 0.038 |
NLPR | S-measure | 0.959 |
 | maxF | 0.922 |
 | maxE | 0.979 |
 | MAE | 0.014 |
STERE | S-measure | 0.908 |
 | maxF | 0.877 |
 | maxE | 0.921 |
 | MAE | 0.041 |
RGBD135 | S-measure | 0.932 |
 | maxF | 0.907 |
 | maxE | 0.968 |
 | MAE | 0.021 |
LFSD | S-measure | 0.855 |
 | maxF | 0.855 |
 | maxE | 0.885 |
 | MAE | 0.078 |
SSD | S-measure | 0.870 |
 | maxF | 0.830 |
 | maxE | 0.901 |
 | MAE | 0.051 |
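The MAE rows above are the mean absolute error between the predicted saliency map and the ground-truth mask, with both scaled to [0, 1]. A minimal sketch with numpy (the repository's evaluation code may normalize or threshold differently):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a saliency map and its ground truth.

    Both inputs are assumed to be 8-bit grayscale arrays in [0, 255];
    they are scaled to [0, 1] before averaging.
    """
    pred = np.asarray(pred, dtype=np.float64) / 255.0
    gt = np.asarray(gt, dtype=np.float64) / 255.0
    return float(np.abs(pred - gt).mean())

# A perfect prediction gives 0.0; an entirely wrong binary map gives 1.0.
perfect = mae(np.full((4, 4), 255), np.full((4, 4), 255))  # 0.0
worst = mae(np.zeros((4, 4)), np.full((4, 4), 255))        # 1.0
```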
Please cite our paper if you find the work useful:
@InProceedings{Xing_2021_ICONIP,
  author = {Xing Sheng and Zhuoran Zheng and Qiong Wu and Chunmeng Kang and Yunliang Zhuang and Lei Lyu and Chen Lyu},
  title = {Saliency Detection Framework Based on Deep Enhanced Attention Network},
  booktitle = {International Conference on Neural Information Processing (ICONIP)},
  pages = {274--286},
  year = {2021}
}