# X-Restormer [Paper Link]
Xiangyu Chen\*, Zheyuan Li\*, Yuandong Pu\*, Yihao Liu, Jiantao Zhou, Yu Qiao and Chao Dong
## Citation

```bibtex
@article{chen2023comparative,
  title={A Comparative Study of Image Restoration Networks for General Backbone Network Design},
  author={Chen, Xiangyu and Li, Zheyuan and Pu, Yuandong and Liu, Yihao and Zhou, Jiantao and Qiao, Yu and Dong, Chao},
  journal={arXiv preprint arXiv:2310.11881},
  year={2023}
}
```
## Requirements
- PyTorch >= 1.13.0 (torch 1.8 is not recommended)
- BasicSR == 1.4.2

## Installation
Install PyTorch first, then run:

```bash
pip install -r requirements.txt
python setup.py develop
```
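Before installing, it can help to confirm the local PyTorch meets the version requirement. The helper below is not part of this repo; it is a small illustrative check for a dotted version string such as `torch.__version__`:

```python
def meets_requirement(version: str, minimum=(1, 13, 0)) -> bool:
    """Return True if a dotted version string (e.g. '1.13.0') is >= minimum.

    Local-build suffixes such as '+cu118' are stripped before comparison.
    """
    parts = [int(p) for p in version.split("+")[0].split(".")[:3]]
    parts += [0] * (3 - len(parts))  # pad short versions like '1.13'
    return tuple(parts) >= minimum


# e.g. check torch.__version__ before running `python setup.py develop`
print(meets_requirement("1.13.0"))   # meets the recommended minimum
print(meets_requirement("1.8.1"))    # the discouraged torch 1.8
```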
## Testing
- Refer to `./options/test` for the configuration file of the model to be tested, and prepare the testing data and pretrained model.
- The pretrained models are available at Google Drive or Baidu Netdisk (access code: im3q).
- Then run the following command (taking `sr_300k.pth` as an example):

```bash
python xrestormer/test.py -opt options/test/001_xrestormer_sr.yml
```

- The testing results will be saved in the `./results` folder.
- Refer to `./options/test/001_xrestormer_sr.yml` for inference without the ground-truth image.
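For orientation, a BasicSR-style dataset entry for ground-truth-free inference typically looks like the sketch below; the dataset name and path are illustrative placeholders, not values from this repo:

```yml
datasets:
  test_1:
    name: custom                # placeholder name
    type: SingleImageDataset    # LQ-only dataset: no dataroot_gt, so no metrics are computed
    dataroot_lq: ./datasets/my_lq_images  # placeholder path to your input images
    io_backend:
      type: disk
```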
## Training
- Refer to `./options/train` for the configuration file of the model to train.
- Preparation of training data can refer to this page. The ImageNet dataset can be downloaded from the official website.
- The training command is as follows:

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=1231 xrestormer/train.py -opt ./options/train/001_xrestormer_sr.yml --launcher pytorch
```

- Note that the default batch size per GPU is 4, which costs about 60 GB of memory per GPU.
- The training logs and weights will be saved in the `./experiments` folder.
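If GPU memory is limited, the per-GPU batch size can be lowered in the training config. In BasicSR-style options it is usually set under the training dataset, roughly as in the sketch below (the value shown is the default mentioned above; treat the surrounding keys as an assumption about this repo's config layout):

```yml
datasets:
  train:
    batch_size_per_gpu: 4   # default; reduce to cut per-GPU memory usage
```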
## Results
The inference results on benchmark datasets are available at Google Drive or Baidu Netdisk (access code: g9dw).
## Contact
If you have any questions, please contact [email protected] or [email protected].