
lidq92 / LinearityIQA


[official] Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment (ACM MM 2020)

Home Page: https://lidq92.github.io/LinearityIQA/

Python 100.00%
image-quality-assessment loss-functions regression in-the-wild

linearityiqa's Introduction

Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment

License

Description

LinearityIQA: code for the paper "Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment" (ACM MM 2020).

(Figure: the Norm-in-Norm loss framework)

How to?

Install Requirements

conda create -n reproducibleresearch pip python=3.6
source activate reproducibleresearch
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple > install_0.log
git clone https://github.com/NVIDIA/apex.git
cd apex
# source switch_cuda.sh 10.2 # [optional] if your cuda version for torch is 10.2
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ > install.log 
cd ..
rm -rf apex
# source deactivate

Note: Please install apex from source. I installed apex from source (by following its README.md), and pip freeze > requirements.txt shows that the apex version I used is 0.1. Make sure that the CUDA version is consistent between apex and torch. If you have any installation problems, look for the error details in the *.log files; e.g., if the CUDA versions of apex and torch are inconsistent, switch-cuda.sh can be used to fix it.
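As a quick sanity check before building apex, you can verify which CUDA version your torch build expects (a minimal sketch using standard PyTorch attributes):

import torch

# The CUDA toolkit used to build apex should match the version torch was
# compiled against (e.g., both 10.2).
print(torch.__version__)
print(torch.version.cuda)        # e.g., '10.2'
print(torch.cuda.is_available())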

Download Datasets

Download the KonIQ-10k and CLIVE datasets. Here is an alternative link with password 9pwl. Then, run the following commands in the root of the repo.

cat your_downloaded_path/KonIQ-10k.tar.gz* | tar -xzf - # your_downloaded_path is your path to the downloaded files for KonIQ-10k dataset
unzip "your_downloaded_path/CLIVE(IQA).zip" # your_downloaded_path is your path to the downloaded files for CLIVE dataset
ln -s koniq10k/images/ KonIQ-10k 
ln -s ChallengeDB_release/Images/ CLIVE 

Training on KonIQ-10k

CUDA_VISIBLE_DEVICES=0 python main.py --dataset KonIQ-10k --resize --exp_id 0 -lr 1e-4 -bs 8 -e 30 --ft_lr_ratio 0.1 -arch resnext101_32x8d --loss_type norm-in-norm --p 1 --q 2 > exp_id=0-resnext101_32x8d-p=1-q=2-664x498.log 2>&1 & # The saved checkpoint is copied and renamed as "p1q2.pth". 
CUDA_VISIBLE_DEVICES=1 python main.py --dataset KonIQ-10k --resize --exp_id 0 -lr 1e-4 -bs 8 -e 30 --ft_lr_ratio 0.1 -arch resnext101_32x8d --loss_type norm-in-norm --p 1 --q 2 --alpha 1 0.1 > exp_id=0-resnext101_32x8d-p=1-q=2-alpha=1,0.1-664x498.log 2>&1 & # The saved checkpoint is copied and renamed as "p1q2plus0.1variant.pth"

More options can be seen by running python main.py --help.

Visualization

tensorboard --logdir=runs --port=6006 # run on the server; optionally add --host your_host_ip
ssh -p port -L 6006:localhost:6006 user@host # run on your local machine, then open localhost:6006 in a browser

You can download our checkpoints with a password 4z7z (alternative way: Google Drive). Then place them in checkpoints/.

Note: We did not set drop_last=True when we obtained the results in the paper. However, if the size of the training data % batch size == 1, the last batch contains only 1 sample, and you need to set drop_last=True when preparing the train_loader in lines 86-90 of IQAdataset.py. For example, if 80% of the CLIVE images are used as training data and the batch size is 8, then since 929 % 8 == 1, you have to set drop_last=True; otherwise, you will get an error in the 1D batch norm layer.
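For reference, a minimal sketch of the relevant DataLoader setting (the tensors below are stand-ins for the real IQA dataset):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset with 929 samples, as in the CLIVE example above.
train_dataset = TensorDataset(torch.randn(929, 3, 32, 32), torch.randn(929))
# 929 % 8 == 1: without drop_last=True the last batch has a single sample,
# which breaks BatchNorm1d in training mode.
train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True, drop_last=True)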

Testing

Testing on KonIQ-10k test set (Intra Dataset Evaluation)

CUDA_VISIBLE_DEVICES=0 python test_dataset.py --dataset KonIQ-10k --resize -arch resnext101_32x8d --trained_model_file checkpoints/p1q2.pth
CUDA_VISIBLE_DEVICES=1 python test_dataset.py --dataset KonIQ-10k --resize -arch resnext101_32x8d --trained_model_file checkpoints/p1q2plus0.1variant.pth

Testing on CLIVE (Cross Dataset Evaluation)

CUDA_VISIBLE_DEVICES=0 python test_dataset.py --dataset CLIVE --resize -arch resnext101_32x8d --trained_model_file checkpoints/p1q2.pth
CUDA_VISIBLE_DEVICES=1 python test_dataset.py --dataset CLIVE --resize -arch resnext101_32x8d --trained_model_file checkpoints/p1q2plus0.1variant.pth

Test Demo

CUDA_VISIBLE_DEVICES=0 python test_demo.py --img_path data/1000.JPG --resize -arch resnext101_32x8d --trained_model_file checkpoints/p1q2.pth
# > The image quality score is 10.430044178601875
CUDA_VISIBLE_DEVICES=1 python test_demo.py --img_path data/1000.JPG --resize -arch resnext101_32x8d --trained_model_file checkpoints/p1q2plus0.1variant.pth
# > The image quality score is 16.726127839961094

Remark

If you want to use the "Norm-in-Norm" loss in your project, refer to norm_loss_with_normalization in IQAloss.py.
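For a quick impression, here is a minimal sketch of the core idea (center the scores, divide by their q-norm, then take the p-norm of the difference); the official norm_loss_with_normalization in IQAloss.py additionally handles loss scaling and several variants:

import torch

def norm_in_norm_sketch(y_pred, y, p=1, q=2, eps=1e-8):
    # Normalize predictions: subtract the mean, then divide by the q-norm.
    y_pred = y_pred - y_pred.mean()
    y_pred = y_pred / (eps + y_pred.norm(p=q))
    # Normalize the subjective scores in the same way.
    y = y - y.mean()
    y = y / (eps + y.norm(p=q))
    # The loss is the p-norm of the difference between the normalized vectors.
    return torch.pow((y_pred - y).abs().pow(p).sum(), 1.0 / p)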

If you want to use the model in your project, refer to IQAmodel.py.
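A hypothetical loading sketch (the class name IQAModel appears in the repo's error messages, but the constructor arguments and checkpoint format below are assumptions; check IQAmodel.py for the real signature):

import torch
from IQAmodel import IQAModel  # module/class names as used in the repo

model = IQAModel(arch='resnext101_32x8d')  # constructor args are an assumption
model.load_state_dict(torch.load('checkpoints/p1q2.pth', map_location='cpu'))
model.eval()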

Contact

Dingquan Li, dingquanli AT pku DOT edu DOT cn.

linearityiqa's People

Contributors: lidq92, vfdev-5


linearityiqa's Issues

How to use 'DistributedDataParallel'?

@lidq92 Thanks for sharing. I want to train the model on multiple GPUs.
① I used torch.nn.DataParallel from (lidq92/MDTVSFA#1), but I found that GPU 0's memory usage is very high while the other GPUs use very little memory, so I don't think I am making good use of the GPUs. I therefore switched to DistributedDataParallel, but I get this warning:
/root/anaconda3/lib/python3.6/site-packages/ignite/metrics/metric.py:216: RuntimeWarning: IQAPerformance class does not support distributed setting. Computed result is not collected across all computing devices
and this error:
writer = SummaryWriter(log_dir='{}/{}-{}'.format(args.log_dir, args.format_str, current_time))
FileExistsError: [Errno 17] File exists: 'runs/m'

Could you tell me how to make good use of multiple GPUs? Thank you very much!
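For reference, a generic PyTorch DistributedDataParallel skeleton (not specific to this repo; as the warning above says, the IQAPerformance metric would also need distributed-aware gathering):

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model_ddp(model, rank, world_size):
    # Assumes MASTER_ADDR/MASTER_PORT are set and one process is launched per GPU.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    return DDP(model.cuda(rank), device_ids=[rank])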

Link to model checkpoints is broken

Hello! In this issue you posted a link to a model checkpoint stored on Google Drive. Unfortunately, this link no longer works. Could you please check whether everything is correct?

The result of cosine_similarity in IQALoss

Hello, thank you for sharing this great work, but I am confused by line 109 of IQAloss.py:
outputs and targets are tensors of shape B x 1, so each element of rho is 1 or -1.
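The observation can be checked with a small snippet: along dim=1 each "vector" has length 1, so the per-sample similarity is just a sign, while along the batch dimension (dim=0) the similarity is a single meaningful value:

import torch
import torch.nn.functional as F

outputs, targets = torch.randn(4, 1), torch.randn(4, 1)
print(F.cosine_similarity(outputs, targets, dim=1))  # each element is +1 or -1
print(F.cosine_similarity(outputs, targets, dim=0))  # one value in [-1, 1]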

question about output range

Hello, thank you for sharing this great work. When I use your pre-trained model to predict scores for my own images, some output scores look very weird, like -6.454392677984451 and -38.75561886966097. The label (MOS_zscore) in the KonIQ-10k dataset ranges from 3.9 to 88, so I don't understand how these negative results arise. Thanks.

apex==0.1 can't be installed by pip

There is no version 0.1 of apex on PyPI, and installing other versions does not work.

I cloned the code from github-apex and installed it from source. When using it, I add sys.path.append("apex-codes-path") at the top of main.py.

This solved the problem.
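For clarity, the workaround amounts to something like this at the top of main.py (the path is a placeholder for wherever apex was cloned):

import sys
sys.path.append("apex-codes-path")  # placeholder: path to the cloned apex repo
from apex import amp  # now importable without pip-installing apex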

Results in MSU Video Quality Metrics Benchmark

Hello! We have recently launched our video quality metrics benchmark and evaluated this algorithm on its dataset. The dataset's distortions are compression artifacts on professional and user-generated content. The method took 7th place on the global leaderboard and 3rd place on the no-reference-only leaderboard in terms of SROCC. You can see more detailed results here. If you have any other video quality metric (either full-reference or no-reference) that you would like to see in our benchmark, we kindly invite you to participate. You can submit it to the benchmark by following the submission steps described here.

How does "resize" influence the performance?

I think downsampling may degrade the quality of images and cause blurring.
Also, the training set we can get has already been downsampled (otherwise the images could not easily share the same resolution).
Should we resize again for data augmentation, or just crop the images for training?
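For illustration, the two alternatives in torchvision terms (the sizes follow the README's 664x498 and 498x498 settings; this is a sketch, not the repo's actual transform code):

from torchvision import transforms

resize = transforms.Resize((498, 664))    # resize to H=498, W=664, as with --resize
crop = transforms.RandomCrop((498, 498))  # crop-only alternative (H=498, W=498)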

Cannot download checkpoints

Hello, thank you for providing the code. I tried to download your checkpoints (from pan.baidu.com), but it asks for an account, which requires a Chinese phone number. Could you please provide another way to download the model weights, such as Google Drive?

on resizing operation

@lidq92 Thanks for sharing. I tested the model p1q2plus0.1variant.pth you provided; the result is:

/LinearityIQA-master_test/test_dataset.py --dataset CLIVE --arch resnext101_32x8d --trained_model_file /mnt/fhl/LinearityIQA-master/p1q2plus0.1variant.pth
Namespace(P6=1, P7=1, alpha=[1, 0], angle=2, arch='resnext101_32x8d', augment=False, batch_size=8, beta=[0.1, 0.1, 1], crop_size_h=498, crop_size_w=498, data_info={'KonIQ-10k': './data/KonIQ-10kinfo.mat', 'CLIVE': './data/CLIVEinfo.mat'}, dataset='CLIVE', epochs=30, exp_id=0, ft_lr_ratio=0.1, hflip_p=0.5, im_dirs={'KonIQ-10k': '/data_sdb/fhl/KonIQ-10k/images/', 'CLIVE': '/data_sdb/fhl/CLIVE/Images/'}, loss_type='norm-in-norm', lr=0.0001, p=1, pool='avg', q=2, resize=False, resize_size_h=498, resize_size_w=664, save_result_file='results/dataset=CLIVE-tested_on_p1q2plus0.1variant.pth', seed=19920517, train_and_val_ratio=0, train_ratio=0, trained_model_file='/mnt/fhl/LinearityIQA-master/p1q2plus0.1variant.pth', use_bn_end=False)
# test images: 1162
CLIVE, SROCC: 0.797
CLIVE, PLCC: 0.832
CLIVE, RMSE: 11.731
CLIVE, SROCC1: 0.795
CLIVE, PLCC1: 0.827
CLIVE, RMSE1: 11.856
CLIVE, SROCC2: 0.795
CLIVE, PLCC2: 0.830
CLIVE, RMSE2: 11.571

The SROCC is 0.797, which is inconsistent with yours (0.834). (Note that the Namespace above shows resize=False, whereas the README's CLIVE test command passes --resize.)

KeyError: 'IQA_performance'

CUDA_VISIBLE_DEVICES=0 python main.py --dataset KonIQ-10k --resize --exp_id 0 --lr 1e-4 -bs 8 -e 30 --ft_lr_ratio 0.1 --arch resnext101_32x8d --loss_type Lp --p 1 --q 2

After about one epoch, I get:

Engine run is terminating due to exception: 'IQA_performance'.
……
File "main.py", line 100, in epoch_event_function
performance = evaluator.state.metrics['IQA_performance']
KeyError: 'IQA_performance'

train error

Hello, when training the network, I encountered this error:

Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
Current run is terminating due to exception: Output should have 2 items of the same length, got 2 and 3, 2
Engine run is terminating due to exception: Output should have 2 items of the same length, got 2 and 3, 2
Engine run is terminating due to exception: Output should have 2 items of the same length, got 2 and 3, 2

I don't know how to solve it. Please help me, thank you!

about train

Hello, when I execute "CUDA_VISIBLE_DEVICES=0 python main.py --dataset KonIQ-10k --resize --exp_id 0 --lr 1e-4 -bs 8 -e 30 --ft_lr_ratio 0.1 --arch resnext101_32x8d --loss_type norm-in-norm --p 1 --q 2 > exp_id=0-resnext101_32x8d-p=1-q=2-664x498.log 2>&1 &", the terminal directly outputs "[1] 11303" without any training output; another command directly outputs "[2] 2264 [1] Exit 1".

Error(s) in loading state_dict for IQAModel

RuntimeError: Error(s) in loading state_dict for IQAModel:
Missing key(s) in state_dict: "regr6.0.weight", "regr6.0.bias", "regr6.1.weight", "regr6.1.bias", "regr6.1.running_mean", "regr6.1.running_var", "regr7.0.weight", "regr7.0.bias", "regr7.1.weight", "regr7.1.bias", "regr7.1.running_mean", "regr7.1.running_var", "regression.0.weight", "regression.0.bias", "regression.1.weight", "regression.1.bias", "regression.1.running_mean", "regression.1.running_var".
Unexpected key(s) in state_dict: "regr6.weight", "regr6.bias", "regr7.weight", "regr7.bias", "regression.weight", "regression.bias".

The weights were downloaded from Baidu drive. (The key pattern, regr6.0.* expected vs. regr6.* found, suggests the model was constructed with a different use_bn_end setting than the one the checkpoint was trained with.)
