Comments (11)
Thanks for your interest in our work, @KleinXin.
- The zip file can be obtained at MEGA. If you want to download the zip file using Baidu Yun, please let me know.
- For the image quality regression task, we pay more attention to the correlation between predicted scores and MOSs. In addition, the proposed BIQA models do not constrain the predicted scores to a specified range such as [0, 1]. Therefore, a predicted score of 0.38 may reflect the worst image quality, 0.6 may reflect the best, and many fair-quality images may score close to 0.48. Hope this helps.
from spaq.
- I could not access MEGA, so it would be better if you could provide a Baidu Yun link to the zip file.
- I also think the scores should be stretched according to the min and max values.
Thank you for your suggestions!
from spaq.
Hi @KleinXin,
I will upload the zip file to Baidu Yun. It may take many hours; please keep an eye on our GitHub page over the next few days.
I agree with you that normalizing the predicted score to a certain range may help it better reflect image quality, but it may result in a worse correlation coefficient.
Best,
Hanwei
from spaq.
Thank you very much!
I also tested the same 9800 images using the Baidu Image Quality Evaluation API.
The mean absolute difference is 21.28 and the std is 13.74; the values are normalized to 0~100.
That is a relatively large difference. Do you think the data annotation causes it, or could there be other reasons?
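For reference, a mean absolute difference and its std between two models' scores (on a common 0~100 scale) can be computed as below; the score arrays are hypothetical stand-ins, not the actual 9800-image results:

```python
import numpy as np

# Hypothetical scores for the same images from two models, both on a 0-100 scale.
spaq_scores = np.array([45.0, 52.3, 38.7, 60.0, 41.2])
baidu_scores = np.array([20.0, 70.5, 5.1, 88.0, 33.4])

# Per-image absolute difference, then its mean and standard deviation.
diff = np.abs(spaq_scores - baidu_scores)
print(f"mean abs diff: {diff.mean():.2f}, std: {diff.std():.2f}")
```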
thx
from spaq.
Hi KleinXin,
I do not understand the problem. Is the mean absolute difference between the BL model and the Baidu Image Quality Evaluation API 21.28, with an std of 13.74? If so, how does that difference relate to data annotation?
from spaq.
Yes, I want to compare the scores that different models assign to the same images, so I used SPAQ and Baidu. These two models show a very large difference.
I do not think one state-of-the-art model should differ from another by such a large margin, so I think the only explanation is that the training data are different.
from spaq.
Hi KleinXin,
The zip file can be downloaded at https://pan.baidu.com/s/1JzwZxwSOpIqcc16cOliBVw (code: 8og5).
I think it is reasonable, for the following reasons:
- The Baidu IQA API may struggle to capture the realistic camera distortions in SPAQ. You can validate this with PLCC and SRCC rather than computing the absolute difference with the BL model.
- Our models do not normalize the predicted scores to [0, 100], so the normalization operation may introduce some error. The same applies to the Baidu IQA API.
- As you mentioned, the BL model was trained on SPAQ only, so its scores may differ greatly from those of the Baidu IQA API.
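The PLCC/SRCC check suggested above is a one-liner with scipy.stats; the score arrays below are hypothetical examples, not real model outputs:

```python
import numpy as np
from scipy import stats

# Hypothetical predictions from two IQA models on the same five images.
bl_scores = np.array([35.2, 41.0, 45.6, 50.3, 58.9])
api_scores = np.array([10.0, 30.5, 55.0, 60.2, 95.0])

plcc, _ = stats.pearsonr(bl_scores, api_scores)   # linear correlation
srcc, _ = stats.spearmanr(bl_scores, api_scores)  # rank-order correlation
print(f"PLCC: {plcc:.3f}, SRCC: {srcc:.3f}")
```

Because SRCC only depends on rank order, two models can agree strongly by this measure even when their absolute scores differ by 20+ points.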
from spaq.
Thank you very much!
I will read your paper again carefully and work out how to use the model for our requirements.
from spaq.
I carefully read your paper again.
In Section 5.1, where the training process of the Baseline model is described, you state that the l1-norm is used and that the ground truth of q is the MOS, a continuous score in [0, 100] representing the overall quality of the image.
I used the BL_release.pt model to test the 9800 images. All scores I got are around 45, with a min of 35 and a max of 60. The blurred image at the top scores 45, which I divided by 100 to normalize to [0, 1]. I do not think that image should get such a high score; in fact, the Baidu score for that image is 0.0025 after the same normalization.
Then I normalized the scores of all images using the min and max values. The formula is v_norm = (v - min) / (max - min), where v is the score from inference with the BL_release.pt model. Is anything wrong with this procedure?
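The min-max stretch described above can be sketched as follows; the raw values are hypothetical, chosen only to mimic scores clustered in a narrow band like [35, 60]:

```python
import numpy as np

# Hypothetical raw BL predictions clustered in a narrow band.
raw = np.array([45.0, 35.0, 60.0, 48.2, 51.7])

# Min-max stretch to [0, 1]: v_norm = (v - min) / (max - min).
# The mapping depends on the observed min/max of this particular batch.
norm = (raw - raw.min()) / (raw.max() - raw.min())
print(norm)
```

One caveat: because the stretch uses the batch min and max, the normalized values are only comparable within the same set of images, and a single outlier changes every normalized score.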
from spaq.
Hi KleinXin,
I think the normalization operation is a good attempt. Were the 9800 images sampled from the SPAQ database?
from spaq.
No, all of the images were collected from the internet.
from spaq.