
Problem about the score · spaq · 11 comments · Closed

h4nwei commented on July 29, 2024
Problem about the score


Comments (11)

h4nwei commented on July 29, 2024

Thanks for your interest in our work, @KleinXin.

  1. The zip file can be obtained from MEGA. If you would like to download it via Baidu Yun instead, please let me know.
  2. For the image quality regression task, we pay more attention to the correlation between the predicted scores and the MOSs. In addition, the proposed BIQA models do not constrain the predicted scores to a specific range such as [0, 1]. Therefore, a predicted score of 0.38 may reflect the worst image quality, 0.6 may reflect the best, and many fair-quality images may fall close to 0.48. Hope this helps.


KleinXin commented on July 29, 2024

  1. I could not access MEGA, so it would be better if you could provide a Baidu Yun link to the zip file.
  2. I also think the scores should be stretched according to the min and max values.

Thank you for your suggestions!


h4nwei commented on July 29, 2024

Hi @KleinXin,

I will upload the zip file to Baidu Yun. It may take many hours, so please keep an eye on our GitHub page over the next few days.

I agree with you that normalizing the predicted score to a certain range may help it better reflect the image quality, but it may result in a worse correlation coefficient.

Best,
Hanwei


KleinXin commented on July 29, 2024

Thank you very much!
I also tested the same 9800 images using the Baidu Image Quality Evaluation API.
The mean absolute difference is 21.28 and the standard deviation is 13.74; the values are normalized to 0~100.
This is a relatively large difference. Do you think the data annotation causes the difference, or could there be other reasons?
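
For reference, a minimal sketch of this kind of comparison, assuming both models' scores for the 9800 images are already loaded as arrays on the same 0~100 scale (the file names and variable names are placeholders, not part of the SPAQ code):

```python
import numpy as np

# Hypothetical inputs: per-image scores from the SPAQ BL model and from the
# Baidu IQA API for the same 9800 images, both on a 0~100 scale.
spaq_scores = np.loadtxt("spaq_bl_scores.txt")     # placeholder file
baidu_scores = np.loadtxt("baidu_api_scores.txt")  # placeholder file

# Mean and standard deviation of the per-image absolute difference.
abs_diff = np.abs(spaq_scores - baidu_scores)
print(f"mean absolute difference: {abs_diff.mean():.2f}")
print(f"std: {abs_diff.std():.2f}")
```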
Thanks.


h4nwei commented on July 29, 2024

Hi KleinXin,
I do not quite understand the question. Is the mean absolute difference between the BL model and the Baidu Image Quality Evaluation API 21.28, with a standard deviation of 13.74? If so, how do you relate that difference to the data annotation?


KleinXin commented on July 29, 2024

Yes, I want to compare the scores that different models give to the same images, so I used SPAQ and Baidu. It seems these two models differ very strongly.

I do not think one model should deviate from another by such a large margin if both are state-of-the-art models, so I think the only reason is that the data are different.


h4nwei commented on July 29, 2024

Hi KleinXin,

The zip file can be downloaded at https://pan.baidu.com/s/1JzwZxwSOpIqcc16cOliBVw (code: 8og5).

I think this is reasonable, for the following reasons:

  1. The Baidu IQA API may find it hard to capture the realistic camera distortions in SPAQ. You can validate this with PLCC and SRCC rather than computing the absolute difference with the BL model (see the sketch after this list).
  2. Our models do not normalize the predicted scores to [0, 100], so the normalization operation may introduce some errors. The same applies to the Baidu IQA API.
  3. As you mentioned, the BL model was trained on SPAQ only, so the Baidu IQA API may differ greatly from it.
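
A minimal sketch of such a correlation check, assuming scores from both models for the same images are already loaded as arrays (scipy is used here only for illustration and is not part of the SPAQ code; the file names are placeholders):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical inputs: scores from the SPAQ BL model and from the Baidu IQA
# API for the same set of images, in the same order.
spaq_scores = np.loadtxt("spaq_bl_scores.txt")     # placeholder file
baidu_scores = np.loadtxt("baidu_api_scores.txt")  # placeholder file

# PLCC (Pearson) and SRCC (Spearman) are unaffected by a linear rescaling of
# either set of scores, which is why they are a better agreement check than
# the raw absolute difference.
plcc, _ = pearsonr(spaq_scores, baidu_scores)
srcc, _ = spearmanr(spaq_scores, baidu_scores)
print(f"PLCC: {plcc:.4f}, SRCC: {srcc:.4f}")
```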


KleinXin commented on July 29, 2024

Thank you very much!
I will read your paper again carefully and try to figure out how to use the model for our requirements.


KleinXin commented on July 29, 2024

I carefully read your paper again.
In Section 5.1, where the training process of the Baseline model is described, you state that the l1-norm is used as the loss and that the ground truth of q is the MOS, a continuous score in [0, 100] representing the overall quality of the image.
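
If I understand that correctly, the objective looks roughly like the following minimal PyTorch sketch (the tiny stand-in network and the random batch are placeholders, not the actual SPAQ training code):

```python
import torch
import torch.nn as nn

# Stand-in regressor: any backbone that maps an image batch to one scalar
# quality score q per image; this tiny network is only a placeholder.
model = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),  # (N, 3, H, W) -> (N, 3, 1, 1)
    nn.Flatten(),             # (N, 3)
    nn.Linear(3, 1),          # (N, 1): predicted quality score q
)
criterion = nn.L1Loss()  # l1-norm between predicted q and the MOS
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, mos: torch.Tensor) -> float:
    """One regression step: push q toward the MOS, which lies in [0, 100]."""
    optimizer.zero_grad()
    q = model(images).squeeze(-1)
    loss = criterion(q, mos)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random data standing in for a real batch of images and MOS labels.
images = torch.rand(4, 3, 224, 224)
mos = torch.rand(4) * 100
print(train_step(images, mos))
```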

I used the BL_release.pt model to test 9800 images. All the scores I got are around 45, with a minimum of about 35 and a maximum of about 60. The score of the blurred image at the top is 45, and I divided it by 100 to normalize the values to [0, 1]. I do not think that image should get such a high score. In fact, the score of that image from Baidu is 0.0025 after normalizing to [0, 1] by dividing by 100.

Then I normalized the scores of all images using the min and max values. The formula is v_norm = (v - min) / (max - min), where v is the score I got from inference with the BL_release.pt model. Is anything wrong with this procedure?
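
Concretely, the stretching step was roughly the following (a minimal sketch; the score file is a placeholder):

```python
import numpy as np

# Hypothetical input: raw BL model scores for the 9800 images, which fell
# roughly in the observed range [35, 60].
raw_scores = np.loadtxt("spaq_bl_scores.txt")  # placeholder file

# Min-max stretch: v_norm = (v - min) / (max - min), mapping the observed
# minimum to 0 and the observed maximum to 1.
v_min, v_max = raw_scores.min(), raw_scores.max()
norm_scores = (raw_scores - v_min) / (v_max - v_min)

# For comparison, simply dividing by 100 leaves the scores compressed
# around 0.35~0.60 instead of spreading them over [0, 1].
div100_scores = raw_scores / 100.0
```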


h4nwei commented on July 29, 2024

Hi KleinXin,

I think the normalization operation is a good attempt. Were the 9800 images sampled from the SPAQ database?


KleinXin commented on July 29, 2024

No, all the images were collected from the internet.
