
Comments (10)

FJR-Nancy commented on June 3, 2024

I uploaded the predictions for the validation and test sets to the official VOC evaluation server and got the same result in both cases: an mIoU of around 62%, whereas the evaluation code in this repository reports 70% for the same validation-set predictions. I think the main differences are:

  1. mIoU shouldn't be computed per batch, because the metric shouldn't depend on the batch size.
  2. The original labels should be used for evaluation, and each prediction should be resized to the size of its corresponding original label before comparison (a minimal sketch follows below).

After changing the evaluation code I got an mIoU of only 58%, so I think there are still some problems in it.
By the way, how do you get 74.4% mIoU on the Pascal VOC 2012 test set? Did you upload the test-set predictions to the official evaluation server to get that result? I used the same training code and uploaded the predictions from epoch 299, but only got 62.6% mIoU on the test set.
Thanks to @jfzhang95 for the code; any response is appreciated.
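
A minimal sketch of point 2: run the network at whatever input size it expects, then upsample the logits back to the original label resolution before taking the argmax. The names (model, image, label) are placeholders, not this repository's actual API:

    import torch
    import torch.nn.functional as F

    def predict_at_label_size(model, image, label, device="cuda"):
        """Upsample the logits to the original label resolution before argmax."""
        model.eval()
        with torch.no_grad():
            logits = model(image.unsqueeze(0).to(device))     # (1, C, h_in, w_in)
            h, w = label.shape                                # original label size
            logits = F.interpolate(logits, size=(h, w),
                                   mode="bilinear", align_corners=True)
        return logits.argmax(dim=1).squeeze(0).cpu().numpy()  # (h, w) class indices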

FJR-Nancy commented on June 3, 2024

@jfzhang95 Sorry, it’s not by batch but by sample. However, the official VOC evaluation code accumulates the intersections and unions over all samples and only then computes the IoU.

See https://github.com/npinto/VOCdevkit/blob/master/VOCcode/VOCevalseg.m for the official evaluation implementation; the VOC devkit itself can be downloaded from the official website at http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar. The code is in MATLAB and is similar to the FCN metric provided by @OneForward.

By the way, there is another problem: pixels labeled 255 (the ignore label) in the ground-truth masks shouldn’t be counted when computing IoU.
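
A minimal sketch of that accumulation, assuming pred and label are integer NumPy arrays at the original label resolution and num_classes = 21 for VOC (the function names are illustrative, not part of this repository):

    import numpy as np

    def add_to_hist(hist, pred, label, num_classes=21, ignore_index=255):
        """Accumulate one image into a global confusion matrix, skipping ignore pixels."""
        mask = label != ignore_index
        hist += np.bincount(
            num_classes * label[mask].astype(int) + pred[mask],
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
        return hist

    def miou_from_hist(hist):
        """Dataset-level mIoU: intersections and unions are summed over all images first."""
        inter = np.diag(hist)
        union = hist.sum(axis=1) + hist.sum(axis=0) - inter
        with np.errstate(divide="ignore", invalid="ignore"):
            iou = inter / union
        return float(np.nanmean(iou))

    # hist = np.zeros((21, 21), dtype=np.int64)
    # for pred, label in all_predictions_and_labels:   # hypothetical iterable
    #     add_to_hist(hist, pred, label)
    # print(miou_from_hist(hist))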

May I assume that the 74.4% mIoU was obtained on the validation set rather than the test set? Since no ground-truth labels are provided for the test set, a test-set result cannot depend on your evaluation code. Have you uploaded the predictions to the VOC evaluation server to get the real results on the test or validation set?

emedinac commented on June 3, 2024

What is the difference in implementation? Did you reimplement that code to operate on tensors, like the version implemented here in PyTorch?

jfzhang95 commented on June 3, 2024
  1. I computed mIoU at size (512, 512), and the IoU calculation I implemented myself has
    a problem, which is described in #16 . Therefore, the real mIoU is lower than 74.4%.
    I am very sorry about that.

  2. Actually, mIoU in my code is not computed per batch. You can see it here:
    total_miou += utils.get_iou(predictions, labels)
    miou = total_miou / (ii * testBatch + inputs.data.shape[0])
    I compute each image's IoU and then average over all images in the test set
    (a toy comparison with the VOC-style metric follows after this list).
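
To make the difference concrete, here is a small self-contained toy example (one foreground class, two tiny "images") comparing the per-image average used above with the VOC-style dataset-level accumulation:

    import numpy as np

    # Two toy 1-D "images" with a single foreground class (1) vs background (0).
    preds  = [np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0])]
    labels = [np.array([1, 0, 0, 0]), np.array([1, 1, 1, 1])]

    # (a) per-image IoU of class 1, then averaged (the convention in the snippet above):
    per_image = [np.logical_and(p == 1, l == 1).sum() / np.logical_or(p == 1, l == 1).sum()
                 for p, l in zip(preds, labels)]
    print(np.mean(per_image))    # 0.375  (mean of 0.5 and 0.25)

    # (b) dataset-level IoU of class 1 (intersections and unions summed first, VOC style):
    inter = sum(np.logical_and(p == 1, l == 1).sum() for p, l in zip(preds, labels))
    union = sum(np.logical_or(p == 1, l == 1).sum() for p, l in zip(preds, labels))
    print(inter / union)         # 0.333...  -> the two conventions disagree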

I hope my response helps, and thank you for your interest in and patience with this code.

emedinac commented on June 3, 2024

Hi @FJR-Nancy @jfzhang95, I tested the new (modified) version of the mIoU computation and I am getting 64.66% on the validation set after training for 175 epochs. Am I doing something wrong?

niedan1976 commented on June 3, 2024

Hi, thank you for your work, @jfzhang95! I obtained 66.2% mIoU on the val set using the FCN metric.
Can you provide any suggestions for improving the model's performance? Thank you very much!

jfzhang95 commented on June 3, 2024

@FJR-Nancy Sorry for the late reply.

  1. Thank you for your suggestion and detailed explanation. I read the official evaluation code,
    and I think you're right. I'll rewrite the evaluation code to fix this error.
  2. Yes, the previous mIoU was obtained on the VOC validation set; it was my mistake to mix them up.
    I haven't uploaded the predicted results to the evaluation server, I just computed mIoU with
    the function in utils.py.

Many thanks for your suggestions and patience!

jfzhang95 commented on June 3, 2024

@niedan1976 I think you can follow the methods mentioned in the original paper to get a better result. For example, you can pretrain the modified Xception backbone on COCO, or you can try to convert a TensorFlow pretrained model into a PyTorch model (a rough sketch of the conversion follows).
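
A hedged sketch of such a conversion, assuming a TensorFlow checkpoint; the checkpoint filename and the name_map entries are hypothetical, since the real variable-to-parameter mapping depends on both implementations and has to be written out by hand:

    import numpy as np
    import tensorflow as tf
    import torch

    # CheckpointReader gives access to the raw variables stored in a TF checkpoint.
    reader = tf.train.load_checkpoint("deeplabv3plus_xception.ckpt")   # hypothetical path

    # One hand-written entry per layer: TF variable name -> PyTorch parameter name.
    name_map = {
        "xception_65/entry_flow/conv1_1/weights": "backbone.conv1.weight",   # example only
    }

    state_dict = {}
    for tf_name, pt_name in name_map.items():
        tensor = reader.get_tensor(tf_name)
        if tensor.ndim == 4:                    # conv kernels: HWIO (TF) -> OIHW (PyTorch)
            tensor = tensor.transpose(3, 2, 0, 1)
        state_dict[pt_name] = torch.from_numpy(np.ascontiguousarray(tensor))

    # model.load_state_dict(state_dict, strict=False)   # only the mapped layers are filled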

FJR-Nancy commented on June 3, 2024

@niedan1976 Multi-scale and flipped testing could also be used to improve performance; this has not been implemented here yet (a minimal sketch follows).
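
Something along these lines, assuming a model that returns per-class logits of shape (N, C, H, W); the scale set is illustrative:

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def ms_flip_predict(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
        """Sum logits over several scales and a horizontal flip, then take the argmax."""
        model.eval()
        _, _, h, w = image.shape
        total = 0
        for s in scales:
            scaled = F.interpolate(image, scale_factor=s,
                                   mode="bilinear", align_corners=True)
            for flip in (False, True):
                inp = torch.flip(scaled, dims=[3]) if flip else scaled
                logits = model(inp)
                if flip:
                    logits = torch.flip(logits, dims=[3])      # undo the flip on the output
                # bring every prediction back to the original resolution before summing
                total = total + F.interpolate(logits, size=(h, w),
                                              mode="bilinear", align_corners=True)
        return total.argmax(dim=1)    # summing is equivalent to averaging for the argmax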

emedinac commented on June 3, 2024

@FJR-Nancy Hi, I'm not sure, but I think the paper uses a slightly different Xception backbone, mainly in the first few layers. That may be why the weights cannot be imported directly; a simple modification to the first layers of the model would make it possible. Importing those weights would be a very interesting thing to do (see the sketch below for one way to skip the mismatched layers).
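
One way to do a partial import, as a hedged sketch: copy only the checkpoint tensors whose names and shapes match the modified backbone, so the changed first layers simply keep their random initialization. The checkpoint path is a placeholder:

    import torch
    from torch import nn

    def load_matching_weights(model: nn.Module, checkpoint_path: str) -> int:
        """Load only the parameters whose names and shapes match `model`."""
        pretrained = torch.load(checkpoint_path, map_location="cpu")
        own_state = model.state_dict()
        matched = {k: v for k, v in pretrained.items()
                   if k in own_state and v.shape == own_state[k].shape}
        own_state.update(matched)
        model.load_state_dict(own_state)
        return len(matched)    # number of tensors actually copied from the checkpoint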
