Comments (10)
I uploaded the predictions for the validation and test sets to the official VOC evaluation server and got consistent results: mIoU around 62%, versus the 70% reported by the code in this repo for the same validation predictions. I think the main differences are:
- mIoU shouldn't be computed per batch, because the result shouldn't depend on the batch size.
- The original labels should be used for evaluation, and the predictions should be resized to the size of the corresponding original label before comparison.
I changed the evaluation code and got an mIoU of only 58%, so I think there are still some problems in the evaluation code.
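For reference, the dataset-level protocol the comments describe can be sketched as follows: accumulate a single confusion matrix over all images (after resizing each prediction to its original label's size), skip void pixels (255), and only then compute per-class IoU. This is a minimal NumPy sketch of that protocol, not the repo's actual evaluation code; the function and variable names are illustrative.

```python
import numpy as np

NUM_CLASSES = 21  # Pascal VOC: 20 object classes + background

def accumulate_confusion(conf, gt, pred, num_classes=NUM_CLASSES):
    """Add one image's pixels to a running confusion matrix.

    Pixels labeled 255 (void) in the ground truth are excluded,
    matching the official VOCevalseg.m behavior.
    """
    valid = gt != 255
    idx = num_classes * gt[valid].astype(int) + pred[valid].astype(int)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def miou_from_confusion(conf):
    """Per-class IoU from a dataset-level confusion matrix, then the mean."""
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou.mean()
```

`conf` would start as `np.zeros((21, 21), dtype=np.int64)`, and each prediction would be resized (nearest-neighbor) to the original label's size before being accumulated.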
By the way, how did you get 74.4% mIoU on the Pascal VOC 2012 test set? Did you upload the test-set predictions to the official evaluation server to get that result? I used the same training code and uploaded the predictions from epoch 299, but only got 62.6% mIoU on the test set.
Thanks to @jfzhang95 for the code; any response is appreciated.
from pytorch-deeplab-xception.
@jfzhang95 Sorry, it's not per batch but per sample. However, the official VOC evaluation code accumulates the intersection and union over all samples together to get the IoU.
For the detailed official implementation of the evaluation, I refer to https://github.com/npinto/VOCdevkit/blob/master/VOCcode/VOCevalseg.m; the official VOC evaluation code can be downloaded from the official website http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar. The code is in MATLAB, and it is similar to the fcn metric provided by @OneForward.
By the way, there is another problem: pixels labeled 255 (void) in the ground-truth labels shouldn't be counted when computing IoU.
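The void-pixel exclusion mentioned above can be done by masking before counting intersections and unions. A minimal PyTorch sketch (the function name and interface are illustrative, not from the repo):

```python
import torch

def iou_counts_ignoring_void(pred, gt, num_classes=21, void_label=255):
    """Per-class intersection and union counts, skipping void ground-truth pixels."""
    valid = gt != void_label          # mask out 255 before any counting
    pred, gt = pred[valid], gt[valid]
    inters, unions = [], []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        inters.append((p & g).sum().item())
        unions.append((p | g).sum().item())
    return inters, unions
```

The key point is that a void pixel contributes to neither the intersection nor the union of any class.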
Could I suppose that you got 74.4% mIoU on the validation set, not the test set? For the test set there are no ground-truth labels provided, so the test-set result doesn't depend on your evaluation code. Have you uploaded the predictions to the VOC evaluation server to get the real results on the test or validation set?
What is the difference in implementation? Did you implement that code to operate on tensors, like the PyTorch implementation here?
- I computed mIoU at size (512, 512). Also, the IoU calculation I implemented myself has a problem, which can be found in #16; therefore, the real mIoU is lower than 74.4%. I am very sorry about that.
- Actually, mIoU in my code is not computed per batch. You can see it here:

      total_miou += utils.get_iou(predictions, labels)
      miou = total_miou / (ii * testBatch + inputs.data.shape[0])

  I computed each image's IoU and then averaged over all images in the test set.

I hope my response helps, and thank you for your interest and patience with this code.
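For anyone comparing the two protocols: averaging per-image IoU generally differs from the official dataset-level IoU, because every image gets equal weight regardless of how many pixels it contains. A toy binary-class illustration (plain NumPy, not the repo's code):

```python
import numpy as np

def iou_binary(pred, gt):
    """IoU of the foreground class (label 1) for a single image."""
    inter = np.logical_and(pred == 1, gt == 1).sum()
    union = np.logical_or(pred == 1, gt == 1).sum()
    return inter / union

# Two images: one large and nearly perfect, one tiny and completely wrong.
gt1, pred1 = np.ones(100), np.ones(100)   # IoU = 1.0
gt2, pred2 = np.ones(2), np.zeros(2)      # IoU = 0.0

# Per-image averaging: both images weigh equally.
per_image_mean = (iou_binary(pred1, gt1) + iou_binary(pred2, gt2)) / 2  # 0.5

# Dataset-level: pool intersections and unions over all pixels first.
dataset_inter = 100 + 0
dataset_union = 100 + 2
dataset_iou = dataset_inter / dataset_union  # ~0.98
```

Neither protocol is wrong per se, but only the pooled version matches the official VOC server, which explains part of the score gap discussed in this thread.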
Hi @FJR-Nancy @jfzhang95, I tested the new (modified) version of the mIoU computation and I am getting 64.66% on the validation set after training for 175 epochs. Am I doing something wrong?
Hi, thank you for your work! @jfzhang95 I obtained 66.2% mIoU on the val set using the fcn metric.
Can you offer any suggestions to improve the model's performance? Thank you very much!
@FJR-Nancy Sorry for the late reply.
- Thank you for your suggestion and detailed explanation. I read the official evaluation code, and I think you're right. I'll rewrite the evaluation code to fix this error.
- Yes, the previous mIoU was obtained on the VOC validation set; it was my fault to mix them up. I haven't uploaded the predicted results to the evaluation server; I just computed mIoU using the function in utils.py.

Many thanks for your suggestions and patience!
@niedan1976 I think you can follow the methods mentioned in the original paper to get a better result. For example, you can pretrain the modified Xception on COCO, or try to transfer the TensorFlow pretrained model into a PyTorch model.
@niedan1976 Also, multi-scale and flipped testing could be used to improve performance; this has not been implemented yet.
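Multi-scale and flip testing is usually implemented by running the model at several scales and on the horizontally flipped input, resizing all logits back to the original resolution, and averaging the scores before the argmax. A hedged sketch (the `model` interface and the scale set are assumptions, not code from this repo):

```python
import torch
import torch.nn.functional as F

def multi_scale_flip_inference(model, image, scales=(0.75, 1.0, 1.25)):
    """Average softmax scores over scales and horizontal flips.

    image: (1, 3, H, W) float tensor; model returns (1, C, h, w) logits.
    """
    _, _, H, W = image.shape
    total = 0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=False)
        for flip in (False, True):
            inp = torch.flip(x, dims=[3]) if flip else x
            logits = model(inp)
            if flip:
                # Flip the prediction back so it aligns with the original image.
                logits = torch.flip(logits, dims=[3])
            logits = F.interpolate(logits, size=(H, W), mode='bilinear', align_corners=False)
            total = total + torch.softmax(logits, dim=1)
    return total.argmax(dim=1)
```

Averaging softmax scores rather than hard labels lets confident predictions at one scale outvote uncertain ones at another.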
@FJR-Nancy Hi, I'm not sure, but I think the paper uses a slightly different Xception backbone, mainly in the first few layers; that may be why you can't import the weights directly. A simple modification to the first layers of the model would make it possible. Importing the weights would be a very interesting thing to do.
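One pragmatic way to reuse as many pretrained weights as possible despite the differing first layers is to copy only the tensors whose name and shape match the modified backbone. This is a generic, hedged sketch (the helper name is made up; it is not a function from this repo):

```python
import torch

def load_matching_weights(model, checkpoint_path):
    """Copy only the pretrained tensors whose name and shape match the model."""
    pretrained = torch.load(checkpoint_path, map_location='cpu')
    own = model.state_dict()
    matched = {k: v for k, v in pretrained.items()
               if k in own and v.shape == own[k].shape}
    own.update(matched)
    model.load_state_dict(own)
    return matched.keys()  # which layers were actually initialized
```

Layers that don't match (e.g. a modified first convolution) simply keep their random initialization and are trained from scratch.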