Comments (13)
Hi, the BR is plotted against the average number of superpixels. Although we run the SSN script with 500 superpixels, the final number of superpixels in the output is usually less than 500. Please compute the average number of superpixels and plot BR against that.
I compute precision-recall curves using David Stutz's library, and boundary recall only using the scripts we developed in our previous work: https://github.com/wctu/SEAL/blob/master/eval/eval.py
Let me know if something is not clear.
from ssn_superpixels.
Hi,
Thanks for your reply.
I have tried to compute the average superpixel number, but what I got was actually larger than the expected number.
For example, when I ran with n_spixels = 100, I usually got 165 superpixels on average, while with n_spixels = 500 I usually got around 590 superpixels. This means the BR should be even higher if Fig. 4 is plotted with the BR result (from SEAL's evaluation method) against the actual average number of superpixels.
I believe I have strictly followed the instructions to generate the results. What I ran is
python compute_ssn_spixels.py --datatype TEST --n_spixels 500 --num_steps 10 --caffemodel ./models/ssn_bsds_model.caffemodel --result_dir ./bsds_500/
and the way I counted the superpixel number is to:
- load the .npy file,
- use the np.unique function to count the superpixels in each image (basically n_spixels = len(np.unique(label))), and
- take the average after all the labels are loaded.
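The counting procedure above can be sketched in a few lines. The result-directory layout and the *.npy file pattern here are assumptions for illustration, not necessarily what the SSN script produces:

```python
import glob
import os

import numpy as np

def average_superpixel_count(result_dir):
    """Average the number of unique superpixel labels over all saved .npy maps."""
    counts = []
    for path in sorted(glob.glob(os.path.join(result_dir, "*.npy"))):
        label = np.load(path)                 # H x W superpixel label map
        counts.append(len(np.unique(label)))  # distinct ids = superpixel count
    return float(np.mean(counts))
```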
I also realized that if I turn off the enforce_connectivity function, the superpixel number decreases significantly and becomes smaller than the input n_spixels (e.g. I got 486 superpixels with n_spixels=500). However, even when I did this, the BR was still lower than the one reported in the paper: I got BR=85.57%, while the corresponding value in the paper should be higher than 92%.
Did you turn this function off during the evaluation? And are there any tips for getting results similar to the paper's?
Thanks a lot!
I remembered wrong. You are right that the number of superpixels is usually higher than the initial number, as we consider different connected components as different superpixels (for a fair comparison with other works). Just to be clear, does the ASA score you get match those reported in the paper? If not, you might be getting different superpixels.
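The convention described here (every connected component counts as its own superpixel) can be illustrated with a toy 4-connected relabeling. This is only a sketch, not the actual enforce_connectivity implementation:

```python
from collections import deque

import numpy as np

def relabel_components(label):
    """Assign a fresh id to every 4-connected component of equal labels."""
    h, w = label.shape
    out = -np.ones((h, w), dtype=int)  # -1 marks unvisited pixels
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if out[sy, sx] >= 0:
                continue
            # flood-fill one connected component with next_id
            queue = deque([(sy, sx)])
            out[sy, sx] = next_id
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and out[ny, nx] < 0 and label[ny, nx] == label[y, x]:
                        out[ny, nx] = next_id
                        queue.append((ny, nx))
            next_id += 1
    return out
```

On a 3x3 cross pattern with only two label ids, relabeling yields five components (the four corners become separate superpixels), so the count after enforcing connectivity exceeds the nominal number of labels.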
One potential difference could be the 'boundary tolerance' when computing the BR score. For the ablation studies, I used a tolerance of '2' if I remember correctly. For that, you need to replace 'br = computeBR(label_list, gtseg_list, h, w, 1)' with 'br = computeBR(label_list, gtseg_list, h, w, 2)' in SEAL's evaluation code. For all the main precision-recall plots, I used Stutz's evaluation code.
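As a rough illustration of what the boundary tolerance changes (a simplified sketch, not SEAL's actual computeBR): a ground-truth boundary pixel counts as recalled if any predicted boundary pixel lies within tol pixels of it.

```python
import numpy as np

def boundary_map(label):
    """Mark pixels whose right or bottom neighbor has a different label."""
    b = np.zeros(label.shape, dtype=bool)
    b[:, :-1] |= label[:, :-1] != label[:, 1:]
    b[:-1, :] |= label[:-1, :] != label[1:, :]
    return b

def boundary_recall(label, gtseg, tol):
    """Fraction of GT boundary pixels within `tol` pixels of a predicted boundary."""
    pred, gt = boundary_map(label), boundary_map(gtseg)
    h, w = gt.shape
    ys, xs = np.nonzero(gt)
    hits = 0
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - tol), min(h, y + tol + 1)
        x0, x1 = max(0, x - tol), min(w, x + tol + 1)
        if pred[y0:y1, x0:x1].any():  # predicted boundary inside the window?
            hits += 1
    return hits / max(len(ys), 1)
```

Since a larger window can only add hits, raising the tolerance from 1 to 2 can only increase BR, which is consistent with the gap observed above.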
Yeah, the ASA score and the metrics computed with Stutz's library align well with the paper's results, and if I change the tolerance factor to 2, the BR now also looks similar to Figure 4.
There is one more thing I want to confirm. As BSDS has multiple ground truths per image, are the results reported in the paper
- simply the average over all ground-truth results, or
- like what Stutz's library does, picking the best result for each image and averaging all the best results?
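The two aggregation options can be written down explicitly. The per-ground-truth scores here are made-up numbers for illustration only:

```python
import numpy as np

def aggregate_all(scores_per_image):
    """Treat each (image, ground truth) score as a separate data point."""
    return float(np.mean([s for scores in scores_per_image for s in scores]))

def aggregate_best(scores_per_image):
    """Keep only the best ground-truth score per image, then average over images."""
    return float(np.mean([max(scores) for scores in scores_per_image]))
```

For scores_per_image = [[0.8, 0.9], [0.6, 1.0]] the first gives 0.825 while the second gives 0.95, so the choice can noticeably shift the reported number.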
When using SEAL's evaluation for the ablation BR plot, we consider each ground truth as a separate data point. If I remember correctly, I used Stutz's code for the precision-recall curves ('boundaryBench') and the compactness score ('compactnessBench').
Thank you very much! It really helps.
Hi, thanks for providing these details!
I want to make sure I understand correctly how to get Fig. 5 in the paper. For every point on the curve (either the baselines or yours), is the number of superpixels obtained by averaging the superpixel counts over all test images? One problem for both SLIC and SSN is that, even if we set n_spixels = 100, the resulting superpixel map may contain more than 100 superpixels (after applying enforce_connectivity).
Thanks!
Yes, that is the average number of superpixels, as the exact number of superpixels varies from image to image.
I see, thanks a lot!
Hi, Thanks for your great work.
I have a question about the ASA score in Fig. 4. I use the script [asaBench.m] here. Because the BSDS dataset has multiple ground truths per image, the evaluation results include the two methods described in the README. I want to know which method is adopted in your paper.
In addition, I have the same confusion about the BR results in Fig. 4 using the script [allBench.m].
For computing the ASA and BR scores, we use the scripts from here: https://github.com/wctu/SEAL. We consider each GT as a separate sample while computing the metrics. For the BR score, try different boundary tolerances of 1 and 2 here: https://github.com/wctu/SEAL/blob/66317a95d8e545fb431ae9e26b762fa1e5a132b0/eval/eval.py#L29. As far as I remember, we used a boundary tolerance of 1 for Figure 4; also try a boundary tolerance of 2 when computing the BR score.
For the precision-recall curves, we use the scripts from https://github.com/davidstutz/extended-berkeley-segmentation-benchmark
You need to vary the number of superpixels to plot PR curves.