
Comments (13)

varunjampani commented on June 14, 2024

Hi, the BR is plotted against the average number of superpixels. Although we run the SSN script with 500 superpixels, the final number of superpixels in the output is usually less than 500. Please compute the average number of superpixels and plot BR against that.

I compute precision-recall curves using David Stutz's library, and boundary recall only using the script we developed in our previous work: https://github.com/wctu/SEAL/blob/master/eval/eval.py

Let me know if something is not clear.

from ssn_superpixels.

commented on June 14, 2024

Hi,

Thanks for your reply.

I have tried to compute the average superpixel number, but what I got was actually larger than the expected number.

For example, when I ran with n_spixels = 100, I usually got 165 superpixels on average, while with n_spixels = 500 I usually got around 590 superpixels. This means the BR should be even higher if Fig. 4 is plotted with the BR result (from SEAL's evaluation method) against the actual average number of superpixels.

I believe I have strictly followed the instructions to generate the result. What I did is
python compute_ssn_spixels.py --datatype TEST --n_spixels 500 --num_steps 10 --caffemodel ./models/ssn_bsds_model.caffemodel --result_dir ./bsds_500/
and the way I counted the superpixel number was to

  1. load each .npy file,
  2. use the np.unique function to count the superpixels in each image (basically n_spixel = len(np.unique(label))), and
  3. average the counts after all the labels are loaded.
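The steps above can be sketched as follows; this is my own minimal illustration, assuming result_dir contains one .npy label map per image (as compute_ssn_spixels.py writes them), not the exact code used in the thread:

```python
import glob
import numpy as np

def average_superpixel_count(result_dir):
    """Average the number of distinct superpixel labels over all .npy maps."""
    counts = []
    for path in sorted(glob.glob(result_dir + "/*.npy")):
        label = np.load(path)                 # per-pixel superpixel labels
        counts.append(len(np.unique(label)))  # distinct labels in this image
    return sum(counts) / len(counts)
```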

I also realized that if I turn off the enforce_connectivity function, the superpixel number decreases significantly and becomes smaller than the input n_spixels (e.g. I got 486 superpixels with n_spixels = 500). However, even when I did this, the BR was still lower than the one reported in the paper: I got BR = 85.57%, while the corresponding value in the paper should be higher than 92%.

Did you turn this function off during the evaluation? And are there any tips for getting results similar to the paper's?

Thanks a lot!


varunjampani commented on June 14, 2024

I remembered wrong. You are right that the number of superpixels is usually higher than the initial number, since we consider different connected components as different superpixels (to make a fair comparison with other works). Just to be clear, does the ASA score you get match those reported in the paper? If not, you might be getting different superpixels.
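The "different connected components as different superpixels" step can be sketched as a relabelling pass; this is my own illustration of the idea (a BFS-based 4-connected relabelling), not the repository's actual enforce_connectivity code:

```python
import numpy as np
from collections import deque

def split_connected_components(labels):
    """Relabel so every 4-connected component gets its own id.

    This is why the final superpixel count can exceed the initial n_spixels:
    one input label split into k disconnected regions yields k output labels.
    """
    h, w = labels.shape
    out = -np.ones((h, w), dtype=int)
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if out[sy, sx] != -1:
                continue
            # Flood-fill the component containing (sy, sx) with a fresh id
            q = deque([(sy, sx)])
            out[sy, sx] = next_id
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and out[ny, nx] == -1 \
                            and labels[ny, nx] == labels[y, x]:
                        out[ny, nx] = next_id
                        q.append((ny, nx))
            next_id += 1
    return out
```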

One potential difference could be the 'boundary tolerance' used when computing the BR score. For ablation studies, I used a tolerance of 2 if I remember correctly. For that, you need to replace 'br = computeBR(label_list, gtseg_list, h, w, 1)' with 'br = computeBR(label_list, gtseg_list, h, w, 2)' in SEAL's evaluation script. For all the main precision-recall plots, I used Stutz's evaluation code.
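To make the tolerance parameter concrete, here is a minimal NumPy sketch of boundary recall: BR is the fraction of ground-truth boundary pixels that have a predicted superpixel boundary within `tol` pixels. This is my own illustration of the metric, not SEAL's computeBR implementation:

```python
import numpy as np

def _boundaries(labels):
    """True where a pixel differs from its right or bottom neighbour."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def _dilate(mask, tol):
    """Dilate a boolean mask by `tol` pixels (square neighbourhood)."""
    h, w = mask.shape
    padded = np.zeros((h + 2 * tol, w + 2 * tol), dtype=bool)
    padded[tol:tol + h, tol:tol + w] = mask
    out = np.zeros_like(mask)
    for dy in range(2 * tol + 1):
        for dx in range(2 * tol + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def boundary_recall(labels, gt, tol=2):
    """Fraction of GT boundary pixels with a superpixel boundary within `tol`."""
    gt_b = _boundaries(gt)
    near_sp = _dilate(_boundaries(labels), tol)
    tp = np.count_nonzero(gt_b & near_sp)
    fn = np.count_nonzero(gt_b & ~near_sp)
    return tp / max(tp + fn, 1)
```

A larger `tol` dilates the predicted boundaries further before matching, so BR can only go up when moving from tolerance 1 to 2, which matches the gap discussed above.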


commented on June 14, 2024

Yeah, the ASA score and the metrics computed with Stutz's library align well with the paper's results, and if I change the tolerance factor to 2, the BR also now looks similar to Figure 4.

There is one more thing I want to confirm. As BSDS has multiple ground truths for one image, are the results reported in the paper

  1. simply the average over all ground-truth results, or
  2. computed as in Stutz's library, picking the best result for each image and averaging all the best results?


varunjampani commented on June 14, 2024

When using SEAL's evaluation for the ablation BR plot, we consider each ground truth as a separate data point. If I remember correctly, I used Stutz's code for the precision-recall curves ('boundaryBench') and the compactness score ('compactnessBench').
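The two aggregation schemes discussed here can be contrasted in a few lines. This is a hedged illustration only: `scores` (image id mapped to a list of per-ground-truth metric values) is a hypothetical structure, not SEAL's or Stutz's actual data layout:

```python
def per_gt_average(scores):
    """SEAL-style ablation: every (image, GT) pair is a separate data point."""
    values = [v for gts in scores.values() for v in gts]
    return sum(values) / len(values)

def best_gt_average(scores):
    """Stutz-style: keep the best GT per image, then average over images."""
    return sum(max(gts) for gts in scores.values()) / len(scores)
```

For metrics where higher is better (like BR), the best-GT scheme is always at least as high as the per-GT average, so the two schemes are not interchangeable when comparing numbers across papers.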


commented on June 14, 2024

Thank you very much! It really helps.


SteveJunGao commented on June 14, 2024

Hi, Thanks for providing these details!

I want to make sure I understand correctly how Fig. 5 in the paper is obtained. For every point on the curve (either the baselines or yours), is the number of superpixels obtained by averaging the superpixel counts over all test images? One problem for both SLIC and SSN is that even if we set n_spixels = 100, the resulting superpixel map may contain more than 100 superpixels (after applying enforce_connectivity).

Thanks!


varunjampani commented on June 14, 2024

Yes, that is the average number of superpixels, as the exact number of superpixels varies from image to image.


SteveJunGao commented on June 14, 2024

I see, thanks a lot!


CYang0515 commented on June 14, 2024

Hi, thanks for your great work.
I have a question about the ASA score in Fig. 4. I use the script [asaBench.m] from here. Because the BSDS dataset has multiple ground truths for one image, the evaluation results include two methods described in the README. I want to know which method is adopted in your paper.
In addition, I have the same confusion about the BR results in Fig. 4 using the script [allBench.m].


varunjampani commented on June 14, 2024

For computing the ASA and BR scores, we use scripts from here: https://github.com/wctu/SEAL. We consider each GT as a separate sample while computing the metrics. For the BR score, try different boundary tolerances of 1 and 2 here: https://github.com/wctu/SEAL/blob/66317a95d8e545fb431ae9e26b762fa1e5a132b0/eval/eval.py#L29. As far as I remember, we use a boundary tolerance of 1 for Figure 4, but also try a tolerance of 2 when computing the BR score.

For precision-recall curve, we use scripts from https://github.com/davidstutz/extended-berkeley-segmentation-benchmark


CYang0515 commented on June 14, 2024


varunjampani commented on June 14, 2024

You need to vary the number of superpixels to plot PR curves.

