zxhuang1698 / interpretability-by-parts

Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)

Home Page: https://www.biostat.wisc.edu/~yli/cvpr2020-interp/

Shell 0.21% Python 99.79%
celeba celeba-dataset cub-dataset cvpr-2020 cvpr-oral cvpr2020 explainable-ai face-segmentation fine-grained-classification interpretability part-based-models pytorch pytorch-implementation weakly-supervised-localization weakly-supervised-segmentation

interpretability-by-parts's People

Contributors

zxhuang1698

interpretability-by-parts's Issues

code

Hello, could you please share the paper and the code? I have searched online for a long time and could not find this article.

Normal value for shaping loss

May I ask what the normal value ranges are for the shaping loss on the three datasets? I tried the method on my own dataset, but the value stays at 1.1 (with a coefficient of 0.5) and does not decrease.
Also, what is the influence of the hyperparameters in the shaping loss?
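For reference, this is how a weighted auxiliary term is usually combined with the main objective. The sketch below uses placeholder names (cls_loss, shaping_loss) and the 0.5 coefficient mentioned in the question; these are not the repository's actual variable names. Logging the unweighted term separately makes it easier to judge whether the shaping loss itself is decreasing.

    import torch

    # Placeholder values standing in for the classification and shaping terms.
    cls_loss = torch.tensor(2.3)
    shaping_loss = torch.tensor(1.1)

    coeff = 0.5                                   # coefficient mentioned in the question
    total_loss = cls_loss + coeff * shaping_loss  # weighted sum used for backprop

    # Log the unweighted shaping term, not coeff * shaping_loss, to judge its trend.
    print(f"shaping={shaping_loss.item():.3f}, total={total_loss.item():.3f}")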

The Implementation of Region Feature Extraction

Does the implementation of Region Feature Extraction here correspond to Equation (3) in the paper? Could you explain it more concretely? Also, what is the difference compared to simply using qx = torch.bmm(assign, x) as the output, and would the model's performance degrade?
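To make the comparison concrete, here is a minimal sketch contrasting the plain torch.bmm pooling mentioned in the question with a normalized variant that divides each region feature by its total assignment mass. The shapes and the normalization are assumptions for illustration only, not necessarily the exact form of Equation (3) or of the repository's code.

    import torch

    # Assumed shapes: assign is (B, K, HW) soft part assignments, x is (B, HW, C) features.
    B, K, HW, C = 2, 4, 196, 2048
    assign = torch.softmax(torch.randn(B, K, HW), dim=1)   # parts compete at each location
    x = torch.randn(B, HW, C)

    # Unnormalized weighted sum, as in the question:
    qx_sum = torch.bmm(assign, x)                           # (B, K, C)

    # Normalized (weighted-average) pooling: divide by each part's total assignment mass,
    # so the feature scale does not depend on how many pixels a part occupies.
    qx_avg = qx_sum / assign.sum(dim=2, keepdim=True).clamp(min=1e-6)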

fully convolutional test

What does fully convolutional testing refer to in the experiments on the iNaturalist 2017 dataset? The description in the paper is hard to follow.
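For context, one common meaning of "fully convolutional testing" is to skip the fixed-size crop and run the network on the full-resolution image, averaging the per-location class scores. The toy sketch below only illustrates that general idea; it is not the authors' exact evaluation protocol.

    import torch
    import torch.nn as nn

    # Toy fully convolutional classifier: no flatten layer, so any input size works.
    net = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(64, 10, kernel_size=1),   # per-location class scores for 10 toy classes
    )

    image = torch.randn(1, 3, 600, 448)     # arbitrary, non-square test resolution
    score_map = net(image)                  # (1, 10, 600, 448)
    logits = score_map.mean(dim=(2, 3))     # average spatial scores -> (1, 10) image-level logits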

Implementation on CUB dataset

Thanks for your great work. When I transplanted the code to the CUB task, I only got 86.4% accuracy. :( Specifically, the linear classifier from the CelebA task is replaced with a single Linear(2048, 200) layer trained with a cross-entropy loss, and the backbone is ResNet-101. The experiments also follow the implementation details in the paper. I wonder whether there are still some differences between my reimplementation and the source code.
Would you please release the code or share more training details for the CUB dataset?
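For readers attempting the same transfer, here is a minimal sketch of the head swap described above: a single linear layer on 2048-d pooled features with 200 CUB classes and a cross-entropy loss. The feature dimension and batch size are assumptions for illustration; this is not the authors' released CUB code.

    import torch
    import torch.nn as nn

    feat_dim, num_classes = 2048, 200       # assumed ResNet-101 feature size, CUB class count
    classifier = nn.Linear(feat_dim, num_classes)
    criterion = nn.CrossEntropyLoss()

    features = torch.randn(8, feat_dim)     # dummy batch of pooled features
    labels = torch.randint(0, num_classes, (8,))
    loss = criterion(classifier(features), labels)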

The importance of postblock?

Great work! After I removed the postblock in your code, eval_acc only decreases a little (87.0% -> 85.9%), while eval_interp changes from 12.0% to 14.4%. However, the visualization of the assignment map becomes very blurry. Why does this happen, and what is the effect of the postblock? Thanks!

Assignment visualization

Hello! I have retrained a ResNet on CUB, but when I run the visualization script, every pixel in the images is assigned a color mask, even pixels that do not belong to any body part. In your demo.jpg, the model masks only the truly relevant pixels for the learned parts (plus at most a few neighboring pixels) and leaves non-relevant pixels unmasked. Any idea what may be causing this?
Thank you for your great work!
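One generic reason for a mask covering every pixel is overlaying the raw per-pixel argmax without any confidence threshold. The sketch below shows a thresholded overlay with assumed shapes and an arbitrary 0.5 threshold; it is a general suggestion, not the repository's visualization code.

    import torch

    # Assumed shape: assign is (K, H, W) soft assignment over K parts for one image,
    # upsampled to image resolution.
    K, H, W = 5, 224, 224
    assign = torch.softmax(torch.randn(K, H, W), dim=0)

    part_id = assign.argmax(dim=0)            # (H, W) winning part per pixel
    confidence = assign.max(dim=0).values     # (H, W) winning probability per pixel

    # Color only confident pixels; 0.5 is an arbitrary illustrative threshold.
    keep = confidence > 0.5
    part_id = torch.where(keep, part_id, torch.full_like(part_id, -1))   # -1 = leave unmasked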

what does the "binary label" mentioned in the code mean?

Hi zxhuang,
I have two questions:
(0) When I read the code in 'train.py', I was confused by the "binary label". If num_classes is 3, is the binary label the same as a one-hot label, i.e. "001", "010", "100", or is it a base-2 encoding like "00", "01", "10"? (See the sketch after this question for the two readings.)
(1) Do you think it would be useful to apply the method in the article to two-class classification?
Looking forward to your reply, and thanks a lot!
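Here is a minimal sketch of the two readings mentioned in question (0), using num_classes = 3; the variable names are illustrative and not taken from train.py.

    import torch
    import torch.nn.functional as F

    labels = torch.tensor([0, 1, 2])

    # One-hot encoding: one column per class -> "100", "010", "001".
    one_hot = F.one_hot(labels, num_classes=3)
    print(one_hot.tolist())   # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

    # A base-2 ("binary") encoding would instead use 2 bits -> "00", "01", "10";
    # that is a different scheme, shown here only to contrast with one-hot.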

When training on the CUB training set, the code always stops automatically every 13 epochs

First, when training on the CUB training set, the code always stops automatically every 13 epochs, for example 0-13 and then 14-27. I only changed the batch size to 4 and the learning rate to 1e-3 (the learning rate in the code is 5e-4). What could be the problem? Am I not allowed to change the batch size?

The second problem is that after training, the accuracy on the training set is 82.5%, but in the visualization step only one of the 25 images is classified correctly. What could be the problem?

Thank you very much.
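As a side note on the batch-size and learning-rate change above: a common heuristic (the linear scaling rule) lowers the learning rate in proportion to the batch size rather than raising it. The base batch size of 32 below is only an assumed reference value, and none of this is a recommendation from the authors.

    # Linear scaling rule (a general heuristic, not from this repository):
    # lr_new = lr_base * batch_new / batch_base
    lr_base, batch_base = 5e-4, 32   # 5e-4 is the code's learning rate; 32 is an assumed base batch size
    batch_new = 4
    lr_new = lr_base * batch_new / batch_base
    print(f"linearly scaled lr for batch size {batch_new}: {lr_new:.2e}")   # 6.25e-05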

evaluation for cub dataset

Hello,
In the paper there are three splits for the evaluation on the CUB dataset: cub-001, cub-002, and cub-003. What do these stand for, and how can I run on each split?
