icgy96 / arpl

137 stars · 25 forks · 844 KB

[TPAMI 2022] Adversarial Reciprocal Points Learning for Open Set Recognition

Home Page: https://ieeexplore.ieee.org/document/9521769

License: MIT License

Python 100.00%
open-set-recognition openset openset-classification openset-recognition out-of-distribution-detection

arpl's People

Contributors

icgy96

arpl's Issues

OSCR

When I ran your code, I noticed that if I change the condition in compute_oscr to match the one used for computing AUROC, the AUROC value in the log is larger than the OSCR value. Then, reading the code in compute_oscr that computes TP and FP at each threshold, I found that TP uses index k+1 but FP does not (I understand the +1 is there to skip the case where the largest confidence is used as the threshold):
CC = s_k_target[k+1:].sum()
FP = s_u_target[k:].sum()
I would like to ask whether the FP computation here should also use index k+1.
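For context, here is a hedged reconstruction of the surrounding loop with the proposed fix applied; only the two quoted count lines come from the repository, while the names, sorting setup, and final integration are an illustrative sketch:

    import numpy as np

    def compute_oscr_sketch(pred_k, pred_u, correct_k):
        """Sketch of an OSCR computation with the proposed index fix.

        pred_k / pred_u: confidence scores for known / unknown test samples;
        correct_k: boolean mask of correctly classified known samples.
        """
        scores = np.concatenate([pred_k, pred_u])
        k_target = np.concatenate([correct_k.astype(float), np.zeros(len(pred_u))])
        u_target = np.concatenate([np.zeros(len(pred_k)), np.ones(len(pred_u))])
        order = scores.argsort()            # ascending confidence
        s_k_target = k_target[order]
        s_u_target = u_target[order]

        ccr, fpr = [], []
        for k in range(len(scores) - 1):
            CC = s_k_target[k + 1:].sum()   # quoted: TP side uses k+1
            FP = s_u_target[k + 1:].sum()   # quoted line used [k:]; proposed fix is k+1
            ccr.append(CC / len(pred_k))
            fpr.append(FP / len(pred_u))
        # Area under the CCR-vs-FPR curve (FPR decreases with k, so reverse)
        return np.trapz(ccr[::-1], fpr[::-1])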

Replicating results from Table 3 for ARPL

Hi,
So far I have been unable to replicate the results you reported in Table 3 of the paper for the ARPL method.

I downloaded your git repository and ran this command to train the ARPL model (I ran it three times with different output directories to account for random initialization):
python ood.py --dataset cifar10 --out-dataset svhn --model arpl --loss ARPLoss --outf log_arpl
Here is a training log from one of the runs: logs.txt.

To evaluate, I ran this command for each trained model:
python ood.py --dataset cifar10 --out-dataset svhn --model arpl --loss ARPLoss --outf log_arpl --eval

I get the following results:

Acc: 92.58000
       TNR    AUROC  DTACC  AUIN   AUOUT
Bas    25.496 82.803 78.414 65.403 90.405
Acc (%): 92.580  AUROC (%): 82.803       OSCR (%): 79.600
Acc: 92.68000
       TNR    AUROC  DTACC  AUIN   AUOUT
Bas    21.589 78.534 74.769 54.259 88.407
Acc (%): 92.680  AUROC (%): 78.534       OSCR (%): 75.498
Acc: 92.68000
       TNR    AUROC  DTACC  AUIN   AUOUT
Bas    37.930 82.561 77.525 78.463 82.497
Acc (%): 92.680  AUROC (%): 82.561       OSCR (%): 78.857

There were no errors or warnings while running the scripts, yet all metrics are significantly below the reported numbers.
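To quantify the gap, a quick mean and standard deviation over the three runs above:

    import statistics

    # AUROC / OSCR values copied from the three evaluation runs above
    auroc = [82.803, 78.534, 82.561]
    oscr = [79.600, 75.498, 78.857]

    for name, vals in (("AUROC", auroc), ("OSCR", oscr)):
        print(f"{name}: {statistics.mean(vals):.2f} +/- {statistics.stdev(vals):.2f}")
    # AUROC: 81.30 +/- 2.40
    # OSCR:  77.99 +/- 2.19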
Do you have any idea what may be the issue?
Thank you.

Experiments on TinyImageNet

Many thanks for your great work.

I tried to reproduce your experimental results on TinyImageNet. However, the classification accuracy on the closed set is less than 0.50.

Also, there seems to be an issue with Tiny_ImageNet_Filter (the size of the outset is 0):

[screenshot omitted]

Could you help with it?
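For context, a minimal sketch of what such a split filter typically does (keep only samples whose label is in the requested set); the class name and layout here are assumptions, not the repository's exact implementation. If the label IDs in the split never match the dataset's targets, the out-set ends up empty, which matches the symptom above:

    from torchvision.datasets import ImageFolder

    class TinyImageNetFilterSketch(ImageFolder):
        """Hypothetical sketch of a class-split filter over Tiny-ImageNet."""

        def filter_classes(self, keep_labels):
            # Keep only samples whose integer label is in keep_labels
            kept = [(path, label) for path, label in self.samples
                    if label in keep_labels]
            self.samples = kept
            self.imgs = kept
            self.targets = [label for _, label in kept]

    # Tiny-ImageNet has 200 classes (labels 0..199); if keep_labels contains
    # IDs outside that range, nothing matches and len(dataset) becomes 0.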

Incorrect Weight Parameter Used for Loss Calculation

File: train.py

Issue:
Incorrect weight parameter options['beta'] used in loss calculations.
Lines affected: 93 and 117.

Expected:
Use options['weight_pl'] (the value of the --weight-pl flag) as the weight in these loss calculations, since it is intended for the center loss; options['beta'] is meant for the entropy loss.

Details:

  • Line 93: generator_loss calculation.
  • Line 117: total_loss calculation.

Suggested Fix:
Replace options['beta'] with options['weight_pl'] on the affected lines.
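A minimal sketch of the suggested change, assuming argparse stores the --weight-pl flag under the key weight_pl; the loss names and values below are placeholders, not the actual tensors in train.py:

    # Placeholder values standing in for the real options and loss terms
    options = {"weight_pl": 0.1, "beta": 0.1}
    cls_loss, center_loss = 1.25, 0.40

    # Before (as reported): the entropy weight scales the center-loss term
    total_loss_before = cls_loss + options["beta"] * center_loss

    # After (suggested fix): use the center-loss weight instead
    total_loss_after = cls_loss + options["weight_pl"] * center_loss

    # The generator_loss on line 93 would change in the same way.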

Questions regarding the baseline results

Hi there! Thanks for your inspiring work and for releasing the code. I have a small question regarding the baseline results. I did not modify the code and ran it with the command python osr.py --dataset cifar10 --loss Softmax. If I understand correctly, this is the baseline method, and according to Table 1 in your paper the AUROC should be 67.7 for the CIFAR10 dataset. However, the log I obtained is as follows:

        Split 0        Split 1        Split 2        Split 3        Split 4
TNR     34.00          30.63          22.62          35.18          30.50
AUROC   87.00          85.60          84.35          86.77          86.72
DTACC   80.25          79.03          78.16          79.58          79.98
AUIN    92.19          90.77          90.77          91.80          91.65
AUOUT   77.32          75.15          72.03          77.35          76.85
ACC     94.17          95.58          91.35          95.30          95.18
OSCR    84.47          84.15          80.83          84.88          84.95
unknown [0,8,3,5]      [2,3,4,5]      [0,8,2,6]      [8,2,3,5]      [8,2,3,5]
known   [2,4,1,7,9,6]  [8,6,1,9,0,7]  [1,5,7,3,9,4]  [7,6,4,9,0,1]  [0,6,4,9,1,7]

(per-split metrics from the raw CSV log, rounded to two decimals)

The average AUROC is about 86.09, which is significantly higher than the reported result. I'd like to know whether there is anything I haven't done properly. Thanks in advance!
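The per-split averages can be recomputed directly from the raw log, assuming it is saved as log.csv (a hypothetical filename):

    import csv

    with open("log.csv") as f:
        for name, *values in csv.reader(f):
            if name in ("", "unknown", "known"):
                continue  # skip the header row and the class-split rows
            nums = [float(v) for v in values]
            print(f"{name}: {sum(nums) / len(nums):.2f}")  # AUROC -> 86.09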

Why assign class splits specifically in code ?

I noticed that the closed-set classes are assigned explicitly in split.py. Can the results only be reproduced with these specific class splits? Two phenomena showed up in my experiments (see the sketch after the list):

  1. I changed the known classes to a random split. The performance decreased dramatically: the average AUROC over 5 runs is only about 50.
  2. I trained on the closed set with cross-entropy loss and used the softmax probability as the score at evaluation. I then obtained similar performance (about 74 AUROC). Did I miss something in your code?
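For reference, a minimal sketch of the two ways of choosing the known classes discussed above (a fixed list in the spirit of split.py versus random sampling); the concrete values are illustrative assumptions, not the repository's splits:

    import random

    NUM_CLASSES = 10  # CIFAR-10
    NUM_KNOWN = 6     # 6 known / 4 unknown classes, as in the logs above

    # Fixed split, in the spirit of split.py (example values only)
    known_fixed = [2, 4, 1, 7, 9, 6]

    # Random split, as tried in point 1 above
    known_random = random.sample(range(NUM_CLASSES), NUM_KNOWN)
    unknown_random = [c for c in range(NUM_CLASSES) if c not in known_random]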

How to train and test on my custom dataset

Hi, thanks for your work.
I am interested in this paper and would like to train and use this model on my own dataset.
How can I train the model on a custom dataset?

All the best.
Kazuki
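A common starting point (a hedged sketch, not the repository's documented interface) is to wrap the custom images in a torchvision ImageFolder with one sub-directory per class and feed that to the existing training loop:

    import torch
    from torchvision import datasets, transforms

    # Hypothetical layout: data/train/<class_name>/*.jpg
    transform = transforms.Compose([
        transforms.Resize((32, 32)),  # match the input size of the CIFAR-style backbone
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("data/train", transform=transform)
    train_loader = torch.utils.data.DataLoader(
        train_set, batch_size=128, shuffle=True, num_workers=4
    )
    num_known = len(train_set.classes)  # needed to size the classifier / reciprocal points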

PRL code

Hi,
thanks for sharing the code!

Where can I find the code for PRL (ECCV20)? I am interested in it.
