icgy96 / arpl
[TPAMI 2022] Adversarial Reciprocal Points Learning for Open Set Recognition
Home Page: https://ieeexplore.ieee.org/document/9521769
License: MIT License
Many thanks for your wonderful work! But the metric is not very intuitive. Do you have a theoretical analysis of it, or was it designed empirically?
Suppose we pass an OOD image from the model. What output do we expect that would differentiate it from the training classes?
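For context, here is a minimal sketch of how an ARPL-style score could separate OOD inputs (the function and variable names are illustrative, not the repo's actual API). The paper's idea is that training pushes each known class's features far from that class's reciprocal point, so a reasonable expectation is that a known sample has a large maximum distance to the reciprocal points while an OOD sample does not:

```python
import numpy as np

def openset_scores(features, reciprocal_points):
    """Illustrative scoring: squared Euclidean distance from each feature
    to each class's reciprocal point. Larger max distance => more 'known'.
    features: (N, D), reciprocal_points: (K, D) -> (N, K)."""
    diff = features[:, None, :] - reciprocal_points[None, :, :]
    return (diff ** 2).sum(-1)

rng = np.random.default_rng(0)
rps = rng.normal(size=(10, 32))               # one reciprocal point per class
known = -10 * rps[3][None, :]                 # a "known" feature pushed far from its RP
unknown = rps.mean(axis=0, keepdims=True)     # an OOD-like feature near all RPs
print(openset_scores(known, rps).max(1) > openset_scores(unknown, rps).max(1))
# [ True]  -- the known sample has the larger max distance
```

Thresholding that max-distance score would then reject the OOD input as unknown.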
When I ran your code, I found that if I change the condition in the compute_oscr function to match the AUROC computation, the logged AUROC is larger than the OSCR. Then, reading the code in compute_oscr that computes TP and FP at different thresholds, I noticed that TP uses index k+1 but FP does not (I understand the +1 is there to skip the case where the largest confidence is used as the threshold):
CC = s_k_target[k+1:].sum()
FP = s_u_target[k:].sum()
Should the index in the FP computation here also be k+1?
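For reference, a self-contained sketch of an OSCR-style computation (a simplification, not the repo's compute_oscr): sweep thresholds in descending confidence order, track the correct-classification rate on knowns (CCR) against the false-positive rate on unknowns (FPR), and integrate. A symmetric formulation like this avoids the k vs. k+1 indexing question entirely:

```python
import numpy as np

def oscr(known_scores, known_correct, unknown_scores):
    """Simplified OSCR sketch: area under the CCR-vs-FPR curve.
    known_scores: (n_k,) confidences on known test samples
    known_correct: (n_k,) bool, closed-set prediction correct
    unknown_scores: (n_u,) confidences on unknown (OOD) samples"""
    scores = np.concatenate([known_scores, unknown_scores])
    correct = np.concatenate([known_correct,
                              np.zeros(len(unknown_scores), bool)])
    unknown = np.concatenate([np.zeros(len(known_scores), bool),
                              np.ones(len(unknown_scores), bool)])
    order = np.argsort(-scores)                 # descending confidence
    ccr = np.cumsum(correct[order]) / len(known_scores)
    fpr = np.cumsum(unknown[order]) / len(unknown_scores)
    ccr = np.concatenate([[0.0], ccr])          # start the curve at the origin
    fpr = np.concatenate([[0.0], fpr])
    # trapezoidal integration of CCR over FPR
    return float(np.sum(np.diff(fpr) * (ccr[1:] + ccr[:-1]) / 2))

print(oscr(np.array([0.9, 0.8]), np.array([True, True]), np.array([0.2, 0.1])))
# 1.0 -- perfectly separated and all knowns correctly classified
```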
Hi,
I have trouble replicating the results that you reported in Table 3 in the paper for the ARPL method (so far).
I downloaded your git repository and ran this command to train the ARPL model (I ran it three times with different output dirs to account for random initialization):
python ood.py --dataset cifar10 --out-dataset svhn --model arpl --loss ARPLoss --outf log_arpl
Here is a training log from one of the runs: logs.txt.
To evaluate, I ran this command (for each trained model):
python ood.py --dataset cifar10 --out-dataset svhn --model arpl --loss ARPLoss --outf log_arpl --eval
I get the following results:
Acc: 92.58000
TNR AUROC DTACC AUIN AUOUT
Bas 25.496 82.803 78.414 65.403 90.405
Acc (%): 92.580 AUROC (%): 82.803 OSCR (%): 79.600
Acc: 92.68000
TNR AUROC DTACC AUIN AUOUT
Bas 21.589 78.534 74.769 54.259 88.407
Acc (%): 92.680 AUROC (%): 78.534 OSCR (%): 75.498
Acc: 92.68000
TNR AUROC DTACC AUIN AUOUT
Bas 37.930 82.561 77.525 78.463 82.497
Acc (%): 92.680 AUROC (%): 82.561 OSCR (%): 78.857
There were no errors or warnings while the scripts were running.
All metrics are significantly below the reported numbers.
Do you have any idea what may be the issue?
Thank you.
Could you provide the code as well as pretrained models for the experiment on ImageNet1k?
I'm trying to evaluate ARPL on various other OOD datasets. Thanks !
File: train.py
Issue:
Incorrect weight parameter options['beta'] used in loss calculations.
Lines affected: 93 and 117.
Expected:
Use options['weight_pl'] (set by the --weight-pl flag) for the weight in these loss calculations; it is intended for the center loss, whereas options['beta'] weights the entropy loss.
Suggested Fix:
Replace options['beta'] with options['weight_pl'] on the affected lines.
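If this diagnosis is right, the intended weighting would look something like the following hypothetical sketch (the function, term names, and loss structure here are illustrative and do not match train.py line for line):

```python
# Hypothetical sketch of the intended loss weighting; names are illustrative.
def total_loss(cls_loss, center_loss, entropy_loss, options):
    # The center loss should be scaled by the --weight-pl option and the
    # entropy loss by beta; swapping them silently changes the objective.
    return (cls_loss
            + options['weight_pl'] * center_loss
            + options['beta'] * entropy_loss)

opts = {'weight_pl': 0.1, 'beta': 0.3}
print(total_loss(1.0, 2.0, 3.0, opts))  # 1 + 0.1*2 + 0.3*3
```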
To get the visualization as in the paper, did you use t-SNE, or a final fc layer with out_channel=2?
Thanks
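On the t-SNE route, a minimal sketch would project the penultimate-layer embeddings to 2-D and scatter-plot them colored by class. The arrays below are random stand-ins for extracted features; this is not the repo's plotting code:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 128))   # stand-in for extracted embeddings
labels = rng.integers(0, 10, size=200)   # stand-in for class ids

# project to 2-D; the result can be scatter-plotted colored by `labels`
emb = TSNE(n_components=2, init='random', perplexity=30,
           random_state=0).fit_transform(features)
print(emb.shape)  # (200, 2)
```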
Hi there! Thanks for your inspiring work and for releasing the code. I have a small question regarding the baseline results. I did not modify the code and ran it with the command python osr.py --dataset cifar10 --loss Softmax. If I understand correctly, this is the baseline method, and according to Table 1 in your paper the AUROC should be 67.7 for the CIFAR10 dataset. However, the log I obtained is as follows:
Split        0        1        2        3        4
TNR       34.000   30.625   22.625   35.175   30.500
AUROC     86.998   85.597   84.346   86.772   86.724
DTACC     80.254   79.033   78.163   79.575   79.979
AUIN      92.192   90.771   90.769   91.798   91.646
AUOUT     77.319   75.152   72.032   77.353   76.849
ACC       94.167   95.583   91.350   95.300   95.183
OSCR      84.465   84.150   80.828   84.882   84.946
unknown   [0, 8, 3, 5]        [2, 3, 4, 5]        [0, 8, 2, 6]        [8, 2, 3, 5]        [8, 2, 3, 5]
known     [2, 4, 1, 7, 9, 6]  [8, 6, 1, 9, 0, 7]  [1, 5, 7, 3, 9, 4]  [7, 6, 4, 9, 0, 1]  [0, 6, 4, 9, 1, 7]
And the average AUROC is about 86.09, which is significantly higher than the reported results. I'd like to know if there is anything that I haven't done properly. Thanks in advance!
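For sanity-checking averages like this one, AUROC can be computed directly from the known and unknown confidence scores via the Mann-Whitney U formulation (the score arrays below are made up for illustration; this is not the repo's evaluation code):

```python
import numpy as np

def auroc(known_scores, unknown_scores):
    """Probability that a random known sample scores above a random
    unknown sample (Mann-Whitney U formulation of AUROC), ties count 0.5."""
    k = np.asarray(known_scores, float)[:, None]
    u = np.asarray(unknown_scores, float)[None, :]
    return float(((k > u).sum() + 0.5 * (k == u).sum()) / (k.size * u.size))

print(auroc([0.9, 0.8, 0.7], [0.1, 0.2, 0.75]))
# ≈ 0.889 -- 8 of the 9 known-unknown pairs are ranked correctly
```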
I noticed that the closed-set classes are assigned in split.py. Are the reported results reproducible only with these specific class splits? Two phenomena were shown in the experiments.
Hi, thanks for your work.
I am interested in this paper and I would like to train and use this model for my datasets.
How can I train this model on my dataset?
All the best.
Kazuki
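On training with a custom dataset: the repo is PyTorch-based, so the usual route is wrapping your data in an object that follows the torch.utils.data.Dataset protocol (__len__ and __getitem__) and pointing the training script's loader at it. A generic, hypothetical sketch (class and field names are not from this repo; written without torch so it runs standalone):

```python
import numpy as np

class ArrayDataset:
    """Minimal Dataset-style wrapper (mirrors the torch.utils.data.Dataset
    protocol) yielding (image, label) pairs from in-memory arrays."""
    def __init__(self, images, labels):
        assert len(images) == len(labels)
        self.images, self.labels = images, labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        return self.images[i], self.labels[i]

# four dummy 3x32x32 images with labels 0..3
ds = ArrayDataset(np.zeros((4, 3, 32, 32), np.float32), np.array([0, 1, 2, 3]))
print(len(ds), ds[1][1])  # 4 1
```

A torch DataLoader would then consume such an object directly; the remaining work is matching the image size and the number of known classes the scripts expect.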
Hello. Just curious: where in the code can I set these parameters?
Hi,
thanks for sharing the code!
Where can I find the code for PRL (ECCV20)? I am interested in it.