uoguelph-mlrg / confidence_estimation
Learning Confidence for Out-of-Distribution Detection in Neural Networks
License: Other
We reproduced the XOR experiment from your paper, generating two separate datasets for the runs. However, the results were not ideal and deviated noticeably from the figures in your paper. Did you use any techniques not mentioned in the paper when conducting these experiments, or is there something special about the distribution of the data you used? We use two kinds of data:
We look forward to hearing from you.
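For reference, this is one way such a 2-D XOR toy set can be generated. It is a sketch under our own assumptions: the function name `make_xor`, the uniform input range, and the Gaussian jitter are all illustrative choices, since the paper does not specify the exact distribution.

```python
import numpy as np

def make_xor(n=1000, noise=0.1, seed=0):
    """Toy 2-D XOR data: label 1 when the two coordinates have opposite
    signs. One plausible setup; the paper does not pin down the exact
    distribution used."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (X[:, 0] * X[:, 1] < 0).astype(int)
    X = X + rng.normal(0.0, noise, size=X.shape)  # jitter the inputs
    return X, y

X, y = make_xor()
print(X.shape, y.shape)  # (1000, 2) (1000,)
```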
Thank you for the interesting work. I reproduced the results in the paper, but when I added the confidence branch to my classifier in another project, the minimum and maximum confidence tend to become the same during training (all large or all small, depending on the learning rate). How can I work out the cause and fix this? Do you have any suggestions? Thanks in advance :)
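One mechanism worth checking in this situation is the budget feedback described in the paper, which nudges the confidence-loss weight λ so that the confidence loss hovers near a target budget; if λ is fixed or mis-tuned, the confidences can saturate all-high or all-low. A minimal sketch of that feedback rule (the function name and the 1.01/0.99 factors are illustrative, not the repo's exact code):

```python
def update_lmbda(lmbda, confidence_loss, budget=0.3):
    """Nudge the confidence-loss weight so the confidence loss tracks
    the budget. Names and update factors are illustrative."""
    if confidence_loss < budget:
        # Confidence is comfortably high: relax the penalty so the network
        # is still free to output low confidence on genuinely hard examples.
        return lmbda / 1.01
    # Confidence loss above budget (confidences drifting low): strengthen
    # the penalty to push confidence back up.
    return lmbda / 0.99

# The weight shrinks when the loss is under budget and grows otherwise.
print(update_lmbda(0.1, 0.05))  # < 0.1
print(update_lmbda(0.1, 0.50))  # > 0.1
```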
In section 2.3.2 of the paper, it says:
We find that giving hints with 50% probability addresses this issue, as the gradients from low confidence examples now have a chance of backpropagating unhindered and updating the decision boundary, but we still learn useful confidence estimates. One way we can implement this in practice is by applying Equation 2 to only half of the batch at each iteration.
If I am not mistaken, the following lines are where this idea is implemented:
confidence_estimation/train.py
Lines 242 to 244 in c98f23d
Does this really set half of the confidences to 1? Is applying bernoulli to uniform samples in [0, 1] really equivalent? Would applying a threshold of 0.5 to each uniform sample work as well?
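For what it's worth, the selection probability can be checked numerically. If the implementation draws u ~ Uniform(0, 1) per sample and then b ~ Bernoulli(u), each b is binary with marginal P(b = 1) = E[u] = 0.5, and the b's are independent, so the resulting mask is distributed exactly like i.i.d. fair coin flips; thresholding each uniform draw at 0.5 gives the same thing. In expectation, half of the confidences are then replaced by 1 (assuming a `conf = confidence * b + (1 - b)`-style masking; the exact lines in train.py may differ). A sketch in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Repo-style masking (our reading of it): per-sample p ~ Uniform(0, 1),
# then b ~ Bernoulli(p). Marginally, P(b = 1) = E[p] = 0.5.
p = rng.uniform(0.0, 1.0, size=n)
b_bernoulli = (rng.uniform(0.0, 1.0, size=n) < p).astype(float)

# Alternative raised in the question: threshold each uniform draw at 0.5.
b_threshold = (rng.uniform(0.0, 1.0, size=n) < 0.5).astype(float)

print(round(b_bernoulli.mean(), 2))  # ~0.5
print(round(b_threshold.mean(), 2))  # ~0.5
```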
Hi, I'm running your code with the following command:
python train.py --dataset cifar10 --model vgg13 --budget 0.3 --data_augmentation --cutout 16
But the accuracy on CIFAR-10 only achieves about 78%, whereas it should normally be around 95%. Is there anything that I missed?
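Whatever the cause, one quick sanity check: accuracy should be computed from the classification head alone (argmax over the logits); the confidence output should not enter the accuracy calculation at evaluation time. A minimal sketch with fabricated arrays (not real CIFAR-10 outputs):

```python
import numpy as np

def accuracy(logits, labels):
    """Top-1 accuracy from the classification head only; the confidence
    branch plays no role in plain accuracy."""
    preds = np.argmax(logits, axis=1)
    return float((preds == labels).mean())

# Toy check: 3 of 4 fabricated predictions match their labels.
logits = np.array([[2.0, 0.1, -1.0],
                   [0.0, 3.0, 1.0],
                   [1.0, 0.5, 2.5],
                   [0.2, 0.1, 0.0]])
labels = np.array([0, 1, 2, 1])
print(accuracy(logits, labels))  # 0.75
```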