diegoortego / labelnoisemoit
Official implementation for: "Multi-Objective Interpolation Training for Robustness to Label Noise"
We need to exclude each sample's cosine similarity with itself, so the authors use:
dist = torch.mm(features, trainFeatures)  # cosine similarities against the feature bank
dist[torch.arange(dist.size(0)), index] = -1  # self-contrast set to -1
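As a self-contained sketch of this self-contrast masking (variable names assumed; for simplicity the batch itself doubles as the feature bank, so each sample's top match would otherwise be itself):

```python
import torch

torch.manual_seed(0)
# L2-normalized features, so the dot product is the cosine similarity
features = torch.nn.functional.normalize(torch.randn(4, 8), dim=1)
train_features = features.t()   # feature bank, shape (dim, num_train)
index = torch.arange(4)         # positions of the batch samples in the bank

dist = torch.mm(features, train_features)      # cosine similarities
dist[torch.arange(dist.size(0)), index] = -1   # self-contrast set to -1

# The diagonal is now -1, so a sample can never be retrieved as its own
# nearest neighbour when taking top-k over `dist`.
print(dist.diagonal())
```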
Thanks so much for your interesting work! However, I cannot reproduce the results from your paper on web-noise Mini-ImageNet (I used the same hyperparameters). For red_noise_nl_0.4 I only get 46.24 Top-1 accuracy (MOIT, not MOIT+). The results for the other settings are also well below the numbers reported in the paper. Below are the augmentations I adopted:
transform_train = transforms.Compose([
    transforms.RandomCrop(84, padding=8),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.495, 0.477, 0.436), (0.292, 0.285, 0.299)),
])
The Mini-ImageNet data is downloaded from this URL: https://storage.googleapis.com/cnlw/dataset.zip (it is provided in LJY-HY/MentorMix_pytorch#1, the repo for controlled web noise on Mini-ImageNet). I use the red-noise split files to build the training dataset.
If there are any other details I am missing, could you share them? Thanks again!
Hi,
Thanks for sharing the code! I have a question about how to decide when useful features have been learnt. For example, when training on CIFAR-10, the first 130 epochs are trained only with the given labels, without noise correction. How was the "130 epochs" chosen?
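The schedule being asked about can be sketched as a simple warm-up gate (hypothetical names; 130 is the CIFAR-10 value mentioned above, and the threshold itself is a tuned hyperparameter, not derived automatically):

```python
warmup_epochs = 130  # CIFAR-10 setting referenced in the question

def use_corrected_labels(epoch: int) -> bool:
    """During warm-up, train on the given (possibly noisy) labels only;
    enable label correction once the warm-up has ended."""
    return epoch >= warmup_epochs

print(use_corrected_labels(0), use_corrected_labels(130))
```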
Hi,
Thanks for sharing your implementation. I have two questions about it:
Thanks!
Thanks so much for your work. I noticed that in Tables 6 and 7 of the paper, the numbers reported for DivideMix are much lower than in its original paper. Any ideas why?
Hi, thanks for your interesting work! As the paper reports, MOIT performs much better than other SOTA methods on the mini-WebVision dataset. Could you share the code for mini-WebVision? That would help me a lot!