
Comments (7)

YyzHarry avatar YyzHarry commented on July 23, 2024 1

the batch size is set to 256 while the default batch size is 128 in DRW source code

That's a good catch. I just checked the settings from when I ran the experiments, and the batch size I used on CIFAR was 128. There might have been an inconsistency in the default value introduced when I cleaned up the code (I've already updated it).

Regarding your questions, I quickly ran three experiments. For the baseline on CIFAR-10-LT with None, I got 71.03%. Using None + SSP, I got 73.99%. For DRW + SSP, I got 77.45%, which is even slightly higher than the number reported in our paper. I'm using the Rotation checkpoint I provided in this repo, which has 83.07% test accuracy, similar to yours. I also checked your log, which seems fine to me. So currently I'm not sure what causes the difference. I'd suggest trying the Rotation SSP checkpoint I provided, to see if there's any difference.

Otherwise, you may want to check whether the pre-trained weights are loaded correctly, as well as the exact training settings, such as the PyTorch version (1.4 for this repo) or the number of GPUs used (only 1 for the CIFAR experiments).
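One quick way to check whether the pre-trained weights actually load is to compare parameter names between the model and the checkpoint, which is essentially what PyTorch's `load_state_dict(strict=False)` reports as missing/unexpected keys. A minimal sketch, using plain lists of key names as stand-ins for real state dicts (the example key names below are hypothetical):

```python
def check_pretrained_load(model_keys, pretrained_keys):
    """Report key mismatches between a model and a pretrained checkpoint,
    mimicking the missing/unexpected keys that PyTorch's
    load_state_dict(strict=False) returns."""
    model_keys = set(model_keys)
    pretrained_keys = set(pretrained_keys)
    missing = sorted(model_keys - pretrained_keys)      # stay randomly initialized
    unexpected = sorted(pretrained_keys - model_keys)   # ignored by the model
    return missing, unexpected

# Hypothetical example: an SSP checkpoint covers the backbone but not
# the classifier head (and has its own rotation-prediction head).
model = ["conv1.weight", "layer1.0.conv1.weight", "fc.weight", "fc.bias"]
ckpt = ["conv1.weight", "layer1.0.conv1.weight", "rot_head.weight"]
missing, unexpected = check_pretrained_load(model, ckpt)
print(missing)     # ['fc.bias', 'fc.weight']
print(unexpected)  # ['rot_head.weight']
```

If `missing` contains backbone layers rather than just the classifier head, the pre-trained weights are not being applied where they should be.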

from imbalanced-semi-self.

ZezhouCheng avatar ZezhouCheng commented on July 23, 2024

I also tried the same experiments on CIFAR-100; the results are below:

(image: CIFAR-100 results, not recoverable)

Not sure if there's something I missed. For example, do you use the same optimization hyperparameters, such as the learning rate, across these experiments? I feel that using self-supervised pre-trained features generally requires a different learning rate during linear evaluation.


YyzHarry avatar YyzHarry commented on July 23, 2024

Hi there, thanks for your interest! I took a quick look at your results, and they seem a bit odd to me. In my experience, with train_rule set to "None" (i.e., vanilla CE) on CIFAR-10-LT, a typical number is around 70% (see also the baseline results from this paper and this paper). Also, for those baseline models, using the other train_rules should lead to better results than "None", which is not the case in your results. It looks to me as if the two rows are reversed (or something else odd is happening). Could you please double-check that the results are correct? If so, we can look into it further and see what's happening.


ZezhouCheng avatar ZezhouCheng commented on July 23, 2024

Thanks for the quick reply! I double-checked my experiments. Sorry, it seems I mistakenly fixed the training rule to 'DRW' when I ran vanilla CE on CIFAR-10. Below are the updated results.

(image: updated CIFAR-10 results, not recoverable)

However, adding SSP still does not improve the performance. Here are the log files for the experiments on cifar10:

https://www.dropbox.com/sh/ex0oiduxu93u3y0/AACRVJu_bxbjTJg3nc-beKaBa?dl=0


ZezhouCheng avatar ZezhouCheng commented on July 23, 2024

I'm able to reproduce the baselines using https://github.com/kaidic/LDAM-DRW. I noticed that in train.py the batch size is set to 256, while the default batch size is 128 in the DRW source code. This seems to make a difference on CIFAR-100. For example, I tested train.py with CE + rand. init. + Resample: 30% accuracy with batch size 256 vs. 34% accuracy with batch size 128, but there is still a gap with LDAM-DRW, which achieves 38.59% under the same setting. I'm not sure whether other factors contribute to this gap.
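For reference, the DRW stage in LDAM-DRW re-weights classes by the effective number of samples, w_c ∝ (1 − β) / (1 − β^n_c), which is independent of batch size, so a gap like the one above would come from the optimization side rather than the re-weighting itself. A minimal sketch of those per-class weights (the normalization so weights sum to the number of classes follows the common convention; treat the exact toy counts as illustrative):

```python
def drw_class_weights(samples_per_class, beta=0.9999):
    """Per-class weights from the effective number of samples:
    E_n = (1 - beta**n) / (1 - beta); weight is proportional to 1 / E_n."""
    effective_num = [(1.0 - beta ** n) / (1.0 - beta) for n in samples_per_class]
    weights = [1.0 / e for e in effective_num]
    # Normalize so the weights sum to the number of classes.
    k = len(samples_per_class)
    total = sum(weights)
    return [w * k / total for w in weights]

# Illustrative long-tailed split: head class 5000 samples, tail class 50.
w = drw_class_weights([5000, 500, 50])
print(w)  # the tail class receives the largest weight
```

These weights are typically applied to the per-class loss only in the last few epochs (the "deferred" part of DRW), with standard unweighted CE before that.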


ZezhouCheng avatar ZezhouCheng commented on July 23, 2024

Thanks for the help! Let me try these ideas.


87nohigher avatar 87nohigher commented on July 23, 2024

I met the same problem. Did you manage to solve it, and if so, how? Thanks!
