dvlab-research / parametric-contrastive-learning

Parametric Contrastive Learning (ICCV 2021) & GPaCo (TPAMI 2023)

Home Page: https://arxiv.org/abs/2107.12028

License: MIT License

Topics: contrastive-learning, long-tailed-recognition, supervised-learning, image-classification, imagenet, supervised-contrastive-learning, parametric-contrastive-learning, iccv2021, class-imbalance, imbalanced-data

parametric-contrastive-learning's Introduction

Imbalanced Learning for Recognition

This repository contains the code for our papers on imbalanced learning for recognition.

  • Our new paper "Classes Are Not Equal: An Empirical Study on Image Recognition Fairness" is accepted by CVPR 2024.
  • Our new arXiv paper "Decoupled Kullback-Leibler (DKL) Divergence Loss" achieves a new state of the art on knowledge distillation and adversarial robustness. Code is released.
  • Code for RR & CeCo is partially released.
  • Our paper "Generalized Parametric Contrastive Learning" is accepted by TPAMI 2023.
  • Our paper "Understanding Imbalanced Semantic Segmentation Through Neural Collapse" is accepted by CVPR 2023. The code will be released soon.
  • The code for our preprint paper "Generalized Parametric Contrastive Learning" is released.
  • The code for our preprint paper "Region Rebalance for Long-Tailed Semantic Segmentation" (paper) will be released soon.
  • The code for our TPAMI 2022 paper "ResLT: Residual Learning for Long-Tailed Recognition" is released (paper and code).
  • The code for our ICCV 2021 paper "Parametric Contrastive Learning" is released (paper and code).

Generalized Parametric-Contrastive-Learning

This repository contains the implementation code for the ICCV 2021 paper Parametric Contrastive Learning (https://arxiv.org/abs/2107.12028) and the TPAMI 2023 paper Generalized Parametric Contrastive Learning (https://arxiv.org/abs/2209.12400).


Full ImageNet Classification and Out-of-Distribution Robustness

Method Model Full ImageNet ImageNet-C (mCE) ImageNet-C (rel. mCE) ImageNet-R ImageNet-S link log
GPaCo ResNet-50 79.7 50.9 64.4 41.1 30.9 download download
CE ViT-B 83.6 39.1 49.9 49.9 36.1 --- download
CE ViT-L 85.7 32.4 41.4 60.3 45.5 --- download
multi-task ViT-B 83.4 --- --- --- --- --- download
GPaCo ViT-B 84.0 37.2 47.3 51.7 39.4 download download
GPaCo ViT-L 86.0 30.7 39.0 60.3 48.3 download download

CIFAR Classification

Method Model Top-1 Acc(%) link log
multi-task ResNet-50 79.1 --- download
GPaCo ResNet-50 80.3 --- download

Long-tailed Recognition

ImageNet-LT

Method Model Top-1 Acc(%) link log
GPaCo ResNet-50 58.5 download download
GPaCo ResNeXt-50 58.9 download download
GPaCo ResNeXt-101 60.8 download download
GPaCo ensemble (2-ResNeXt-101) 63.2 --- ---

iNaturalist 2018

Method Model Top-1 Acc(%) link log
GPaCo ResNet-50 75.4 download download
GPaCo ResNet-152 78.1 --- download
GPaCo ensemble (2-ResNet-152) 79.8 --- ---

Places-LT

Method Model Top-1 Acc(%) link log
GPaCo ResNet-152 41.7 download download

Semantic Segmentation

Method Dataset Model mIoU (s.s.) mIoU (m.s.) link log
GPaCo ADE20K Swin-T 45.4 46.8 --- download
GPaCo ADE20K Swin-B 51.6 53.2 --- download
GPaCo ADE20K Swin-L 52.8 54.3 --- download
GPaCo COCO-Stuff ResNet-50 37.0 37.9 --- download
GPaCo COCO-Stuff ResNet-101 38.8 40.1 --- download
GPaCo Pascal Context 59 ResNet-50 51.9 53.7 --- download
GPaCo Pascal Context 59 ResNet-101 54.2 56.3 --- download
GPaCo Cityscapes ResNet-18 78.1 79.7 --- download
GPaCo Cityscapes ResNet-50 80.8 82.0 --- download
GPaCo Cityscapes ResNet-101 81.4 82.1 --- download

Get Started

Environments

We use Python 3.8, PyTorch 1.8.1, mmcv 1.3.13, and timm 0.3.2. Our code is based on PaCo, MAE, and mmseg.
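As a quick sanity check for these version pins, a throwaway snippet like the one below can be run before training. This is a convenience sketch, not a script shipped with the repository:

    import sys
    import mmcv
    import timm
    import torch

    # Expected per the note above: Python 3.8, PyTorch 1.8.1,
    # mmcv 1.3.13, timm 0.3.2.
    print("python:", sys.version.split()[0])
    print("torch:", torch.__version__)
    print("mmcv:", mmcv.__version__)
    print("timm:", timm.__version__)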

Train and Evaluation Scripts

On full ImageNet and OOD robustness,

We use 8 NVIDIA GeForce RTX 3090 GPUs. MAE-pretrained models should be downloaded from here.

cd GPaCo/LT
bash sh/ImageNet/train_resnet50.sh
bash sh/ImageNet/eval_resnet50.sh

cd GPaCo/MAE-ViTs
bash sh/finetune_base_mae.sh
bash sh/finetune_base_mae_multitask.sh
bash sh/finetune_base_mae_gpaco.sh
bash sh/finetune_base_mae_gpaco_eval.sh

On imbalanced data,

cd GPaCo/LT
bash sh/LT/ImageNetLT_train_X50_multitask.sh
bash sh/LT/ImageNetLT_train_X50.sh
bash sh/LT/ImageNetLT_eval_X50.sh

bash sh/LT/Inat_train_R50.sh
bash sh/LT/Inat_eval_R50.sh

bash sh/LT/PlacesLT_train_R152.sh
bash sh/LT/PlacesLT_eval_R152.sh

On semantic segmentation,

cd GPaCo/Seg/semseg
bash sh/ablation_paco_ade20k/upernet_swinbase_160k_ade20k_paco.sh
bash sh/ablation_paco_coco10k/r50_deeplabv3plus_40k_coco10k_paco.sh
bash sh/ablation_paco_context/r50_deeplabv3plus_40k_context_paco.sh
bash sh/ablation_paco_cityscapes/r50_deeplabv3plus_40k_context.sh

Parametric-Contrastive-Learning

This repository contains the implementation code for the ICCV 2021 paper:
Parametric Contrastive Learning (https://arxiv.org/abs/2107.12028)

Overview

In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Our theoretical analysis shows that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. We introduce a set of parametric, class-wise learnable centers to rebalance it from an optimization perspective. We further analyze the PaCo loss under a balanced setting. The analysis demonstrates that PaCo adaptively intensifies the pushing of same-class samples closer together as more samples are pulled toward their corresponding centers, which benefits hard-example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 establish a new state of the art for long-tailed recognition. On full ImageNet, models trained with the PaCo loss surpass supervised contrastive learning across various ResNet backbones.
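To make the mechanics concrete, below is a minimal PyTorch sketch of a PaCo-style loss: learnable class centers are appended to the contrast set, and sample-to-sample positives are down-weighted by alpha relative to the sample's own class center. The names alpha and temperature follow the paper, but the exact logit construction and weighting in the released code may differ, so treat this as an illustration rather than the repository's implementation:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PaCoStyleLoss(nn.Module):
        """Illustrative PaCo-style loss; not the repository's exact code."""

        def __init__(self, num_classes, feat_dim, alpha=0.05, temperature=0.07):
            super().__init__()
            # One learnable center per class, contrasted like extra "samples".
            self.centers = nn.Parameter(0.01 * torch.randn(num_classes, feat_dim))
            self.alpha = alpha
            self.t = temperature

        def forward(self, query, labels, queue_feats, queue_labels):
            # query: (B, D) L2-normalized embeddings; queue_feats: (K, D).
            logits = torch.cat([query @ queue_feats.T,
                                query @ self.centers.T], dim=1) / self.t  # (B, K+C)

            # Positives: same-class queue samples (weight alpha) plus the
            # sample's own class center (weight 1), normalized per sample.
            pos_samples = (labels[:, None] == queue_labels[None, :]).float()
            pos_center = F.one_hot(labels, self.centers.shape[0]).float()
            weights = torch.cat([self.alpha * pos_samples, pos_center], dim=1)
            weights = weights / weights.sum(dim=1, keepdim=True)

            # Weighted cross-entropy over the whole contrast set.
            return -(weights * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

With a small alpha, the loss leans on the parametric centers and the sample-to-sample term shrinks, which is the rebalancing behavior described above; as alpha grows, the loss moves back toward plain supervised contrastive learning.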

Results and Pretrained models

Full ImageNet (Balanced setting)

Method Model Top-1 Acc(%) link log
PaCo ResNet-50 79.3 download download
PaCo ResNet-101 80.9 download download
PaCo ResNet-200 81.8 download download

ImageNet-LT (Imbalanced setting)

Method Model Top-1 Acc(%) link log
PaCo ResNet-50 57.0 download download
PaCo ResNeXt-50 58.2 download download
PaCo ResNeXt-101 60.0 download download

iNaturalist 2018 (Imbalanced setting)

Method Model Top-1 Acc(%) link log
PaCo ResNet-50 73.2 TBD download
PaCo ResNet-152 75.2 TBD download

Places-LT (Imbalanced setting)

Method Model Top-1 Acc(%) link log
PaCo ResNet-152 41.2 TBD download

Get Started

The following scripts cover training and evaluation on full ImageNet, ImageNet-LT, iNaturalist 2018, and Places-LT. Note that PyTorch >= 1.6 is required. All experiments are conducted on 4 GPUs. If you have more GPU resources, make sure the learning rate is scaled linearly with the global batch size; 32 images per GPU are recommended.
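For reference, here is a sketch of the linear scaling rule mentioned above; base_lr stands in for whatever value the 4-GPU scripts set, and only the batch-size ratio matters:

    def scaled_lr(base_lr, base_gpus=4, new_gpus=8, images_per_gpu=32):
        """Scale the learning rate linearly with the global batch size."""
        return base_lr * (new_gpus * images_per_gpu) / (base_gpus * images_per_gpu)

    # Example: going from 4 to 8 GPUs doubles the global batch size,
    # so the learning rate should also double.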

cd Full-ImageNet
bash sh/train_resnet50.sh
bash sh/eval_resnet50.sh

cd LT
bash sh/ImageNetLT_train_R50.sh
bash sh/ImageNetLT_eval_R50.sh
bash sh/PlacesLT_train_R152.sh
bash sh/PlacesLT_eval_R152.sh

cd LT
bash sh/CIFAR100_train_imb0.1.sh

Contact

If you have any questions, feel free to contact us through email ([email protected]) or GitHub issues. Enjoy!

BibTex

If you find this code or idea useful, please consider citing our work:

@article{10130611,
  author={Cui, Jiequan and Zhong, Zhisheng and Tian, Zhuotao and Liu, Shu and Yu, Bei and Jia, Jiaya},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Generalized Parametric Contrastive Learning},
  year={2023},
  pages={1-12},
  doi={10.1109/TPAMI.2023.3278694}
}


@inproceedings{cui2021parametric,
  title={Parametric Contrastive Learning},
  author={Cui, Jiequan and Zhong, Zhisheng and Liu, Shu and Yu, Bei and Jia, Jiaya},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={715--724},
  year={2021}
}

@article{9774921,
  author={Cui, Jiequan and Liu, Shu and Tian, Zhuotao and Zhong, Zhisheng and Jia, Jiaya},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={ResLT: Residual Learning for Long-Tailed Recognition},
  year={2023},
  volume={45},
  number={3},
  pages={3695-3706},
  doi={10.1109/TPAMI.2022.3174892}
}

  
@article{cui2022region,
  title={Region Rebalance for Long-Tailed Semantic Segmentation},
  author={Cui, Jiequan and Yuan, Yuhui and Zhong, Zhisheng and Tian, Zhuotao and Hu, Han and Lin, Stephen and Jia, Jiaya},
  journal={arXiv preprint arXiv:2204.01969},
  year={2022}
}
  
@article{zhong2023understanding,
  title={Understanding Imbalanced Semantic Segmentation Through Neural Collapse},
  author={Zhong, Zhisheng and Cui, Jiequan and Yang, Yibo and Wu, Xiaoyang and Qi, Xiaojuan and Zhang, Xiangyu and Jia, Jiaya},
  journal={arXiv preprint arXiv:2301.01100},
  year={2023}
}

parametric-contrastive-learning's Issues

Question about supcon framework

Hi, thanks for the great work. I have a question about the choice of framework: since you use the supervised contrastive loss, why did you adopt the MoCo framework rather than the framework from SupCon?

About learnable centers in Paco-loss

In your paper, you mention that when computing the PaCo loss for a sample xi, the learnable centers cj, j = 1...m, are also included as positive/negative samples; moreover, centers treated as positives carry a different weight than positives that are data samples rather than centers.
However, I checked the code of GPaCo and PaCo and found no use of the centers in PacoLoss.
When reproducing your work, I found that taking the centers into account actually hurts model performance badly.
Could you explain the reason? I am quite puzzled by this issue.

About the results of SupCon

Hi, I notice that compared with FCL (which is similar to SupCon) in "Exploring Balanced Feature Spaces for Representation Learning", the SupCon result given in your paper is much lower. What is the reason?

The meaning of beta and gamma in the code

    parser.add_argument('--beta', default=1.0, type=float,
                        help='supervise loss weight')
    parser.add_argument('--gamma', default=1.0, type=float,
                        help='paco loss')

I found these two parameters in the code, and they are used in losses.py, but I cannot understand how they are used. Are they mentioned in the paper?
If you could explain them to me, I would greatly appreciate it. Thank you!
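For readers hitting the same question: judging only from the help strings quoted above, one plausible reading (an assumption, not confirmed against losses.py) is that the two flags weight the supervised term and the PaCo contrastive term of the total loss:

    # Hypothetical reading of the flags above; check losses.py in the
    # repository for the actual usage.
    def total_loss(supervised_loss, paco_loss, beta=1.0, gamma=1.0):
        return beta * supervised_loss + gamma * paco_loss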

Where to download the pretrained model?

Hi,

Thanks for your work. I found that there is a requirement for pretrained_models/resnet152-394f9c45.pth when I run the PlacesLT_train_R152.sh. Could you share the link to download the pretrained model?

Best Regards,
Hongxin

The checkpoint on iNat2018

Thank you guys for your impressive work and releasing the code. I just wonder when will the checkpoint on iNat2018 be released? I'm looking forward to that. Thanks a lot!

Unable to reproduce numbers on ImageNet-LT from the given GPaCo ResNet-50 checkpoint

We used the code below to load gpaco_r50_imagenetlt.pth.tar into the model and evaluated it on ImageNet-LT. We made sure to use the correct moco builder files and the appropriate parameters given in this repo. However, the model gives near-zero accuracy on ImageNet-LT.

We were able to load the parameters from the checkpoint into the model successfully, but we are unable to pinpoint the reason for the degraded accuracy and would appreciate your help.

    if 'paco' in args.path:
        # PaCo/GPaCo checkpoints wrap the backbone in the MoCo builder.
        model = moco.builder.MoCo(
            models.__dict__[args.model],
            args.moco_dim, args.moco_k, args.moco_m, args.moco_t,
            args.mlp, args.feat_dim, num_classes=1000)
    else:
        model = models.__dict__[args.model](
            num_classes=args.nb_classes, use_norm=args.use_norm)

    if args.path.startswith('https'):
        checkpoint = torch.hub.load_state_dict_from_url(
            args.path, map_location='cpu', check_hash=True)
    else:
        print("[INFORMATION] Loading teacher model from path", args.path)
        checkpoint = torch.load(args.path, map_location='cuda:0')

    if 'paco' in args.path:
        # Wrap in DDP first so the 'module.' prefix in the checkpoint
        # keys matches the wrapped model's state_dict.
        model.to(device)
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[0], find_unused_parameters=True)
        model.load_state_dict(checkpoint['state_dict'])
    else:
        model.load_state_dict(
            checkpoint['model'] if 'model' in checkpoint else checkpoint['state_dict'])

    model.to(device)
    model.eval()
bias/normalisation

Hello.
I have one remark

  • The "bias" term is not mentionned in your paper but it seems to appear when you compare centers with representations. Moreover there is no normalisation in the previous case.

If my previous comment is correct, can you explain this choice. Thanks in advance

Question about inference function in moco/builder.py

Hello, I'm impressed with your work and thank you for sharing the codes.

I have a question about the inference function in LT/moco/builder.py!
In the code below, you are not using q; instead you use self.feat_after_avg_q as the input to linear.
If so, the first two lines of the code seem unnecessary.
Or should self.feat_after_avg_q be changed to q?
Can you check this?
Thank you in advance! :)

    def _inference(self, image):
        q = self.encoder_q(image)
        q = nn.functional.normalize(q, dim=1)
        encoder_q_logits = self.linear(self.feat_after_avg_q)
        return encoder_q_logits

Questions about center learning

Hi, thanks for your exciting work.
I have a question about the center learning: I could not find the corresponding function in the paper, nor any comment about the centers in the code.
Could you give any clue about how we can learn the parametric centers?

I am looking forward to your reply.

Why not use NormedLinear_Classifier?

Hi, thanks for your work!

I noticed that you didn't use NormedLinear_Classifier; in your model you just use an nn.Linear.

What's the reason?

Thanks!

Questions about experiments

Thanks for your ICCV work. However, I noticed that you directly use the test set of ImageNet-LT to select the best models, which may lead to overfitting in practice and seems unfair to the compared methods. Could you please provide results on ImageNet-LT when the validation set is used to select models? It would make it easier for us to compare with PaCo in our work. Thanks very much.

Different resnet backbones

Hi, thanks for the great work. I notice that you use different ResNet backbones for CIFAR training (loaded from resnet_cifar) and ImageNet training (loaded from resnet_imagenet). Is there a reason why you use different backbones?

Questions about NormedLinear_Classifier

Thanks for your great work. I am trying to use NormedLinear_Classifier since I may modify it later, but the results are not good so far. Could you please tell me the hyperparameters you used in training, such as the lr and the supt in the loss?

where to get the ImageNet_LT data listed in the .txt files?

May I ask where I can download the data corresponding to the items listed in ImageNet_LT_train.txt, ImageNet_LT_val.txt, and ImageNet_LT_test.txt?
I have downloaded several versions from the original ImageNet site, but none of them fully matches the items listed in the .txt files.
Thanks in advance for the answer.

About Batch Size

Thank you very much for your work. I am currently trying to reproduce this work and found that training is relatively slow with a batch size of 128, so I want to increase the batch size to speed up training. However, the batch size is constrained by the moco-k parameter. Can I modify moco-k arbitrarily, and will this affect the results?
Looking forward to your reply.
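Context for this constraint: in the official MoCo implementation that PaCo builds on, the queue update replaces exactly one batch of keys at a moving pointer, so the queue size must be an integer multiple of the batch size. A minimal check of this constraint (a sketch, not the repository's code):

    def check_moco_queue(moco_k, batch_size):
        # MoCo enqueues one batch of keys per step at a moving pointer,
        # which only works if moco_k is a multiple of batch_size.
        if moco_k % batch_size != 0:
            raise ValueError(
                f"moco-k ({moco_k}) must be divisible by batch size ({batch_size})")

    check_moco_queue(moco_k=8192, batch_size=128)  # OK: 8192 % 128 == 0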

Question about CIFAR-LT experiment

Hi! thank you for your interesting work!

I have a question about the hyperparameters in the CIFAR-LT experiments.

The paper explains alpha and the MoCo temperature, but there is no explanation of the remaining hyperparameters (beta, gamma, augmentation strategy). Can you please provide them?

Questions about tau-norm with randaugment

Thanks for your exciting work!
Tau-norm with RandAugment performs very well, as shown in Table 3 and Table 5. I wonder about its implementation: do you just use augmentation_randncls as the train_transform in stage-1 training?
