jh-lee-kr / l2p-pytorch

PyTorch Implementation of Learning to Prompt (L2P) for Continual Learning @ CVPR22

License: Apache License 2.0

Python 99.31% Shell 0.69%
continual-learning deep-learning incremental-learning pytorch-implemention


l2p-pytorch's People

Contributors: jh-lee-kr

l2p-pytorch's Issues

vit_base_patch16_224

Hello!
Thank you so much for implementing the PyTorch version of L2P!
When downloading the pretrained model, I ran into the problem shown below. Has anyone else encountered something similar?
[screenshot: Snipaste_2023-04-24_21-29-49]
Thank you for your work! Looking forward to your reply, best wishes!
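Since the screenshot itself is not recoverable here, a hedged guess: errors at this step are usually failures to download the pretrained weights. A minimal way to test the download in isolation (the repo builds its backbone through a timm-style create_model call; the snippet is illustrative):

    import timm

    # If this line fails, the problem is fetching the vit_base_patch16_224
    # checkpoint (network/proxy issues), not the L2P code itself.
    model = timm.create_model('vit_base_patch16_224', pretrained=True)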

Question about loss function

In engine.py, line 68, the loss is computed as

    loss = loss - args.pull_constraint_coeff * output['reduce_sim']

but in the paper, shouldn't it be

    loss = loss + args.pull_constraint_coeff * output['reduce_sim']?
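A hedged note on the sign: in the paper the surrogate term is a matching distance that is added to the loss, while reduce_sim in this code is a cosine similarity, so subtracting it differs from the paper's form only by a constant. A minimal sketch, assuming reduce_sim is the batch-averaged query-key cosine similarity (function and variable names are illustrative):

    import torch
    import torch.nn.functional as F

    def surrogate_terms(query, selected_keys):
        # query: [B, D] features; selected_keys: [B, top_k, D] chosen prompt keys
        sim = F.cosine_similarity(query.unsqueeze(1), selected_keys, dim=-1)  # [B, top_k]
        reduce_sim = sim.sum() / query.shape[0]           # similarity form (code)
        distance_term = (1 - sim).sum() / query.shape[0]  # distance form (paper)
        # CE + coeff * distance_term == CE - coeff * reduce_sim + constant,
        # so the gradients of the two objectives are identical.
        return reduce_sim, distance_term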

Question regarding prompt selection

Hi!

Thank you for implementing the PyTorch version of L2P!
While running the code on the CIFAR-100 dataset, I find that, across all tasks, only the prompts with indices 0, 4, 5, 8, and 9 are ever selected.
However, if the same subset of prompts is selected for every task, those prompts are updated by each task, so wouldn't this still cause catastrophic forgetting? Do you have an idea of why this happens, and why L2P nevertheless seems to suffer much less forgetting?

Thank you!
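For reference, a minimal sketch of the selection step under discussion (names are illustrative): L2P picks the top-k prompt keys by cosine similarity between learnable keys and a query feature from the frozen backbone.

    import torch
    import torch.nn.functional as F

    def select_prompts(query, prompt_keys, top_k=5):
        # query: [B, D] [CLS] feature from the frozen ViT
        # prompt_keys: [pool_size, D] learnable keys
        sim = F.normalize(query, dim=1) @ F.normalize(prompt_keys, dim=1).t()  # [B, pool_size]
        _, idx = sim.topk(top_k, dim=1)  # indices of prompts to prepend
        return idx

Because only the selected keys receive gradients (and are pulled toward the queries), early winners tend to keep winning, which is one plausible reason a fixed subset comes to dominate.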

Reproducing L2P (freeze layers & shuffle argument)

Hello. Thank you for the PyTorch implementation of L2P. I have a question about reproducing L2P on CIFAR-100. In the paper, the frozen layers are blocks, patch_embed, and cls_token, but the --freeze argument in your code includes blocks, patch_embed, cls_token, pos_embed, and norm. I implemented it following the paper's setting (with shuffle also set to False), but the performance does not match that of the GitHub code. Do you have any ideas or suggestions?
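For context, a minimal sketch of freezing by parameter-name prefix, which is how a --freeze list like this is typically applied (hedged; the exact loop in the repo may differ):

    # Dropping 'pos_embed' and 'norm' from this list reproduces the paper's setting.
    freeze = ['blocks', 'patch_embed', 'cls_token', 'pos_embed', 'norm']
    for name, param in model.named_parameters():
        if name.startswith(tuple(freeze)):
            param.requires_grad = False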

Question about the classification head layer

Hi, I am very interested in your work, and your reproduction contributes a lot to the incremental-learning community.

While studying the code, I found that the classification layer is only updated on the first task and no longer updated on subsequent tasks. Is this normal? Is the official code also set up like this? Looking forward to your answer.
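One way to check what is actually trainable on each task (a hedged debugging sketch, not part of the repo):

    # Print the parameters that still receive gradients after the freeze
    # logic runs, to verify whether the classification head stays trainable.
    for name, param in model.named_parameters():
        if param.requires_grad:
            print(name, tuple(param.shape))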

Diversifying prompt-selection

Thank you very much for your PyTorch implementation of L2P!
I have a question about prompt selection.
In the paper, a prompt-frequency-based weight is used to select diverse prompts, but I can't find that part in the code.
I can't find it in the official JAX-based code either, so could you let me know if there's anything I'm missing?

Thank you very much for your work!!!
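For reference, one plausible reading of the paper's frequency penalty (hedged; as the question notes, neither codebase appears to ship it, and all names below are illustrative):

    # dist: [B, pool_size] cosine distances between query and keys;
    # counts: [pool_size] running count of how often each prompt was selected.
    freq = counts / counts.sum().clamp(min=1)             # normalized frequency h
    penalized = dist * freq                               # rarely used prompts look closer
    _, idx = penalized.topk(top_k, dim=1, largest=False)  # pick smallest penalized distance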

The prompt parameters for five_datasets in the PyTorch implementation differ from those given in the paper

Hi!
Thank you so much for implementing the PyTorch version of L2P!
I have recently been trying to reproduce the "five_datasets" result with the PyTorch implementation you provided, but I noticed that its prompt parameters differ from those given in the paper: the paper reports a prompt length of 5, while the code uses 10. Will this affect the final experimental results?

Thank you for your work! Looking forward to your reply, best wishes!

Loss is NaN.

Hi @Lee-JH-KR,

Thank you for releasing the PyTorch version of L2P.

I am getting the following error. Do you have any suggestions for it?

Loss is nan, stopping training.

I really appreciate any help you can provide.
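For context: that message comes from a finiteness guard of the kind used in DeiT-style training loops, which this repo follows. A hedged sketch of the guard and the usual first mitigations, assuming it runs inside the training loop where loss and model are in scope (illustrative values, not a confirmed fix):

    import math
    import sys

    import torch

    # Guard: abort instead of continuing to train on a NaN/inf loss.
    if not math.isfinite(loss.item()):
        print("Loss is {}, stopping training".format(loss.item()))
        sys.exit(1)

    # Usual first mitigations: clip gradients and/or lower the learning rate.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)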

How to use the rehearsal buffer?

Hi, amazing work!

I noticed in the paper that L2P can use a rehearsal buffer to further improve performance, but the repository doesn't seem to include an implementation of this part. I have a few questions about it (a minimal buffer sketch follows this issue):
(1) Random sampling or herding sampling?
(2) Besides putting old samples in the dataloader, are any other techniques used, such as distillation or balanced fine-tuning?
(3) Will an official implementation of this part be added to the code later?
(4) Can the rehearsal buffer further improve the performance of DualPrompt as well?

Looking forward to your reply, best wishes!
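As referenced above, a minimal sketch of a random-sampling rehearsal buffer using reservoir sampling (one plausible choice; the paper does not pin down random vs. herding, and the repo ships no buffer, so all names are illustrative):

    import random

    class RehearsalBuffer:
        """Fixed-capacity buffer holding a uniform random subset of the stream."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.data = []   # list of (image, label) pairs
            self.seen = 0

        def add(self, x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((x, y))
            else:
                j = random.randrange(self.seen)  # reservoir sampling
                if j < self.capacity:
                    self.data[j] = (x, y)

        def sample(self, batch_size):
            return random.sample(self.data, min(batch_size, len(self.data)))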

Doubts regarding transferring previously learned prompt params to the new prompt

Hi @JH-LEE-KR, thanks for this amazing PyTorch implementation of L2P. I have the following doubts about the code (hedged sketches of both pieces follow this list):

  1. In engine.py > train_and_evaluate(): "Transfer previous learned prompt params to the new prompt." I am confused about this: the top_k prompts used for any task will overlap, since there aren't enough dedicated (mutually exclusive) prompts for each task. So why are we copying the prompt weights from prev_idx to cur_idx?
    model.prompt.prompt[cur_idx] = model.prompt.prompt[prev_idx]
    Based on my understanding, if the prompt pool size is 10, then those 10 prompts are common/shared across all tasks, and at every training batch the top k (5 prompts) get updated based on the query function. Kindly help me understand this.

  2. Regarding the usage of train_mask and class_mask:
    Doesn't L2P initialize its own classifier for every new task (covering the union of all classes seen up to that task)? Then why do we need to mask out certain classes just before the loss computation?
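As referenced above, two hedged sketches. First, the transfer step: it is tied to the --shared_prompt_pool setting, under one plausible reading of which the pool is sliced per task (pool_size = top_k * num_tasks), so each new task warm-starts its own slice from the previous task's learned prompts rather than sharing overlapping top_k selections:

    # Warm-start task `task_id`'s slice of the pool from the previous task's
    # (slice bounds are assumptions based on the shared-pool reading above).
    prev_idx = slice(args.top_k * (task_id - 1), args.top_k * task_id)
    cur_idx = slice(args.top_k * task_id, args.top_k * (task_id + 1))
    with torch.no_grad():
        model.prompt.prompt[cur_idx] = model.prompt.prompt[prev_idx]

Second, the class masking: L2P keeps a single shared head over all classes rather than one classifier per task, and class_mask[task_id] (assumed to hold the class ids of the current task) is used to push the logits of all other classes to -inf before cross-entropy, so those logits receive no gradient from the current task:

    import torch
    import torch.nn.functional as F

    # logits: [B, num_classes]; targets: [B]; class_mask[task_id]: list of class ids.
    not_mask = torch.ones(logits.shape[1], dtype=torch.bool, device=logits.device)
    not_mask[class_mask[task_id]] = False   # keep only the current task's classes
    masked_logits = logits.masked_fill(not_mask, float('-inf'))
    loss = F.cross_entropy(masked_logits, targets)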
