Comments (14)
Thanks for your interest.
For Swin, we use bottleneck = dim // 12, which gives a similar number of parameters to plain ViT.
from adaptformer.
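As a quick illustration of why dim // 12 matches the plain-ViT parameter budget, here is a hypothetical sketch. The Swin-B stage widths (128/256/512/1024) and the fixed mid_dim of 64 used for ViT are assumptions taken from the standard model configurations, not from this thread:

```python
# Hypothetical sketch: per-stage adapter bottleneck widths for Swin-B
# versus the single fixed width used for plain ViT.
swin_dims = [128, 256, 512, 1024]            # Swin-B channel dims per stage
swin_bottlenecks = [d // 12 for d in swin_dims]  # adapter width per stage
vit_bottleneck = 64                          # fixed adapter width for ViT

print(swin_bottlenecks)  # [10, 21, 42, 85]
```

Because Swin's channel width grows per stage, a fixed bottleneck would over- or under-parameterize some stages; scaling it with dim keeps the total adapter size comparable.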
Thank you for sharing. I'd like to know: the Swin pretrained weights are mainly for 224 or 384 inputs, but when I use Swin my input size is 1024 or 1120. If the pretrained weights are frozen and only AdaptMLP is trained, is the result still good? What input size did the authors use when applying this to Swin?
Hi, @LUO77123
Thanks for your interest. I'm sorry, I'm not sure I understand you correctly.
We use an input size of 224x224 for the Swin transformer; we did not experiment with other image sizes.
Hello, I mean using Swin as the backbone network for object detection. The input image size is then no longer the 224x224 or 384x384 used for the pretrained weights, but 1024x1024 or 1120x1120. If I freeze the pretrained weights and train only the unfrozen layers in the middle of AdaptMLP, will that work well?
For downstream tasks, please refer to #1. We will update the results for downstream tasks after finishing the experiments.
thanks
Hello, one last question. To apply AdaptMLP to a Swin detection backbone, should I build a new state dict that maps the 384x384 Swin pretrained weights onto the new network structure, then freeze the pretrained weights and train only the unfrozen layers in the middle of AdaptMLP? Is that the right procedure?
Yes, you are right.
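A minimal sketch of that procedure, assuming a standard PyTorch workflow (TinyBlock and its parameter names are hypothetical, not the repo's actual classes): build the adapter-augmented model, then load the original checkpoint with strict=False so the adapter weights, which the checkpoint lacks, keep their fresh initialization.

```python
import torch
import torch.nn as nn

# Hypothetical adapter-augmented block: "mlp" mimics a pretrained layer,
# "adapter_down"/"adapter_up" are the newly added adapter parameters.
class TinyBlock(nn.Module):
    def __init__(self, dim=8, mid_dim=2):
        super().__init__()
        self.mlp = nn.Linear(dim, dim)               # covered by the checkpoint
        self.adapter_down = nn.Linear(dim, mid_dim)  # new, not in checkpoint
        self.adapter_up = nn.Linear(mid_dim, dim)    # new, not in checkpoint

model = TinyBlock()
# Pretend checkpoint from the original, adapter-free model:
ckpt = {"mlp.weight": torch.zeros(8, 8), "mlp.bias": torch.zeros(8)}
# strict=False lets the load succeed; adapter keys are reported as missing,
# which is expected and harmless here.
missing, unexpected = model.load_state_dict(ckpt, strict=False)
print(sorted(missing))
```

The missing-key report is a useful sanity check: it should list exactly the adapter parameters and nothing from the pretrained backbone.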
OK, thank you, I'll try it out. Do you plan to open-source this downstream image-processing method in mid or late June?
Could you also tell me where the weight-freezing code is in your video-processing implementation? I was careless and didn't look carefully; could you point me to it so I can study it properly?
Here: https://github.com/ShoufaChen/AdaptFormer/blob/main/main_video.py#L340-L348
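The logic at those lines can be sketched roughly as follows; freeze_backbone and the "adapter" name filter are illustrative assumptions, not the repo's exact code:

```python
import torch.nn as nn

# Minimal sketch of name-based freezing: disable gradients everywhere,
# except for parameters whose name marks them as adapter weights.
def freeze_backbone(model: nn.Module, trainable_keyword: str = "adapter"):
    for name, param in model.named_parameters():
        param.requires_grad = trainable_keyword in name

model = nn.Sequential(nn.Linear(4, 4))           # stand-in pretrained layer
model.add_module("adapter_down", nn.Linear(4, 2))  # stand-in adapter layer
freeze_backbone(model)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the adapter parameters remain trainable
```

Only parameters with requires_grad=True should then be passed to the optimizer, which is also what keeps the tuned-parameter count small.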
Thanks, I have made the changes and the code runs, but I'm unsure about three values (mid_dim=64, dropout=drop, s=0.1). The experiments in the paper show mid_dim=64, and I set dropout to 0 by default; should s be 0.1 or 0? Could you clarify?
mid_dim is 64 for ViT and dim // 12 for the Swin transformer; dropout is 0 and s is 0.1.
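Putting those three values together, here is a hedged sketch of a bottleneck adapter branch; AdapterBranch is a hypothetical name, dim=768 assumes ViT-B, and the down-activation-up shape with an output scaled by s follows the description in this thread:

```python
import torch
import torch.nn as nn

class AdapterBranch(nn.Module):
    """Down-project, activate, (optionally) drop, up-project, scale by s."""
    def __init__(self, dim=768, mid_dim=64, dropout=0.0, s=0.1):
        super().__init__()
        self.down = nn.Linear(dim, mid_dim)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(dropout)
        self.up = nn.Linear(mid_dim, dim)
        self.s = s

    def forward(self, x):
        # The branch output is scaled by s before being added back
        # to the residual path by the surrounding block.
        return self.up(self.drop(self.act(self.down(x)))) * self.s

branch = AdapterBranch()
out = branch(torch.randn(2, 197, 768))  # (batch, tokens, dim)
print(out.shape)
```

The small s keeps the freshly initialized branch from perturbing the frozen pretrained features early in training.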
Hi, where should I set bottleneck = dim // 12? Thanks in advance!