Comments (20)
@YiLiangNie Hi, did you train the model?
from recurrent-attention-cnn.
@super-wcg I froze the APN nets, randomly initialized 4 classification layers (for the 3 scales and a fusion scale), then fine-tuned the model from the given one with a 1e-4 learning rate, but I only got 81.2% on CUB. If I froze all the APN and conv layers and trained only the classification layers, I got 83.5%, not the 85% reported in the paper. What about you?
@chenbinghui1 How do I do this? I need your help to run the project.
- Can this project run on Linux?
- I have downloaded the dataset "CUB_200_2011", but I don't know what to do next.
If you could write a detailed doc, that would be much better. My e-mail address is [email protected]
I really need your help. Thank you very much!
@chenfeima Here is what I did in order to train the network.
If I understand the paper correctly, the training consists of three steps. I skipped the initialization with the VGG weights and the reinforcement-learning part, because reinforcement-learning algorithms are not part of Caffe.
So I created three different train_val.prototxt files.
I used the first one to train the scaling subnets and froze all other layers.
The second one is used to train the attention proposal networks with a ranking loss until convergence. A sample implementation of this layer can be found here: https://github.com/wanji/caffe-sl/blob/master/src/caffe/layers/pairwise_ranking_loss_layer.cpp
In this second stage all scaling layers are frozen.
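The inter-scale ranking loss from the paper can be sketched in NumPy as a hinge-style loss; this is a hedged illustration of the idea, not the exact Caffe layer, and the `margin` value is an assumed hyperparameter:

```python
import numpy as np

def pairwise_ranking_loss(p_coarse, p_fine, margin=0.05):
    """Hinge-style ranking loss: encourage the finer scale to assign a
    higher true-class probability than the coarser scale.
    p_coarse, p_fine: arrays of true-class probabilities per sample."""
    losses = np.maximum(0.0, p_coarse - p_fine + margin)
    return losses.mean()

# First sample: fine scale already beats coarse by more than the margin,
# so only the second sample contributes. Mean loss = 0.05.
loss = pairwise_ranking_loss(np.array([0.3, 0.6]), np.array([0.5, 0.55]))
```

Minimizing this pushes each finer scale to outperform the previous one on the true class, which is what drives the APN to crop discriminative regions.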
In the final training stage all other layers are frozen and the final output layers of the network, which combine the outputs of the different scales, are trained.
I made a gist with the training prototxt files belonging to these three training steps. You may use it as a starting point.
https://gist.github.com/jens25/6b0ea1143599fb99bd499a08dd5c072c
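Freezing layers per stage is typically done in Caffe by setting the learning-rate multipliers to zero in the corresponding train_val.prototxt. A minimal sketch (the layer name and sizes here are illustrative, not taken from the gist):

```protobuf
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  # lr_mult: 0 freezes the weights (first param) and bias (second param)
  # for this training stage; decay_mult: 0 disables weight decay on them.
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
```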
@jens25 I have some questions. (1) What are your final results? Are they close to 85%? (2) As shown in your prototxt, the final classifier for each scale is a 100-class classifier, while CUB has 200 classes. (3) I think directly fine-tuning the given model with all layers frozen except the classifier layers, i.e. your stage 3, should give a result close to 85%, while in fact it only reaches 83%.
@chenbinghui1 I don't have any results. I just created the prototxt files in order to train the network on a custom dataset. I haven't evaluated it on the bird dataset yet. Maybe direct fine-tuning of the model will give you better results than this approach.
@jens25 I downloaded the dataset "CUB_200_2011", but I cannot convert it to LMDB.
Can you give me a script for the conversion?
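Caffe's stock `convert_imageset` tool builds an LMDB from a plain `<path> <label>` list file, so the missing piece is generating that list from the standard CUB_200_2011 metadata files (`images.txt`, `image_class_labels.txt`, `train_test_split.txt`). A hedged sketch; the function name and paths are illustrative:

```python
import os

def make_cub_lists(root):
    """Parse CUB_200_2011 metadata into Caffe-style '<path> <label>' lists.
    Labels are shifted to start at 0, as Caffe expects."""
    def read_pairs(name):
        # Each metadata file has lines of the form "<image_id> <value>".
        with open(os.path.join(root, name)) as f:
            return dict(line.split() for line in f)

    images = read_pairs("images.txt")              # id -> relative path
    labels = read_pairs("image_class_labels.txt")  # id -> 1-based class
    splits = read_pairs("train_test_split.txt")    # id -> 1 train / 0 test

    train, test = [], []
    for img_id, rel_path in images.items():
        entry = "%s %d" % (rel_path, int(labels[img_id]) - 1)
        (train if splits[img_id] == "1" else test).append(entry)
    return train, test
```

After writing the lists to `train.txt` / `test.txt`, something like `convert_imageset --shuffle CUB_200_2011/images/ train.txt train_lmdb` should produce the LMDB.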
@chenbinghui1 I run the test net, also get the result 83%. Do you know why? Have you got the 85%?
@chenfeima If you only test the given model, it actually gives 85%. And if you fine-tune it (fine-tuning only the classifier layers), you get 83%, and I don't know why.
@jens25 Thank you very much!
@jens25 I want to know how to initialize the net. With a single VGG-19 I know `--weights=caffemodel`, but here there are 3 subnets. I don't know how to use the same caffemodel to initialize all of them.
@chenfeima Hello, have you solved the initialization problem? I initialized the network by setting up weight sharing in the train***.prototxt and saved the model right after initialization, before training. Then I used that caffemodel as my pre-trained model. Could anyone tell me whether this is correct?
@Zyj061 Using the Python interface: 1. read the caffemodels; 2. copy the params to the new caffemodel by layer name.
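The copy-by-name step can be sketched as follows. With real pycaffe you would read `caffe.Net(proto, weights, caffe.TEST).params` on both nets and copy blob data; here plain dicts stand in for the param blobs, and the `scale1_`/`scale2_`/`scale3_` prefixes are an assumed naming convention, not the repo's actual layer names:

```python
def copy_vgg_params(vgg_params, target_layer_names):
    """Map one set of VGG params onto renamed per-scale layers.
    A target layer named 'scale2_conv1_1' receives the weights of the
    VGG layer 'conv1_1', so all three subnets start from the same VGG."""
    prefixes = ("scale1_", "scale2_", "scale3_")
    out = {}
    for name in target_layer_names:
        for p in prefixes:
            if name.startswith(p):
                src = name[len(p):]  # strip the scale prefix
                if src in vgg_params:
                    out[name] = vgg_params[src]
                break
    return out
```

In pycaffe the assignment would be e.g. `net.params[name][0].data[...] = vgg.params[src][0].data` for the weights and index `[1]` for the bias, followed by `net.save(...)`.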
@chenbinghui1 How can I add the AttentionCrop layer and the rank loss in caffe.proto? I need some help with defining and compiling the message parameters in caffe.proto.
@jens25
Hi jens25, I have followed your steps, but it didn't work; the model just never converges.
I skipped the initialization with the VGG weights and used Adam with lr=1e-4. It ran on CUB_200_2011 for about 10 epochs with loss1, loss2, and loss3 hovering around 5.
What may cause this? Any suggestion will be appreciated.
@lhCheung1991 Are you also trying to reproduce the results? Maybe we can talk offline.
@ouceduxzk
I really appreciate your message. I noticed that you have put some effort into re-implementing this paper. I look forward to discussing the details with you.
Hi @chenbinghui1, could you please tell me how you prepared your test data when testing the pretrained RA-CNN model? I can only get 74% accuracy using the available pretrained model, and I don't know why.
@jackshaw Hi, can you leave me a contact (maybe QQ)? My email address is [email protected]
I have some trouble getting the training started.
Thank you very much!
@lhCheung1991 I met the same problem as you; the loss floats around 5.2. Do you know what causes the problem and how to solve it? Thank you very much!
Related Issues (20)
- Data augmentation? HOT 1
- Is the code and model link still available? HOT 6
- Anyone who achieved the reported performance? Ran successfully, 0.78 accuracy gained HOT 8
- Vanishing gradient issue in APN HOT 1
- Understanding the paper's pipeline HOT 10
- Implementation in pytorch HOT 22
- The question about training with rank loss HOT 2
- code and model HOT 6
- some problem about APN HOT 1
- Why can we not achieve the performance mentioned in the original paper?
- Missing Windows under RA_CNN_caffe folder
- Where can I download the original implementation code? HOT 2
- original implementation code
- where is the code? HOT 47
- Equation 7 in the paper confused me HOT 1
- AttentionCrop Layer: Where can we find it? HOT 4
- The effect of softmax loss and rank loss HOT 21
- Paper's VGG-19 accuracy question HOT 21
- Implement RA-CNN in tensorflow HOT 6