Comments (20)

super-wcg commented on August 13, 2024

@YiLiangNie Hi, did you train the model?

from recurrent-attention-cnn.

chenbinghui1 commented on August 13, 2024

@super-wcg I froze the APN nets, randomly initialized four classification layers (one for each of the three scales plus a fusion scale), then fine-tuned the model from the given one with a 1e-4 learning rate, but I only got 81.2% on CUB. If I froze all the APN and conv layers and trained only the classification layers, I got 83.5%, not the 85% reported in the paper. What about you?


chenfeima commented on August 13, 2024

@chenbinghui1 How do I do this? I need your help to run the project.

  1. Can this project run on Linux?
  2. I have downloaded the dataset "CUB_200_2011", but I don't know what to do next.
     If you could write a detailed doc, that would be much better. My e-mail address is [email protected]
     I really need your help. Thank you very much!


jens25 commented on August 13, 2024

@chenfeima Here is what I did in order to train the network.
If I understand the paper correctly, the training consists of three steps. I skipped the initialization with the VGG weights and the reinforcement-learning part, because reinforcement-learning algorithms are not part of Caffe.
So I created three different train_val.prototxt files.
I used the first one to train the scaling subnets and froze all other layers.
The second one is used to train the attention proposal networks with a ranking loss until convergence. A sample implementation of this layer can be found here: https://github.com/wanji/caffe-sl/blob/master/src/caffe/layers/pairwise_ranking_loss_layer.cpp
In this second stage all scaling layers are frozen.
In the final training stage all layers are frozen except the final output layers of the network, which combine the outputs of the different scales; only those are trained.
I made a gist with the training prototxt files belonging to these three training steps. You may use it as a starting point.
https://gist.github.com/jens25/6b0ea1143599fb99bd499a08dd5c072c
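For reference, the inter-scale ranking loss from the paper (the same hinge form the linked caffe-sl layer implements) can be sketched in a few lines of numpy. The margin value here is an assumption you would tune:

```python
import numpy as np

def pairwise_ranking_loss(p_coarse, p_fine, margin=0.05):
    """Hinge-style ranking loss between adjacent scales: penalize the finer
    scale whenever its softmax probability for the true class does not beat
    the coarser scale's by at least `margin`.
    p_coarse / p_fine: true-class probabilities at scale s and s+1."""
    return np.maximum(0.0, p_coarse - p_fine + margin)
```

This is what pushes the APN to crop regions where the finer scale becomes more confident than the coarser one; the APN parameters receive the gradient of this loss.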


chenbinghui1 commented on August 13, 2024

@jens25 I have some questions. (1) What is your final result? Is it close to 85%? (2) As shown in your prototxt, the final classifier for each scale is a 100-class classifier, while CUB has 200 classes. (3) I would think that directly fine-tuning the given model with all layers frozen except the classifier layers, i.e. your stage 3, should give a result close to 85%, yet in fact it only reaches 83%.


jens25 commented on August 13, 2024

@chenbinghui1 I don't have any results. I just created the prototxt files in order to train the network on a custom dataset; I haven't evaluated it on the bird dataset yet. Maybe direct fine-tuning of the model will give you better results than this approach.


chenfeima commented on August 13, 2024

@jens25 I downloaded the dataset "CUB_200_2011", but I cannot transform it into LMDB.
Can you give me a script for the conversion?
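In case it helps others: Caffe ships a `convert_imageset` tool that builds an LMDB from a root folder plus a "path label" list file. A minimal sketch for generating those list files from the standard CUB_200_2011 metadata (file names follow the official dataset layout; paths here are assumptions to adjust):

```python
import os

def build_lists(cub_root):
    """Return (train_lines, test_lines), each entry being
    '<relative image path> <0-based label>', built from the standard
    CUB_200_2011 metadata files."""
    def read_pairs(name):
        # Each metadata file holds "<image_id> <value>" per line.
        with open(os.path.join(cub_root, name)) as f:
            return dict(line.split() for line in f)

    paths = read_pairs("images.txt")
    labels = read_pairs("image_class_labels.txt")
    is_train = read_pairs("train_test_split.txt")

    train, test = [], []
    for img_id, rel_path in sorted(paths.items()):
        # Caffe expects 0-based labels; CUB labels are 1-based.
        line = "%s %d" % (rel_path, int(labels[img_id]) - 1)
        (train if is_train[img_id] == "1" else test).append(line)
    return train, test

if __name__ == "__main__":
    for name, lines in zip(("train.txt", "test.txt"), build_lists("CUB_200_2011")):
        with open(name, "w") as f:
            f.write("\n".join(lines) + "\n")
```

The resulting lists can then be fed to the tool, roughly `convert_imageset --shuffle CUB_200_2011/images/ train.txt cub_train_lmdb` (resize flags such as `--resize_width`/`--resize_height` can be added as needed).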


chenfeima commented on August 13, 2024

@chenbinghui1 I ran the test net and also got 83%. Do you know why? Have you reached 85%?


chenbinghui1 commented on August 13, 2024

@chenfeima If you only test the given model, it will indeed give 85%. But if you fine-tune it (only the classifier layers), you will get 83%, and I don't know why.


chenfeima commented on August 13, 2024

@jens25 Thank you very much!


chenfeima commented on August 13, 2024

@jens25 I want to know how to initialize the net. With a single VGG-19 I know I can use "--weights=caffemodel", but here there are three subnets. I don't know how to use the same caffemodel to initialize all of them.


Zyj061 commented on August 13, 2024

@chenfeima Hello, have you solved the initialization problem? I initialized the network by setting up weight sharing in the train***.prototxt, and saved the model right after initialization, before training. Then I used that caffemodel as my pre-trained model. Could anyone tell me whether this is correct?
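For what it's worth, Caffe supports this kind of sharing directly: layers whose param blobs are given the same `name` share (and co-train) the same weights. A hedged sketch of what that looks like in a train prototxt (all layer and blob names here are made up):

```protobuf
# Two conv layers at different scales sharing one weight/bias pair.
layer {
  name: "conv1_1_scale1"
  type: "Convolution"
  bottom: "data_scale1"
  top: "conv1_1_scale1"
  param { name: "conv1_1_w" lr_mult: 1 }   # shared weights
  param { name: "conv1_1_b" lr_mult: 2 }   # shared biases
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
layer {
  name: "conv1_1_scale2"
  type: "Convolution"
  bottom: "data_scale2"
  top: "conv1_1_scale2"
  param { name: "conv1_1_w" lr_mult: 1 }   # same param names -> same blobs
  param { name: "conv1_1_b" lr_mult: 2 }
  convolution_param { num_output: 64 kernel_size: 3 pad: 1 }
}
```

Whether the scales should actually share weights (versus just share initialization) depends on your reading of the paper, so saving a snapshot after initialization and then unsharing is also a reasonable approach.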


chenfeima commented on August 13, 2024

@Zyj061 Using the Python interface: 1. read the caffemodels; 2. copy the params into the new caffemodel by layer name.
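A sketch of that copy-by-name step (the `_scaleN` suffix scheme is an assumption; match it to the layer names in your own prototxt). The helper works on plain mappings of layer name to a list of blobs, which is exactly what pycaffe's `net.params` provides:

```python
def copy_shared_params(src_params, dst_params, suffixes=("", "_scale2", "_scale3")):
    """Copy every source blob into each destination layer named
    '<source layer name> + suffix'. Works on pycaffe net.params, or any
    mapping of layer name -> list of blobs with a writable .data array.
    Returns the destination layer names that received weights."""
    copied = []
    for suffix in suffixes:
        for name, blobs in src_params.items():
            target = name + suffix
            if target in dst_params:
                for src, dst in zip(blobs, dst_params[target]):
                    dst.data[...] = src.data   # in-place copy; shapes must match
                copied.append(target)
    return copied
```

With pycaffe this would be roughly: `vgg = caffe.Net(vgg_proto, vgg_weights, caffe.TEST)`, `racnn = caffe.Net(racnn_proto, caffe.TEST)`, then `copy_shared_params(vgg.params, racnn.params)` and `racnn.save("racnn_init.caffemodel")` (file names here are placeholders).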


cocowf commented on August 13, 2024

@chenbinghui1 How can I add the AttentionCrop layer and the rank loss to caffe.proto? I need some help with defining and compiling the message parameters in caffe.proto.
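The usual pattern for a custom layer (message and field names here are illustrative, and you must pick a field number not already used by your copy of caffe.proto) is to define a new parameter message and register it as an optional field of the existing LayerParameter message:

```protobuf
// New message describing the custom layer's options.
message AttentionCropParameter {
  optional float scale = 1 [default = 2.0];
}

// Then, inside the EXISTING LayerParameter message in caffe.proto,
// add one field with an unused ID (check the "next available layer-specific
// ID" comment at the top of LayerParameter):
//   optional AttentionCropParameter attention_crop_param = 150;
```

After editing caffe.proto, rebuild Caffe so protoc regenerates caffe.pb.h/caffe.pb.cc, and implement the layer class itself under src/caffe/layers/.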


lhCheung1991 commented on August 13, 2024

@jens25
Hi, jens25. I have followed your steps, but it didn't work; the model just never converges.
I skipped the initialization with the VGG weights and used Adam with lr=1e-4. It ran on CUB_200_2011 for about 10 epochs with loss1, loss2, and loss3 floating around 5.

What may cause this situation? Any suggestion would be appreciated.


ouceduxzk commented on August 13, 2024

@lhCheung1991 Are you also trying to reproduce the result? Maybe we can talk offline.


lhCheung1991 commented on August 13, 2024

@ouceduxzk
Thank you very much for your message. I noticed that you have put some effort into re-implementing this paper. I look forward to discussing the details with you.


jackshaw commented on August 13, 2024

Hi @chenbinghui1, could you please tell me how you prepared your test data when testing the pretrained RA-CNN model? I can only get 74% accuracy using the available pretrained model, and I don't know why.


ProblemTryer commented on August 13, 2024

@jackshaw Hi, can you leave me a contact (maybe QQ)? My email address is [email protected]
I have some trouble getting the model training started.
Thank you very much!


yuqiu1233 commented on August 13, 2024

@lhCheung1991 I met the same problem as you; the loss floats around 5.2. Do you know what causes the problem and how to solve it? Thank you very much!

