Comments (21)
The source code does not include the rank loss. Can you tell me where I can get it, or can you send it to me? Thanks a lot.
from recurrent-attention-cnn.
@zanghao2 Here is a simple implementation. Hope it helps. https://gist.github.com/QQQYang/e535f336813b44d72d3b1d6184bf4586
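For anyone comparing against the paper before reading the gist: the pairwise ranking loss from RA-CNN can be sketched in NumPy roughly as follows. This is a minimal illustration of the idea, not the gist's exact code, and the 0.05 margin is just one plausible hyperparameter choice.

```python
import numpy as np

def rank_loss(probs, labels, margin=0.05):
    """Pairwise ranking loss across scales, following RA-CNN's idea:
    the finer scale s+1 should be more confident on the true class
    than scale s by at least `margin`.

    probs:  list of (batch, num_classes) softmax outputs, one per scale
    labels: (batch,) integer ground-truth class indices
    """
    batch = len(labels)
    idx = np.arange(batch)
    loss = 0.0
    for s in range(len(probs) - 1):
        p_coarse = probs[s][idx, labels]      # true-class prob at scale s
        p_fine = probs[s + 1][idx, labels]    # true-class prob at scale s+1
        # hinge: zero once the finer scale leads by at least the margin
        loss += np.maximum(0.0, p_coarse - p_fine + margin).sum()
    return loss / batch
```

With two scales and a batch of one, a coarse-scale confidence of 0.9 against a fine-scale confidence of 0.8 gives a loss of about 0.9 − 0.8 + 0.05 = 0.15, while any fine-scale lead larger than the margin gives zero.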
@QQQYang Hello, I want to retrain the project, but I am not very experienced. Would you be willing to help me by sharing your train.prototxt? Thank you very much!
@chenfeima This is my train_cnn.prototxt. But I have not achieved good performance on my own dataset. Maybe it needs some fixes. If you find some errors in the prototxt file, please keep me informed. Thank you.
https://gist.github.com/QQQYang/3b8b564554c02fc55325dc026747bdb6
@QQQYang Thank you very much! Is the RankLoss the one at https://gist.github.com/QQQYang/e535f336813b44d72d3b1d6184bf4586? If not, I also need your RankLoss. My own train.prototxt and rank loss perform badly; I only get 77% accuracy on CUB-200 with scale1+2.
@chenfeima I have updated the prototxt file to be consistent with the RankLoss above. You can check the train.prototxt again.
@QQQYang Thank you very much!
@QQQYang I have done this: 1. Fix (freeze) the APN net and optimize by softmax loss. 2. Fix the conv/fc layers and optimize by your RankLoss. 3. Fix the APN net and optimize by softmax loss. I only get a 0.8% accuracy improvement at scale2. Is my strategy wrong? What are your strategy and results? Or is the RankLoss imperfect?
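The alternating schedule in the three steps above can be illustrated with a toy sketch (plain Python, not Caffe: `ToyNet` and its parameter groups are hypothetical stand-ins for the real network). Each phase freezes one group and updates the other under its own loss.

```python
# Toy illustration of alternating optimization: only the unfrozen
# parameter group is updated in each phase.
class ToyNet:
    def __init__(self):
        # stand-ins for the APN weights and the conv/fc weights
        self.params = {'apn': 0.0, 'conv_fc': 0.0}
        self.trainable = set()

    def freeze_all_but(self, group):
        self.trainable = {group}

    def step(self, grad, lr=0.1):
        for g in self.trainable:
            self.params[g] -= lr * grad

def alternate_train(net, rounds=3):
    for _ in range(rounds):
        net.freeze_all_but('conv_fc')  # phase: softmax loss on classifiers
        net.step(grad=1.0)
        net.freeze_all_but('apn')      # phase: rank loss on the APN
        net.step(grad=1.0)
```

After `alternate_train` both groups have received the same number of updates, mirroring how the two prototxt files are used in turn.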
@chenfeima My strategy is the same as yours. I did not test on public datasets, but I got poor performance on my own dataset. The RankLoss is written according to the original paper and passed the gradient check. Still, maybe there is something wrong with it. I have not debugged this project for a while.
@QQQYang Is it necessary to compile the AttentionCrop layer and the rank loss into my own Caffe first, before I can use the train.prototxt?
@cocowf Yes, you have to compile them first on Linux. Feel free to use the train.prototxt.
@QQQYang Hello! How do you adjust the parameters (ek, margins, learning rate) when you optimize the APN by RankLoss? And when do you stop optimizing the APN by RankLoss and switch to optimizing scale2 by softmax loss?
@chenfeima I did not spend much time adjusting hyperparameters, so I cannot give any advice. What I did was prepare two train.prototxt files with different learning rates. In each prototxt file, I adopted parameters and strategies similar to traditional networks, such as learning-rate decay and a fixed margin. When training the whole network, the two files are used alternately.
@QQQYang When I train on my own data, the rank loss keeps increasing, while loss1/2/3 and the accuracy oscillate around a steady level. I want to know your learning rate and how you change the margin. In addition, would it be convenient for you to leave another contact, such as QQ? My QQ is 597512150.
@QQQYang Hello, I think your rank loss is not consistent with the original paper. It computes Pred[label[i]+i×dim+dim/3×j] − Pred[label[i]+i×dim+dim/3×(j+1)]. Is that intended?
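For anyone parsing the index expression above: it seems to assume a flattened Caffe blob in which each sample's three per-scale softmax vectors are concatenated, so dim = 3 × num_classes and dim/3 × j is the offset of scale j. Here is a hedged NumPy sketch of that layout (my assumption about the gist, not verified against it).

```python
import numpy as np

num_classes, batch = 4, 2
dim = 3 * num_classes                    # three scales concatenated per sample
rng = np.random.default_rng(0)
pred = rng.random((batch, dim)).ravel()  # flattened blob, row-major
labels = np.array([1, 3])

def true_class_prob(i, j):
    """Probability of sample i's true class at scale j (0, 1 or 2),
    using the flat indexing from the comment above."""
    return pred[i * dim + j * (dim // 3) + labels[i]]

# sanity check against the unflattened (batch, scale, class) view
view = pred.reshape(batch, 3, num_classes)
assert true_class_prob(0, 1) == view[0, 1, labels[0]]

# the rank-loss pair for scales j and j+1 is then
# true_class_prob(i, j) - true_class_prob(i, j+1)
```

Under this layout the two terms in the quoted expression differ only in the scale offset, i.e. they pick the true-class probability at two consecutive scales for the same sample.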
@QQQYang
Thanks for your contribution of implementing the Rank Loss. Have you reproduced the results from the paper, or did you only train on your own dataset?
@lhCheung1991 I just tested on my own dataset.
@QQQYang
OK. Could you share the alternating-training script for RA-CNN? I would really appreciate it.
@QQQYang I am trying to train the RA-CNN. Could you send me the rank_loss?
I think the loss is not correct: https://gist.github.com/QQQYng/e535f336813b44d72d3b1d6184bf4586
Hello, I have added the rank_loss2_layer you provided to the RACNN provided by the original author, but even after training many times, the loss has not changed. Have you solved this problem?
I can't download the source code. Can you send me the source code together with the Caffe version it needs?
Related Issues (20)
- Data augmentation? HOT 1
- Is the code and model link still available? HOT 6
- Anyone who achieve reported performance ? Run successfully, 0.78 accuracy gained HOT 8
- Vanishing gradient issue in APN HOT 1
- Understanding the paper's pipeline HOT 10
- Implementation in pytorch HOT 22
- The question about training with rank loss . HOT 2
- code and model HOT 6
- some problem about APN HOT 1
- Why we can not achieve the performance mentioned in original paper?
- Missing Windows under RA_CNN_caffe folder
- Where can i download the original implement code? HOT 2
- original implement code
- where is the code? HOT 47
- Equation 7 in the paper confused me HOT 1
- AttentionCrop Layer: Where can we find it? HOT 4
- Some questions about training and testing HOT 20
- Paper's VGG-19 accuracy question HOT 21
- Implement RA-CNN in tensorflow HOT 6