Comments (22)
@jeong-tae I think step 2 is something like:
- Initialize the network with VGG pre-trained on ImageNet.
- Forward-propagate the images and take the feature maps after conv5_4.
- Find a square (x, y, l) whose side is half the length of the original image and which maximizes the sum of the values in the corresponding area of the feature map (see the sketch below).
- Train the APN (only the APN part) against this ground truth (x, y, l) with a loss like MSE.
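A sketch of how that ground-truth search could look, assuming a brute-force scan over square positions on the conv5_4 activations (the function and argument names are illustrative, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def find_gt_square(fmap, img_size):
    """Pick the half-image-sized square with the largest conv5_4 response.

    fmap: (C, h, w) feature map after conv5_4 (square input assumed);
    returns (x, y, l): square center and half-length in image pixels.
    """
    heat = fmap.sum(dim=0)[None, None]        # (1, 1, h, w) total activation
    h, w = heat.shape[-2:]
    side = h // 2                             # square side = half the image, in feature cells
    # Sum of activations inside every possible square position
    sums = F.avg_pool2d(heat, kernel_size=side, stride=1) * side * side
    idx = sums.flatten().argmax()
    y0, x0 = divmod(idx.item(), sums.shape[-1])
    scale = img_size / w                      # feature cell -> image pixels
    x = (x0 + side / 2) * scale               # square center
    y = (y0 + side / 2) * scale
    l = (side / 2) * scale                    # half-length of the square
    return x, y, l
```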
from recurrent-attention-cnn.
@jeong-tae Hi, I'm also trying an implementation to reproduce this paper in TensorFlow, and I also have some trouble with the APN. For your question, I think we should use early stopping during training.
Besides this, I have a doubt about the APN. As I understand it, the input is a batch of images and we get a set of points (tx, ty, tl) for the attended area, so should we use these three values to crop the current batch of images for training? If so, when can we move on to the next batch of data?
from recurrent-attention-cnn.
@Ostnie I think we use the points to crop the current batch; the points describe the current images, so it must be that way.
I am not sure what is still confusing you.
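Concretely, something like this for cropping the current batch with per-image points and zooming for the next scale (a sketch with illustrative names; the hard integer crop shown here is the inference-time view, while training uses the differentiable crop discussed later in this thread):

```python
import torch
import torch.nn.functional as F

def crop_and_zoom(images, tx, ty, tl, out_size=224):
    """Crop each image in the batch around its own (tx, ty, tl) and zoom.

    images: (N, C, H, W); tx, ty, tl: (N,) in pixel coordinates.
    out_size=224 for the next scale is an assumption to verify.
    """
    N, C, H, W = images.shape
    crops = []
    for i in range(N):
        cx, cy, hl = float(tx[i]), float(ty[i]), float(tl[i])
        x0, x1 = int(max(cx - hl, 0)), int(min(cx + hl, W))
        y0, y1 = int(max(cy - hl, 0)), int(min(cy + hl, H))
        x1, y1 = max(x1, x0 + 1), max(y1, y0 + 1)   # guard against empty crops
        crop = images[i:i + 1, :, y0:y1, x0:x1]
        crops.append(F.interpolate(crop, size=(out_size, out_size),
                                   mode='bilinear', align_corners=False))
    return torch.cat(crops, dim=0)
```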
Actually, I did try early stopping for the APN pretraining, but when to stop? The loss does not converge well.
from recurrent-attention-cnn.
@jeong-tae As you said, we should crop the current image and send it to the VGG19, then use its loss to update the APN parameters. Then we get three new points; should we still repeat the previous steps?
I'm really confused about the loss of the APN; I'm not sure how to calculate it. I guess it depends on the classification of VGG19. As in formula 8, loss = rank loss + cross-entropy loss, is that right?
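(For reference, the combined objective in the paper looks roughly like this; I am writing it from memory, so verify the equation number and the margin against the paper:)

$$L(X) = \sum_{s=1}^{3} L_{cls}\big(Y^{(s)}, Y^{*}\big) + \sum_{s=1}^{2} \max\big(0,\; p_t^{(s)} - p_t^{(s+1)} + \text{margin}\big)$$

where $p_t^{(s)}$ is the predicted probability of the true class at scale $s$.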
from recurrent-attention-cnn.
Following the paper, we should repeat two times. The losses are not backpropagated together: the rank loss is for the APN, and the cross-entropy loss is for the conv/classifier layers.
As the authors said, they should be calculated in an alternating way; see the sketch below.
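Roughly like this (a sketch under my reading of the paper; `model`, `loader`, `cls_params`, and `apn_params` are illustrative names, and the learning rates are placeholders):

```python
import torch
import torch.nn.functional as F

# Alternating scheme: cross-entropy trains the conv/classifier parameters,
# the rank loss trains only the APN.
cls_opt = torch.optim.SGD(cls_params, lr=1e-3, momentum=0.9)
apn_opt = torch.optim.SGD(apn_params, lr=1e-4, momentum=0.9)

for images, labels in loader:
    # Phase 1: fix the APN, train the classifiers at every scale.
    logits = model(images)               # assumed: list of per-scale logits
    cls_loss = sum(F.cross_entropy(l, labels) for l in logits)
    cls_opt.zero_grad()
    cls_loss.backward()
    cls_opt.step()                       # only conv/classifier params move

    # Phase 2: fix the classifiers, train the APN with the rank loss.
    probs = [l.softmax(dim=1) for l in model(images)]
    pt = [p[torch.arange(labels.size(0)), labels] for p in probs]  # true-class prob per scale
    rank_loss = sum((pt[i] - pt[i + 1] + 0.05).clamp(min=0).mean()
                    for i in range(len(pt) - 1))
    apn_opt.zero_grad()
    rank_loss.backward()
    apn_opt.step()                       # only APN params move
```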
from recurrent-attention-cnn.
@jeong-tae Yes, you are right. Then I have some doubts about the rank loss: is it calculated from the output of the softmax layers in VGG19? I find it strange, because the loss contains information about that network's parameters. Can we use VGG's loss to modify the APN? I don't know how to do this; could you please show me some code for it?
from recurrent-attention-cnn.
Yes, it is. You can use the output of the softmax layer. I calculated the loss like this:
rank_loss = (pred[i] - pred[i+1] + 0.05).clamp(min=0)  # pred[i]: true-class probability at scale i; 0.05 is the margin
Why can't we use a loss that contains the network's parameters?
I think the purpose of the rank loss is to close the gap between the scales' performances. By doing this, the APN will propose a more precise region, increasing the performance at each scale.
from recurrent-attention-cnn.
When I learned the backpropagation algorithm, I understood the loss as not just a number showing the difference between the prediction and the truth: through the computation graph it also carries the information about each parameter's impact on the final loss. If we use the loss value of VGG, that loss contains no APN information. Although the two share most layers, the last few fully connected layers are independent of each other. In other words, if you give me a loss value from VGG and ask me to backpropagate it to optimize the parameters of the APN, I don't think it can be done.
I may be wrong, but based on the backpropagation algorithm as I have derived it, I really cannot understand this method.
from recurrent-attention-cnn.
The rank loss is the gap between VGG1 and VGG2. You can think of it as meta-learning that teaches the difference between two networks (in this case VGG1 and VGG2). The gap occurs across the different scales of attention, so the APN learns where we should focus: if the gap is large enough, the APN will try to reduce it by proposing a better attention region.
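For the backpropagation worry above, here is a minimal PyTorch check (all names are made up for the toy example): as long as the classifier's input depends differentiably on the APN's output, the classification loss does reach the APN parameters through plain autograd, so no separate "APN loss value" is needed.

```python
import torch
import torch.nn.functional as F

apn = torch.nn.Linear(8, 1)          # stands in for the APN head
classifier = torch.nn.Linear(16, 5)  # stands in for VGG's classifier

feat = torch.randn(1, 8)
scale = torch.sigmoid(apn(feat))     # "attention" produced by the APN
x = torch.randn(1, 16) * scale       # input modulated by the APN, like a soft crop
loss = F.cross_entropy(classifier(x), torch.tensor([2]))
loss.backward()

print(apn.weight.grad is not None)   # True: the VGG loss yields gradients for the APN
```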
from recurrent-attention-cnn.
@jeong-tae This makes me confused. It seems to be right, but how can I backpropagate VGG's loss to the APN? I can't understand it, and it really upsets me.
In TensorFlow, I don't know how to set the APN's loss to VGG's loss; could you please show me how PyTorch accomplishes this step?
from recurrent-attention-cnn.
Oh, you mean backpropagation for the APN?
I actually implemented the backward pass following the Caffe code, which is in its attention crop layer; a sketch of the idea is below.
I will finish the code soon and make it public. Then you can see the whole process as well!
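For reference, in plain PyTorch the crop can be written as a soft boxcar mask built from sigmoids (my understanding of what the Caffe attention-crop layer does); then autograd supplies the backward pass and no hand-written gradient is needed. A minimal sketch, with illustrative names:

```python
import torch

def attention_crop(images, tx, ty, tl, k=10.0):
    """Differentiable 'crop': multiply the image by a soft boxcar mask.

    images: (N, C, H, W); tx, ty, tl: (N,) box centers and half-lengths
    in pixel coordinates. k controls how sharp the box edges are.
    """
    N, C, H, W = images.shape
    ys = torch.arange(H, dtype=images.dtype, device=images.device).view(1, H, 1)
    xs = torch.arange(W, dtype=images.dtype, device=images.device).view(1, 1, W)
    tx, ty, tl = tx.view(N, 1, 1), ty.view(N, 1, 1), tl.view(N, 1, 1)
    # Soft indicator of tx - tl <= x <= tx + tl (and the same for y)
    mx = torch.sigmoid(k * (xs - (tx - tl))) - torch.sigmoid(k * (xs - (tx + tl)))
    my = torch.sigmoid(k * (ys - (ty - tl))) - torch.sigmoid(k * (ys - (ty + tl)))
    mask = (mx * my).unsqueeze(1)        # (N, 1, H, W), broadcast over channels
    return images * mask
```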
from recurrent-attention-cnn.
@jeong-tae https://github.com/Charleo85/DeepCar this library may help you; it is written in PyTorch.
from recurrent-attention-cnn.
@Ostnie oh, very nice! thx!
from recurrent-attention-cnn.
@Ostnie I published the code and need some help. If you are still interested in an implementation in another framework, come to https://github.com/jeong-tae/RACNN-pytorch and let's work together.
from recurrent-attention-cnn.
@jeong-tae Oh, great, I will study it soon, but I'm not familiar with PyTorch, so let's have a try first!
from recurrent-attention-cnn.
Hi @jeong-tae, I'm trying to reproduce RA-CNN too. I have some doubts about the data preprocessing. In PyTorch, image pixels are rescaled to between 0 and 1, which is different from Caffe. Do you think this difference will influence the performance?
from recurrent-attention-cnn.
@jackshaw Hello!
I am not sure what you mean. Do you mean normalization, or mean subtraction?
Whichever you do, it should not matter too much... maybe. But it does influence performance; see the snippet below.
https://stackoverflow.com/questions/4674623/why-do-we-have-to-normalize-the-input-for-an-artificial-neural-network
This reply will help you understand data preprocessing.
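For example, the usual torchvision normalization (the ImageNet statistics below are the standard torchvision values, playing the role of Caffe's mean subtraction):

```python
from torchvision import transforms

# torchvision works on [0, 1] tensors, so mean subtraction is folded into
# Normalize; Caffe instead subtracts a BGR mean from [0, 255] pixels.
normalize = transforms.Compose([
    transforms.ToTensor(),                       # [0, 255] uint8 -> [0, 1] float
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```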
from recurrent-attention-cnn.
@jeong-tae Thanks very much for your reply. Have you ever tried the available Caffe pretrained model? I can only get 74% accuracy, far from 85%. I think I must be missing some important detail when preparing my test data, but I cannot figure out what. I just resized the shortest side of each image and then converted the resized image to LMDB format.
from recurrent-attention-cnn.
Nope, I didn't. In PyTorch there is an image resize preprocessing like the one used in the paper.
You can easily find it in the PyTorch docs; something like the snippet below.
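On top of the normalization above, it would look roughly like this; note that the 448x448 first-scale input size is my assumption to verify against the paper:

```python
from torchvision import transforms

# Hypothetical test-time resizing; 448 is an assumed first-scale input size.
resize = transforms.Compose([
    transforms.Resize(448),        # shorter side -> 448
    transforms.CenterCrop(448),    # central 448x448 crop
])
```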
from recurrent-attention-cnn.
I think so too, exactly the same!
I tried it that way, but I can't reproduce the result. I will try again soon.
from recurrent-attention-cnn.
Could you send me the Caffe source code?
from recurrent-attention-cnn.
@jeong-tae Regarding the steps you outlined above, where the APN is pretrained against a ground-truth square (x, y, l) found from the conv5_4 feature maps: how can I get that ground truth (x, y, l)?
from recurrent-attention-cnn.