Comments (22)

bluemandora commented on August 13, 2024

@jeong-tae I think step 2 is something like:

  1. Initialize the network with VGG pre-trained on ImageNet.
  2. Forward-propagate the images and take the feature maps after conv5_4.
  3. Find the square (x, y, l), with side half the original image, that maximizes the sum of values in the corresponding area of the feature map.
  4. Train the APN (only the APN part) with (x, y, l) as ground truth and a loss such as MSE.
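Steps 2–3 above can be sketched as a brute-force search over a channel-summed conv5_4 response map; `find_gt_square` is a hypothetical helper written for illustration, with the half-side square taken from the description above:

```python
import numpy as np

def find_gt_square(feat_map, scale=0.5):
    """Slide a square with side `scale` * min(H, W) over a (H, W)
    response map (e.g. conv5_4 summed over channels) and return the
    (x, y, l) whose area sum is largest -- the pseudo ground truth
    used to pre-train the APN."""
    H, W = feat_map.shape
    l = int(min(H, W) * scale) // 2          # half side length
    # integral image so each square sum is O(1)
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(feat_map, axis=0), axis=1)
    best, best_xy = -np.inf, (l, l)
    for y in range(l, H - l + 1):
        for x in range(l, W - l + 1):
            s = (ii[y + l, x + l] - ii[y - l, x + l]
                 - ii[y + l, x - l] + ii[y - l, x - l])
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy[0], best_xy[1], l
```

The integral image keeps the search cheap even on larger maps; the returned (x, y, l) can then be regressed with an MSE loss as in step 4.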

from recurrent-attention-cnn.

Ostnie commented on August 13, 2024

@jeong-tae Hi, I'm also trying to reproduce this paper in TensorFlow, and I also have some trouble with the APN. For your question, I think we should use early stopping during training.

Besides this, I have a doubt about the APN. As I understand it, the input is a batch of images and we get a set of points (tx, ty, tl) for the attended area, so should we use these three-dimensional points to crop the current batch of images for training? If so, when can we move on to the next batch of data?

jeong-tae commented on August 13, 2024

@Ostnie I think we use the points to crop the current batch. The points belong to the current images, so it must be that way. I am not sure what is confusing you.

Actually, I did use early stopping for the APN pre-training, but when to stop is unclear: the loss does not converge well.

Ostnie commented on August 13, 2024

@jeong-tae As you said, we should crop the current image, feed it to VGG19, and then use its loss to update the APN parameters. Then we will get three new points; should we keep repeating those steps?

I'm really confused about the APN loss and not sure how to calculate it. I guess it depends on the VGG19 classification. Following formula 8, loss = rank loss + cross-entropy loss, is that right?

jeong-tae commented on August 13, 2024

Following the paper, we should repeat it two times. The losses are not backpropagated together: the rank loss is for the APN, and the cross-entropy loss is for the conv/classifier layers.

As the authors said, they should be computed in an alternating way.
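As a toy numeric illustration of that alternation (not the real networks): two scalar "parameter groups" stand in for the conv/classifier weights and the APN weights, and each phase freezes one group while taking a gradient step on the other's stand-in loss.

```python
def train_alternating(steps=6, lr=0.2):
    """Alternate gradient steps: even steps update the classifier
    stand-in `c` against a cross-entropy stand-in loss (c - 1)^2 with
    the APN frozen; odd steps update the APN stand-in `a` against a
    rank-loss stand-in (a + 1)^2 with the classifier frozen."""
    c, a = 2.0, -3.0                      # dummy scalar "weights"
    for step in range(steps):
        if step % 2 == 0:
            c -= lr * 2.0 * (c - 1.0)     # d/dc of (c - 1)^2
        else:
            a -= lr * 2.0 * (a + 1.0)     # d/da of (a + 1)^2
    return c, a
```

Both groups converge toward their own optimum even though each one is only updated on every other step, which is the point of the alternating schedule.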

Ostnie commented on August 13, 2024

@jeong-tae Yes, you are right. Then I have some doubts about the rank loss: is it calculated from the output of the softmax layers in VGG19? That seems strange to me, because the loss contains information about that network's parameters. Can we use VGG's loss to update the APN? I don't know how to do this; could you please show me some code for it?

jeong-tae commented on August 13, 2024

Yes, it is. You can use the output of the softmax layer. I calculated the loss like this:

rank_loss = (pred[i] - pred[i+1] + 0.05).clamp(min=0)

Why can't we use a loss that contains the network parameters?

I think the purpose of the rank loss is to close the gap between the scales' performances. By doing this, the APN will propose a more precise region to increase the performance at each scale.
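Written out over all scales, the hinge above looks like this (pure-Python sketch; `margin=0.05` matches the 0.05 in the snippet, and `probs_per_scale` is assumed to hold the true-class softmax probabilities ordered coarse to fine):

```python
def pairwise_rank_loss(probs_per_scale, margin=0.05):
    """Sum of hinges max(0, p_coarse - p_fine + margin) over adjacent
    scales: each finer scale is pushed to beat the coarser one on the
    true class by at least `margin`."""
    loss = 0.0
    for p_coarse, p_fine in zip(probs_per_scale, probs_per_scale[1:]):
        loss += max(0.0, p_coarse - p_fine + margin)
    return loss
```

When the finer scale already wins by the margin, the hinge is zero and the APN gets no gradient; otherwise the loss grows with the gap.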

Ostnie commented on August 13, 2024

When I learned the backpropagation algorithm, the loss was not just a number showing the difference between the prediction and the truth; it also carries information about the impact of each parameter on the final loss through the network. If we use the loss value of VGG, then that loss contains no APN information. Although the networks share most layers, the last few fully connected layers are independent of each other. In other words, if you give me a loss value from VGG and ask me to backpropagate it to optimize the parameters of the APN, I don't think it can be done.

I may be wrong, but based on the backpropagation algorithm as I've derived it, I really cannot understand this method.

jeong-tae commented on August 13, 2024

The rank loss is the gap between VGG1 and VGG2. You can think of it like meta-learning that teaches the difference between two networks (in this case VGG1 and VGG2). The gap occurs between different scales with attention, so the APN learns where we should focus. If the gap is large enough, the APN will try to reduce it by proposing a better attention region.

Ostnie commented on August 13, 2024

@jeong-tae This makes me confused. It seems to be right, but how can I backpropagate VGG's loss to the APN? I can't understand it and it really upsets me.

In TensorFlow, I don't know how to derive the APN's loss from VGG's loss; could you please show me how PyTorch accomplishes this step?

jeong-tae commented on August 13, 2024

Oh, you mean backpropagation for the APN? I actually implemented the backward pass following the Caffe code, which is in the attention-crop layer.

I will finish the code soon and make it public. Then you can see the whole process as well!
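For reference, the attention-crop trick is to implement the crop as multiplication by a soft boxcar mask built from sigmoids, so the mask (and hence the classification loss) is differentiable in (tx, ty, tl) and the gradient can reach the APN. A NumPy sketch, assuming a steepness constant k=10 (an arbitrary choice here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_mask(H, W, tx, ty, tl, k=10.0):
    """Soft boxcar mask M(tx, ty, tl): the crop is X * M, and because
    M is built from sigmoids it is differentiable in (tx, ty, tl),
    which is how the loss reaches the APN parameters."""
    ys = np.arange(H).reshape(-1, 1)     # (H, 1)
    xs = np.arange(W).reshape(1, -1)     # (1, W)
    mx = sigmoid(k * (xs - (tx - tl))) - sigmoid(k * (xs - (tx + tl)))
    my = sigmoid(k * (ys - (ty - tl))) - sigmoid(k * (ys - (ty + tl)))
    return my * mx   # (H, W), ~1 inside the square, ~0 outside
```

With a hard (0/1) crop the gradient with respect to the box would be zero almost everywhere; the sigmoid edges are what make the region proposal trainable.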

Ostnie commented on August 13, 2024

@jeong-tae https://github.com/Charleo85/DeepCar This library may help you; it is written in PyTorch.

jeong-tae commented on August 13, 2024

@Ostnie oh, very nice! thx!

jeong-tae commented on August 13, 2024

@Ostnie I published the code and need some help. If you're still interested in an implementation in another framework, come over to https://github.com/jeong-tae/RACNN-pytorch and let's work together.

Ostnie commented on August 13, 2024

@jeong-tae Oh, great, I will study it soon. I'm not familiar with PyTorch, but let's have a try first!

jackshaw commented on August 13, 2024

Hi @jeong-tae, I'm trying to reproduce RA-CNN too. I have a doubt about the data preprocessing. In PyTorch, the pixel values of images are rescaled to between 0 and 1, which is different from Caffe's 0-to-255 range. Do you think this difference will influence the performance?

jeong-tae commented on August 13, 2024

@jackshaw Hello, jackshaw.
I am not sure what you mean. Do you mean normalization, or subtracting the mean? Whatever you do, it shouldn't matter too much... maybe. But preprocessing does influence performance.

https://stackoverflow.com/questions/4674623/why-do-we-have-to-normalize-the-input-for-an-artificial-neural-network
This answer will help you understand data preprocessing.
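To make the difference concrete, here is a sketch of the two pipelines being compared; the mean/std values are the standard torchvision and Caffe ImageNet constants:

```python
import numpy as np

# PyTorch-style preprocessing: scale to [0, 1], then normalize with the
# ImageNet statistics used by torchvision's pretrained models.
MEAN = np.array([0.485, 0.456, 0.406])
STD  = np.array([0.229, 0.224, 0.225])

def pytorch_style(img_uint8):          # img: (H, W, 3) RGB in [0, 255]
    x = img_uint8.astype(np.float64) / 255.0
    return (x - MEAN) / STD

# Caffe-style preprocessing: keep the [0, 255] range in BGR order and
# only subtract the per-channel mean.  The input distributions differ,
# so a model pretrained one way cannot be fed data prepared the other.
CAFFE_MEAN_BGR = np.array([104.0, 117.0, 123.0])

def caffe_style(img_uint8):            # img: (H, W, 3) RGB in [0, 255]
    bgr = img_uint8[..., ::-1].astype(np.float64)
    return bgr - CAFFE_MEAN_BGR
```

Mixing the two (e.g. feeding Caffe-scaled inputs to a PyTorch-pretrained VGG) is exactly the kind of mismatch that silently costs accuracy.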

jackshaw commented on August 13, 2024

@jeong-tae Thanks very much for your reply. Did you ever try the available Caffe pretrained model? I can only get 74% accuracy, far from 85%. I think I must be missing some important detail when preparing my test data, but I cannot figure out what. I just resized the shortest side of each image and then converted the resized image to LMDB format.
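For what it's worth, "resize the shortest side" keeps the aspect ratio; a quick sketch of the resulting dimensions (the default target=448 is an assumption here, as fine-grained models in this family typically use 448-pixel inputs):

```python
def resize_shortest_side(w, h, target=448):
    """New (width, height) after resizing so the shortest side equals
    `target` while preserving the aspect ratio -- the behavior of
    torchvision's Resize when given a single int."""
    if w <= h:
        return target, round(h * target / w)
    return round(w * target / h), target
```

If the evaluation pipeline then takes a center crop, the crop size and this target must match what the pretrained model expects, or accuracy drops.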

jeong-tae commented on August 13, 2024

Nope, I didn't. In PyTorch, there is an image-resize preprocessing step like the one used in the paper. You can easily find it in the PyTorch docs.

jeong-tae commented on August 13, 2024

I think so too, exactly the same! I tried that way but couldn't reproduce the result. I will try again soon.

lmy418lmy commented on August 13, 2024

Could you send me the source code with Caffe?

flash1803 commented on August 13, 2024

@jeong-tae I think step 2 is something like:

  1. Initialize the network with VGG pre-trained on ImageNet.
  2. Forward-propagate the images and take the feature maps after conv5_4.
  3. Find the square (x, y, l), with side half the original image, that maximizes the sum of values in the corresponding area of the feature map.
  4. Train the APN (only the APN part) with (x, y, l) as ground truth and a loss such as MSE.

How can I get the ground truth (x, y, l)?
