Comments (8)

mrlooi commented on August 16, 2024

Hi Amlan,

May I ask how the first point is trained and generated, i.e. how does the first point network generate an arbitrary starting point? I believe this isn't clear from either the CVPR 2017 or the 2018 Polygon-RNN++ paper.

amlankar commented on August 16, 2024

I see, sorry about that.

The first point is generated from a separate network.
This network takes in the image features and tries to predict all the vertex pixels of the polygon (as a binary classification task) and all the edge pixels (again as binary classification). These two predictions are made by two separate heads.

This is trained with simple binary cross entropy, given ground-truth vertex and edge masks for the vertex and edge heads. The first vertex passed to the LSTM at train time is the first vertex of the ground-truth polygon it is being trained against.
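
In code, the two heads and their loss might look roughly like this (a minimal sketch, assuming 28x28 feature maps; the layer sizes and names here are illustrative, not the actual implementation):

```python
import torch.nn as nn
import torch.nn.functional as F

class FirstVertexNet(nn.Module):
    """Sketch: two separate heads predict per-cell edge and vertex
    probabilities from shared image features (shapes/sizes assumed)."""

    def __init__(self, in_channels=128):
        super().__init__()
        self.edge_head = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
        self.vertex_head = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, feats):  # feats: (N, C, 28, 28)
        return self.edge_head(feats), self.vertex_head(feats)

def first_vertex_loss(edge_logits, vertex_logits, edge_mask, vertex_mask):
    # Plain binary cross entropy against the ground-truth edge/vertex masks.
    return (F.binary_cross_entropy_with_logits(edge_logits, edge_mask)
            + F.binary_cross_entropy_with_logits(vertex_logits, vertex_mask))
```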

At test time and during RL training, we sample one vertex position from the predicted logits of the vertex head and pass it to the LSTM as the starting point of the polygon.
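
The sampling step could be sketched as follows (again illustrative, assuming (N, 1, 28, 28) logits):

```python
import torch

def sample_first_vertex(vertex_logits):
    """Draw one grid cell from the vertex head's logits to seed the
    polygon (sketch; shapes and names are assumptions)."""
    n, _, h, w = vertex_logits.shape
    probs = torch.softmax(vertex_logits.view(n, h * w), dim=1)
    idx = torch.multinomial(probs, num_samples=1).squeeze(1)  # (N,)
    return torch.stack([idx % w, idx // w], dim=1)  # (N, 2) as (col, row)
```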

I hope this helps!

mrlooi commented on August 16, 2024

That's really helpful, especially with regard to the output and training of the first point network.

To clarify things a little, I'm assuming the vertex head depends on the output of the edge head, based on the 2017 paper: "One branch predicts object boundaries while the other takes as input the output of the boundary-predicting layer as well as the image features and predicts the vertices of the polygon."
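
In other words, something like this wiring (a sketch; the layer sizes and the sigmoid on the edge output are my assumptions):

```python
import torch
import torch.nn as nn

class EdgeConditionedVertexHead(nn.Module):
    """Sketch of the 2017-style wiring: the vertex branch takes the image
    features concatenated with the edge branch's output (sizes assumed)."""

    def __init__(self, in_channels=128):
        super().__init__()
        self.edge_head = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
        self.vertex_head = nn.Conv2d(in_channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, feats):
        edge_logits = self.edge_head(feats)
        vertex_in = torch.cat([feats, torch.sigmoid(edge_logits)], dim=1)
        return edge_logits, self.vertex_head(vertex_in)
```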

Re LSTM training, I believe the paper did not indicate that the first GT polygon vertex was arbitrarily chosen as the first LSTM input vertex during training (instead of sampling from the model prediction).
EDIT: I noticed that feeding the GT vertex during LSTM training is only mentioned briefly in section 3.3 of the 2018 paper.

Thanks

amlankar commented on August 16, 2024

That is correct; we also found that using two separate heads works well too, so it's fine whichever way you implement it.

For the LSTM training, it seemed obvious to us as a data augmentation, so we may not have mentioned it in the main paper due to space constraints. I agree that we could have added a lot more information, which we hope to make clear with a training code release soon.
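
Concretely, since a polygon is cyclic, the augmentation amounts to rolling the ground-truth polygon to a random starting vertex; a minimal sketch, assuming a list-of-points representation:

```python
import random

def randomize_start_vertex(polygon):
    """Roll a cyclic polygon to a random starting vertex so the LSTM
    learns to start anywhere (sketch; `polygon` is assumed to be a
    list of (x, y) points)."""
    k = random.randrange(len(polygon))
    return polygon[k:] + polygon[:k]
```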

AlexMa011 commented on August 16, 2024

Hi, sorry for the late reply. I couldn't get a GPU to test the code until now. The result is
{'motorcycle': 0.6401435543456829, 'bicycle': 0.6027971853228983, 'truck': 0.7875317978551573, 'train': 0.7095660575472281, 'car': 0.8150176570258733, 'person': 0.6011296154919364, 'bus': 0.8266491499291869, 'rider': 0.5957857106974498}
given the bounding box. The given bounding box is chosen to be 15% larger than the ground truth bounding box. I don't know if we use the same process and the same criteria. And I am glad to hear you have released your training code. Can't wait to see your implementation.
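
For reference, the enlargement looks roughly like this (a sketch; whether the 15% is applied per side or to the total size is an assumption here):

```python
def enlarge_bbox(x_min, y_min, x_max, y_max, ratio=0.15):
    """Grow a ground-truth box by `ratio` about its center (sketch;
    the exact enlargement convention is an assumption)."""
    dx = (x_max - x_min) * ratio / 2
    dy = (y_max - y_min) * ratio / 2
    return x_min - dx, y_min - dy, x_max + dx, y_max + dy
```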

amlankar commented on August 16, 2024

The results look great! Did you evaluate at 28x28 or at full resolution? We evaluate at full resolution, and there is usually a drop in IoU at full resolution, because you can only do so much while predicting at 28x28 (one reason we later used a graph network, to predict at higher resolution).
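
Full-resolution evaluation roughly means mapping the predicted 28x28 grid vertices back into the original crop before rasterizing and computing IoU; a sketch under those assumptions:

```python
def grid_to_full_res(polygon, crop_box, grid=28):
    """Scale polygon vertices from the 28x28 prediction grid back into
    the full-resolution crop so the polygon can be rasterized for IoU
    (sketch; coordinate conventions are assumptions)."""
    x_min, y_min, x_max, y_max = crop_box
    sx = (x_max - x_min) / grid
    sy = (y_max - y_min) / grid
    return [(x_min + c * sx, y_min + r * sy) for c, r in polygon]
```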

AlexMa011 commented on August 16, 2024

I do evaluate at full resolution, though instead of using the upper-left point to represent an 8x8 block, I use the center point.
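
The difference is just a half-block offset when mapping a grid cell back to pixels; a sketch, assuming a 224x224 input divided into 8x8 blocks:

```python
def cell_to_pixel(col, row, block=8, use_center=True):
    """Map a grid cell to pixel coordinates: the center of its 8x8
    block versus its upper-left corner (sketch; 224x224 input assumed)."""
    offset = block // 2 if use_center else 0
    return col * block + offset, row * block + offset
```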

amlankar commented on August 16, 2024

Thanks! In the paper, we prepare the data by also removing occlusions from the masks -- the polygons provided directly in Cityscapes may or may not have occlusions removed, since Cityscapes actually resolves occlusion using depth ordering while preparing the pixel-wise instance/semantic maps.

I have a feeling this is why your numbers are so much higher; it would be great to see how much this repository gets using our pre-processed data! (It is included in our release.)

Thanks for the great work!
