Comments (8)
Hi Amlan,
May I ask how the first point is trained and generated, i.e., how does the first-point network generate an arbitrary starting point? I believe this isn't clear from either the CVPR 2017 paper or the 2018 Polygon-RNN++ paper.
from pytorch-polygon-rnn.
I see, sorry about that.
The first point is generated from a separate network.
This network takes in the image features and tries to predict all the vertex pixels of the polygon (as a binary classification task) and all the edge pixels (again as binary classification). These two predictions are made by two separate heads.
It is trained with simple binary cross-entropy, given ground-truth vertex and edge masks for the vertex and edge heads. The first vertex passed to the LSTM at train time is the first vertex of the ground-truth polygon it is being trained against.
At test time and during RL training, we sample one vertex position from the predicted logits of the vertex head and pass it to the LSTM as the starting point for the polygon.
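In code, that sampling step might look roughly like this (the 28×28 grid size comes from the papers; the function name and plain-NumPy implementation are illustrative assumptions, not the released code):

```python
import numpy as np

def sample_first_vertex(vertex_logits, rng=None):
    """Sample one (row, col) cell from a 2-D grid of vertex logits.

    vertex_logits: (H, W) array of unnormalized scores from the vertex head.
    Returns the sampled (row, col) index.
    """
    rng = rng or np.random.default_rng()
    flat = vertex_logits.ravel()
    # Softmax over all cells (subtract the max for numerical stability).
    probs = np.exp(flat - flat.max())
    probs /= probs.sum()
    idx = rng.choice(flat.size, p=probs)
    return divmod(int(idx), vertex_logits.shape[1])  # (row, col)

# e.g. a 28x28 grid, as used at the RNN's output resolution
logits = np.zeros((28, 28))
logits[5, 7] = 100.0  # make one cell overwhelmingly likely, for illustration
row, col = sample_first_vertex(logits)
```

The sampled cell is then fed to the LSTM as the polygon's starting vertex.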
I hope this helps!
That's really helpful, especially with regard to the output and training of the first-point network.
To clarify things a little, I'm assuming the vertex head depends on the output of the edge head, based on the 2017 paper "One branch predicts object boundaries while the other takes as input the output of the boundary-predicting layer as well as the image features and predicts the vertices of the polygon."
Re LSTM training, I believe the paper did not indicate that the first GT polygon vertex was arbitrarily chosen as the first LSTM input vertex during training (instead of sampling from the model prediction).
EDIT: I noticed that feeding the GT vertex during LSTM training was only mentioned briefly in Section 3.3 of the 2018 paper.
Thanks
That is correct. We also found that using two separate heads works well, so it's fine whichever way you implement it.
For the LSTM training, we treated it as an obvious data augmentation, so we may not have mentioned it in the main paper due to space constraints. I agree that we could have included a lot more detail, which we hope to make clear with a training-code release soon.
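Since a polygon is a closed cycle, that augmentation amounts to rotating the ground-truth vertex sequence so that any vertex can serve as the LSTM's first input. A hypothetical sketch (helper name is my own, not from the release):

```python
def rotate_polygon(vertices, start_idx):
    """Return the same closed polygon, re-ordered to begin at vertices[start_idx].

    Because the vertex sequence is cyclic, any rotation describes the same
    shape, so rotating the ground-truth polygon is a cheap data augmentation
    for choosing the first vertex at train time.
    """
    return vertices[start_idx:] + vertices[:start_idx]

poly = [(0, 0), (0, 4), (4, 4), (4, 0)]
rotated = rotate_polygon(poly, 2)  # -> [(4, 4), (4, 0), (0, 0), (0, 4)]
```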
Hi, sorry for the late reply. I couldn't get a GPU to test the code until now. The result is
{'motorcycle': 0.6401435543456829, 'bicycle': 0.6027971853228983, 'truck': 0.7875317978551573, 'train': 0.7095660575472281, 'car': 0.8150176570258733, 'person': 0.6011296154919364, 'bus': 0.8266491499291869, 'rider': 0.5957857106974498}
given the bounding box. The given bounding box is chosen to be 15% larger than the ground-truth bounding box. I don't know if we used the same process and the same criteria. And I am glad to know you have released your training code -- can't wait to see your implementation.
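For reference, the 15% expansion could be done like this (this sketch assumes a symmetric expansion about the box center; the exact scheme used here may differ, and the helper name is my own):

```python
def expand_box(x_min, y_min, x_max, y_max, factor=0.15):
    """Expand a bounding box by `factor` of its width/height about its center.

    A 100x200 ground-truth box becomes 115x230, keeping the same center,
    so the crop fed to the network includes some context around the object.
    """
    w, h = x_max - x_min, y_max - y_min
    dx, dy = w * factor / 2.0, h * factor / 2.0
    return x_min - dx, y_min - dy, x_max + dx, y_max + dy

box = expand_box(10, 20, 110, 220)  # roughly (2.5, 5.0, 117.5, 235.0)
```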
The results look great! Did you evaluate at 28×28 or at full resolution? We evaluate at full resolution, and there is usually a drop in IoU there, because you can only do so much when predicting at 28×28 (a reason why we later used a graph network, to be able to predict at higher resolution).
I do evaluate at full resolution, though instead of using the upper-left point to represent an 8×8 block, I use the center point.
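Concretely, that mapping looks like this (assuming 224×224 crops, so each cell of the 28×28 prediction grid covers an 8×8 pixel block; the function name is illustrative):

```python
def cell_to_pixel(row, col, stride=8, use_center=True):
    """Map a coarse-grid cell index back to a full-resolution pixel coordinate.

    With a 28x28 prediction grid over a 224x224 crop, each cell covers an
    8x8 pixel block. Using the block center (offset stride // 2) rather than
    the upper-left corner avoids a systematic half-block bias toward the
    top-left when upsampling the polygon.
    """
    offset = stride // 2 if use_center else 0
    return row * stride + offset, col * stride + offset

cell_to_pixel(3, 5)                    # -> (28, 44): center of the block
cell_to_pixel(3, 5, use_center=False)  # -> (24, 40): upper-left corner
```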
Thanks! In the paper, we prepare the data by also removing occlusions from the masks -- the polygons provided directly by Cityscapes may or may not have occlusions removed, since Cityscapes actually resolves occlusion via depth ordering when preparing the pixel-wise instance/semantic maps.
I have a feeling this is why your numbers are so much higher; it would be great to see what this repository gets using our pre-processed data! (It is included in our release.)
Thanks for the great work!