
Comments (10)

lreiher commented on August 29, 2024

Do I correctly understand that all the differences you are spotting in the computations are on the order of 1e-14 and smaller? In that case, I wouldn't worry about it; these are probably just floating-point rounding effects at that scale, which may also vary between different computers.
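Differences at this scale are a normal consequence of floating-point arithmetic not being associative; a minimal illustration (not tied to the Cam2BEV code):

```python
import random

random.seed(0)
values = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Floating-point addition is not associative, so summing the same numbers
# in a different order perturbs the last few bits of the float64 result.
forward = sum(values)
reverse = sum(reversed(values))

diff = abs(forward - reverse)
print(diff)  # a tiny residual, far below any physically meaningful scale
assert diff < 1e-9
```

Different BLAS builds or CPUs can reorder operations the same way, which is why results may differ slightly between machines.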

If you are currently only trying to reproduce the results, may I ask what MIoU values you are achieving on the provided datasets? I'm assuming that you didn't change any config files?

> If I want to use my own dataset, how can I get the correct uNetXST homographies?

  1. Create new camera configuration files like the ones here, containing the intrinsic/extrinsic parameters of your own camera setup.
  2. Run preprocessing/occlusion/occlusion.py
  3. Run preprocessing/occlusion/ipm.py with your own camera config files
  4. Compute uNetXST-compatible homographies with your own camera config files by following the instructions in [preprocessing/homography_converter](https://github.com/ika-rwth-aachen/Cam2BEV/blob/master/preprocessing/homography_converter)
  5. Adjust or create a new one-hot conversion file (model/one_hot_conversion)
  6. Set all training parameters in a dedicated config file
  7. Start training

from cam2bev.

yyyzwy commented on August 29, 2024

> Do I correctly understand that all differences you are spotting in the computations are on the order of 1e-14 and smaller?

Yes, that's right. So maybe I don't need to worry about it anymore.

> If you are currently only trying to reproduce the results, may I ask what MIoU values you are achieving on the provided datasets? I'm assuming that you didn't change any config files?

Actually, I am using my own dataset and expect the model to perform just as well as it does on the provided dataset. But considering the differences between the datasets, this may be unreasonable.

I checked the images warped by the IPM homographies I calculated; they look right and make sense, so I think the calculation of the IPM homographies for my camera setup is correct. The uNetXST homographies I calculated should then also be correct, because they are simply obtained by feeding the IPM homography parameters into the conversion script.
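Besides eyeballing the warped images, a homography can be sanity-checked by mapping a few known pixel coordinates by hand. A minimal sketch (the matrix `H` here is a hypothetical stand-in, not a real camera's IPM matrix):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to an array of (x, y) pixel coordinates."""
    pts = np.asarray(pts, dtype=float)
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])  # lift to (x, y, 1)
    mapped = homogeneous @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                   # perspective divide

# Hypothetical example: a pure scaling homography as a stand-in.
H = np.diag([0.5, 0.5, 1.0])
print(warp_points(H, [(100, 200), (40, 60)]))  # -> [[50, 100], [20, 30]]
```

Points with known correspondences (e.g. lane markings visible in both camera and BEV view) make good test inputs.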

And the MIoU of uNetXST on my dataset is about 63%; if I instead use the IPM image as input to the DeepLab MobileNetV2 model, the MIoU I get is about 66%.
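For reference, MIoU is the per-class intersection-over-union averaged over classes. A minimal sketch of how such a score can be computed (not the evaluation code used in the repository):

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union of two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with 2 classes.
gt   = np.array([[0, 0, 1], [1, 1, 0]])
pred = np.array([[0, 1, 1], [1, 1, 0]])
print(mean_iou(gt, pred, 2))  # (2/3 + 3/4) / 2 = 0.7083...
```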


lreiher commented on August 29, 2024

It seems too coincidental that your own camera images would transform just fine with our dataset's camera intrinsics and extrinsics. Would you mind sharing your input images and transformed IPM images?


yyyzwy commented on August 29, 2024

No, I didn't use your camera parameters. I recalculated them for my camera setup. But the results did not meet expectations, so I thought maybe I had miscalculated the relevant homography parameters.

drone: [image]
IPM: [image]
front: [image]
rear: [image]
left: [image]
right: [image]


lreiher commented on August 29, 2024

Ah okay, got it. Your IPM image looks correct. If you have then converted the homographies printed by the IPM-script using the homography converter, you have correctly followed the steps.

Did you also apply the occlusion script to the drone label image?


yyyzwy commented on August 29, 2024

Yep.
[image]


lreiher commented on August 29, 2024

Okay, this is also looking good. Another thing to check might be your color palette, i.e., which color represents which class and which classes are merged before presenting input to the network.

Could you also show some predictions given by uNetXST or DL MobileNet?


yyyzwy commented on August 29, 2024

Here is an example prediction by DL MobileNet.
[image]

The difference between the input color palette and the output color palette is the "Occluded" class I appended to the output palette.
And I think that if there were a problem with the color palette, all models would be affected.
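One quick way to rule out palette problems is to convert a label image to class indices and flag any colors that don't match the palette. A minimal sketch with a hypothetical palette (the real color definitions live in model/one_hot_conversion):

```python
import numpy as np

# Hypothetical palette: RGB color -> class index. The real input and output
# palettes differ, e.g. by the extra "Occluded" class on the output side.
palette = {
    (0, 0, 0):       0,  # e.g. road
    (255, 0, 0):     1,  # e.g. vehicle
    (128, 128, 128): 2,  # e.g. occluded (output only)
}

def image_to_classes(rgb):
    """Convert an HxWx3 color-coded label image to HxW class indices.
    Pixels whose color is not in the palette are flagged with -1."""
    out = np.full(rgb.shape[:2], -1, dtype=int)
    for color, idx in palette.items():
        out[np.all(rgb == color, axis=-1)] = idx
    return out

img = np.array([[[0, 0, 0], [255, 0, 0]],
                [[128, 128, 128], [1, 2, 3]]], dtype=np.uint8)
classes = image_to_classes(img)
print(classes)                     # the unknown color (1, 2, 3) maps to -1
assert (classes == -1).sum() == 1  # exactly one unmatched pixel
```

If real label images produce any `-1` pixels, the palette and the dataset colors disagree.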


yyyzwy commented on August 29, 2024

By the way, this is the prediction by uNetXST for the same input.
[image]


lreiher commented on August 29, 2024

I believe we cannot guarantee that uNetXST will always outperform the other models. I can only give one further suggestion right now for what you could investigate: when debugging the uNetXST model, take a look at the feature maps before and after the Spatial Transformer modules inside the network to check that the IPM transformation is working as expected inside the network.
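The warp a Spatial Transformer applies can also be mimicked outside the network and compared against feature maps extracted before and after the transformer layers (e.g. via a Keras sub-model over those layer outputs). A simplified nearest-neighbor sketch of such a warp, for reference (the transformer itself uses differentiable bilinear sampling):

```python
import numpy as np

def warp_feature_map(fmap, H):
    """Warp a 2D feature map with homography H via inverse mapping and
    nearest-neighbor sampling (a simplified stand-in for the bilinear
    sampling a Spatial Transformer performs)."""
    h, w = fmap.shape
    Hinv = np.linalg.inv(H)
    out = np.zeros_like(fmap)
    for y in range(h):
        for x in range(w):
            # map each output pixel back to its source location
            sx, sy, sw = Hinv @ np.array([x, y, 1.0])
            sx, sy = sx / sw, sy / sw
            xi, yi = int(round(sx)), int(round(sy))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = fmap[yi, xi]
    return out

# Sanity check: the identity homography must leave the map unchanged.
fmap = np.arange(16, dtype=float).reshape(4, 4)
assert np.array_equal(warp_feature_map(fmap, np.eye(3)), fmap)
```

If the post-transformer activations look nothing like this kind of warp of the pre-transformer activations, the homographies fed to the network are a likely culprit.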

