
Comments (7)

JUGGHM avatar JUGGHM commented on May 28, 2024

Thanks for your interest! This is a very interesting finding, and it happens in my environment as well. However, based on the following experiments, my conclusion is that the surprising latency is due to the data copy from GPU to CPU:

(1) Situations where the latency is above 100 ms:
(i) [screenshot]
(ii) [screenshot]
(iii) [screenshot]

(2) Situation where the latency is close to that reported in the paper:
[screenshot]

I hope these examples are helpful for your question.

By the way, it surprised me that you trained ENet to performance comparable with my trained full model. Would you mind sharing your device configuration and training parameters? (Ignore this if you'd rather not.)

from penet_icra2021.

wdjose avatar wdjose commented on May 28, 2024

Actually, for the screenshots above, I ran your pre-trained PENet, not ENet 😄 All other parameters are the same: python main.py -b 1 -n pe --evaluate pe.pth.tar. But I also tried it with just ENet, and the increased latency was present there as well.

Regarding the latency: I am not sure this is due to transfers between CPU and GPU. The reason I am skeptical is that I tried running this model on the Jetson Nano (vanilla, no modifications):

These were my results:

[screenshot: original code]
[screenshot: modified code, with print(pred) (or just str(pred))]

In addition, I was tripping the GPU watchdog timer of the Jetson Nano on the same line 268 where the metrics are computed (which should be purely CPU operations), instead of in the model inference.

I find it hard to believe that printing the prediction tensors on the Jetson Nano takes >10 s while model inference takes just 2 s. (The Jetson Nano uses unified memory for both CPU and GPU, so no data transfer is needed.) This is why my earlier conclusion was that PyTorch executes the model lazily, and does not actually run inference until the prediction tensors are needed. I think your first three experiments triggered model execution, while the fourth did not (but I am not sure).

What do you think we can do to check whether model inference actually completes within the pred = model(batch_data) call (without any GPU-to-CPU transfer, so that it is convincing)? In the meantime, I'll try some more experiments to verify.


wdjose avatar wdjose commented on May 28, 2024

Okay, I seem to have found something that forces the model inference to complete eagerly: torch.cuda.synchronize()
(Similar model-inference time-measurement issues: sacmehta/ESPNet#57 and wutianyiRosun/CGNet#2)

If you replace print(pred) with torch.cuda.synchronize(), the runtimes are the same. I think this is because CUDA operations run asynchronously with respect to the PyTorch CPU thread, so the measured inference time was shorter than the time CUDA actually took to finish.
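For reference, the timing pattern described above can be sketched like this. This is a minimal, hypothetical example (a toy Conv2d stands in for the real network, and timed_inference is a name I made up, not code from the PENet repo), but on a CUDA device the same pattern applies to the pred = model(batch_data) call:

```python
# Sketch: correct GPU timing requires torch.cuda.synchronize(), because
# CUDA kernel launches return control to the CPU thread before the GPU
# has actually finished computing.
import time
import torch

def timed_inference(model, batch, device):
    """Run one forward pass and return (prediction, wall time in seconds)."""
    model.eval()
    with torch.no_grad():
        if device.type == "cuda":
            torch.cuda.synchronize()  # drain any pending kernels first
        start = time.time()
        pred = model(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()  # wait for the GPU to really finish
        elapsed = time.time() - start
    return pred, elapsed

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1).to(device)  # toy stand-in
batch = torch.randn(1, 1, 64, 64, device=device)
pred, elapsed = timed_inference(model, batch, device)
print(f"inference took {elapsed:.4f}s, output shape {tuple(pred.shape)}")
```

Without the second synchronize(), time.time() is read while kernels may still be queued, which is exactly how print(pred) (which forces a GPU-to-CPU copy, and therefore a sync) was silently absorbing the real inference time.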


JUGGHM avatar JUGGHM commented on May 28, 2024

This is a problem I wasn't aware of before, since I followed the implementation of https://github.com/fangchangma/self-supervised-depth-completion. I think your intuition is right, and I will look into it.


wdjose avatar wdjose commented on May 28, 2024

Okay, thank you. Let us know how it goes. 😀 In any case, the model performance is still state-of-the-art. My current research direction is running fast depth completion on the edge, which is why I took an interest in your paper. My next experiments will try to minify your network and reduce its parameters so it runs faster 🙂


JUGGHM avatar JUGGHM commented on May 28, 2024

The corrected inference time is now reported on the project page. Thanks for pointing out this problem!


wdjose avatar wdjose commented on May 28, 2024

Got it, thank you for updating!

