Comments (7)
Thanks for your interest! This is a very interesting finding, and it does happen in my environment too. However, based on the following "experiments", my conclusion is that the surprising latency is due to the data copy from GPU to CPU:
(1) Situations where the latency is above 100 ms: (i), (ii), (iii) [screenshots not preserved]
(2) Situations where the latency is close to the values reported in the paper [screenshot not preserved]
I hope these examples are helpful for your question.
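For what it's worth, the effect being discussed (a GPU-to-CPU copy absorbing the compute time) can be sketched in a few lines. This is a minimal illustration of my own, not code from this repository; the tensor shapes are arbitrary placeholders:

```python
import time

import torch
import torch.nn.functional as F


def launch_vs_copy_time(device="cuda"):
    """Return (launch_seconds, copy_seconds) for a conv followed by .cpu().

    On CUDA, F.conv2d returns as soon as the kernel is queued, so the
    subsequent .cpu() copy appears to cost the whole compute time: the
    copy implicitly waits for every queued kernel to finish first.
    """
    x = torch.randn(1, 3, 352, 1216, device=device)  # arbitrary KITTI-like input
    w = torch.randn(64, 3, 3, 3, device=device)      # arbitrary conv weights

    t0 = time.perf_counter()
    y = F.conv2d(x, w)   # asynchronous launch on CUDA; synchronous on CPU
    t1 = time.perf_counter()
    y.cpu()              # blocks until the conv has actually finished
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1
```

On a CUDA device the first interval is tiny and the second absorbs the compute time; on CPU both calls are synchronous, so the copy interval is near zero.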
By the way, it surprised me that you successfully trained ENet to performance comparable to my trained full model. Would you be willing to share your device configuration and training parameters? (Feel free to ignore this if you'd rather not.)
from penet_icra2021.
Actually, for the screenshots above, I ran your pre-trained PENet, not ENet 😄. All other parameters are the same: `python main.py -b 1 -n pe --evaluate pe.pth.tar`. But I also tried it with just ENet, and the increased latency was also present.
Regarding the latency: I am not sure this is due to transfers between CPU and GPU. I am skeptical because I also tried running this model on a Jetson Nano (vanilla, no modifications):
These were my results (modified code, with `print(pred)` or just `str(pred)`): [screenshots not preserved]
In addition, I was tripping the GPU watchdog timer of the Jetson Nano at the same line 268 where the metrics are computed (which should be purely CPU operations), rather than during model inference.
I find it hard to believe that printing prediction tensors on the Jetson Nano takes >10 s while model inference takes just 2 s. [The Jetson Nano uses unified memory for both CPU and GPU, so no data transfer is needed.] This is why my previous conclusion was that model prediction in PyTorch is executed lazily, and the model does not actually run until the prediction tensors are needed. I think the three experiments you showed triggered model execution, while the fourth did not (but I am not sure).
I'm not sure; what do you think we can do to check whether model inference actually completes with the `pred = model(batch_data)` call (without doing any GPU-to-CPU transfer, so that it is convincing)? In the meantime I'll try some more experiments to verify.
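One way to time the forward pass without moving any prediction data to the CPU might be CUDA events, which record timestamps on the device itself. This is only a sketch and assumes a CUDA device is present; `fn` is a hypothetical zero-argument callable wrapping the forward pass:

```python
import torch


def cuda_time(fn):
    """Time fn's GPU work with CUDA events; no GPU-to-CPU tensor copy.

    The timestamps are recorded on-device; only the final float from
    elapsed_time() crosses to the host, so the prediction tensor itself
    never moves.
    """
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    out = fn()
    end.record()
    end.synchronize()  # wait until the GPU actually reaches the `end` event
    return out, start.elapsed_time(end)  # elapsed time in milliseconds
```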
Okay, I seem to have found something that might work to eagerly perform the model inference: `torch.cuda.synchronize()`.
(Similar model inference time measurement issues from: sacmehta/ESPNet#57 and wutianyiRosun/CGNet#2)
If you replace `print(pred)` with `torch.cuda.synchronize()`, the runtimes are the same. I think this is because CUDA execution is asynchronous with respect to the PyTorch CPU thread, which is why the measured inference time appeared faster than how long CUDA actually took to finish.
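A minimal sketch of that measurement (my own illustration, not the repository's code; `model` and `batch` are placeholders), which falls back to plain wall-clock timing when no GPU is present:

```python
import time

import torch


def wall_clock_inference_time(model, batch):
    """Measure one forward pass, forcing queued CUDA work to finish.

    Replacing print(pred) with torch.cuda.synchronize() gives the same
    runtimes because both force the queued kernels to complete; the
    synchronize() call just does it without formatting or copying pred.
    """
    with torch.no_grad():
        if torch.cuda.is_available():
            torch.cuda.synchronize()  # drain kernels queued before timing
        t0 = time.perf_counter()
        pred = model(batch)
        if torch.cuda.is_available():
            torch.cuda.synchronize()  # wait for this forward pass to finish
        return pred, time.perf_counter() - t0
```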
This is a problem I was not aware of before, as I followed the implementation of https://github.com/fangchangma/self-supervised-depth-completion. I think your intuition is right and I will look into it.
Okay, thank you. Let us know how it goes. 😀 In any case, the model performance is still state-of-the-art. My current research direction is actually in running fast depth completion on the edge, which is why I took an interest in your paper. My next experiments will be in trying to minify your network and reduce parameters to run it faster 🙂
Proper inference time is now reported on the project page. Thanks for pointing out this problem!
Got it, thank you for updating!