inference.py is currently only used for the KITTI viewer; you can check the inference steps in viewer.py.
There is no way to run inference on a single example from the command line for now; you need to use evaluate in train.py to predict the entire test set, or use viewer.py to run inference in the GUI.
Inference steps:
- read the point cloud [N, 4] and the calib matrices from file (calib is needed to remove points outside the camera frustum and to generate the 2D bbox / camera-frame 3D box)
- use remove_outside_points to remove those points.
- generate anchors (and anchor masks, if you remove empty anchors), then use prep_pointcloud to get the example dict (at predict time you only need to generate voxels, num_points_per_voxel, and coordinates)
- pass the example dict to net.call() and get the results.
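The voxel-generation step above can be sketched with a toy voxelizer. This is a simplified stand-in for the repo's actual VoxelGenerator / prep_pointcloud, not its code; the function name, bucketing scheme, and max_points default are illustrative assumptions:

```python
import numpy as np

def simple_voxelize(points, voxel_size, pc_range, max_points=35):
    """Toy stand-in for the voxel generator: buckets points [N, 4] into
    voxels and returns the three arrays needed at predict time
    (voxels, num_points_per_voxel, coordinates)."""
    pc_range = np.asarray(pc_range, dtype=np.float32)
    voxel_size = np.asarray(voxel_size, dtype=np.float32)
    # 1. drop points outside the configured range
    keep = np.all((points[:, :3] >= pc_range[:3]) &
                  (points[:, :3] < pc_range[3:]), axis=1)
    points = points[keep]
    # 2. integer voxel index per point, reversed to (z, y, x) order
    idx = ((points[:, :3] - pc_range[:3]) / voxel_size).astype(np.int32)[:, ::-1]
    voxel_of = {}
    voxels, num_points, coords = [], [], []
    for p, c in zip(points, map(tuple, idx)):
        if c not in voxel_of:
            voxel_of[c] = len(voxels)
            voxels.append(np.zeros((max_points, points.shape[1]), points.dtype))
            num_points.append(0)
            coords.append(c)
        i = voxel_of[c]
        if num_points[i] < max_points:  # extra points in a full voxel are dropped
            voxels[i][num_points[i]] = p
            num_points[i] += 1
    return (np.stack(voxels),
            np.array(num_points, dtype=np.int32),
            np.array(coords, dtype=np.int32))

np.random.seed(0)
pts = np.random.rand(100, 4).astype(np.float32)  # fake cloud: x, y, z, intensity
voxels, num_points, coords = simple_voxelize(
    pts, voxel_size=[0.2, 0.2, 0.2], pc_range=[0, 0, 0, 1, 1, 1])
print(voxels.shape, num_points.sum())  # all 100 points are in range here
```

The three returned arrays are what goes into the example dict at predict time; the real generator is a numba-compiled version of this idea.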
from second.pytorch.
Thanks a lot.
I just created a new .py file following your suggestions, giving an inference solution for a single example.
I guess I can just remove the code related to 'infos' if I do not care about the bbox in the camera's coordinates, right?
Yes, but you still need to create the dict that net.call expects. You also need to add some code in VoxelNet.predict to return LiDAR boxes:
second/pytorch/models/voxelnet.py, line 924:
```python
if self.lidar_only:
    predictions_dict = {
        "box3d_lidar": final_box_preds,
        "scores": final_scores,
        "label_preds": label_preds,
        "image_idx": img_idx,
    }
else:
    final_box_preds_camera = box_torch_ops.box_lidar_to_camera(
        final_box_preds, rect, Trv2c)
    # ... existing camera-frame prediction code ...
```
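For reference, a hedged sketch of assembling the input dict for the network in LiDAR-only mode. The key names mirror the ones used in second's preprocessing, but treat the exact keys and shapes as assumptions to check against your version of the code:

```python
import numpy as np

def build_example(voxels, num_points, coords, anchors):
    """Assemble a single-sample input dict for the network.
    Hypothetical sketch: key names mirror second's preprocessing
    output, but verify them against your version of the code."""
    # the batch-collate step prepends a batch index column to coords
    batch_coords = np.pad(coords, ((0, 0), (1, 0)), mode="constant")
    return {
        "voxels": voxels,                     # [num_voxels, max_points, 4]
        "num_points": num_points,             # [num_voxels]
        "coordinates": batch_coords,          # [num_voxels, 4] as (batch, z, y, x)
        "anchors": anchors[np.newaxis, ...],  # add batch dimension
    }

ex = build_example(np.zeros((10, 35, 4), np.float32),
                   np.ones(10, np.int32),
                   np.zeros((10, 3), np.int32),
                   np.zeros((100, 7), np.float32))
print(ex["coordinates"].shape, ex["anchors"].shape)  # (10, 4) (1, 100, 7)
```

The arrays would then be converted to torch tensors and moved to the GPU before the forward pass.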
Thank you. Closing the issue.
Hi! @bigsheep2012
I'm trying to use the pretrained model to predict results on my point cloud. Have you figured it out? Would you like to share your code? Thanks a lot!
Hello @kxhit,
I am not using a pretrained model, as @traveller59 said there might be some issues using the pretrained model with Facebook Research's SparseConvNet.
I trained the model from scratch for about one day on a 1080 Ti. I am afraid I may not be able to share the code because it was done at a company.
My approach was to first extract the voxel generator, target_assigner, and voxelnet parts to build a small version without any data augmentation, and then add the other parts.
Make modifications in box_np_ops.* carefully, as there are many small changes needed if you are using your own data.
@bigsheep2012 @bigsheep2018 @traveller59 Thanks for your reply!
I found the time consumption is much higher than stated in the paper and on the KITTI benchmark (0.05 s).
My test info is:
[14:03:35] input preparation time: 0.2631642818450928
[14:03:36] detection time: 0.291057825088501
I want to know which stage the 0.05 s specifically refers to, and why it takes me much longer. Am I doing something wrong?
Waiting for reply! Thanks!
@kxhit some code needs just-in-time (JIT) compiling at runtime, so the first run may take some time. You can see the input preparation time is very long because point_to_voxel is a numba.jit function. Subsequent runs should take much less time.
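One way to account for this warm-up cost is to discard the first call when benchmarking. A generic sketch (not code from this repo; the workload here is a placeholder for the prep + forward pass):

```python
import time

def benchmark(fn, *args, warmup=1, runs=5):
    """Average runtime of fn, discarding the first `warmup` calls so
    one-time costs (numba JIT compilation, CUDA context creation)
    are not counted."""
    for _ in range(warmup):
        fn(*args)                      # compile / warm caches, untimed
    total = 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        total += time.perf_counter() - t0
    return total / runs

# stand-in workload; in practice fn would run prep_pointcloud + net(example)
avg = benchmark(lambda: sum(x * x for x in range(10000)))
print(f"avg over 5 runs: {avg * 1e3:.3f} ms")
```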
hi @kxhit ,
Firstly, I do not think @traveller59's KITTI submission used the shared GitHub version, as GPU sparse convolution is not implemented in the GitHub code, so the time cost may differ.
Secondly, if you are using the author's original code (and the reduced point clouds mentioned in the README) without any modification, you should get a faster result. In my case, on my desktop with a 1060 6G, the detection time is 100-120 ms.
As you said, subsequent runs take much less time. Thanks! Testing on a TITAN XP 12G.
So this open-source code can't achieve 0.05 s performance now, right?
[14:37:15] input preparation time: 0.013611555099487305
[14:37:15] detection time: 0.08942961692810059
@kxhit
Yes, I think that is the case. You can refer to his paper, which is cited on the KITTI website. The GPU sparse convolution implemented by the author is not shared; this GitHub code uses the default sparse convolution from Facebook Research.
@kxhit @bigsheep2012 Currently I can't reproduce the speed on Ubuntu 18.04, PyTorch 1.0, and the newest SparseConvNet. The forward time (not including input preparation time) for point cloud 107 is 0.069 s in the current environment, but I could get 0.049 s previously on 16.04 with a 1080 Ti; you can check the deprecated KittiViewer picture in README.md.
I can't even build SparseConvNet correctly after lots of tries. For now I am using a wheel package built in a 16.04 Docker container. I have no idea about this speed problem for now.
Thank you. Closing the issue.
@bigsheep2012 I am also trying to predict on LiDAR data only (from a custom LiDAR); I do not have image or calib input. Could you please tell me which files I need to modify to remove the image and calib parameters?
I have the same problem. Have you solved it?