xinzhuma / patchnet
Code release for "Rethinking Pseudo-LiDAR Representation (ECCV 2020)".
License: MIT License
Congratulations, these results are quite impressive. I am trying to replicate them using your pre-trained models, however I am having issues preparing the data, the process gets killed after a few iterations. Is there a way to not store everything in memory before pickling (or even better, using the data directly at training time)?
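In case it helps other readers: one generic workaround, not something from this repository, is to pickle each sample as it is produced instead of accumulating the whole list in memory first. A minimal sketch, assuming a hypothetical `extract_sample(idx)` helper that returns one picklable record:

```python
import pickle

def dump_samples_incrementally(indices, out_path, extract_sample):
    # Write one pickled record per sample so peak memory stays bounded.
    # `extract_sample` is a hypothetical callable returning a picklable dict.
    with open(out_path, 'wb') as fp:
        for idx in indices:
            pickle.dump(extract_sample(idx), fp)

def load_samples(path):
    # Read the records back one by one until end of file.
    samples = []
    with open(path, 'rb') as fp:
        while True:
            try:
                samples.append(pickle.load(fp))
            except EOFError:
                break
    return samples
```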
Hello, xinzhuma.
Thanks for your great work on PatchNet. I noticed that you also provide your own implementations of FPointNet and Pseudo-LiDAR in this repository, and I tried to run both of them for testing only. The results of FPointNet are slightly different from those of the official FPointNet code.
Did you test your implementation of FPointNet, and could you share your results on the KITTI val set for reference? Thanks a lot.
It looks like box2 is never getting picked:
patchnet/lib/models/patchnet.py
Line 94 in 88d98ce
I found add_rgb in the config file, but if I change it to 'True' the network would also need to change. So when or how should I use add_rgb? Is the RGB information only used for the 2D detection?
If I want to use my own monocular point clouds, instead of the ones you provide, what are the pre-processing steps required?
May I know the purpose of the operations on lines 67, 74, and 90? I don't understand why the mean value is subtracted from the patch values and later added back to the center to obtain stage1_center. Thank you very much in advance.
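For readers unfamiliar with the pattern: subtracting the per-patch mean puts the points into a local frame so the network only has to regress a small residual, and adding the mean back converts that residual into an absolute camera-frame centre. A rough illustration of the idea (not the repository's exact code, and the tensor shapes are assumptions):

```python
import torch

def center_and_recover(patch_xyz, predicted_offset):
    """patch_xyz: (B, N, 3) points of one patch; predicted_offset: (B, 3)."""
    # Local frame: remove the patch mean so coordinates are centred at zero.
    patch_mean = patch_xyz.mean(dim=1)                  # (B, 3)
    local_points = patch_xyz - patch_mean.unsqueeze(1)  # (B, N, 3)
    # The network predicts an offset in this local frame; adding the mean
    # back gives the object centre in the original camera coordinates.
    stage1_center = predicted_offset + patch_mean
    return local_points, stage1_center
```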
Hi, @xinzhuma .
Thank you for providing the impressive PatchNet work. When I try to evaluate the pretrained model, the results do not reach the numbers reported on your GitHub page. I changed the path in 'experiments/patchnet/config_patchnet.yaml' to resume_model: 'checkpoints/checkpoint_patchnet_mono.pth', and I had already prepared the data in 'data/KITTI/object/training/label_2', but I still get poor performance (34.3/19.1/16.1).
Is there anything else I should check to reach the reported results? Thank you!
Hello! I noticed that when loading the data, you call the rotate_patch_to_center function on the x and z channels of every object box. What does this function do? Thank you very much!
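A note for other readers: the name suggests the frustum-rotation trick used in Frustum PointNets, where each patch's x/z coordinates are rotated about the camera y-axis so that the ray through the 2D box centre aligns with the z-axis, normalizing the viewing direction. A generic sketch of that idea (not necessarily this repository's implementation):

```python
import numpy as np

def rotate_to_center(points_xz, center_xz):
    """Rotate x/z coordinates so the ray to the box centre lies on the z-axis.

    points_xz: (N, 2) array of x and z values; center_xz: (2,) box centre.
    Purely illustrative of what such a function typically does.
    """
    # Angle between the camera z-axis and the ray towards the box centre.
    angle = np.arctan2(center_xz[0], center_xz[1])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s],
                    [s,  c]])
    # After this rotation the box centre maps to (0, depth).
    return points_xz @ rot.T
```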
Hello @xinzhuma, thanks for your wonderful contribution. I have two questions about the KITTI dataset and would appreciate any help:
I didn't find '2d_detections' on the KITTI website. Does '2d_detections' correspond to 'Download reference detections (L-SVM) for training and test set (800 MB)' on KITTI?
I also didn't find 'ImageSets'. Could you please tell me which file I need to download and put in this directory?
Thanks in advance.
Hello, I would like to ask whether there is code available for the paper "Accurate Monocular Object Detection via Color-Embedded 3D Reconstruction for Autonomous Driving".
Hi @xinzhuma, when I use your pretrained models for evaluation, I get the error shown in the screenshot.
I changed 'lib/helper/model_builder.py' line 4 from 'from lib.models.patchnet_vanilla import PatchNet_Vanilla' to 'from lib.models.patchnet import PatchNet'. If I shouldn't change this, could you share your 'patchnet_vanilla.py' or a 'patchnet' model?
Thanks for your excellent work and for sharing the source code with us.
Since the performance depends heavily on the 2D detector, I am curious whether you can provide the 2D detection results on the test set. I would be very grateful if you could share them with us :)
Hi, when I train following your steps, why does the loss go up as the epochs increase (within a single epoch the loss decreases)? In the first epoch the loss converges to about 22, but in the last few epochs it converges to 100+. How can I solve this problem?
Hi,
I can't generate the test set pkl, because the test set does not have a label_2 directory.
(PCT-3D) zizhang.wu@shaxbw04:/newnfs/zzwu/08_3d_code/patchnet/tools/data_prepare$ python /newnfs/zzwu/08_3d_code/patchnet/tools/data_prepare/patch_data_prepare_adabin.py
test split patch data gen: 0%| | 0/7518 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/newnfs/zzwu/08_3d_code/patchnet/tools/data_prepare/patch_data_prepare_adabin.py", line 349, in
whitelist = whitelist)
File "/newnfs/zzwu/08_3d_code/patchnet/tools/data_prepare/patch_data_prepare_adabin.py", line 76, in extract_patch_data
objects = dataset.get_label(data_idx)
File "/newnfs/zzwu/08_3d_code/patchnet/lib/datasets/kitti_dataset_adabin.py", line 69, in get_label
assert os.path.exists(label_file)
AssertionError
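For anyone hitting the same assertion: the KITTI test split ships without a label_2 directory, so one workaround is to skip label loading entirely when preparing the test split. A minimal sketch, assuming a hypothetical dataset object with `split`, `label_dir`, and `get_label` attributes:

```python
import os

def get_label_safe(dataset, data_idx):
    # The KITTI test split has no label_2 files, so return an empty object
    # list instead of asserting when labels are unavailable.
    label_file = os.path.join(dataset.label_dir, '%06d.txt' % data_idx)
    if dataset.split == 'test' or not os.path.exists(label_file):
        return []
    return dataset.get_label(data_idx)
```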
When I use the FPointNet and Pseudo-LiDAR data preprocessing to generate the three pickle files, the _val_rgb_detection.pickle file is always empty. What could be the cause, and what should I do to generate this file?
Can you provide the script or toolbox that you use to visualize 3D bounding boxes on monocular images and LiDAR scans? Thanks a lot.
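Not the authors' toolbox, but projecting 3D box corners into the image only needs the KITTI P2 matrix. A minimal sketch, assuming the eight corners are already expressed in the rectified camera frame:

```python
import numpy as np

def project_box_to_image(corners_3d, P2):
    """corners_3d: (8, 3) box corners in the rectified camera frame.
    P2: (3, 4) KITTI projection matrix. Returns (8, 2) pixel coordinates."""
    corners_hom = np.hstack([corners_3d, np.ones((8, 1))])  # (8, 4) homogeneous
    pts_2d = corners_hom @ P2.T                             # (8, 3) projected
    # Perspective division by the third (depth-like) component.
    return pts_2d[:, :2] / pts_2d[:, 2:3]
```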
Hello author,
First of all, nice work!
Question:
I want to add another backbone to the supported list, e.g. ['plainnet', 'resnet', 'resnext', 'senet', 'another backbone'].
I modified the related code, but I don't know how to load my pretrained model or how to set "no pretrained",
and "/lib/helpers/tester_helper.py" and "/lib/helpers/save_helper.py" always report errors.
Can you help me?
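In case it helps: a generic PyTorch pattern, not specific to this repository, is to either skip pretrained weights entirely or load only the checkpoint entries whose names and shapes match the new backbone. A rough sketch (the 'model_state' key is an assumption about how the checkpoint was saved):

```python
import torch

def load_pretrained_partial(model, checkpoint_path=None):
    # checkpoint_path=None means "no pretrained": keep the random initialization.
    if checkpoint_path is None:
        return model
    state = torch.load(checkpoint_path, map_location='cpu')
    # Unwrap if the checkpoint stores a wrapper dict (key name is an assumption).
    state = state.get('model_state', state)
    own = model.state_dict()
    # Keep only entries whose name and shape match the new backbone.
    matched = {k: v for k, v in state.items()
               if k in own and own[k].shape == v.shape}
    own.update(matched)
    model.load_state_dict(own)
    return model
```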
Hello, I'd like to ask whether the patches in your code are obtained directly from the ground-truth 2D boxes. If so, that would be equivalent to assuming the 2D detection is 100% correct.
I'm trying to use Pseudo-LiDAR++ for depth maps generation and run the PatchNet approach afterwards to predict bounding boxes.
The issue I'm facing is that Pseudo-LiDAR++ produces depth maps as .npy files, but PatchNet expects depth maps in PNG format. How can the numpy arrays be converted to PNG depth maps?
I'd appreciate your help :)
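In case it is useful: KITTI-style depth maps are usually stored as 16-bit PNGs holding depth in metres multiplied by 256, so one way to convert the .npy outputs (assuming they already contain metric depth) is roughly:

```python
import numpy as np
import cv2

def npy_to_kitti_png(npy_path, png_path):
    # Pseudo-LiDAR++ .npy outputs are assumed to hold metric depth in metres.
    depth = np.load(npy_path)
    # KITTI depth convention: store depth * 256 as a 16-bit single-channel PNG.
    depth_png = np.clip(depth * 256.0, 0, 65535).astype(np.uint16)
    cv2.imwrite(png_path, depth_png)  # png_path should end in .png
```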
Hi, when I was training, I encountered the following error:
2021-01-22 21:38:42,123 INFO BN Momentum: 0.500000
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=51 error=63 : OS call failed or operation not supported on this OS
Traceback (most recent call last):
File "../../tools/train_val.py", line 101, in
main()
File "../../tools/train_val.py", line 97, in main
trainer.train()
File "/home/zd/patchnet-master/lib/helpers/trainer_helper.py", line 46, in train
self.train_one_epoch()
File "/home/zd/patchnet-master/lib/helpers/trainer_helper.py", line 70, in train_one_epoch
loss, stat_dict = self.decorator(self.model, batch_data, self.cfg['decorator'])
File "/home/zd/patchnet-master/lib/helpers/decorator_helper.py", line 27, in decorator
output_dict = model(patch, one_hot_vec)
File "/home/zd/venvpy36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/zd/patchnet-master/lib/models/patchnet.py", line 87, in forward
box = result_selection_by_distance(stage1_center, box1, box2, box3)
File "/home/zd/patchnet-master/lib/models/patchnet.py", line 95, in result_selection_by_distance
disntance = torch.zeros(center.shape[0], 1).cuda()
File "/home/zd/venvpy36/lib/python3.6/site-packages/torch/cuda/init.py", line 163, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (63) : OS call failed or operation not supported on this OS at /pytorch/aten/src/THC/THCGeneral.cpp:51
The environment is Ubuntu 18.04, Python 3.6, torch 1.1.
Do you know why this error happened? Thanks!
Hello! In the rect_to_img function in calibration.py there are these two lines of code:
pts_img = (pts_2d_hom[:, 0:2].T / pts_rect_hom[:, 2]).T # (N, 2)
pts_rect_depth = pts_2d_hom[:, 2] - self.P2.T[3, 2] # depth in rect camera coord
In the first line, when handling the homogeneous coordinates, why divide by pts_rect_hom[:, 2] rather than pts_2d_hom[:, 2]? They differ slightly, because entry P2[3, 4] of the P2 matrix is not 0. Thank you!
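A note for readers following this question: the two candidate divisors differ only by the constant P2[2, 3] (the (3, 4) entry in 1-based notation), because the third row of P2 is (0, 0, 1, P2[2, 3]). A small numeric sketch with a made-up point and P2-like matrix makes the relationship explicit:

```python
import numpy as np

# Made-up rectified-camera point and P2-style projection matrix (illustrative only).
P2 = np.array([[721.5, 0.0,   609.6, 44.9],
               [0.0,   721.5, 172.9, 0.2],
               [0.0,   0.0,   1.0,   2.7e-3]])
pt_rect_hom = np.array([1.0, 1.5, 20.0, 1.0])  # (x, y, z, 1) in the rect frame

pt_2d_hom = P2 @ pt_rect_hom
# The third row of P2 is (0, 0, 1, P2[2, 3]), so the projected third component
# equals the rectified depth plus that small constant:
print(pt_2d_hom[2])               # z + P2[2, 3]
print(pt_rect_hom[2] + P2[2, 3])  # same value
print(pt_rect_hom[2])             # the divisor used in the repository's code
```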
Hello,
I just successfully generated the training data and moved on to the training step. I ran "python ../../tools/train_val.py --config config_patchnet.yaml", but it returns an error:
" File "root/lib/datasets/patch_dataset.py", line 41, in __init__
with open(self.pickle_file, 'rb') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'training/data/path'"
I am confused about where this file or directory is supposed to be; any suggestions would be helpful!
Thanks in advance.