air-thu / dair-v2x
License: Apache License 2.0
Hi, according to the README in "configs/vic3d/late-fusion-pointcloud/pointpillars/", performance is evaluated directly after obtaining the two models trained on the infrastructure side and the vehicle side respectively.
So, when is the "MLP to predict their velocities" shown in Figure 3 trained?
Looking forward to your reply. Thank you.
Hello, this is the result I got when converting 000010.json into 000010.txt while reproducing early fusion. I compared the two and found that all fields match except x, y, and z, which differ considerably. Is this normal?
{"type": "car", "occluded_state": 1, "truncated_state": 0, "alpha": -1.4830296047472067, "2d_box": {"xmin": 941.608582, "ymin": 564.241882, "xmax": 961.3516239999999, "ymax": 579.840454}, "3d_dimensions": {"h": 1.511236, "w": 1.775471, "l": 4.389911}, "3d_location": {"x": 92.3319499999998, "y": 9.902099000000167, "z": -0.819973200000002}, "rotation": 0.019190620000032486, "world_8_points": [[94.50946618443342, 10.831791008931262, -1.5755911999999999], [94.5435364823912, 9.056646934034948, -1.5755911999999959], [90.1544338155663, 8.972406991068505, -1.5755912000000005], [90.12036351760852, 10.747551065964819, -1.5755912000000045], [94.50946618443376, 10.831791008930919, -0.06435519999999673], [94.54353648239108, 9.056646934035058, -0.06435519999999925], [90.15443381556663, 8.972406991068617, -0.06435520000000103], [90.12036351760862, 10.747551065964931, -0.06435520000000294]]}.
Car 0 1 -1.4830296047472367 941.608582 564.241882 961.3516239999999 579.840454 1.511236 4.389911 1.775471 -4.954665545984375 -0.12774647902019565 92.9216698325848 -0.019190620000032486
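If it helps anyone comparing the two formats: a likely explanation for the x/y/z gap is that the JSON stores 3d_location in the LiDAR frame while the KITTI-style .txt stores the location in the camera frame, so a rigid transform sits between them. A minimal sketch of such a transform; the rotation R and translation t below are made-up illustrations, the real values come from the dataset's calib files:

```python
import numpy as np

# Hypothetical LiDAR -> camera extrinsic (for illustration only; read the
# actual R and t from the calib files shipped with the dataset).
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.1, -0.2, 0.3])

p_lidar = np.array([92.33195, 9.902099, -0.8199732])  # 3d_location from the JSON (rounded)
p_camera = R @ p_lidar + t                            # what a camera-frame label would store
print(p_camera)
```

Applying the real extrinsic to the JSON location and checking it against the .txt location would confirm whether the difference is just this frame change.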
Hello, thank you for your outstanding work, but could you please answer my question? Have "Vehicle", "Pedestrian", and "Cyclist" been redefined, as below? I ask because I tried DAIR-V2X-I detection in OpenPCDet with PointPillars but got different results.
My results for DAIR-V2X-I detection in OpenPCDet with PointPillars:
May I ask roughly when the dataset challenge will be released?
May I ask roughly when the ImvoxelNet .pth checkpoints for the vehicle-infrastructure cooperative dataset will be released?
Hello, sorry to bother you; I am new to this field. I am trying to use your pretrained model for point-cloud early-fusion evaluation. The .sh script deletes the cache folder, but I can't find that folder. Should I create a new cache folder myself? Also, the .sh script seems to call mmdet3d_anymodel_lidar_early.py, which creates a new folder inside the cache folder. Doesn't that conflict with the rm -r on the cache? Thank you for your patience!
Thanks for your great work!
Besides mmdet3d==0.17.1, I would like to know the exact environment configuration, including the versions of CUDA, PyTorch, mmcv, mmdetection, and mmseg, so that I can reproduce your experimental results more accurately.
Hi, I want to know how the AB (average communication cost in bytes) is calculated.
The code is as follows:
def send(self, key, val):
    self.data[key] = val
    if isinstance(val, np.ndarray):
        cur_bytes = val.size * 8              # 8 bytes per array element
    elif type(val) in [int, float]:
        cur_bytes = 8
    elif isinstance(val, list):
        cur_bytes = np.array(val).size * 8
    elif type(val) is str:
        cur_bytes = len(val)
    if key.endswith("boxes"):
        cur_bytes = cur_bytes * 7 / 24        # 8 corners x 3 coords -> 7 box parameters
    self.cur_bytes += cur_bytes
I want to know which of these branches is used to count the point cloud and which the boxes.
For the point cloud, each point has 4 float values and each float takes 4 bytes, so the AB of the point cloud should be N*4*4 (N is the number of points in each file).
For boxes, each box has 8 vertices and each vertex has 3 coordinates; if these are also floats, the AB of the boxes should be M*8*3*4 (M is the number of boxes in each file).
Is my understanding correct? Or could you explain the logic of the source code?
Thank you!
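To sanity-check the bookkeeping above, here is a self-contained sketch that mirrors the send() accounting (my reading of the snippet, not an authoritative answer from the repo). Two things stand out: arrays are counted at 8 bytes per element rather than 4, and keys ending in "boxes" get the 7/24 correction because an 8x3 corner array carries the same information as 7 box parameters (x, y, z, l, w, h, yaw):

```python
import numpy as np

def payload_bytes(key, val):
    """Mirror of the send() accounting above (a sketch, not the actual source)."""
    if isinstance(val, np.ndarray):
        cur = val.size * 8            # counted as 8 bytes per element (float64)
    elif isinstance(val, (int, float)):
        cur = 8
    elif isinstance(val, list):
        cur = np.array(val).size * 8
    elif isinstance(val, str):
        cur = len(val)
    else:
        cur = 0
    if key.endswith("boxes"):
        cur = cur * 7 / 24            # 8 corners x 3 coords -> 7 box parameters
    return cur

points = np.zeros((1000, 4))          # N = 1000 points, 4 channels each
boxes = np.zeros((5, 8, 3))           # M = 5 boxes stored as 8x3 corner arrays

print(payload_bytes("points", points))      # 1000 * 4 * 8 = 32000
print(payload_bytes("pred_boxes", boxes))   # 5 * 8 * 3 * 8 * 7/24 = 280
```

So under this accounting a point cloud costs N*4*8 bytes (not N*4*4), and M corner boxes cost M*8*3*8 * 7/24 = M*7*8 bytes, i.e. each box is billed as 7 float64 values.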
Hello, may I ask what the focal length of the roadside camera lens is? I would like to use it to compute the roadside camera's FOV.
How do I properly project the LiDAR point cloud onto the camera image? Is the extrinsic calibration accurate?
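In case a concrete reference helps while checking the calibration, this is the standard pinhole pipeline (LiDAR point to camera frame via the 4x4 extrinsic, then to pixels via the 3x3 intrinsic). The names T_cam_lidar and K are placeholders for whatever the dataset's calib files provide, and the toy values below are made up:

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project Nx3 LiDAR points to pixel coordinates (standard pinhole sketch).

    T_cam_lidar: 4x4 LiDAR->camera extrinsic; K: 3x3 camera intrinsic matrix.
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous Nx4
    cam = (T_cam_lidar @ pts_h.T)[:3]          # 3xN points in the camera frame
    in_front = cam[2] > 0                      # drop points behind the camera
    uvw = K @ cam[:, in_front]
    return (uvw[:2] / uvw[2]).T                # pixel coordinates (u, v)

# Toy check: with an identity extrinsic, a point straight ahead
# must land exactly on the principal point (cx, cy).
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
T = np.eye(4)
print(project_lidar_to_image(np.array([[0.0, 0.0, 10.0]]), T, K))
```

If projections computed this way drift systematically, the extrinsic (or an unaccounted image resize, see below in this thread) is the usual suspect.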
Hi, thanks for your inspiring work!
Following vic3d/late-fusion-pointcloud/pointpillars/README.md, the results reproduced with trainval_config_i.py and trainval_config_v.py are not as good as those from your provided checkpoints (inf-model & veh-model).
car 3d IoU threshold 0.30, Average Precision = 60.87
car 3d IoU threshold 0.50, Average Precision = 49.24
car 3d IoU threshold 0.70, Average Precision = 29.07
car bev IoU threshold 0.30, Average Precision = 64.03
car bev IoU threshold 0.50, Average Precision = 54.57
car bev IoU threshold 0.70, Average Precision = 44.08
Average Communication Cost = 927.07 Bytes
car 3d IoU threshold 0.30, Average Precision = 63.40
car 3d IoU threshold 0.50, Average Precision = 53.36
car 3d IoU threshold 0.70, Average Precision = 37.28
car bev IoU threshold 0.30, Average Precision = 65.26
car bev IoU threshold 0.50, Average Precision = 59.16
car bev IoU threshold 0.70, Average Precision = 50.53
Average Communication Cost = 897.99 Bytes
How to reproduce the results from your provided checkpoints?
Any advice would be greatly appreciated!
The link is here: https://github.com/AIR-THU/DAIR-V2X/tree/main/configs/vic3d/early-fusion-pointcloud/pointpillars
An error occurred when the third step was executed. It ran all night, and the next day it looked like this. Could you tell me what happened? Thank you very much; the error screenshot is below.
There is a problem when visualizing V2X-V (tools/visualize/vis_label_in_image.py, line 25). For this dataset, you should use
label_path = osp.join(path, data_info["label_camera_std_path"])
instead of
label_path = osp.join(path, data_info["label_lidar_std_path"])
to get the correct image!
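A defensive variant of that fix (a hypothetical helper, not code from the repo) would prefer the camera-frame label when the data_info entry provides one and fall back to the LiDAR-frame label otherwise:

```python
import os.path as osp

def pick_label_path(path, data_info):
    """Prefer camera-frame labels when available; fall back to LiDAR-frame labels.

    Hypothetical helper around the vis_label_in_image.py fix above.
    """
    key = ("label_camera_std_path"
           if "label_camera_std_path" in data_info
           else "label_lidar_std_path")
    return osp.join(path, data_info[key])

print(pick_label_path("data", {"label_camera_std_path": "label/camera/000009.json"}))
```

This way the same visualization script works for splits that only ship one of the two label types.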
@haibao-yu
When I ran the following command:
python tools/dataset_converter/dair2kitti.py --source-root ./data/DAIR-V2X/DAIR-V2X-I/single-infrastructure-side \
    --target-root ./data/DAIR-V2X/DAIR-V2X-I/single-infrastructure-side \
    --split-path ./data/split_datas/single-infrastructure-split-data.json \
    --label-type lidar --sensor-view infrastructure
The following error was encountered:
================ Start to Convert ================
================ Start to Copy Raw Data ================
================ Start to Generate Label ================
Traceback (most recent call last):
File "tools/dataset_converter/dair2kitti.py", line 80, in <module>
json2kitti(json_root, kitti_label_root)
File "/home/shb/open-mmlab/dair-v2x/tools/dataset_converter/gen_kitti/label_json2kitti.py", line 36, in json2kitti
write_kitti_in_txt(my_json, path_txt)
File "/home/shb/open-mmlab/dair-v2x/tools/dataset_converter/gen_kitti/label_json2kitti.py", line 22, in write_kitti_in_txt
i15 = str(-item["rotation"])
TypeError: bad operand type for unary -: 'str'
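For anyone hitting the same TypeError: it means item["rotation"] is stored as a string in at least some JSON labels, so the unary minus fails. Casting before negating is a plausible one-line workaround (a sketch against write_kitti_in_txt in label_json2kitti.py, not an official fix):

```python
def negate_rotation(item):
    """Return the negated rotation as a string, tolerating string-typed JSON fields."""
    # float() accepts both "0.019" (str) and 0.019 (float), unlike the unary minus
    return str(-float(item["rotation"]))

print(negate_rotation({"rotation": "0.0191906"}))
print(negate_rotation({"rotation": 0.0191906}))
```

In the original line this would read i15 = str(-float(item["rotation"])).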
Thank you for responding to issues recently.
I wonder how the image is resized to 1920x1080 when the camera sensor is 4096x2160. Also, does this process affect the alignment of LiDAR points on the image? For example, for the stationary object below, why are the signs not aligned correctly when using the provided intrinsic matrix? Since the object is stationary, the misalignment shouldn't be due to a time difference between the LiDAR and image capture times.
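For what it's worth: if the released 1920x1080 images were produced by plainly resizing the native 4096x2160 frames, the intrinsics must be rescaled by the same per-axis factors before projecting, or everything drifts. A minimal sketch (scale_intrinsics and the K_native values are my own illustration, not the dataset's calibration):

```python
import numpy as np

def scale_intrinsics(K, src_wh, dst_wh):
    """Rescale a 3x3 intrinsic matrix for a plain image resize (no cropping)."""
    sx = dst_wh[0] / src_wh[0]        # horizontal scale factor
    sy = dst_wh[1] / src_wh[1]        # vertical scale factor
    return np.diag([sx, sy, 1.0]) @ K  # scales fx, cx by sx and fy, cy by sy

# Made-up native intrinsics purely for illustration.
K_native = np.array([[2180.0, 0, 2048], [0, 2180.0, 1080], [0, 0, 1]])
K_1080p = scale_intrinsics(K_native, (4096, 2160), (1920, 1080))
print(K_1080p)
```

Note that 1920/4096 = 0.46875 while 1080/2160 = 0.5, so a plain resize would not preserve the aspect ratio; if the images were instead cropped and then resized, the principal point (cx, cy) shifts as well, which could itself explain the misalignment and seems worth confirming with the authors.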
Thanks for your contributions to this cooperative 3D object detection dataset. I wonder how to process the original dataset so that it is suitable for the MMDetection3D training pipeline.
Hi, thanks a lot for your great contribution in V2X area!
When I followed your instructions for late fusion, some problems occurred.
I downloaded the dataset from the official website and converted it using dair2kitti.py; when I evaluated the pretrained checkpoints you provided, the mAP result was correct.
Then I wanted to train the model in mmdet3d, so I created the data with the following command:
python tools/create_data.py kitti --root-path ~/code/DAIR-V2X-main/data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side/ --out-dir ~/code/DAIR-V2X-main/data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side/
and the following files were generated successfully.
However, when I browse the dataset using the command:
python tools/misc/browse_dataset.py configs/pointpillars/trainval_config_v.py --output-dir tools/misc/veh_side/ --task det
the output xxx_points.obj files are empty when I visualize them, like this:
but there is data like this when I read the files as text,
and the xxx_gt.obj files look like this:
What could be the reason for this problem? I would appreciate it a lot if you could share your advice. Thank you!
Thanks for your great work!
I configured mmdet3d==0.17.1, but I got an error.
Building wheels for collected packages: mmdet3d
Building wheel for mmdet3d (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [667 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/mmdet3d
copying mmdet3d/version.py -> build/lib.linux-x86_64-3.7/mmdet3d
copying mmdet3d/__init__.py -> build/lib.linux-x86_64-3.7/mmdet3d
creating build/lib.linux-x86_64-3.7/mmdet3d/core
......
......
warning: no files found matching '*.cpp' under directory 'mmdet3d/.mim/ops'
warning: no files found matching '*.cu' under directory 'mmdet3d/.mim/ops'
warning: no files found matching '*.h' under directory 'mmdet3d/.mim/ops'
warning: no files found matching '*.cc' under directory 'mmdet3d/.mim/ops'
......
......
creating /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src
Emitting ninja build file /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/7] c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o
c++ -MMD -MF /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o.d -pthread -B /home/wxr/miniconda3/envs/openmmlaba/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/TH -I/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/wxr/miniconda3/envs/openmmlaba/include/python3.7m -c -c /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice.cc -o /tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/build/temp.linux-x86_64-3.7/mmdet3d/ops/spconv/src/indice.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="gcc"' '-DPYBIND11_STDLIB="libstdcpp"' '-DPYBIND11_BUILD_ABI="cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice.cc:15:10: fatal error: spconv/geometry.h: No such file or directory
#include <spconv/geometry.h>
^~~~~~~~~~~~~~~~~~~
compilation terminated.
The remaining compilation units [2/7] through [7/7] fail in the same way (I have trimmed the repeated c++/nvcc command lines); each stops on a missing spconv header:
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering.cc:15:10: fatal error: spconv/reordering.h: No such file or directory
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool.cc:15:10: fatal error: spconv/maxpool.h: No such file or directory
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/all.cc:16:10: fatal error: spconv/fused_spconv_ops.h: No such file or directory
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/maxpool_cuda.cu:16:10: fatal error: spconv/maxpool.h: No such file or directory
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/reordering_cuda.cu:16:10: fatal error: spconv/mp_helper.h: No such file or directory
/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/mmdet3d/ops/spconv/src/indice_cuda.cu:16:10: fatal error: spconv/indice.cu.h: No such file or directory
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1672, in _run_ninja_build
env=env)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 36, in <module>
File "<string>", line 34, in <module>
File "/tmp/pip-install-ccjvyqgo/mmdet3d_456ca4f41b924818bfb4f86dfe821742/setup.py", line 312, in <module>
zip_safe=False)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 709, in build_extensions
build_ext.build_extensions(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
depends=ext.depends)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 539, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1360, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/wxr/miniconda3/envs/openmmlaba/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1682, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
hint: See above for output from the failure.
Thanks for your novel V2X dataset. I am wondering when you will release dataset.py, as well as the benchmarks and checkpoints.
For training ImVoxelNet on the vehicle-infrastructure cooperative dataset, do you use the original model network directly, or are there modifications? Is there any related introduction or documentation?
This problem may be caused by the empty test split of the single-infrastructure-side dataset. Where can I get the test data and reproduce the results described in the paper?
Hello, what is the focal length of the lens used for the infrastructure-side camera, so that I can calculate the FOV of the infrastructure-side camera?
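Once the focal length in pixels (fx from the camera intrinsic matrix in the calib files) and the image width are known, the horizontal FOV follows from the pinhole model. A minimal sketch; the image width and fx in the example are placeholder assumptions, not dataset specifications:

```python
import math

def horizontal_fov_deg(fx: float, image_width: int) -> float:
    """Horizontal field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(image_width / (2.0 * fx)))

# Hypothetical example: fx = 1060 px on a 1920-px-wide image.
# print(horizontal_fov_deg(1060.0, 1920))
```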
When using ImVoxelNet, the config file configs/sv3d-inf/imvoxelnet/trainval_config.py uses the KittiMultiViewDataset dataset type. However, official mmdetection3d v0.17.1 doesn't include this dataset type. I also noticed that the ImVoxelNet repo at https://github.com/saic-vul/imvoxelnet does contain KittiMultiViewDataset, but it requires mmdetection3d v0.8.0.
I found only train and val data in the dataset; what about the test part? In https://aistudio.baidu.com/aistudio/competition/detail/522 I found https://thudair.baai.ac.cn/index, but there is nothing there to download.
Hi,
we found an inconsistency in the validation dataset. Based on the mapping in cooperative-vehicle-infrastructure/cooperative/data_info.json, the vehicle side has batch_id 54 and 55, while the infrastructure side only has batch_id 54. I would appreciate some clarification on the missing label set for batch_id 55.
Thanks!
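A quick way to double-check which batches exist on each side is to scan the per-side data_info.json files. A sketch, assuming each entry carries a `batch_id` field as described above; the file paths in the comments are hypothetical:

```python
import json

def batch_ids(path_data_info: str) -> set:
    """Collect the distinct batch_id values listed in a data_info.json file."""
    with open(path_data_info) as f:
        info = json.load(f)
    return {frame["batch_id"] for frame in info if "batch_id" in frame}

# Hypothetical usage:
# veh = batch_ids("vehicle-side/data_info.json")
# inf = batch_ids("infrastructure-side/data_info.json")
# print("batches only on the vehicle side:", veh - inf)
```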
When I activate the mmdetection3d virtual environment, and run the following code:
python tools/dataset_converter/dair2kitti.py --source-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/infrastructure-side \
--target-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/infrastructure-side \
--split-path ./data/split_datas/cooperative-split-data.json \
--label-type lidar --sensor-view infrastructure --no-classmerge
python tools/dataset_converter/dair2kitti.py --source-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side \
--target-root ./data/DAIR-V2X/cooperative-vehicle-infrastructure/vehicle-side \
--split-path ./data/split_datas/cooperative-split-data.json \
--label-type lidar --sensor-view vehicle --no-classmerge
All of the output is shown below:
================ Start to Convert ================
================ Start to Copy Raw Data ================
However, other than copying the raw data, no further operations seem to run before the process terminates. May I know the reason?
I noticed a problem: in the dataset provided on your official website, some LiDAR annotations are missing. I suspect the annotation was done based on camera-view occlusion, which causes these missing labels. If you check 000917.json and project its bounding boxes into the LiDAR coordinate system, you will see that some obvious nearby vehicles are not annotated, which is unacceptable for LiDAR-only detection. I am grateful for your open-source work, but this is really troubling. Finally, there is also a projection-inaccuracy problem.
Hi, thanks for your excellent work!
I want to know how to pin the version to mmdetection3d==0.17.1; the mmdetection3d tutorial doesn't mention that.
Hello, I'd like to ask about the image-only late-fusion part of the code. Where does the fusion actually happen? Looking through the late-fusion-related code, I can only find the separate detection and evaluation for the vehicle side and the infrastructure side. Where in the code is the result-level fusion of the two performed? Thanks.
Appreciate your work to provide the first real-world cooperative perception dataset! I am wondering whether there is any possibility of making the dataset compatible with the current popular codebases, such as OpenPCDet or MMdet3D?
Thanks for your work~
When I run the evaluation demo, I get the error below:
Traceback (most recent call last):
File "eval.py", line 71, in <module>
SUPPROTED_MODELS[args.model].add_arguments(parser)
KeyError: 'single_veh'
How can I fix it?
After converting the DAIR roadside data to KITTI format and visualizing it, I found that the vehicle headings are rotated 90 degrees clockwise, i.e. the front of each vehicle points to the vehicle's right side. How can this be fixed?
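A 90-degree heading offset after a DAIR-to-KITTI conversion usually indicates a mismatch between the LiDAR yaw convention and KITTI's rotation_y convention. A hedged sketch of the usual workaround; whether to add or subtract pi/2, and where to apply it in dair2kitti.py, depends on the actual conventions and should be verified visually:

```python
import math

def shift_yaw(yaw: float, offset: float = math.pi / 2) -> float:
    """Shift a heading angle by `offset` and wrap it back into (-pi, pi]."""
    yaw += offset
    while yaw > math.pi:
        yaw -= 2.0 * math.pi
    while yaw <= -math.pi:
        yaw += 2.0 * math.pi
    return yaw
```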
Hi, I have followed configs/vic3d/late-fusion-pointcloud/pointpillars/README.md to prepare the dataset, but when I try to train the PointPillars model with trainval_config_i.py, this error occurs:
No such file or directory: '../../../../data/DAIR-V2X/cooperative-vehicle-infrastructure/infrastructure-side//kitti_infos_train.pkl'
I have checked the converted dataset, which indeed doesn't contain kitti_infos_train.pkl. How can I generate this file? Thank you!
Could you tell me the camera product model used for the roadside dataset? Thanks!
Hi!
When I tried to convert point cloud data from the infrastructure coordinate system to the vehicle coordinate system according to this guidance, I found that the code ran very slowly, taking about 20 hours.
After optimizing it with vectorization and multiprocessing, the whole conversion takes only about 20 minutes (I used 16 processes).
The modified code is below; I hope it is correct and helpful.
import argparse
import errno
import json
import os
from concurrent import futures

import numpy as np
import open3d as o3d
from pypcd import pypcd
from tqdm import tqdm


def read_json(path_json):
    with open(path_json, "r") as load_f:
        my_json = json.load(load_f)
    return my_json


def mkdir_p(path):
    try:
        os.makedirs(path)
    except OSError as exc:  # Python >2.5
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else:
            raise


def get_virtuallidar2world(path_virtuallidar2world):
    virtuallidar2world = read_json(path_virtuallidar2world)
    rotation = virtuallidar2world["rotation"]
    translation = virtuallidar2world["translation"]
    delta_x = virtuallidar2world["relative_error"]["delta_x"]
    delta_y = virtuallidar2world["relative_error"]["delta_y"]
    return rotation, translation, delta_x, delta_y


def get_novatel2world(path_novatel2world):
    novatel2world = read_json(path_novatel2world)
    return novatel2world["rotation"], novatel2world["translation"]


def get_lidar2novatel(path_lidar2novatel):
    lidar2novatel = read_json(path_lidar2novatel)
    return lidar2novatel["transform"]["rotation"], lidar2novatel["transform"]["translation"]


def get_data(data_info, path_pcd):
    # Find the data_info entry whose point cloud file name matches path_pcd.
    for data in data_info:
        if os.path.split(path_pcd)[-1] == os.path.split(data["pointcloud_path"])[-1]:
            return data


def trans(input_point, translation, rotation):
    # Rigid transform of (3, n) points: p' = R @ p + t.
    input_point = np.array(input_point).reshape(3, -1)
    translation = np.array(translation).reshape(3, 1)
    rotation = np.array(rotation).reshape(3, 3)
    return np.dot(rotation, input_point) + translation


def rev_matrix(R):
    return np.linalg.inv(np.array(R).reshape(3, 3))


def trans_point_i2v(input_point, path_virtuallidar2world, path_novatel2world, path_lidar2novatel):
    # virtuallidar -> world
    rotation, translation, delta_x, delta_y = get_virtuallidar2world(path_virtuallidar2world)
    point = trans(input_point, translation, rotation) + np.array([delta_x, delta_y, 0]).reshape(3, 1)

    # world -> novatel (inverse of novatel -> world)
    rotation, translation = get_novatel2world(path_novatel2world)
    new_rotation = rev_matrix(rotation)
    new_translation = -np.dot(new_rotation, np.array(translation).reshape(3, 1))
    point = trans(point, new_translation, new_rotation)

    # novatel -> lidar (inverse of lidar -> novatel)
    rotation, translation = get_lidar2novatel(path_lidar2novatel)
    new_rotation = rev_matrix(rotation)
    new_translation = -np.dot(new_rotation, np.array(translation).reshape(3, 1))
    point = trans(point, new_translation, new_rotation)

    return point.T


def read_pcd(path_pcd):
    pcd = o3d.io.read_point_cloud(path_pcd)
    return np.asarray(pcd.points)


def show_pcd(path_pcd):
    # draw_geometries expects geometry objects, not a numpy array.
    pcd = o3d.io.read_point_cloud(path_pcd)
    o3d.visualization.draw_geometries([pcd])


def write_pcd(path_pcd, new_points, path_save):
    pc = pypcd.PointCloud.from_path(path_pcd)
    pc.pc_data["x"] = new_points[:, 0]
    pc.pc_data["y"] = new_points[:, 1]
    pc.pc_data["z"] = new_points[:, 2]
    pc.save_pcd(path_save, compression="binary_compressed")


def trans_pcd_i2v(path_pcd, path_virtuallidar2world, path_novatel2world, path_lidar2novatel, path_save):
    points = read_pcd(path_pcd)  # (n, 3)
    new_points = trans_point_i2v(points.T, path_virtuallidar2world, path_novatel2world, path_lidar2novatel)  # (n, 3)
    write_pcd(path_pcd, new_points, path_save)


def map_func(data, path_c, path_dest, i_data_info, v_data_info):
    path_pcd_i = os.path.join(path_c, data["infrastructure_pointcloud_path"])
    path_pcd_v = os.path.join(path_c, data["vehicle_pointcloud_path"])
    i_data = get_data(i_data_info, path_pcd_i)
    v_data = get_data(v_data_info, path_pcd_v)
    path_virtuallidar2world = os.path.join(path_c, "infrastructure-side", i_data["calib_virtuallidar_to_world_path"])
    path_novatel2world = os.path.join(path_c, "vehicle-side", v_data["calib_novatel_to_world_path"])
    path_lidar2novatel = os.path.join(path_c, "vehicle-side", v_data["calib_lidar_to_novatel_path"])
    path_save = os.path.join(path_dest, os.path.split(path_pcd_i)[-1])
    trans_pcd_i2v(path_pcd_i, path_virtuallidar2world, path_novatel2world, path_lidar2novatel, path_save)


def get_i2v(path_c, path_dest, num_worker):
    mkdir_p(path_dest)
    c_data_info = read_json(os.path.join(path_c, "cooperative/data_info.json"))
    i_data_info = read_json(os.path.join(path_c, "infrastructure-side/data_info.json"))
    v_data_info = read_json(os.path.join(path_c, "vehicle-side/data_info.json"))
    with tqdm(total=len(c_data_info)) as pbar:
        with futures.ProcessPoolExecutor(num_worker) as executor:
            res = [
                executor.submit(map_func, data, path_c, path_dest, i_data_info, v_data_info)
                for data in c_data_info
            ]
            for _ in futures.as_completed(res):
                pbar.update(1)


parser = argparse.ArgumentParser("Convert the point cloud from infrastructure to ego-vehicle coordinates")
parser.add_argument(
    "--source-root",
    type=str,
    default="./data/DAIR-V2X/cooperative-vehicle-infrastructure",
    help="Raw data root of DAIR-V2X-C.",
)
parser.add_argument(
    "--target-root",
    type=str,
    default="./data/DAIR-V2X/cooperative-vehicle-infrastructure/vic3d-early-fusion/velodyne/lidar_i2v",
    help="Where the point clouds in ego-vehicle coordinates are written.",
)
parser.add_argument(
    "--num-worker",
    type=int,
    default=1,
    help="Number of worker processes.",
)

if __name__ == "__main__":
    args = parser.parse_args()
    get_i2v(args.source_root, args.target_root, args.num_worker)
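As a quick sanity check of the coordinate chain, a rigid transform followed by its inverse (built the same way as in trans_point_i2v above: R inverse and -R_inverse @ t) should return the original points. A self-contained numpy sketch with a reimplemented trans() of the same contract:

```python
import numpy as np

def trans(input_point, translation, rotation):
    # Same contract as the script's helper: p' = R @ p + t, points as (3, n).
    p = np.asarray(input_point, dtype=float).reshape(3, -1)
    R = np.asarray(rotation, dtype=float).reshape(3, 3)
    t = np.asarray(translation, dtype=float).reshape(3, 1)
    return np.dot(R, p) + t

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])       # 90-degree rotation about z
t = np.array([1.0, 2.0, 3.0])
pts = np.arange(15.0).reshape(3, 5)    # 5 arbitrary points

R_inv = np.linalg.inv(R)
t_inv = -np.dot(R_inv, t.reshape(3, 1))  # inverse of the rigid transform
assert np.allclose(trans(trans(pts, t, R), t_inv, R_inv), pts)
```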
When using this dataset, I found that in the ./cooperative-vehicle-infrastructure/cooperative/data_info.json file, the pcd file named in the "infrastructure_pointcloud_path" attribute sometimes doesn't exist at all. Is this a problem with the dataset?
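Before training, the gaps can be quantified by listing the entries whose referenced file is missing on disk. A sketch; the field name follows the data_info.json attribute quoted above, and the paths in the comments are hypothetical:

```python
import json
import os

def missing_pcds(data_info_path: str, root: str) -> list:
    """Return infrastructure pcd paths from data_info.json that don't exist under root."""
    with open(data_info_path) as f:
        info = json.load(f)
    return [d["infrastructure_pointcloud_path"]
            for d in info
            if not os.path.exists(os.path.join(root, d["infrastructure_pointcloud_path"]))]

# Hypothetical usage:
# gaps = missing_pcds("cooperative/data_info.json", "./cooperative-vehicle-infrastructure")
# print(len(gaps), "entries point to missing files")
```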
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling cublasCreate(handle)
Could you provide a baseline for heading (direction) evaluation?
When I ran the following command in the mmdetection3d environment:
python tools/train.py /HOME/scz3687/run/openmmlab-0.17.1/dair-v2x/configs/sv3d-inf/mvxnet/trainval_config.py --work-dir /HOME/scz3687/run/openmmlab-0.17.1/dair-v2x/work_dirs/sv3d_inf_mvxnet
The error was encountered as follows:
2022-07-27 10:18:51,994 - mmdet - INFO - workflow: [('train', 1)], max: 40 epochs
/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/data/run01/scz3687/openmmlab-0.17.1/mmdetection3d/mmdet3d/models/fusion_layers/coord_transform.py:34: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
if 'pcd_rotation' in img_meta else torch.eye(
2022-07-27 10:19:13,395 - mmdet - INFO - Epoch [1][50/10084] lr: 4.323e-04, eta: 1 day, 18:54:39, time: 0.383, data_time: 0.073, memory: 3474, loss_cls: 0.9940, loss_bbox: 3.7010, loss_dir: 0.1496, loss: 4.8445, grad_norm: 202.8270
2022-07-27 10:19:25,671 - mmdet - INFO - Epoch [1][100/10084] lr: 5.673e-04, eta: 1 day, 11:12:17, time: 0.246, data_time: 0.002, memory: 3474, loss_cls: 0.8204, loss_bbox: 1.6195, loss_dir: 0.1448, loss: 2.5846, grad_norm: 31.3928
2022-07-27 10:19:37,977 - mmdet - INFO - Epoch [1][150/10084] lr: 7.023e-04, eta: 1 day, 8:39:17, time: 0.246, data_time: 0.002, memory: 3474, loss_cls: 0.8210, loss_bbox: 1.4106, loss_dir: 0.1391, loss: 2.3707, grad_norm: 25.7827
2022-07-27 10:19:50,285 - mmdet - INFO - Epoch [1][200/10084] lr: 8.373e-04, eta: 1 day, 7:22:47, time: 0.246, data_time: 0.003, memory: 3482, loss_cls: 0.8008, loss_bbox: 1.4836, loss_dir: 0.1374, loss: 2.4218, grad_norm: 24.7188
2022-07-27 10:20:02,640 - mmdet - INFO - Epoch [1][250/10084] lr: 9.723e-04, eta: 1 day, 6:38:03, time: 0.247, data_time: 0.003, memory: 3482, loss_cls: 0.7371, loss_bbox: 1.5554, loss_dir: 0.1419, loss: 2.4344, grad_norm: 21.9774
2022-07-27 10:20:14,775 - mmdet - INFO - Epoch [1][300/10084] lr: 1.107e-03, eta: 1 day, 6:03:15, time: 0.243, data_time: 0.002, memory: 3482, loss_cls: 0.6303, loss_bbox: 1.4387, loss_dir: 0.1448, loss: 2.2138, grad_norm: 21.0055
/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py:31: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior.
return clip_grad.clip_grad_norm_(params, **self.grad_clip)
2022-07-27 10:20:27,419 - mmdet - INFO - Epoch [1][350/10084] lr: 1.242e-03, eta: 1 day, 5:48:06, time: 0.253, data_time: 0.003, memory: 3503, loss_cls: nan, loss_bbox: nan, loss_dir: nan, loss: nan, grad_norm: nan
Traceback (most recent call last):
File "tools/train.py", line 225, in <module>
main()
File "tools/train.py", line 221, in main
meta=meta)
File "/data/run01/scz3687/openmmlab-0.17.1/mmdetection3d/mmdet3d/apis/train.py", line 35, in train_model
meta=meta)
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmdet/apis/train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 51, in train
self.call_hook('after_train_iter')
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
getattr(hook, fn_name)(self)
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py", line 35, in after_train_iter
runner.outputs['loss'].backward()
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
File "/HOME/scz3687/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/autograd/function.py", line 87, in apply
return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]
File "/data/run01/scz3687/openmmlab-0.17.1/mmdetection3d/mmdet3d/ops/voxel/scatter_points.py", line 46, in backward
voxel_points_count, ctx.reduce_type)
RuntimeError: CUDA error: an illegal memory access was encountered
Do you have a visualization solution? @haibao-yu
mmdet3d/apis/test.py", line 52, in single_gpu_test
if batch_size == 1 and isinstance(data['img'][0],
TypeError: 'DataContainer' object is not subscriptable
Hello, thank you for your work on vehicle-infrastructure cooperation. Can both the vehicle side and the infrastructure side use a multi-modal object detection algorithm and then perform late fusion? Is there any related documentation? Looking forward to your reply, thank you!
tools/dataset_converter/concatenate_pcd2bin.py, line 89: it should be "path_c = args.source_root", not "path_c = argparse.source_root".
Hi everyone,
I have a question about how to set the point cloud range if I want to train in the OpenPCDet framework after converting this dataset to KITTI format.
For the KITTI dataset, the point cloud range is POINT_CLOUD_RANGE: [0, -40, -3, 70.4, 40, 1], and OpenPCDet's PointPillars config for KITTI uses POINT_CLOUD_RANGE: [0, -39.68, -3, 69.12, 39.68, 1].
If I use V2X-I, V2X-V, or V2X-C, how should I set the point cloud range?
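Whatever limits are chosen, PointPillars needs (range / voxel_size) to be an integer, and usually a multiple of the network's output stride, which is why KITTI's 70.4 m is trimmed to 69.12 m with 0.16 m pillars (432 pillars, divisible by 16). A small helper sketch; the stride of 16 is an assumption and should be matched to the actual model:

```python
def snap_range(lo: float, hi: float, voxel: float, multiple: int = 16) -> float:
    """Shrink `hi` so that (hi - lo) / voxel is an integer multiple of `multiple`."""
    n_voxels = int(round((hi - lo) / voxel)) // multiple * multiple
    return lo + n_voxels * voxel

# With 0.16 m pillars, [0, 70.4] snaps to [0, 69.12]: 432 pillars, 432 % 16 == 0.
```

For the V2X splits, one would first inspect the actual point extents (e.g. the min/max x and y over a sample of frames) and then snap the chosen limits with a helper like this.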