
hailanyi / ted

137 stars · 5 watchers · 32 forks · 626 KB

Transformation-Equivariant 3D Object Detection for Autonomous Driving

Home Page: https://arxiv.org/abs/2211.11962

License: Apache License 2.0

Python 74.73% C++ 9.10% Cuda 15.82% C 0.32% Shell 0.02%
3d-object-detection kitti autonomous-driving

ted's Introduction

Hi! I'm a PhD candidate at Xiamen University! Welcome to my GitHub page 👋.

Hai Wu's GitHub stats

  • (2022/11 - 2023/3) VirConv ranks 1st on KITTI 3D/2D/BEV detection benchmark
  • (2022/11 - 2023/1) VirConvTrack ranks 1st on KITTI tracking benchmark
  • (2022/5 - 2022/11) TED ranks 1st on KITTI 3D detection benchmark
  • (2022/9 - 2022/11) CasTrack ranks 1st on KITTI tracking benchmark
  • (2021/11 - 2022/5) CasA attains SOTA on KITTI 3D detection benchmark

ted's People

Contributors

hailanyi


ted's Issues

KITTI test set

Hello author!
I want to ask a question: in order to submit to KITTI and obtain better test results, when dividing the training set and the validation set with an 80%/20% or 90%/10% split, how should the training and validation samples be selected?
Do you have a txt file with the training/validation split already prepared? Could you share it?
This is my email: [email protected]
Looking forward to your reply. Thank you so much!
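
For reference, here is a minimal sketch (my own illustration, not the authors' official split) of how one could generate an 80%/20% split of the 7481 KITTI training frames into train.txt and val.txt; the file names and output locations are assumptions.

import random

# Illustrative only: random 80/20 split of the 7481 KITTI training frames.
# The common alternative is the standard 3712/3769 train/val split shipped with OpenPCDet.
random.seed(42)
ids = ["%06d" % i for i in range(7481)]   # frame IDs 000000 ... 007480
random.shuffle(ids)

cut = int(0.8 * len(ids))
train_ids, val_ids = sorted(ids[:cut]), sorted(ids[cut:])

with open("train.txt", "w") as f:          # hypothetical output path (e.g. an ImageSets folder)
    f.write("\n".join(train_ids) + "\n")
with open("val.txt", "w") as f:
    f.write("\n".join(val_ids) + "\n")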

About Depth Completion.

It is really nice work! Thanks for the code!
Is PENet only used to provide depth for the pseudo-points?

Pedestrian Kitti configure

Hi author, thanks for sharing your work. I am trying to train a single-class pedestrian model; can you share its config? If not, which hyper-parameters should I pay attention to? Should I focus on the FG/BG thresholds? Thanks.

CUDA error: CUBLAS_STATUS_EXECUTION_FAILED

epochs: 0%| | 0/40 [00:14<?, ?it/s]
Traceback (most recent call last): | 0/3694 [00:00<?, ?it/s]
File "train.py", line 201, in
main()
File "train.py", line 152, in main
train_model(
File "/TED/tools/train_utils/train_utils.py", line 95, in train_model
accumulated_iter = train_one_epoch(
File "TED/tools/train_utils/train_utils.py", line 47, in train_one_epoch
loss.backward()
File "/home//.local/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home//.local/lib/python3.8/site-packages/torch/autograd/init.py", line 145, in backward
Variable._execution_engine.run_backward(
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)

I am using:

PyTorch version: 1.8.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect

Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Quadro M6000 24GB
GPU 1: Quadro M6000 24GB

Nvidia driver version: 535.113.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==1.8.1+cu111
[pip3] torchaudio==0.8.1
[pip3] torchvision==0.9.1+cu111
[conda] Could not collect


and I face the above error after I run

python3 train.py --cfg_file TED-S.yaml

I am working with the LiDAR-only model.

@hailanyi let me know if you have any insight on this
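
One possible first step (my suggestion, not a confirmed fix): run a plain batched matrix multiplication on the same GPU to check whether cuBLAS itself fails on this PyTorch 1.8.1 / CUDA 11.1 / Quadro M6000 combination, independent of TED.

import torch

# Minimal isolation test: a float32 batched matmul, which typically dispatches to
# cublasSgemmStridedBatched (the call named in the error above).
device = torch.device("cuda:0")
a = torch.randn(8, 128, 64, device=device)
b = torch.randn(8, 64, 32, device=device)
c = torch.bmm(a, b)
torch.cuda.synchronize()
print("batched matmul OK:", c.shape)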

Code reproduction problem

Hello author,
I had a problem reproducing your code.
I used two RTX 3090 GPUs with batch_size set to 4 and epochs set to 80.
After training, the 3D AP (R40) of Car on the KITTI val set was only 92.07, 85.49, 84.95, which is about two points behind the 93.25, 87.99, 86.28 published on your GitHub.
Could you tell me what might be the cause of this?

Questions about the TED on the KITTI test set

TED is really great work, but I have a small question.

That is, does the "training data" in "These models are not suitable to directly report results on the KITTI test set; please use a slightly lower score threshold and train the models on all or 80% of the training data to achieve a desirable performance on the KITTI test set." refer to the KITTI training set (7481 samples) or the train split (3712 samples)?

Looking forward to your reply!

Transformation scale

Hi, thank you for the great work.

Is there any reason you fixed the transformation scale for each transformation channel?
Is there any ablation study with randomly transformed channels?

Some confusion about bev_pooling.py

Hi, I would like to ask: why is x_stride / 2 not added in the first grid generation? (I tried adding it and it seems to raise an error; could it instead be set to 70.6 manually?)
The second question is about bev_align: my understanding from the paper is that the generated grid points are initially aligned with rot_num=0, so shouldn't the grid first be back-transformed to the initial state and then forward-transformed to the current rot_num? What I mainly don't understand is why the grid is forward-transformed to the current channel and then back-transformed to rot_num=0.

Problem with the test-set results submitted to the KITTI benchmark

Hello, your work is very meaningful. When I reproduced it, I verified that the results in the paper could be achieved on the validation set. However, after submitting the test-set results to the KITTI platform, there was a significant gap between my results and yours.

What do you think the possible problems are? How can I achieve results similar to yours on the test set?

The test results of pedestrians and cyclists on the KITTI validation set are 0

My TED-S.yaml is as follows:
CLASS_NAMES: ['Car', 'Pedestrian', 'Cyclist']

DATA_CONFIG:
_BASE_CONFIG_: cfgs/dataset_configs/kitti_dataset.yaml
DATASET: 'KittiDataset'
ROT_NUM: 3
USE_VAN: True

DATA_SPLIT: {
    'train': train,
    'test': val
}

INFO_PATH: {
    'train': [kitti_infos_train.pkl],
    'test': [kitti_infos_val.pkl],
}

DATA_AUGMENTOR:
    DISABLE_AUG_LIST: ['placeholder']
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          USE_ROAD_PLANE: True
          DB_INFO_PATH:
              - kitti_dbinfos_train.pkl
          PREPARE: {
              filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
              filter_by_difficulty: [-1],
          }

          SAMPLE_GROUPS: ['Car:10', 'Pedestrian:10', 'Cyclist:10']
          NUM_POINT_FEATURES: 4
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, -0.2]
          LIMIT_WHOLE_SCENE: False

        - NAME: da_sampling
          USE_ROAD_PLANE: True
          DB_INFO_PATH:
            - kitti_dbinfos_train.pkl
          PREPARE: {
            filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
            filter_by_difficulty: [-1],
          }

          SAMPLE_GROUPS: ['Car:10', 'Pedestrian:10', 'Cyclist:10']

          MIN_SAMPLING_DIS: 0
          MAX_SAMPLING_DIS: 20
          OCCLUSION_NOISE: 0.2
          OCCLUSION_OFFSET: 2.
          SAMPLING_METHOD: 'LiDAR-aware'
          VERT_RES: 0.006
          HOR_RES: 0.003

          NUM_POINT_FEATURES: 4
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, -0.2]
          LIMIT_WHOLE_SCENE: False

        - NAME: random_local_noise
          LOCAL_ROT_RANGE: [-0.78539816, 0.78539816]
          TRANSLATION_STD: [1.0, 1.0, 0.5]
          GLOBAL_ROT_RANGE: [0.0, 0.0]
          EXTRA_WIDTH: [0.2, 0.2, 0.]

        - NAME: random_world_rotation
          WORLD_ROT_ANGLE: [-0.39269908, 0.39269908]

        - NAME: random_world_scaling
          WORLD_SCALE_RANGE: [0.95, 1.05]

        - NAME: random_local_pyramid_aug
          DROP_PROB: 0.25
          SPARSIFY_PROB: 0.05
          SPARSIFY_MAX_NUM: 50
          SWAP_PROB: 0.1
          SWAP_MAX_NUM: 50

X_TRANS:
  AUG_CONFIG_LIST:
    - NAME: world_rotation
      WORLD_ROT_ANGLE: [0.39269908, 0, 0.39269908, -0.39269908, -0.39269908, 0]
    - NAME: world_flip
      ALONG_AXIS_LIST: [0, 1, 1, 0, 1, 0]
    - NAME: world_scaling
      WORLD_SCALE_RANGE: [ 0.98, 1.02, 1., 0.98, 1.02, 1.]



POINT_FEATURE_ENCODING: {
    encoding_type: absolute_coordinates_encoding_mm,
    used_feature_list: ['x', 'y', 'z', 'intensity'],
    src_feature_list: ['x', 'y', 'z', 'intensity'],
    num_features: 4
}

DATA_PROCESSOR:
    - NAME: mask_points_and_boxes_outside_range
      REMOVE_OUTSIDE_BOXES: True

    - NAME: shuffle_points
      SHUFFLE_ENABLED: {
        'train': True,
        'test': True
      }

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.05, 0.05, 0.05]  
      MAX_POINTS_PER_VOXEL: 5
      MAX_NUMBER_OF_VOXELS: {
        'train': 16000,
        'test': 40000
      }

MODEL:
NAME: VoxelRCNN

VFE:
    NAME: MeanVFE
    MODEL: 'max'

BACKBONE_3D:
    NAME: TeVoxelBackBone8x
    NUM_FILTERS: [16, 32, 64, 64]
    RETURN_NUM_FEATURES_AS_DICT: True
    OUT_FEATURES: 64

MAP_TO_BEV:
    NAME: BEVPool
    NUM_BEV_FEATURES: 256
    ALIGN_METHOD: 'max'

BACKBONE_2D:
    NAME: BaseBEVBackbone

    LAYER_NUMS: [4, 4]
    LAYER_STRIDES: [1, 2]
    NUM_FILTERS: [64, 128]
    UPSAMPLE_STRIDES: [1, 2]
    NUM_UPSAMPLE_FILTERS: [128, 128]

DENSE_HEAD:
    NAME: AnchorHeadSingle
    CLASS_AGNOSTIC: False

    USE_DIRECTION_CLASSIFIER: True
    DIR_OFFSET: 0.78539
    DIR_LIMIT_OFFSET: 0.0
    NUM_DIR_BINS: 2

    ANCHOR_GENERATOR_CONFIG: [
        {
            'class_name': 'Car',
            'anchor_sizes': [[3.9, 1.6, 1.56]],
            'anchor_rotations': [0, 1.57],
            'anchor_bottom_heights': [-1.78],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.6,
            'unmatched_threshold': 0.45
        },
        {
            'class_name': 'Pedestrian',
            'anchor_sizes': [[0.8, 0.6, 1.73]],
            'anchor_rotations': [0, 1.57],
            'anchor_bottom_heights': [-0.6],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.5,
            'unmatched_threshold': 0.35
        },
        {
            'class_name': 'Cyclist',
            'anchor_sizes': [[1.76, 0.6, 1.73]],
            'anchor_rotations': [0, 1.57],
            'anchor_bottom_heights': [-0.6],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.5,
            'unmatched_threshold': 0.35
        }
    ]
    TARGET_ASSIGNER_CONFIG:
        NAME: AxisAlignedTargetAssigner
        POS_FRACTION: -1.0
        SAMPLE_SIZE: 512
        NORM_BY_NUM_EXAMPLES: False
        MATCH_HEIGHT: False
        BOX_CODER: ResidualCoder

    LOSS_CONFIG:
        LOSS_WEIGHTS: {
            'cls_weight': 1.0,
            'loc_weight': 2.0,
            'dir_weight': 0.2,
            'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        }


ROI_HEAD:
    NAME: TEDSHead
    CLASS_AGNOSTIC: True

    SHARED_FC: [256, 256]
    CLS_FC: [256, 256]
    REG_FC: [256, 256]
    DP_RATIO: 0.01

    NMS_CONFIG:
        TRAIN:
            NMS_TYPE: nms_gpu
            MULTI_CLASSES_NMS: False
            NMS_PRE_MAXSIZE: 4000
            NMS_POST_MAXSIZE: 512
            NMS_THRESH: 0.8
        TEST:
            NMS_TYPE: nms_gpu
            MULTI_CLASSES_NMS: False
            USE_FAST_NMS: True
            SCORE_THRESH: 0.0
            NMS_PRE_MAXSIZE: 4000
            NMS_POST_MAXSIZE: 50
            NMS_THRESH: 0.75

    ROI_GRID_POOL:
        FEATURES_SOURCE: ['x_conv3','x_conv4']
        PRE_MLP: True
        GRID_SIZE: 6
        POOL_LAYERS:
            x_conv3:
                MLPS: [[32, 32], [32, 32]]
                QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                POOL_RADIUS: [0.4, 0.8]
                NSAMPLE: [16, 16]
                POOL_METHOD: max_pool
            x_conv4:
                MLPS: [[32, 32], [32, 32]]
                QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                POOL_RADIUS: [0.8, 1.6]
                NSAMPLE: [16, 16]
                POOL_METHOD: max_pool


    TARGET_CONFIG:
        BOX_CODER: ResidualCoder
        ROI_PER_IMAGE: 160
        FG_RATIO: 0.5
        SAMPLE_ROI_BY_EACH_CLASS: True
        CLS_SCORE_TYPE: roi_iou_x
        CLS_FG_THRESH: [0.75]
        CLS_BG_THRESH: [0.25]
        CLS_BG_THRESH_LO: 0.1
        HARD_BG_RATIO: 0.8
        REG_FG_THRESH: [0.55]
        ENABLE_HARD_SAMPLING: True
        HARD_SAMPLING_THRESH: [0.5]
        HARD_SAMPLING_RATIO: [0.5]


    LOSS_CONFIG:
        CLS_LOSS: BinaryCrossEntropy
        REG_LOSS: smooth-l1
        CORNER_LOSS_REGULARIZATION: True
        GRID_3D_IOU_LOSS: False
        LOSS_WEIGHTS: {
            'rcnn_cls_weight': 1.0,
            'rcnn_reg_weight': 1.0,
            'rcnn_corner_weight': 1.0,
            'rcnn_iou3d_weight': 1.0,
            'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        }

POST_PROCESSING:
    RECALL_THRESH_LIST: [0.3, 0.5, 0.7]
    SCORE_THRESH: 0.25
    OUTPUT_RAW_SCORE: False
    EVAL_METRIC: kitti

    NMS_CONFIG:
        MULTI_CLASSES_NMS: False
        NMS_TYPE: nms_gpu
        NMS_THRESH: 0.1
        NMS_PRE_MAXSIZE: 4096
        NMS_POST_MAXSIZE: 500

OPTIMIZATION:
BATCH_SIZE_PER_GPU: 2
NUM_EPOCHS: 40

OPTIMIZER: adam_onecycle
LR: 0.01
WEIGHT_DECAY: 0.01
MOMENTUM: 0.9

MOMS: [0.95, 0.85]
PCT_START: 0.4
DIV_FACTOR: 10
DECAY_STEP_LIST: [35, 45]
LR_DECAY: 0.1
LR_CLIP: 0.0000001

LR_WARMUP: False
WARMUP_EPOCH: 1

GRAD_NORM_CLIP: 10

PENet for custom data

Hi,
Thanks for your great work! I am trying to train and test models on my custom dataset. Could you please tell me how to train a PENet model for a custom dataset? Thanks!

Without the CasA multi-stage architecture, the loss from SE-SSD, and the data augmentation from SE-SSD, does your proposal really work well?

In my extensive experiments, what really works is the CasA multi-stage module and the data augmentation from SE-SSD, which you did not mention in your paper. Did you conduct ablation experiments on SECOND or another single-stage framework with your proposed modules? I do not think they would work well without the CasA multi-stage module and the SE-SSD data augmentation.

It is very easy to achieve good experimental results on the KITTI test set: you just add CasA and the data augmentation from SE-SSD, then add something else... you know... The results will be very nice, and then you can write a good paper ^_^

I also doubt whether you used the loss from SE-SSD. It is useful for improving the moderate and hard performance, but it does not exist in your source code, which will make the results hard to reproduce ^_^

The inference time of TED-M

Hello author, I had a problem reproducing your code.
In the paper, the inference speed of TED-M is superior to SFD, but in my reproduction experiment TED-M performs worse than SFD, even much worse (SFD is 0.13 s per frame, TED-M is 2.01 s per frame).
What is the possible reason? Thanks!

How to add 'Pedestrian' and 'Cyclist' to the results?

First of all, thank you for your contribution. However, I found there is only one class, 'Car', in the evaluation results. I also want to try its performance on 'Pedestrian' and 'Cyclist'; could you please tell me how to add them? All of their evaluation numbers are equal to 0. Thank you very much in advance.

Performance gets worse after setting sync_bn == True

When I first used dist_train, I set up a multi-class training config and could not reproduce the results in the paper. Then I noticed that sync_bn defaults to False; after setting it to True and retraining, the results were so much worse that it looked like something had gone completely wrong. Does sync_bn really have such a large impact? (batch_size_per_gpu == 2, RTX 4090 x 4)
Car AP@0.70, 0.70, 0.70:
bbox AP:81.5334, 72.0265, 54.3557
bev AP:80.3433, 62.9135, 53.9542
3d AP:79.4474, 62.2500, 53.4173
aos AP:73.62, 64.44, 49.18
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:82.1130, 69.5181, 57.1575
bev AP:78.9817, 66.3993, 56.6061
3d AP:78.1806, 63.1689, 53.5205
aos AP:73.57, 61.58, 50.86
Car AP@0.70, 0.50, 0.50:
bbox AP:81.5334, 72.0265, 54.3557
bev AP:81.5277, 72.1129, 63.1280
3d AP:81.5240, 72.0833, 63.1023
aos AP:73.62, 64.44, 49.18
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:82.1130, 69.5181, 57.1575
bev AP:82.1346, 69.5302, 59.5978
3d AP:82.1239, 69.5171, 59.5820
aos AP:73.57, 61.58, 50.86
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:72.9022, 66.1238, 58.4425
bev AP:57.5996, 51.4981, 45.0078
3d AP:51.1632, 44.8705, 42.4977
aos AP:55.38, 50.21, 44.26
Pedestrian AP_R40@0.50, 0.50, 0.50:
bbox AP:71.9307, 67.1668, 60.5231
bev AP:55.1899, 49.7216, 43.5731
3d AP:50.3886, 44.6111, 39.9284
aos AP:52.64, 48.58, 43.49
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP:72.9022, 66.1238, 58.4425
bev AP:73.6792, 72.2074, 64.7820
3d AP:73.6668, 72.1938, 64.6997
aos AP:55.38, 50.21, 44.26
Pedestrian AP_R40@0.50, 0.25, 0.25:
bbox AP:71.9307, 67.1668, 60.5231
bev AP:75.4670, 70.9054, 64.2100
3d AP:75.4539, 70.8943, 64.1525
aos AP:52.64, 48.58, 43.49
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:79.8341, 69.2576, 61.3650
bev AP:77.5997, 64.6157, 57.7342
3d AP:76.9519, 57.5823, 57.0640
aos AP:72.33, 60.86, 54.70
Cyclist AP_R40@0.50, 0.50, 0.50:
bbox AP:84.7664, 66.9035, 62.2235
bev AP:82.1098, 62.4136, 58.1084
3d AP:81.2366, 59.7446, 55.4550
aos AP:75.65, 58.51, 54.53
Cyclist AP@0.50, 0.25, 0.25:
bbox AP:79.8341, 69.2576, 61.3650
bev AP:78.6522, 65.4810, 58.6798
3d AP:78.6522, 65.4810, 58.6798
aos AP:72.33, 60.86, 54.70
Cyclist AP_R40@0.50, 0.25, 0.25:
bbox AP:84.7664, 66.9035, 62.2235
bev AP:83.3066, 63.5213, 59.0797
3d AP:83.3066, 63.5213, 59.0797
aos AP:75.65, 58.51, 54.53

Can't install

Hello, I'm writing about a specific problem: I can't run setup.py because the CUDA_HOME environment variable is not set. But I have already installed CUDA:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0
I have also already installed OpenPCDet in this environment, and following its instructions I did not run into this problem.
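
A quick way to see what the build actually picks up (illustrative; the CUDA path in the comment is an example and may differ on your machine) is to check CUDA_HOME from the same Python environment used to run setup.py:

import os
import torch
from torch.utils.cpp_extension import CUDA_HOME

print("torch:", torch.__version__, "built with CUDA:", torch.version.cuda)
print("CUDA_HOME seen by torch:", CUDA_HOME)
print("CUDA_HOME environment variable:", os.environ.get("CUDA_HOME"))
# If CUDA_HOME is None here, export it before building, e.g.
#   export CUDA_HOME=/usr/local/cuda-11.5   (example path)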

Poor first-stage metrics

I ran test.py with the models you provided and found that, for both TED-S and TED-M, the first-stage recall and precision are very low (even close to 0), while the second-stage recall and precision are very high. Is this normal?

How to draw graph in tensorboard?

I want to visualize the model graph in TensorBoard, so I added the code in /tools/train_utils/train_utils.py (line 67), but it doesn't work and I get an error.

How do I change the code?

Thank you in advance.
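
For reference, here is a minimal sketch of the standard torch.utils.tensorboard add_graph usage (illustrative only; the toy model below is an assumption). Note that add_graph traces the model with tensor inputs, so detectors whose forward pass takes a dict of batched data, as OpenPCDet-style models do, often cannot be exported this way, which may be the source of the error.

import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter

# Toy model (assumption): add_graph works when forward() takes plain tensors.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

writer = SummaryWriter(log_dir="tb_graph_demo")   # hypothetical log directory
writer.add_graph(model, torch.randn(4, 8))        # trace the model on a dummy input
writer.close()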

After adding 'Pedestrian' and 'Cyclist', an error occurs: AttributeError: 'EasyDict' object has no attribute 'ROI_PER_IMAGE'

My modified TED-M.yaml file is as follows:
CLASS_NAMES: ['Car', 'Pedestrian', 'Cyclist']

DATA_CONFIG:
_BASE_CONFIG_: cfgs/dataset_configs/kitti_dataset.yaml
DATASET: 'KittiDatasetMM'
MM_PATH: 'velodyne_depth'
ROT_NUM: 3
USE_VAN: True

DATA_SPLIT: {
    'train': train,
    'test': val
}

INFO_PATH: {
    'train': [kitti_infos_train.pkl],
    'test': [kitti_infos_val.pkl],
}

DATA_AUGMENTOR:
    DISABLE_AUG_LIST: ['placeholder']
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          USE_ROAD_PLANE: True
          DB_INFO_PATH:
              - kitti_dbinfos_train_mm.pkl
          PREPARE: {
              filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
              filter_by_difficulty: [-1],
          }

          SAMPLE_GROUPS: ['Car:10', 'Pedestrian:10', 'Cyclist:10']
          NUM_POINT_FEATURES: 8
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, -0.2]
          LIMIT_WHOLE_SCENE: False

        - NAME: da_sampling
          USE_ROAD_PLANE: True
          DB_INFO_PATH:
              - kitti_dbinfos_train_mm.pkl
          PREPARE: {
              filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
              filter_by_difficulty: [-1],
          }

          SAMPLE_GROUPS: ['Car:10', 'Pedestrian:10', 'Cyclist:10']

          MIN_SAMPLING_DIS: 0
          MAX_SAMPLING_DIS: 20
          OCCLUSION_NOISE: 0.2
          OCCLUSION_OFFSET: 2.
          SAMPLING_METHOD: 'LiDAR-aware'
          VERT_RES: 0.006
          HOR_RES: 0.003

          NUM_POINT_FEATURES: 8
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, -0.2]
          LIMIT_WHOLE_SCENE: False

        - NAME: random_local_noise
          LOCAL_ROT_RANGE: [-0.78539816, 0.78539816]
          TRANSLATION_STD: [1.0, 1.0, 0.5]
          GLOBAL_ROT_RANGE: [0.0, 0.0]
          EXTRA_WIDTH: [0.2, 0.2, 0.]

        - NAME: random_world_rotation
          WORLD_ROT_ANGLE: [-0.39269908, 0.39269908]

        - NAME: random_world_scaling
          WORLD_SCALE_RANGE: [0.95, 1.05]

        - NAME: random_local_pyramid_aug
          DROP_PROB: 0.25
          SPARSIFY_PROB: 0.05
          SPARSIFY_MAX_NUM: 50
          SWAP_PROB: 0.1
          SWAP_MAX_NUM: 50

X_TRANS:
  AUG_CONFIG_LIST:
    - NAME: world_rotation
      WORLD_ROT_ANGLE: [0.39269908, 0, 0.39269908, -0.39269908, -0.39269908, 0]
    - NAME: world_flip
      ALONG_AXIS_LIST: [0, 1, 1, 0, 1, 0]
    - NAME: world_scaling
      WORLD_SCALE_RANGE: [ 0.98, 1.02, 1., 0.98, 1.02, 1.]


POINT_FEATURE_ENCODING: {
    encoding_type: absolute_coordinates_encoding_mm,
    used_feature_list: ['x', 'y', 'z', 'intensity'],
    src_feature_list: ['x', 'y', 'z', 'intensity'],
    num_features: 8
}

DATA_PROCESSOR:
    - NAME: mask_points_and_boxes_outside_range
      REMOVE_OUTSIDE_BOXES: True

    - NAME: shuffle_points
      SHUFFLE_ENABLED: {
        'train': True,
        'test': True
      }

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.05, 0.05, 0.05]
      MAX_POINTS_PER_VOXEL: 5
      MAX_NUMBER_OF_VOXELS: {
        'train': 16000,
        'test': 40000
      }

MODEL:
NAME: VoxelRCNN

VFE:
    NAME: MeanVFE
    MODEL: 'max'

BACKBONE_3D:
    NAME: TeMMVoxelBackBone8x
    NUM_FILTERS: [16, 32, 64, 64]
    RETURN_NUM_FEATURES_AS_DICT: True
    OUT_FEATURES: 64
    MM: True

MAP_TO_BEV:
    NAME: BEVPool
    NUM_BEV_FEATURES: 256
    ALIGN_METHOD: 'max'


BACKBONE_2D:
    NAME: BaseBEVBackbone

    LAYER_NUMS: [4, 4]
    LAYER_STRIDES: [1, 2]
    NUM_FILTERS: [64, 128]
    UPSAMPLE_STRIDES: [1, 2]
    NUM_UPSAMPLE_FILTERS: [128, 128]

DENSE_HEAD:
    NAME: AnchorHeadSingle
    CLASS_AGNOSTIC: False

    USE_DIRECTION_CLASSIFIER: True
    DIR_OFFSET: 0.78539
    DIR_LIMIT_OFFSET: 0.0
    NUM_DIR_BINS: 2

    ANCHOR_GENERATOR_CONFIG: [
        {
            'class_name': 'Car',
            'anchor_sizes': [[3.9, 1.6, 1.56]],
            'anchor_rotations': [0, 1.57],
            'anchor_bottom_heights': [-1.78],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.6,
            'unmatched_threshold': 0.45
        },
        {
            'class_name': 'Pedestrian',
            'anchor_sizes': [[ 0.8, 0.6, 1.73 ]],
            'anchor_rotations': [ 0, 1.57 ],
            'anchor_bottom_heights': [ -0.6 ],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.5,
            'unmatched_threshold': 0.35
        },
        {
            'class_name': 'Cyclist',
            'anchor_sizes': [[ 1.76, 0.6, 1.73 ]],
            'anchor_rotations': [ 0, 1.57 ],
            'anchor_bottom_heights': [ -0.6 ],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.5,
            'unmatched_threshold': 0.35
        }
    ]
    TARGET_ASSIGNER_CONFIG:
        NAME: AxisAlignedTargetAssigner
        POS_FRACTION: -1.0
        SAMPLE_SIZE: 512
        NORM_BY_NUM_EXAMPLES: False
        MATCH_HEIGHT: False
        BOX_CODER: ResidualCoder

    LOSS_CONFIG:
        LOSS_WEIGHTS: {
            'cls_weight': 1.0,
            'loc_weight': 2.0,
            'dir_weight': 0.2,
            'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        }


ROI_HEAD:
    NAME: TEDMHead
    CLASS_AGNOSTIC: True

    SHARED_FC: [256, 256]
    CLS_FC: [256, 256]
    REG_FC: [256, 256]
    DP_RATIO: 0.01

    PART:
      IN_CHANNEL: 256
      SIZE: 7
      GRID_OFFSETS: [0., 40.]
      FEATMAP_STRIDE: 0.4

    NMS_CONFIG:
        TRAIN:
            NMS_TYPE: nms_gpu
            MULTI_CLASSES_NMS: False
            NMS_PRE_MAXSIZE: 4000
            NMS_POST_MAXSIZE: 512
            NMS_THRESH: 0.8
        TEST:
            NMS_TYPE: nms_gpu
            MULTI_CLASSES_NMS: False
            USE_FAST_NMS: True
            SCORE_THRESH: 0.0
            NMS_PRE_MAXSIZE: 4000
            NMS_POST_MAXSIZE: 50
            NMS_THRESH: 0.75

    ROI_GRID_POOL:
        FEATURES_SOURCE: ['x_conv3','x_conv4']
        PRE_MLP: True
        GRID_SIZE: 6
        POOL_LAYERS:
            x_conv3:
                MLPS: [[32, 32], [32, 32]]
                QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                POOL_RADIUS: [0.4, 0.8]
                NSAMPLE: [16, 16]
                POOL_METHOD: max_pool
            x_conv4:
                MLPS: [[32, 32], [32, 32]]
                QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                POOL_RADIUS: [0.8, 1.6]
                NSAMPLE: [16, 16]
                POOL_METHOD: max_pool

    ROI_GRID_POOL_MM:
        FEATURES_SOURCE: ['x_conv3','x_conv4']
        PRE_MLP: True
        GRID_SIZE: 4
        POOL_LAYERS:
            x_conv3:
                MLPS: [[32, 32], [32, 32]]
                QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                POOL_RADIUS: [0.4, 0.8]
                NSAMPLE: [16, 16]
                POOL_METHOD: max_pool
            x_conv4:
                MLPS: [[32, 32], [32, 32]]
                QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                POOL_RADIUS: [0.8, 1.6]
                NSAMPLE: [16, 16]
                POOL_METHOD: max_pool


    TARGET_CONFIG:
        BOX_CODER: ResidualCoder
        STAGE0:
            ROI_PER_IMAGE: 160
            FG_RATIO: 0.5
            SAMPLE_ROI_BY_EACH_CLASS: True
            CLS_SCORE_TYPE: roi_iou_x
            CLS_FG_THRESH: [0.75]
            CLS_BG_THRESH: [0.25]
            CLS_BG_THRESH_LO: 0.1
            HARD_BG_RATIO: 0.8
            REG_FG_THRESH: [0.5]
            ENABLE_HARD_SAMPLING: False
            HARD_SAMPLING_THRESH: [0.5]
            HARD_SAMPLING_RATIO: [0.5]
        STAGE1:
            ROI_PER_IMAGE: 160
            FG_RATIO: 0.5
            SAMPLE_ROI_BY_EACH_CLASS: True
            CLS_SCORE_TYPE: roi_iou_x
            CLS_FG_THRESH: [0.75]
            CLS_BG_THRESH: [0.25]
            CLS_BG_THRESH_LO: 0.1
            HARD_BG_RATIO: 0.8
            REG_FG_THRESH: [0.55]
            ENABLE_HARD_SAMPLING: True
            HARD_SAMPLING_THRESH: [0.5]
            HARD_SAMPLING_RATIO: [0.5]
        STAGE2:
            ROI_PER_IMAGE: 160
            FG_RATIO: 0.5
            SAMPLE_ROI_BY_EACH_CLASS: True
            CLS_SCORE_TYPE: roi_iou_x
            CLS_FG_THRESH: [0.75]
            CLS_BG_THRESH: [0.25]
            CLS_BG_THRESH_LO: 0.1
            HARD_BG_RATIO: 0.8
            REG_FG_THRESH: [0.6]
            ENABLE_HARD_SAMPLING: True
            HARD_SAMPLING_THRESH: [0.5]
            HARD_SAMPLING_RATIO: [0.5]


    LOSS_CONFIG:
        CLS_LOSS: BinaryCrossEntropy
        REG_LOSS: smooth-l1
        CORNER_LOSS_REGULARIZATION: True
        GRID_3D_IOU_LOSS: False
        LOSS_WEIGHTS: {
            'rcnn_cls_weight': 1.0,
            'rcnn_reg_weight': 1.0,
            'rcnn_corner_weight': 1.0,
            'rcnn_iou3d_weight': 1.0,
            'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        }

POST_PROCESSING:
    RECALL_THRESH_LIST: [0.3, 0.5, 0.7]
    SCORE_THRESH: 0.7
    OUTPUT_RAW_SCORE: False
    EVAL_METRIC: kitti

    NMS_CONFIG:
        MULTI_CLASSES_NMS: False
        NMS_TYPE: nms_gpu
        NMS_THRESH: 0.1
        NMS_PRE_MAXSIZE: 4096
        NMS_POST_MAXSIZE: 500

OPTIMIZATION:
BATCH_SIZE_PER_GPU: 2
NUM_EPOCHS: 30

OPTIMIZER: adam_onecycle
LR: 0.01
WEIGHT_DECAY: 0.01
MOMENTUM: 0.9

MOMS: [0.95, 0.85]
PCT_START: 0.4
DIV_FACTOR: 10
DECAY_STEP_LIST: [35, 45]
LR_DECAY: 0.1
LR_CLIP: 0.0000001

LR_WARMUP: False
WARMUP_EPOCH: 1

GRAD_NORM_CLIP: 10

Colab installation issues

Hello,
I'm trying to run this project on Colab, but when I run "python setup.py develop" I get this problem:

No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
running develop
running egg_info
writing pcdet.egg-info/PKG-INFO
writing dependency_links to pcdet.egg-info/dependency_links.txt
writing requirements to pcdet.egg-info/requires.txt
writing top-level names to pcdet.egg-info/top_level.txt
adding license file 'LICENSE'
writing manifest file 'pcdet.egg-info/SOURCES.txt'
running build_ext
/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py:387: UserWarning: The detected CUDA version (11.2) has a minor version mismatch with the version that was used to compile PyTorch (11.6). Most likely this shouldn't be a problem.
  warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'pcdet.ops.votr_ops.votr_ops_cuda' extension
/usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py:497: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
Traceback (most recent call last):
  File "setup.py", line 34, in <module>
    setup(
  File "/usr/local/lib/python3.8/dist-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/develop.py", line 34, in run
    self.install_for_development()
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/develop.py", line 136, in install_for_development
    self.run_command('build_ext')
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/usr/local/lib/python3.8/dist-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 843, in build_extensions
    build_ext.build_extensions(self)
  File "/usr/local/lib/python3.8/dist-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
    _build_ext.build_ext.build_extensions(self)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/build_ext.py", line 202, in build_extension
    _build_ext.build_extension(self, ext)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
    objects = self.compiler.compile(sources,
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 649, in unix_wrap_ninja_compile
    cuda_post_cflags = unix_cuda_flags(cuda_post_cflags)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 548, in unix_cuda_flags
    cflags + _get_cuda_arch_flags(cflags))
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 1780, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'
IndexError: list index out of range

Do you know how to solve it?
Thanks in advance
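
For what it's worth, the final IndexError comes from _get_cuda_arch_flags receiving an empty architecture list, which happens when no CUDA device is visible ("No CUDA runtime is found" at the top of the log). A small diagnostic sketch (illustrative only; the Colab GPU and architecture value are assumptions):

import os
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0),
          "capability:", torch.cuda.get_device_capability(0))
else:
    # On Colab, select a GPU runtime first. As a workaround the target
    # architectures can also be pinned before building, e.g. for a T4:
    #   export TORCH_CUDA_ARCH_LIST="7.5"
    print("TORCH_CUDA_ARCH_LIST:", os.environ.get("TORCH_CUDA_ARCH_LIST"))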

Question about TED-M

Hello, Thank you for sharing your work.

As the paper mentions, the TED-M version uses both RGB and LiDAR point clouds. I wonder where these two modalities get fused.

I can see that the two modalities produce separate voxel features from TeSparseConv and get concatenated in the RoI process.

However, do you perform TeBEV pooling and TeVoxel pooling only on the LiDAR point cloud? If my understanding is right, why not also perform them on the RGB pseudo-points?

Thank you.

How was the released TED-S base model trained?

Hello author, the README mentions that the released base model needs further fine-tuning to achieve good results on the KITTI test set. Was the released base model trained only on the KITTI dataset? Compared with the default TED-S config, were any changes made? Thanks, and sorry to bother you.

Distance Aware Data Aug

"During training, similar to GT-AUG (Shi, Wang, and Li 2019), we add the sampled points and bounding box into the training sample for
data augmentation." How did you do this, can you please provide an example for one frame.
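
For illustration, here is a generic single-frame GT-AUG sketch in the spirit of SECOND-style samplers (my own simplification, not TED's exact implementation): points of sampled objects from a ground-truth database are pasted into the frame's point cloud and their boxes are appended to the labels. A real sampler additionally checks for box collisions and removes original points falling inside the pasted boxes.

import numpy as np

def gt_aug_one_frame(frame_points, frame_boxes, sampled_objects):
    """frame_points: (N, 4) x, y, z, intensity; frame_boxes: (M, 7) x, y, z, dx, dy, dz, yaw;
    sampled_objects: list of dicts with 'points' (Ki, 4) and 'box' (7,) from a GT database."""
    extra_points = [obj["points"] for obj in sampled_objects]
    extra_boxes = [obj["box"] for obj in sampled_objects]
    aug_points = np.concatenate([frame_points] + extra_points, axis=0)
    aug_boxes = np.concatenate([frame_boxes, np.stack(extra_boxes)], axis=0)
    return aug_points, aug_boxes

# Toy example for one frame (all values hypothetical):
frame_points = np.random.rand(1000, 4).astype(np.float32)
frame_boxes = np.array([[10.0, 2.0, -1.0, 3.9, 1.6, 1.56, 0.0]], dtype=np.float32)
sampled = [{"points": np.random.rand(50, 4).astype(np.float32),
            "box": np.array([20.0, -3.0, -1.0, 3.9, 1.6, 1.56, 1.57], dtype=np.float32)}]
points_aug, boxes_aug = gt_aug_one_frame(frame_points, frame_boxes, sampled)
print(points_aug.shape, boxes_aug.shape)   # (1050, 4) (2, 7)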

TED-M cannot be reproduced

Hello!

While reproducing your code, I found that only TED-S can be reproduced. For TED-M, within the 30 epochs set by default in your config file, Car 3D (R40) Mod/Hard is only 87/82, and after running it again, Hard is even below 78!

About Pedestrian and Cyclist Detection

TED has good results on car detection, but I found that the performance on pedestrian and cyclist is poor after adding them. The configuration file I use is as follows; I wrote it with reference to the configuration files of TED and Voxel R-CNN. Is there anything inappropriate?

CLASS_NAMES: ['Car', 'Pedestrian', 'Cyclist']

DATA_CONFIG:
    _BASE_CONFIG_: cfgs/dataset_configs/kitti_dataset.yaml
    DATASET: 'KittiDataset'
    ROT_NUM: 3
    USE_VAN: True

    DATA_SPLIT: {
        'train': train,
        'test': val
    }

    INFO_PATH: {
        'train': [kitti_infos_train.pkl],
        'test': [kitti_infos_val.pkl],
    }

    DATA_AUGMENTOR:
        DISABLE_AUG_LIST: ['placeholder']
        AUG_CONFIG_LIST:
            - NAME: gt_sampling
              USE_ROAD_PLANE: True
              DB_INFO_PATH:
                  - kitti_dbinfos_train.pkl
              PREPARE: {
                  filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
                  filter_by_difficulty: [-1],
              }

              SAMPLE_GROUPS: ['Car:10', 'Pedestrian:10', 'Cyclist:10']
              NUM_POINT_FEATURES: 4
              DATABASE_WITH_FAKELIDAR: False
              REMOVE_EXTRA_WIDTH: [0.0, 0.0, -0.2]
              LIMIT_WHOLE_SCENE: False

            - NAME: da_sampling
              USE_ROAD_PLANE: True
              DB_INFO_PATH:
                - kitti_dbinfos_train.pkl
              PREPARE: {
                filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
                filter_by_difficulty: [-1],
              }

              SAMPLE_GROUPS: ['Car:10', 'Pedestrian:10', 'Cyclist:10']

              MIN_SAMPLING_DIS: 0
              MAX_SAMPLING_DIS: 20
              OCCLUSION_NOISE: 0.2
              OCCLUSION_OFFSET: 2.
              SAMPLING_METHOD: 'LiDAR-aware'
              VERT_RES: 0.006
              HOR_RES: 0.003

              NUM_POINT_FEATURES: 4
              DATABASE_WITH_FAKELIDAR: False
              REMOVE_EXTRA_WIDTH: [0.0, 0.0, -0.2]
              LIMIT_WHOLE_SCENE: False

            - NAME: random_local_noise
              LOCAL_ROT_RANGE: [-0.78539816, 0.78539816]
              TRANSLATION_STD: [1.0, 1.0, 0.5]
              GLOBAL_ROT_RANGE: [0.0, 0.0]
              EXTRA_WIDTH: [0.2, 0.2, 0.]

            - NAME: random_world_rotation
              WORLD_ROT_ANGLE: [-0.39269908, 0.39269908]

            - NAME: random_world_scaling
              WORLD_SCALE_RANGE: [0.95, 1.05]

            - NAME: random_local_pyramid_aug
              DROP_PROB: 0.25
              SPARSIFY_PROB: 0.05
              SPARSIFY_MAX_NUM: 50
              SWAP_PROB: 0.1
              SWAP_MAX_NUM: 50

    X_TRANS:
      AUG_CONFIG_LIST:
        - NAME: world_rotation
          WORLD_ROT_ANGLE: [0.39269908, 0, 0.39269908, -0.39269908, -0.39269908, 0]
        - NAME: world_flip
          ALONG_AXIS_LIST: [0, 1, 1, 0, 1, 0]
        - NAME: world_scaling
          WORLD_SCALE_RANGE: [ 0.98, 1.02, 1., 0.98, 1.02, 1.]



    POINT_FEATURE_ENCODING: {
        encoding_type: absolute_coordinates_encoding_mm,
        used_feature_list: ['x', 'y', 'z', 'intensity'],
        src_feature_list: ['x', 'y', 'z', 'intensity'],
        num_features: 4
    }

    DATA_PROCESSOR:
        - NAME: mask_points_and_boxes_outside_range
          REMOVE_OUTSIDE_BOXES: True

        - NAME: shuffle_points
          SHUFFLE_ENABLED: {
            'train': True,
            'test': True
          }

        - NAME: transform_points_to_voxels
          VOXEL_SIZE: [0.05, 0.05, 0.05]  
          MAX_POINTS_PER_VOXEL: 5
          MAX_NUMBER_OF_VOXELS: {
            'train': 16000,
            'test': 40000
          }

MODEL:
    NAME: VoxelRCNN

    VFE:
        NAME: MeanVFE
        MODEL: 'max'

    BACKBONE_3D:
        NAME: TeVoxelBackBone8x
        NUM_FILTERS: [16, 32, 64, 64]
        RETURN_NUM_FEATURES_AS_DICT: True
        OUT_FEATURES: 64

    MAP_TO_BEV:
        NAME: BEVPool
        NUM_BEV_FEATURES: 256
        ALIGN_METHOD: 'max'

    BACKBONE_2D:
        NAME: BaseBEVBackbone

        LAYER_NUMS: [4, 4]
        LAYER_STRIDES: [1, 2]
        NUM_FILTERS: [64, 128]
        UPSAMPLE_STRIDES: [1, 2]
        NUM_UPSAMPLE_FILTERS: [128, 128]

    DENSE_HEAD:
        NAME: AnchorHeadSingle
        CLASS_AGNOSTIC: False

        USE_DIRECTION_CLASSIFIER: True
        DIR_OFFSET: 0.78539
        DIR_LIMIT_OFFSET: 0.0
        NUM_DIR_BINS: 2

        ANCHOR_GENERATOR_CONFIG: [
            {
                'class_name': 'Car',
                'anchor_sizes': [[3.9, 1.6, 1.56]],
                'anchor_rotations': [0, 1.57],
                'anchor_bottom_heights': [-1.78],
                'align_center': False,
                'feature_map_stride': 8,
                'matched_threshold': 0.6,
                'unmatched_threshold': 0.45
            },
            {
                'class_name': 'Pedestrian',
                'anchor_sizes': [[0.8, 0.6, 1.73]],
                'anchor_rotations': [0, 1.57],
                'anchor_bottom_heights': [-0.6],
                'align_center': False,
                'feature_map_stride': 8,
                'matched_threshold': 0.5,
                'unmatched_threshold': 0.35
            },
            {
                'class_name': 'Cyclist',
                'anchor_sizes': [[1.76, 0.6, 1.73]],
                'anchor_rotations': [0, 1.57],
                'anchor_bottom_heights': [-0.6],
                'align_center': False,
                'feature_map_stride': 8,
                'matched_threshold': 0.5,
                'unmatched_threshold': 0.35
            }
        ]
        TARGET_ASSIGNER_CONFIG:
            NAME: AxisAlignedTargetAssigner
            POS_FRACTION: -1.0
            SAMPLE_SIZE: 512
            NORM_BY_NUM_EXAMPLES: False
            MATCH_HEIGHT: False
            BOX_CODER: ResidualCoder

        LOSS_CONFIG:
            LOSS_WEIGHTS: {
                'cls_weight': 1.0,
                'loc_weight': 2.0,
                'dir_weight': 0.2,
                'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
            }


    ROI_HEAD:
        NAME: TEDSHead
        CLASS_AGNOSTIC: True

        SHARED_FC: [256, 256]
        CLS_FC: [256, 256]
        REG_FC: [256, 256]
        DP_RATIO: 0.01

        NMS_CONFIG:
            TRAIN:
                NMS_TYPE: nms_gpu
                MULTI_CLASSES_NMS: False
                NMS_PRE_MAXSIZE: 4000
                NMS_POST_MAXSIZE: 512
                NMS_THRESH: 0.8
            TEST:
                NMS_TYPE: nms_gpu
                MULTI_CLASSES_NMS: False
                USE_FAST_NMS: True
                SCORE_THRESH: 0.0
                NMS_PRE_MAXSIZE: 4000
                NMS_POST_MAXSIZE: 50
                NMS_THRESH: 0.75

        ROI_GRID_POOL:
            FEATURES_SOURCE: ['x_conv3','x_conv4']
            PRE_MLP: True
            GRID_SIZE: 6
            POOL_LAYERS:
                x_conv3:
                    MLPS: [[32, 32], [32, 32]]
                    QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                    POOL_RADIUS: [0.4, 0.8]
                    NSAMPLE: [16, 16]
                    POOL_METHOD: max_pool
                x_conv4:
                    MLPS: [[32, 32], [32, 32]]
                    QUERY_RANGES: [[2, 2, 2], [4, 4, 4]]
                    POOL_RADIUS: [0.8, 1.6]
                    NSAMPLE: [16, 16]
                    POOL_METHOD: max_pool


        TARGET_CONFIG:
            BOX_CODER: ResidualCoder
            ROI_PER_IMAGE: 160
            FG_RATIO: 0.5
            SAMPLE_ROI_BY_EACH_CLASS: True
            CLS_SCORE_TYPE: roi_iou_x
            CLS_FG_THRESH: [0.75, 0.75, 0.75]
            CLS_BG_THRESH: [0.25, 0.25, 0.25]
            CLS_BG_THRESH_LO: 0.1
            HARD_BG_RATIO: 0.8
            REG_FG_THRESH: [0.55, 0.55, 0.55]
            ENABLE_HARD_SAMPLING: True
            HARD_SAMPLING_THRESH: [0.5, 0.5, 0.5]
            HARD_SAMPLING_RATIO: [0.5, 0.5, 0.5]


        LOSS_CONFIG:
            CLS_LOSS: BinaryCrossEntropy
            REG_LOSS: smooth-l1
            CORNER_LOSS_REGULARIZATION: True
            GRID_3D_IOU_LOSS: False
            LOSS_WEIGHTS: {
                'rcnn_cls_weight': 1.0,
                'rcnn_reg_weight': 1.0,
                'rcnn_corner_weight': 1.0,
                'rcnn_iou3d_weight': 1.0,
                'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
            }

    POST_PROCESSING:
        RECALL_THRESH_LIST: [0.3, 0.5, 0.7]
        SCORE_THRESH: 0.25
        OUTPUT_RAW_SCORE: False
        EVAL_METRIC: kitti

        NMS_CONFIG:
            MULTI_CLASSES_NMS: False
            NMS_TYPE: nms_gpu
            NMS_THRESH: 0.1
            NMS_PRE_MAXSIZE: 4096
            NMS_POST_MAXSIZE: 500


OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 2
    NUM_EPOCHS: 40

    OPTIMIZER: adam_onecycle
    LR: 0.01
    WEIGHT_DECAY: 0.01
    MOMENTUM: 0.9

    MOMS: [0.95, 0.85]
    PCT_START: 0.4
    DIV_FACTOR: 10
    DECAY_STEP_LIST: [35, 45]
    LR_DECAY: 0.1
    LR_CLIP: 0.0000001

    LR_WARMUP: False
    WARMUP_EPOCH: 1

    GRAD_NORM_CLIP: 10

Getting Error While Running `python3 setup.py develop`

My commands:

git clone https://github.com/hailanyi/TED.git
cd TED
pip3 install -r requirements.txt
python3 setup.py develop

Then I get errors saying that THC/THC.h is missing.

python3 setup.py develop
running develop
/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running egg_info
writing pcdet.egg-info/PKG-INFO
writing dependency_links to pcdet.egg-info/dependency_links.txt
writing requirements to pcdet.egg-info/requires.txt
writing top-level names to pcdet.egg-info/top_level.txt
reading manifest file 'pcdet.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'pcdet.egg-info/SOURCES.txt'
running build_ext
building 'pcdet.ops.votr_ops.votr_ops_cuda' extension
Emitting ninja build file /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/7] c++ -MMD -MF /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/group_features.o.d -pthread -B /home/yueqian/App/miniconda3/envs/pytorch/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/group_features.cpp -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/group_features.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/group_features.o 
c++ -MMD -MF /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/group_features.o.d -pthread -B /home/yueqian/App/miniconda3/envs/pytorch/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/group_features.cpp -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/group_features.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
/home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/group_features.cpp:12:10: fatal error: THC/THC.h: No such file or directory
   12 | #include <THC/THC.h>
      |          ^~~~~~~~~~~
compilation terminated.
[2/7] c++ -MMD -MF /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_mapping.o.d -pthread -B /home/yueqian/App/miniconda3/envs/pytorch/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_mapping.cpp -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_mapping.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_mapping.o 
c++ -MMD -MF /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_mapping.o.d -pthread -B /home/yueqian/App/miniconda3/envs/pytorch/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_mapping.cpp -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_mapping.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
/home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_mapping.cpp:8:10: fatal error: THC/THC.h: No such file or directory
    8 | #include <THC/THC.h>
      |          ^~~~~~~~~~~
compilation terminated.
[3/7] c++ -MMD -MF /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_attention_indices.o.d -pthread -B /home/yueqian/App/miniconda3/envs/pytorch/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_attention_indices.cpp -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_attention_indices.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_attention_indices.o 
c++ -MMD -MF /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_attention_indices.o.d -pthread -B /home/yueqian/App/miniconda3/envs/pytorch/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_attention_indices.cpp -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_attention_indices.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
/home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_attention_indices.cpp:8:10: fatal error: THC/THC.h: No such file or directory
    8 | #include <THC/THC.h>
      |          ^~~~~~~~~~~
compilation terminated.
[4/7] c++ -MMD -MF /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/votr_api.o.d -pthread -B /home/yueqian/App/miniconda3/envs/pytorch/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -O2 -isystem /home/yueqian/App/miniconda3/envs/pytorch/include -fPIC -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/votr_api.cpp -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/votr_api.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
[5/7] /home/yueqian/App/miniconda3/envs/pytorch/bin/nvcc  -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_mapping_gpu.cu -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_mapping_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
[6/7] /home/yueqian/App/miniconda3/envs/pytorch/bin/nvcc  -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/build_attention_indices_gpu.cu -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/build_attention_indices_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
[7/7] /home/yueqian/App/miniconda3/envs/pytorch/bin/nvcc  -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/TH -I/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/include/THC -I/home/yueqian/App/miniconda3/envs/pytorch/include -I/home/yueqian/App/miniconda3/envs/pytorch/include/python3.9 -c -c /home/yueqian/Dev/machine-perception/assignment/TED/pcdet/ops/votr_ops/src/group_features_gpu.cu -o /home/yueqian/Dev/machine-perception/assignment/TED/build/temp.linux-x86_64-cpython-39/pcdet/ops/votr_ops/src/group_features_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=votr_ops_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1900, in _run_ninja_build
    subprocess.run(
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/yueqian/Dev/machine-perception/assignment/TED/setup.py", line 34, in <module>
    setup(
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
    return distutils.core.setup(**attrs)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
    return run_commands(dist)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
    dist.run_commands()
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
    self.run_command(cmd)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/dist.py", line 1217, in run_command
    super().run_command(command)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
    cmd_obj.run()
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/command/develop.py", line 34, in run
    self.install_for_development()
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/command/develop.py", line 114, in install_for_development
    self.run_command('build_ext')
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
    self.distribution.run_command(command)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/dist.py", line 1217, in run_command
    super().run_command(command)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
    cmd_obj.run()
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 84, in run
    _build_ext.run(self)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
    self.build_extensions()
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 843, in build_extensions
    build_ext.build_extensions(self)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 466, in build_extensions
    self._build_extensions_serial()
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 492, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 547, in build_extension
    objects = self.compiler.compile(
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 658, in unix_wrap_ninja_compile
    _write_ninja_file_and_compile_objects(
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1573, in _write_ninja_file_and_compile_objects
    _run_ninja_build(
  File "/home/yueqian/App/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1916, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
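
Note: the actual nvcc error message is not captured in this log, so the root cause is not certain. A frequent reason ninja aborts while building these custom CUDA ops is a mismatch between the CUDA toolkit that nvcc uses and the CUDA version PyTorch was compiled against, or a GPU architecture the installed toolkit cannot target. The following is only a diagnostic sketch, not part of the TED codebase; it prints the versions that usually need to agree.

    # Diagnostic sketch (not part of the TED codebase): print the CUDA version
    # PyTorch was built against, the local GPU's compute capability, and the
    # tail of `nvcc --version` (which includes the toolkit build string).
    # If these disagree, the ninja/nvcc extension build commonly fails.
    import subprocess
    import torch

    def report_cuda_versions():
        print("torch.version.cuda       :", torch.version.cuda)
        print("torch.cuda.is_available():", torch.cuda.is_available())
        if torch.cuda.is_available():
            # e.g. (8, 6) for an RTX 30xx card, matching the sm_86 flag above
            print("GPU compute capability   :", torch.cuda.get_device_capability(0))
        out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True)
        print(out.stdout.strip().splitlines()[-1])

    if __name__ == "__main__":
        report_cuda_versions()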

DA-Aug

Your paper is outstanding. May I refer to your code for the DA-Aug part?

Train - Eval Settings

Hi, thanks for your code!

I have a question about your 80%-20% train-eval split: could you share the txt files for this setting?

Thanks
Best Regards
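
For readers who only need some 80%/20% split rather than the exact one used in the paper, a minimal sketch like the one below can generate train/val id lists from KITTI's trainval.txt. The file names and paths here are assumptions, and the resulting split is not the authors' official one.

    # Hypothetical sketch: build an 80%/20% train/val split from KITTI's
    # trainval id list. Paths are assumptions; this is NOT the authors'
    # official split from the paper.
    import random
    from pathlib import Path

    def make_split(trainval_txt="ImageSets/trainval.txt",
                   train_out="ImageSets/train_80.txt",
                   val_out="ImageSets/val_20.txt",
                   train_ratio=0.8,
                   seed=42):
        ids = Path(trainval_txt).read_text().split()
        random.Random(seed).shuffle(ids)   # fixed seed so the split is reproducible
        n_train = int(len(ids) * train_ratio)
        train_ids, val_ids = sorted(ids[:n_train]), sorted(ids[n_train:])
        Path(train_out).write_text("\n".join(train_ids) + "\n")
        Path(val_out).write_text("\n".join(val_ids) + "\n")
        print(f"wrote {len(train_ids)} train ids and {len(val_ids)} val ids")

    if __name__ == "__main__":
        make_split()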

Hi, I have some questions about TED

Hi @hailanyi,
Thanks for your contribution and the great work. I have two questions:

  1. It's really hard to download large files from Google Drive in China. Do you plan to upload the preprocessed data to another netdisk, such as Baidu Netdisk or Aliyun Drive?

  2. I only have a 2080 Ti GPU with about 10 GB usable, and I'm not sure whether that is enough to train the model. Is the GPU memory listed in the Model Zoo measured with a batch size of 1?

Happy New Year, and looking forward to your reply. :)
