
Center Focusing Network for Real-Time LiDAR Panoptic Segmentation

Official code for CFNet

Center Focusing Network for Real-Time LiDAR Panoptic Segmentation, Xiaoyan Li, Gang Zhang, Boyue Wang, Yongli Hu, Baocai Yin. Accepted at CVPR 2023. Paper: https://openaccess.thecvf.com/content/CVPR2023/papers/Li_Center_Focusing_Network_for_Real-Time_LiDAR_Panoptic_Segmentation_CVPR_2023_paper.pdf

NEWS

  • [2023-02-24] CFNet is accepted to CVPR 2023.
  • [2022-11-17] CFNet achieves 63.4 PQ and 68.3 mIoU on the SemanticKITTI LiDAR Panoptic Segmentation benchmark with an inference latency of 43.5 ms on a single NVIDIA RTX 3090 GPU.

1 Dependency

CUDA>=10.1
PyTorch>=1.5.1
[email protected]
[email protected]
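
As a quick sanity check of the environment, the following minimal sketch (plain PyTorch calls, not part of the official repo) prints the installed PyTorch and CUDA versions:

# Environment sanity check (illustrative sketch only).
import torch

print("PyTorch:", torch.__version__)          # expected >= 1.5.1
print("CUDA runtime:", torch.version.cuda)    # expected >= 10.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))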

2 Training Process

2.1 Installation
cd pytorch_lib
python setup.py install
2.2 Prepare Dataset

Please download the SemanticKITTI dataset to the folder ./data; the folder structure should look like:

./data
    ├── SemanticKITTI
        ├── ...
        └── dataset/
            ├── sequences/
                ├── 00/
                │   ├── velodyne/
                │   │   ├── 000000.bin
                │   │   ├── 000001.bin
                │   │   └── ...
                │   └── labels/
                │       ├── 000000.label
                │       ├── 000001.label
                │       └── ...
                ├── 08/ # for validation
                ├── 11/ # 11-21 for testing
                └── 21/
                    └── ...
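
For reference, each velodyne/*.bin file stores the scan as float32 (x, y, z, intensity) quadruples, and each labels/*.label file stores one uint32 per point with the semantic class in the lower 16 bits and the instance ID in the upper 16 bits. A minimal loading sketch with numpy (illustrative, not part of the official repo; the paths follow the tree above):

import numpy as np

# One scan: N x 4 array of (x, y, z, intensity).
points = np.fromfile(
    "./data/SemanticKITTI/dataset/sequences/00/velodyne/000000.bin",
    dtype=np.float32).reshape(-1, 4)

# Matching panoptic label: one uint32 per point.
label = np.fromfile(
    "./data/SemanticKITTI/dataset/sequences/00/labels/000000.label",
    dtype=np.uint32)
semantic = label & 0xFFFF   # lower 16 bits: semantic class ID
instance = label >> 16      # upper 16 bits: instance ID

assert points.shape[0] == label.shape[0]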

Also download the SemanticKITTI object bank to the folder ./data; the folder structure should look like:

./data
    ├── object_bank
        ├── bicycle
        ├── bicyclist
        ├── car
        ├── motorcycle
        ├── motorcyclist
        ├── other-vehicle
        ├── person
        └── truck
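
Before launching training, a quick check that both trees above are in place can catch path mistakes early (illustrative sketch; the paths simply mirror the layout shown above):

import os

required = [
    "./data/SemanticKITTI/dataset/sequences/00/velodyne",
    "./data/SemanticKITTI/dataset/sequences/00/labels",
    "./data/object_bank/car",
    "./data/object_bank/person",
]
for path in required:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{status:8s}{path}")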
2.3 Training Script
python3 -m torch.distributed.launch --nproc_per_node=8 train.py --config config/config_mvfcev2ctx_sgd_wce_fp32_lossv2_single_newcpaug.py

3 Evaluation Process

python3 -m torch.distributed.launch --nproc_per_node=8 evaluate.py --config config/config_mvfcev2ctx_sgd_wce_fp32_lossv2_single_newcpaug.py --start_epoch 0 --end_epoch 47

Citations

@inproceedings{licfnet2023,
  author={Li, Xiaoyan and Zhang, Gang and Wang, Boyue and Hu, Yongli and Yin, Baocai},
  booktitle={2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  title={Center Focusing Network for Real-Time LiDAR Panoptic Segmentation},
  year={2023},
  pages={13425-13434},
  doi={10.1109/CVPR52729.2023.01290}
}

