
UrbanNav: An Open-Source Localization Dataset Collected in Asian Urban Canyons, Including Tokyo and Hong Kong

Home Page: https://www.polyu-ipn-lab.com/download

positioning localization gnss lidar camera imu slam urban dataset

urbannavdataset's Introduction

UrbanNav

An Open-Source Localization Dataset Collected in Asian Urban Canyons, Including Tokyo and Hong Kong

This repository is the usage page of the UrbanNav dataset. Positioning and localization in deep urban canyons using low-cost sensors is still a challenging problem. GNSS accuracy can be severely degraded in urban canyons by high-rise buildings, which cause numerous non-line-of-sight (NLOS) receptions and multipath effects. Moreover, excessive dynamic objects can also distort the performance of LiDAR and cameras. The UrbanNav dataset aims to provide a challenging data source to the community to further accelerate the study of accurate and robust positioning in urban canyons. The dataset includes sensor measurements from a GNSS receiver, LiDAR, camera and IMU, together with accurate ground truth from a SPAN-CPT system. Unlike existing datasets such as Waymo and KITTI, UrbanNav provides raw GNSS RINEX data, so users can improve GNSS positioning by working directly with the raw measurements. In short, UrbanNav places a special focus on improving GNSS positioning in urban canyons, while also providing sensor measurements from LiDAR, camera and IMU. If you encounter any problems when using the dataset and cannot find a satisfactory solution in the issue list, please open a new issue and we will reply as soon as possible.

Key words: Positioning, Localization, GNSS Positioning, Urban Canyons, GNSS Raw Data, Dynamic Objects, GNSS/INS/LiDAR/Camera, Ground Truth

Updated Version of the dataset

If you use UrbanNav for your academic research, please consider citing our paper

  • Hsu, Li-Ta, Nobuaki Kubo, Weisong Wen, Wu Chen, Zhizhao Liu, Taro Suzuki, and Junichi Meguro. "UrbanNav: An open-sourced multisensory dataset for benchmarking positioning algorithms designed for urban areas." In Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), pp. 226-256. 2021.

Important Notes:

  • About access to GNSS RINEX files: The GNSS measurements are provided as GNSS RINEX data. We will soon open-source a package, GraphGNSSLib, which provides easy access to GNSS RINEX files and publishes the data as customized ROS messages. GraphGNSSLib also provides GNSS positioning and real-time kinematic (RTK) capabilities using factor graph optimization (FGO). If you wish to use GraphGNSSLib, keep an eye on the updates of this repo. In the meantime, the RINEX files can be processed with RTKLIB; see the sketch after these notes.
  • Dataset contribution: Researchers who wish to contribute their dataset as part of the UrbanNav dataset, please feel free to contact me via email [email protected]. We hope UrbanNav can become a platform for navigation solution development, validation and sharing.
  • Algorithm validation and contribution: Researchers are welcome to share their navigation solution results and source code with the UrbanNav dataset after a code review process, e.g., code for GNSS/INS integration or LiDAR SLAM.
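
A minimal sketch of post-processing the RINEX files with RTKLIB's rnx2rtkp command-line tool, invoked here from Python. The file names are hypothetical placeholders; substitute the rover/base RINEX files from the dataset:

    import subprocess

    # -p 2 selects kinematic mode; the result is written in RTKLIB's .pos format.
    # File names below are placeholders for the dataset's RINEX files.
    subprocess.run([
        "rnx2rtkp", "-p", "2", "-o", "solution.pos",
        "rover.obs", "base.obs", "base.nav",
    ], check=True)

The resulting .pos file contains epoch-by-epoch receiver positions that can be compared against the SPAN-CPT ground truth.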

Objectives of the Dataset:

  • Benchmarking different positioning algorithms using the open-sourced dataset.

  • Raising awareness of the urgent demand for accurate navigation in highly urbanized areas, especially in Asia-Pacific regions.

Contact Authors (corresponding to issues and maintenance of the currently available dataset): Weisong Wen, Feng Huang, Li-Ta Hsu from the Intelligent Positioning and Navigation Laboratory, The Hong Kong Polytechnic University

Related Papers:

  • Wen, Weisong, Xiwei Bai, Li-Ta Hsu, and Tim Pfeifer. "GNSS/LiDAR Integration Aided by Self-Adaptive Gaussian Mixture Models in Urban Scenarios: An Approach Robust to Non-Gaussian Noise." In 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), pp. 647-654. IEEE, 2020.

If you use GraphGNSSLib for your academic research, please cite our related papers.

Work related to the UrbanNav dataset:

  • Li, Tao, Ling Pei, Yan Xiang, Qi Wu, Songpengcheng Xia, Lihao Tao, and Wenxian Yu. "P3-LOAM: PPP/LiDAR Loosely Coupled SLAM with Accurate Covariance Estimation and Robust RAIM in Urban Canyon Environment." IEEE Sensors Journal (2020).
  • Chen, Chao, and Guobin Chang. "PPPLib: An open-source software for precise point positioning using GPS, BeiDou, Galileo, GLONASS, and QZSS with multi-frequency observations." GPS Solutions 25, no. 1 (2020): 1-7.

1. Hong Kong Dataset

1.1 Sensor Setups

The platform for data collection in Hong Kong is a Honda Fit. The platform is equipped with the following sensors:

  • 3D LiDAR sensor (Velodyne HDL-32E): 360° HFOV, +10°~-30° VFOV, 80 m range, 10 Hz
  • IMU (Xsens MTi-10, 100 Hz, AHRS)
  • GNSS receiver: u-blox M8T or u-blox F9P (to be updated)
  • Camera: 1920×1200, 79.4°×96.8° FOV, 10 Hz
  • SPAN-CPT: RTK GNSS/INS, RMSE: 5 cm, 1 Hz

1.2. Dataset 1: UrbanNav-HK-Data20190428

Brief: Dataset UrbanNav-HK-Data20190428 was collected in a typical urban canyon of Hong Kong near Tsim Sha Tsui (TST), featuring high-rise buildings and numerous dynamic objects. The coordinate transformations between the sensors and the camera intrinsics can be found via Extrinsic Parameters, IMU Noise and Intrinsic Parameters of Camera.

Some key features are as follows:

Date of Collection | Total Size | Path Length | Sensors
2019/04/28         | 42.9 GB    | 2.01 km     | GNSS/LiDAR/Camera/IMU/SPAN-CPT
  • Download by Dropbox Link: Data INFO
    • UrbanNav-HK-Data20190428 (ROS)
      • ROSBAG file which includes the following topics (a minimal Python reading sketch follows this list):
        • GNSS positioning (solution directly from GNSS receiver): /ublox_node/fix
        • 3D LiDAR point clouds: /velodyne_points
        • Camera: /camera/image_color
        • IMU: /imu/data
        • SPAN-CPT: /novatel_data/inspvax
    • GNSS (RINEX)
      • GNSS RINEX files; to use them, we suggest RTKLIB
    • IMU/SPAN-CPT (CSV)
      • IMU and SPAN-CPT data for non-ROS users.
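
For a quick sanity check of the bag contents, here is a minimal sketch using the ROS 1 rosbag Python API. The bag file name is a hypothetical placeholder; use the file downloaded from the links above:

    import rosbag

    # Iterate over a subset of topics; t is the rosbag recording timestamp.
    bag = rosbag.Bag("UrbanNav-HK-Data20190428.bag")  # placeholder file name
    for topic, msg, t in bag.read_messages(topics=["/ublox_node/fix", "/imu/data"]):
        print(topic, t.to_sec())
    bag.close()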

For mainland China users, please download the dataset using the Baidu Cloud links below.

  • Download by Baidu Cloud Link: Data INFO, (qm3l)
    • UrbanNav-HK-Data20190428 (ROS) (nff4)
      • ROSBAG file which includes:
        • GNSS positioning (solution directly from GNSS receiver): /ublox_node/fix
        • 3D LiDAR point clouds: /velodyne_points
        • Camera: /camera/image_color
        • IMU: /imu/data
        • SPAN-CPT: /novatel_data/inspvax
    • GNSS (RINEX) (gojb)
      • GNSS RINEX files; to use them, we suggest RTKLIB
    • IMU/SPAN-CPT (CSV) (k3dz)
      • IMU and SPAN-CPT data for non-ROS users.

1.3. Dataset 2: UrbanNav-HK-Data20200314

Brief: Dataset UrbanNav-HK-Data20200314 was collected in a low-urbanization area in Kowloon which is suitable for algorithm verification and comparison. The coordinate transformations between the sensors and the camera intrinsics can be found via Extrinsic Parameters, IMU Noise and Intrinsic Parameters of Camera.

Some key features are as follows:

Date of Collection | Total Size | Path Length | Sensors
2020/03/14         | 27.0 GB    | 1.21 km     | LiDAR/Camera/IMU/SPAN-CPT
  • Download by Dropbox Link:
    • UrbanNav-HK-Data20200314 (ROS)
      • ROSBAG file which includes:
        • 3D LiDAR point clouds: /velodyne_points
        • Camera: /camera/image_color
        • IMU: /imu/data
        • SPAN-CPT: /novatel_data/inspvax
    • GNSS (RINEX)
      • GNSS RINEX files; to use them, we suggest RTKLIB

For mainland China users, please download the dataset using the Baidu Cloud links below.

  • Download by Baidu Cloud Link:
    • UrbanNav-HK-Data20200314 (ROS) (n71w)
      • ROSBAG file which includes:
        • 3D LiDAR point clouds: /velodyne_points
        • Camera: /camera/image_color
        • IMU: /imu/data
        • SPAN-CPT: /novatel_data/inspvax
    • GNSS (z8vw) (RINEX)
      • GNSS RINEX files; to use them, we suggest RTKLIB

2. Tokyo Dataset

2.1 Sensor Setups

The platform for data collection in Tokyo is a Toyota Rush. As the file list in Section 2.2 shows, it carries u-blox and Trimble GNSS receivers, a Velodyne LiDAR, an IMU, and an Applanix POS LV620 ground-truth system.

2.2. Dataset 1: UrbanNav-TK-20181219

Important Notes: the calibration file for the LiDAR sensor and the extrinsic parameters between sensors are not available yet. If you wish to study GNSS/LiDAR/IMU integration, we suggest using the Hong Kong datasets above. However, the GNSS data from Tokyo were collected in deep urban canyons and are particularly challenging!

Date of Collection | Total Size | Path Length | Sensors
2018/12/19         | 4.14 GB    | >10 km      | GNSS/LiDAR/IMU/Ground Truth
  • Download by Dropbox Link. For mainland China users, please download the dataset using the Baidu Cloud link (7xpo).

  • The dataset contains data from two runs, /Odaiba and /Shinjuku.

  • The following files are included in each dataset.

    • rover_ublox.obs and rover_trimble.obs: rover GNSS RINEX files (5 Hz / 10 Hz)
    • imu.csv: CSV file containing GPS time, angular velocity, and acceleration (50 Hz)
    • lidar.bag: ROSBAG file which includes LiDAR data /velodyne_packets
    • base_trimble.obs and base.nav: GNSS RINEX files of the base station (1 Hz)
    • reference.csv: ground truth from an Applanix POS LV620 (10 Hz); a loading sketch follows this list
  • The travel trajectory of /Odaiba

  • The travel trajectory of /Shinjuku
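
As a starting point for non-ROS processing, here is a minimal sketch for loading the ground truth with pandas. The exact column names are not asserted here; inspect the CSV header of your copy:

    import pandas as pd

    # Load the Applanix POS LV620 ground truth (10 Hz) and inspect its layout.
    df = pd.read_csv("reference.csv")
    print(df.columns.tolist())  # check the actual column names before use
    print(df.head())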

3. Acknowledgements

We acknowledge the help from Guohao Zhang, Yin-chiu Kan, Weichang Xu and Song Yang for data collection.

4. License

For any technical issues, please contact Weisong Wen via email [email protected]. For commercial inquiries, please contact Li-ta Hsu via email [email protected].

5. Related Publication

  1. Wen, Weisong, Guohao Zhang, and Li-Ta Hsu. "Exclusion of GNSS NLOS receptions caused by dynamic objects in heavy traffic urban scenarios using real-time 3D point cloud: An approach without 3D maps." Position, Location and Navigation Symposium (PLANS), 2018 IEEE/ION. IEEE, 2018.

  2. Wen, W.; Hsu, L.-T.*; Zhang, G. (2018) Performance analysis of NDT-based graph slam for autonomous vehicle in diverse typical driving scenarios of Hong Kong. Sensors 18, 3928.

  3. Wen, W., Zhang, G., Hsu, Li-Ta (Presenter), Correcting GNSS NLOS by 3D LiDAR and Building Height, ION GNSS+, 2018, Miami, Florida, USA.

  4. Zhang, G., Wen, W., Hsu, Li-Ta, Collaborative GNSS Positioning with the Aids of 3D City Models, ION GNSS+, 2018, Miami, Florida, USA. (Best Student Paper Award)

  5. Zhang, G., Wen, W., Hsu, Li-Ta, A Novel GNSS based V2V Cooperative Localization to Exclude Multipath Effect using Consistency Checks, IEEE PLANS, 2018, Monterey, California, USA.

  6. Wen, Weisong, Tim Pfeifer, Xiwei Bai, and Li-Ta Hsu. Comparison of Extended Kalman Filter and Factor Graph Optimization for GNSS/INS Integrated Navigation System, The Journal of Navigation, 2020 (SCI, 2019 IF 3.019, ranking 10.7%). [Submitted]

urbannavdataset's People

Contributors

darrenwong, taroz, weisongwen


urbannavdataset's Issues

The difference between coordinates

Hey there,
I just ran LIO-SAM on the second HK dataset. It works fine, but from the output the map frame does not coincide with the base link even at the beginning; there is an obvious yaw difference.
The reason I am concerned about this is that it makes comparison with the ground truth data inconvenient.
So could you give some hints on how to make the coordinates coincide, so that the odometry output can be compared with the ground truth?

ground_truth

How can I get the ground truth from novatel_msgs/INSPVAX?
There seems to be no timestamp; can you provide a ground-truth file for the dataset?
Thank you very much!
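
One possible workaround, pending an official file, is to dump the INSPVAX messages together with the bag's recording timestamps. A minimal sketch, assuming the novatel_msgs package is built and that the INSPVAX message carries latitude/longitude/altitude fields (the bag file name is a placeholder):

    import csv
    import rosbag

    bag = rosbag.Bag("UrbanNav-HK-Data20190428.bag")  # placeholder file name
    with open("ground_truth.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "latitude", "longitude", "altitude"])
        for _, msg, t in bag.read_messages(topics=["/novatel_data/inspvax"]):
            # t is the rosbag recording time, not the GNSS time of validity.
            writer.writerow([t.to_sec(), msg.latitude, msg.longitude, msg.altitude])
    bag.close()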

IMU parameter

Hi,
I can't find the IMU parameters for the Tokyo dataset, for example ARW, VRW and so on. It would also be great if the lever arms between the different sensors could be provided.

Thanks!

Using Higher Channel LiDAR

Hi weisongwen,

I’m so glad that I found this repo. A dataset focused on urban canyon environments provides a real challenge for developers, and I plan to use it in the future. However, I found that the LiDAR used in this dataset is a 32-channel Velodyne HDL-32E. I also came across your LiDAR super-resolution paper, which uses LiDAR data alone to synthesize a higher channel count, such as 64 channels.

I’m wondering whether it would be possible to apply your LiDAR super-resolution method to upsample this dataset to 64 channels. If you don’t mind, perhaps you could share the LiDAR enhancement algorithm with me just for benchmarking purposes?

I hope to hear from you soon. Thanks!

Ublox driver

Can you provide the u-blox driver used to record the GNSS position topic /ublox_node/fix?

Ground truth of UrbanNav-HK-Data20200314

Hi @DarrenWong @weisongwen,

Thanks for providing this dataset!

I'm using UrbanNav-HK-Data20200314, mainly the ground truth, UBLOX GNSS and LiDAR data. I have the following questions:

  1. In the extrinsic parameters, the transformations between IMU and LiDAR and between IMU and SPAN-CPT are both identity matrices; are these correct? If so, does this mean that the LiDAR, Novatel and IMU data in the rosbag are already aligned?
  2. Is there a transformation between the UBLOX and SPAN-CPT data?
  3. Is there any information about the base station used by SPAN-CPT? I tried looking for REFSTATION messages in the rosbag but couldn't find any. The reason I'm asking is that when I compute positions using the RTK technique (double-differenced carrier phase) with the UBLOX data and HKQT as the reference station, I get a trajectory that seems to have an offset from the SPAN-CPT ground truth.
  4. I have also looked at your UrbanLoco dataset in California. It seems that I can only find GPS and GLONASS from UBLOX. Does it have any route with other GNSS constellations such as Galileo or BeiDou?
  5. Are there any datasets with dual-frequency raw GNSS observations, or will you publish more?

Thank you for your work and help!

Cannot load message class for /novatel_data/inspvax.

Hi @weisongwen @DarrenWong !

Thank you for the data provided! I tried to record a clip of the ground truth topic /novatel_data/inspvax, but it did not appear in the resulting bag. When I tried to echo the topic, I got the following error:

rostopic echo /novatel_data/inspvax 
ERROR: Cannot load message class for [novatel_msgs/INSPVAX]. Are your messages built?

I read the documentation of novatel_span_driver but didn't find the answer; I wonder what caused the problem.
Also, having read issue #5, can I use /navsat/odom combined with /navsat/origin as the ground truth? If so, where is the origin located? Is it the IMU coordinate frame of the first frame?

Best wishes.

IMU calibration

I used the IMU data and image data to run VINS. The parameter file needs the IMU bias and noise values, but I can't find these four parameters. I also tried to calibrate the IMU data myself; however, that requires the sensor to stay static and the recording to last two hours. So I wonder where I can find the IMU bias and noise parameters. Thank you.

some question about extrinsic

I would like to ask whether information about the lever arm between the IMU and the GNSS antenna is available. It would be perfect if you could provide that; thanks for your time.

Driver's Details

How many drivers collected these datasets? Is it possible to split the data based on the drivers that collected them (using some kind of anonymized ID without revealing the drivers' identities)?

Where's the data of fisheye camera?

Hi weisongwen,

I'm so excited to find this amazing work, but I have a small question about the camera setup.
I noticed from the picture in 1.1 Sensor Setups that there is a sky-facing fisheye camera on top of the car, but I only found the 1920×1200 images of a forward-facing camera in the ROS dataset. Is there actually only one camera on the car, or did I just miss the fisheye camera data?

I'm looking forward to your reply. Thank you so much!

extrinsic parameter of imu and groundtruth

Thank you for opening the dataset; I have some questions about it. I used the latitude, longitude and altitude in the ground truth to compute XYZ in an ENU frame, and then aligned the ground truth with the VIO trajectory; there is a yaw difference. I used the GNSS velocity to compute the yaw angle from the VIO frame to the ENU frame, but there is still an alignment error. I want to know whether there is a transformation between the IMU frame and the GNSS frame. Another question: I noticed that in the extrinsic parameter file, the final column and final row of the transformation matrix T contain -0.28 and -0.36, whereas in a standard homogeneous transformation matrix the bottom-right entry should be 1. How should I understand this matrix? Thank you very much.
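
For reference, here is a minimal sketch of the geodetic-to-ENU conversion described above, using the pymap3d package. The origin values are hypothetical placeholders; a common choice is the first ground-truth fix:

    import pymap3d as pm

    # Origin of the local ENU frame (placeholder values near Hong Kong);
    # typically taken from the first ground-truth fix.
    lat0, lon0, h0 = 22.3, 114.18, 5.0

    # Convert one geodetic fix (lat, lon, height) into local East-North-Up.
    e, n, u = pm.geodetic2enu(22.301, 114.181, 6.0, lat0, lon0, h0)
    print(e, n, u)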

Question

Hello, I have a question about the fisheye camera. I didn't find the fisheye view in the bag.

Acquire the timestamp from the velodyne 32 pointcloud

Hello, to extract the point cloud from the provided Velodyne 32 topic, I use the following point type:

    struct VelodynePointXYZIRT {
        PCL_ADD_POINT4D                   // x, y, z plus padding
        PCL_ADD_INTENSITY;                // intensity
        uint16_t ring;                    // laser ring index
        float time;                       // per-point relative timestamp
        EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    } EIGEN_ALIGN16;
    POINT_CLOUD_REGISTER_POINT_STRUCT(VelodynePointXYZIRT,
        (float, x, x)(float, y, y)(float, z, z)
        (float, intensity, intensity)(uint16_t, ring, ring)(float, time, time))

But it seems to fail to parse the timestamp from the points; I think this format is designed for the VLP-16... Do you have any idea what the right format is for extracting points from the point cloud topic in the provided rosbag?
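
One way to check which per-point fields the bag actually carries (older Velodyne ROS drivers typically publish x, y, z, intensity and ring without a per-point time field) is to print the PointCloud2 field list; a minimal Python sketch with a placeholder bag name:

    import rosbag

    bag = rosbag.Bag("UrbanNav-HK-Data20190428.bag")  # placeholder file name
    for _, msg, _ in bag.read_messages(topics=["/velodyne_points"]):
        # Each PointField has a name, offset, datatype and count.
        print([f.name for f in msg.fields])
        break
    bag.close()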

Accuracy of the lidar-camera calibration

Hi,
Currently I am doing some LiDAR reprojection onto the image plane, but there seems to be a misalignment between the two frames; one potential reason would be inaccurate calibration. Therefore, I'd like to ask about the accuracy of the camera-LiDAR calibration.
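For context, here is a minimal sketch of the pinhole reprojection being discussed. R, t and K below are placeholders; substitute the dataset's extrinsic and intrinsic parameters:

    import numpy as np

    # Placeholder calibration; substitute the dataset's extrinsics/intrinsics.
    R = np.eye(3)                          # LiDAR-to-camera rotation
    t = np.array([0.0, 0.0, 0.0])          # LiDAR-to-camera translation (m)
    K = np.array([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 600.0],
                  [0.0, 0.0, 1.0]])        # pinhole intrinsics (placeholder)

    pts_lidar = np.random.rand(100, 3) * 20.0  # stand-in LiDAR points
    pts_cam = pts_lidar @ R.T + t              # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]     # keep points in front of the camera
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective division -> pixels

Any residual offset between the projected points and the corresponding image features then reflects errors in R, t or K.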

Request to Upload a Paper PDF file

Hello, Weisong. I deeply appreciate your remarkable work! I would like to ask whether it is possible to upload a PDF of the research paper to this repository. I am interested in studying this paper but have encountered difficulties downloading it: neither arXiv, ResearchGate, nor Sci-Hub has yielded any results. I would be immensely grateful if you could provide the PDF of the paper.

Inquiry Regarding Reference Velocity in UrbanNav-TK-20181219 Dataset


I would like to express my sincere gratitude for providing the UrbanNavDataset. It has been immensely beneficial to our research endeavors.

I am writing to ask for help regarding the reference velocity data (Velocity X (m/s), Velocity Y (m/s), Velocity Z (m/s) in the reference.csv file) in the UrbanNav-TK-20181219 dataset, specifically for Shinjuku. During recent experiments with the dataset, I noticed discrepancies in the reference velocity values for Shinjuku. It appears there may be some conversion errors that I have failed to identify, so I am reaching out to inquire about any relevant information regarding the reference velocity data for Shinjuku.

Any insights or guidance you could provide on this matter would be greatly appreciated.

Thank you very much for your attention to this inquiry. You can contact me at [email protected].

Best regards,
Ting Xie

imu axis

Thank you for the open-source dataset. May I know how the IMU is mounted for the Tokyo dataset?

Methods and Tools to Calibrate Extrinsic

Hi @DarrenWong @weisongwen !

Thank you for opening this dataset! I have a question regarding the calibration methods. I'm using my own data and also need the extrinsics among the IMU, LiDAR, GNSS and ground truth (from OxTS). I wonder what methods you used to calibrate those extrinsics in your dataset. I would really appreciate it if you could list some papers about the calibration methods you employed. If you used open-source tools for calibration, it would be even better if you could provide names or links. Thanks in advance.

Best wishes.

the groundtruth

Hello, can you provide the ground truth at a higher frequency?

Inquiry Regarding Labeled LOS/NLOS Dataset and CSV Format Data

Good day,

My name is Yelyzaveta Pervysheva, and I am currently working on multipath using AI technologies. I recently came across your dataset in RINEX format and found it to be quite intriguing for my research purposes.

I am particularly interested in obtaining a labeled dataset that distinguishes between Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) scenarios. Given the complexity of my research, having access to such a dataset would significantly contribute to the advancement of my work.

Additionally, I was wondering if it would be possible to acquire the same dataset in CSV format, since I am struggling to convert those OBS files.

Could you please let me know if such a labeled LOS/NLOS dataset is available? Furthermore, if the dataset can be provided in CSV format or if there are any conversion tools available, I would greatly appreciate your assistance in this matter.

Thank you very much for your time and consideration. You can contact me at [email protected]

Warm regards,

Yelyzaveta Pervysheva
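
Regarding the OBS-to-CSV conversion mentioned above, one possible route is the georinex Python package; a minimal sketch, assuming georinex is installed (the file name is a placeholder):

    import georinex as gr

    # Load a RINEX observation file into an xarray Dataset, then flatten to CSV.
    obs = gr.load("rover_ublox.obs")
    obs.to_dataframe().to_csv("rover_ublox.csv")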

The orientation of x, y for Lidar.

When I use pcl_viewer to display the point cloud of one frame (.pcd file) with the command pcl_viewer 1556456384.020306000.pcd -ax 10, I find that the x axis points towards the back of the car, but in the platform figure the x axis points towards the front of the car. Is there something wrong here?

About ground truth

Hello, I also noticed another paper of yours:
Point-wise or Feature-wise? Benchmark Comparison of Public Available LiDAR Odometry Algorithms in Urban Canyons.
For the experimental part of this paper, you used two sequences previously recorded in Hong Kong. I would like to know how you used the evo tool to obtain the ground truth from the bag and compare it with all the other algorithms' results.
Looking forward to your reply.
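
For reference, a hedged sketch of how such a comparison is commonly run with the evo package, invoked here from Python; it assumes both trajectories have been exported to TUM format, and the file names are placeholders:

    import subprocess

    # evo_ape computes absolute pose error; -a applies an Umeyama alignment,
    # which absorbs a constant offset between the two trajectory frames.
    subprocess.run(
        ["evo_ape", "tum", "ground_truth.tum", "estimate.tum", "-a", "--plot"],
        check=True,
    )

With alignment enabled, the reported APE isolates the shape error of the estimated trajectory from any constant frame offset such as the yaw difference discussed in the first issue above.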
