
Doppler NLOS Code & Datasets

This repository contains code for the paper Seeing Around Street Corners: Non-Line-of-Sight Detection and Tracking In-the-Wild Using Doppler Radar (project webpage).

Description of Files

The code and data are organized in the following directory structure:

./detection-train-eval.ipynb
    # Jupyter Notebook with code for data loading, training, and evaluation,
    # including training logs, for the detection task.
./tracking-train-eval.ipynb
    # Jupyter Notebook with code for data loading, training, and evaluation,
    # including training logs, for the tracking task.
./training_data
    01-0-bike # scene 1, trajectory 1, bike
        labels # labels for each timestamp
            radar_left_np
                xxx.txt # The first row is the class name; the next three rows
                # are the [x, y] coordinates, the [longer dimension, shorter
                # dimension], and the rotation angle (in radians) of the
                # bounding box.
        radar_left_np # input radar data
            xxxx.npy # Contains an n x 10 array, where n is the total number
            # of radar points. In each row, the first two values are the x, y
            # point location in the car coordinate frame (m), the third is the
            # Doppler velocity (m/s), the fourth is the amplitude, and the
            # fifth is the distance from the sensor (m). The last value is the
            # label for the object category and should not be used as method
            # input (see the loading sketch after this listing).
        lidar
    01-1-bike # scene 1, trajectory 2, bike
    01-2-pedestrian # scene 1, trajectory 3, pedestrian
    ...
./validation_data # file structure is the same as training_data
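
A minimal loading sketch for one frame under the layout above. The file names, the delimiter handling, and the one-object-per-label-file assumption are illustrative guesses; only the column meanings come from the description above.

    import numpy as np

    # Hypothetical paths following the layout above; substitute real file names.
    radar_file = "./training_data/01-0-bike/radar_left_np/00002_1568914334228925.npy"
    label_file = "./training_data/01-0-bike/labels/radar_left_np/00002_1568914334228925.txt"

    # Radar frame: one row per radar point. The leading columns (x, y location
    # in m, Doppler velocity in m/s, amplitude, distance in m, ...) are the
    # method input; the last column is the object-category label (evaluation only).
    points = np.load(radar_file)
    features = points[:, :-1]
    categories = points[:, -1]

    # Label file: class name, then the box [x, y] coordinates, the
    # [longer, shorter] box dimensions, and the rotation angle in radians,
    # one item per row (whitespace or comma separated is assumed).
    with open(label_file) as f:
        rows = [line.replace(",", " ").split() for line in f if line.strip()]
    class_name = rows[0][0]
    box_xy = [float(v) for v in rows[1]]     # [x, y]
    box_dims = [float(v) for v in rows[2]]   # [longer, shorter]
    box_angle = float(rows[3][0])            # radians

    print(features.shape, class_name, box_xy, box_dims, box_angle)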

Data

Please download the train and validation data into the corresponding directories.

Environment

The code was tested in a Miniconda environment with Python 3.7 and TensorFlow 2.1.0. Additional required packages include NumPy.
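
As a quick sanity check (not part of the original code), the snippet below prints the interpreter and package versions so you can confirm they match the tested configuration:

    import sys

    import numpy as np
    import tensorflow as tf

    # Tested configuration: Python 3.7, TensorFlow 2.1.0, plus NumPy;
    # newer versions may work but are untested here.
    print("Python    ", sys.version.split()[0])
    print("TensorFlow", tf.__version__)
    print("NumPy     ", np.__version__)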

Usage

Please refer to the code and logs in the files detection-train-eval.ipynb and tracking-train-eval.ipynb for the detection and tracking tasks, respectively.

Citation

If you find this work useful, please cite:

@InProceedings{scheiner2019seeing,
  author    = {Scheiner, Nicolas and Kraus, Florian and Wei, Fangyin and Phan, Buu and Mannan, Fahim and Appenrodt, Nils and Ritter, Werner and Dickmann, J{\"u}rgen and Dietmayer, Klaus and Sick, Bernhard and others},
  title     = {Seeing Around Street Corners: Non-Line-of-Sight Detection and Tracking In-the-Wild Using Doppler Radar},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}

Contributors

weify627, fheide


doppler_nlos's Issues

Questions about the dataset

Hello, thanks for the open-source dataset. My research topic is LOS/NLOS separation, so I have some questions regarding the dataset:

  1. I found that every measurement actually has 9 dimensions, and dimensions 1, 2, 3, 4, 5, and 9 are explained in the README. What do the others mean? Can I separate NLOS detections from LOS detections using some tag hidden in the data?
  2. Is it possible to recover the NLOS detections (i.e., find the corresponding virtual detections) without finding the reflectors and using the third-bounce geometry and velocity estimation?
  3. It seems that all the environmental radar points are filtered out; would it be possible to release them as well?

Thanks very much!

Dataset Clarifications

Hi,

Thank you for making the dataset publicly available. I would appreciate it if you could answer a few questions I have:

  1. The last dimension you described is the object category. In your dataset, there are 4 possible values for the category: {-1, 1, 2, 3}. What does each value correspond to (LOS measurement, NLOS measurement, etc.)? (See the inspection sketch after this issue.)

  2. Some of the measurements have no label; these are the measurements in which all points have a label of -1. Do these correspond to measurements where all points are just clutter or noise?

Thank you!
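
A quick way to check which category values occur and how often (a sketch, assuming the training_data layout described in the README and reading only the last column of each radar .npy file):

    import glob
    from collections import Counter

    import numpy as np

    # Count the distinct values in the last (object-category) column over all
    # left-radar frames; the glob pattern assumes the README's directory layout.
    counts = Counter()
    for path in glob.glob("./training_data/*/radar_left_np/*.npy"):
        points = np.load(path)
        counts.update(points[:, -1].astype(int).tolist())

    print(counts)  # e.g. how often each of {-1, 1, 2, 3} appears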

Question about the left and right radar data & labels

Thanks for the great work! However, I have some questions regarding how to correctly use the dataset:

  1. What is the naming convention for the data and label files, e.g. 00002_1568914334228925.txt? What do 00002 and 1568914334228925 each mean? (A speculative parsing sketch follows below.)
  2. There are two sources of radar data (left and right); is it necessary to combine them into a single frame?
  3. Is it necessary to transform the data from the different sensors (lidar, left and right radar) into a common coordinate frame (such as the car coordinate frame), or is it already transformed from the sensor coordinates to the car coordinates?
  4. Is the corresponding camera data available?
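
Pending an authoritative answer to point 1, here is a small parsing sketch under the (unconfirmed) assumption that the first field is a frame index and the second is a Unix timestamp in microseconds:

    from datetime import datetime, timezone
    from pathlib import Path

    def parse_radar_filename(path):
        """Split a name like 00002_1568914334228925.txt into its two fields.

        Assumption (not confirmed by the authors): the first field is a frame
        index and the second a Unix timestamp in microseconds.
        """
        index_str, timestamp_str = Path(path).stem.split("_")
        timestamp = datetime.fromtimestamp(int(timestamp_str) / 1e6, tz=timezone.utc)
        return int(index_str), timestamp

    print(parse_radar_filename("00002_1568914334228925.txt"))
    # Under the assumption above, this prints (2, <a datetime in September 2019>).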
