Hi @yizhou-wang, thank you for sharing the great devkit tool as well as the CRUW dataset! May I ask: is the CRUW 2022 dataset open source? It does not seem to be available at https://www.cruwdataset.org/. If it is open, could you please share the link? Thank you!
Thanks for the great work. It seems that the annotation files contain annotations only for the radar frames. Would it be possible to provide the camera annotations as well, or code to map the radar annotations onto the camera images?
When I access the dataset website, its layout is broken, and when I click the download button, it redirects to the wrong website. Is something wrong with the website, or is there another place where I can download this dataset?
I have three questions about the first example in the tutorial (In [11], the RGB image for seq_name 2019_04_09_BMS1000, containing only a bicycle).
In the RGB image, the cyclist appears narrow horizontally, while it looks very 'wide' in the RF image. Is this due to the non-uniform angle axis, or are there other possible causes?
ROD2021 states that the radar has a FOV of ±60°, but the RF image, whose angle axis ranges from -90° to +90°, also contains radar data in column 0. As I understand it, radar information should only appear in columns 9 to 118 on the x-axis, corresponding to ±60°.
In the RF image, apart from the cyclist, there are many small horizontal lines with no corresponding objects in the RGB image. Should we interpret them as noise?
Thanks for your attention. I’m looking forward to your reply.
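For what it's worth, the 9–118 column range quoted above is consistent with an angle axis that is uniform in sine (as an angle FFT produces) rather than uniform in degrees. A minimal sketch, assuming 128 angle bins spanning sin(−90°) to sin(+90°) (an assumption for illustration, not the devkit's documented configuration):

```python
import numpy as np

# Assumed: 128 angle bins spaced uniformly in sin(angle), covering -90..+90 deg.
n_bins = 128
sines = np.linspace(-1.0, 1.0, n_bins)      # uniform in sin(angle)
angles_deg = np.degrees(np.arcsin(sines))   # corresponding angles in degrees

# Which columns fall inside the +/-60 deg field of view?
inside_fov = np.where(np.abs(angles_deg) <= 60.0)[0]
print(inside_fov[0], inside_fov[-1])        # -> 9 118
```

Under this sine-spacing assumption, columns 0–8 and 119–127 correspond to angles beyond ±60°, which would explain why they lie outside the stated FOV even though the axis nominally spans -90° to +90°.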
According to the paper, AP and AR are the average precision and recall over different OLS thresholds. As I understand it, precision should be total true positives (TP) over TP + FP, and recall should be total TP over the number of ground-truth objects. In the code, recall seems to be calculated this way, while precision is averaged over an additional dimension R.
This dimension seems to be computed from recall and recThrs. What is the meaning of this dimension, and is the precision here consistent with the classic definition? Also, is this AP the metric used in the ROD2021 competition?
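For context, the extra R dimension looks like the COCO-style evaluation scheme: precision is first made monotonically non-increasing over recall, then sampled at a fixed grid of recall thresholds (recThrs) and averaged. A minimal sketch of that computation (my own illustration, not the devkit's exact code):

```python
import numpy as np

def average_precision(precisions, recalls, rec_thrs=None):
    """COCO-style AP: sample the precision envelope at fixed recall thresholds."""
    if rec_thrs is None:
        rec_thrs = np.linspace(0.0, 1.0, 101)  # COCO uses 101 recall thresholds
    p = np.asarray(precisions, dtype=float).copy()
    r = np.asarray(recalls, dtype=float)
    # Monotone envelope: precision at recall r is the max precision at recall >= r.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # For each threshold, take precision of the first detection reaching that
    # recall; thresholds above the max achieved recall contribute 0.
    idx = np.searchsorted(r, rec_thrs, side="left")
    sampled = np.where(idx < len(p), p[np.minimum(idx, len(p) - 1)], 0.0)
    return sampled.mean()

# Toy example: 3 detections ranked by score (TP, FP, TP) vs. 3 ground truths.
prec = [1.0, 1 / 2, 2 / 3]
rec = [1 / 3, 1 / 3, 2 / 3]
ap = average_precision(prec, rec)
```

Under this scheme the sampled precisions are still ratios of TP to TP + FP at each operating point, so the values are consistent with the classic definition; the averaging over recThrs is what produces the extra dimension.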
Alternatively, could you explain the meaning of the keys in radar_cfg in sensor_config_rod2021? That would be of great help in understanding the mapping process. Thanks in advance.
Hi, I have an issue with mapping the RGB image from CAM_0 to the RF image. Since both can be regarded as projections, is there any way to directly map the RGB image to the RF image?
What is the meaning of the value t_cl2rh, and how do we use it?
Also, it seems that after generating the 3D positions of the objects from the stereo camera, you project them onto the radar. Would you share the code for this part?
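In case it helps while waiting for the official code, the projection presumably amounts to converting a 3D point in radar coordinates to (range, azimuth) and then quantizing into RF-image bins. A rough sketch, where every parameter (range resolution, bin counts, sine-spaced angle axis, axis conventions) is an assumption for illustration, not the devkit's actual configuration:

```python
import numpy as np

def point_to_rf_cell(x, y, range_res=0.23, n_range=128, n_angle=128):
    """Map a point (x right, y forward, meters) to an assumed (range_bin, angle_bin).

    All parameters are hypothetical placeholders, not CRUW's real config.
    """
    rng = np.hypot(x, y)                  # range in meters (height ignored)
    azimuth = np.arctan2(x, y)            # azimuth from boresight (y forward)
    r_bin = int(np.round(rng / range_res))
    # Assume angle bins uniform in sin(azimuth) over [-1, 1].
    a_bin = int(np.round((np.sin(azimuth) + 1.0) / 2.0 * (n_angle - 1)))
    if 0 <= r_bin < n_range and 0 <= a_bin < n_angle:
        return r_bin, a_bin
    return None  # point falls outside the RF image

# e.g. a point 11.5 m straight ahead lands at range bin 50, center angle bin
cell = point_to_rf_cell(0.0, 11.5)
```

The inverse direction (RF image back to RGB pixels) is not uniquely invertible, since each range-azimuth cell collapses all heights onto one cell, which may be why only the 3D-to-radar direction is provided.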