
autoware_perception_evaluation's Introduction

autoware_perception_evaluation

perception_eval is a tool to evaluate perception tasks.

Documents

English | 日本語

Overview

Evaluate Perception & Sensing tasks

3D tasks

Task       | Metrics          | Sub-metrics
Detection  | mAP              | AP, APH
Tracking   | CLEAR            | MOTA, MOTP, IDswitch
Prediction | WIP              | WIP
Sensing    | Check Pointcloud | Detection Area & Non-detection Area

2D tasks

Task             | Metrics  | Sub-metrics
Detection2D      | mAP      | AP
Tracking2D       | CLEAR    | MOTA, MOTP, IDswitch
Classification2D | Accuracy | Accuracy, Precision, Recall, F1score

Dataset format

We support the T4Dataset format, which has the same structure as NuScenes. The expected dataset directory tree is shown below.

data_root/
    │── annotation/     ... annotation information in json format.
    │   │── sample.json
    │   │── sample_data.json
    │   │── sample_annotation.json
    │   └── ...
    └── data/           ... raw data.
        │── LIDAR_CONCAT/  # LIDAR_TOP is also OK.
        └── CAM_**/
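
If you want to sanity-check a dataset path before running an evaluation, a minimal sketch like the following (not part of perception_eval) verifies the layout above.

from pathlib import Path

def check_t4dataset_layout(data_root: str) -> bool:
    """Return True if data_root roughly follows the expected T4Dataset tree."""
    root = Path(data_root)
    has_annotation = (root / "annotation" / "sample.json").exists()
    has_lidar = (root / "data" / "LIDAR_CONCAT").exists() or (root / "data" / "LIDAR_TOP").exists()
    return has_annotation and has_lidar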

Using perception_eval

Evaluate with ROS

perception_eval is mainly used in tier4/driving_log_replayer, a tool to evaluate the output of Autoware. If you want to evaluate your perception results through ROS, use driving_log_replayer or refer to test/perception_lsim.py.

Evaluate with your ML model

This is a simple example of evaluating your 3D detection ML model. Most of the code is the same as test/perception_lsim.py, so please refer to it.

import numpy

from perception_eval.config import PerceptionEvaluationConfig
from perception_eval.manager import PerceptionEvaluationManager
from perception_eval.common.object import DynamicObject
from perception_eval.evaluation.result.perception_frame_config import CriticalObjectFilterConfig
from perception_eval.evaluation.result.perception_frame_config import PerceptionPassFailConfig

# REQUIRED:
#   dataset_path: str
#   model: Your 3D ML model

evaluation_config = PerceptionEvaluationConfig(
    dataset_paths=[dataset_path],
    frame_id="base_link",
    result_root_directory="./data/result",
    evaluation_config_dict={"evaluation_task": "detection",...},
    load_raw_data=True,
)

# initialize Evaluation Manager
evaluator = PerceptionEvaluationManager(evaluation_config=evaluation_config)

critical_object_filter_config = CriticalObjectFilterConfig(...)
pass_fail_config = PerceptionPassFailConfig(...)

for frame in datasets:
    unix_time = frame.unix_time
    pointcloud: numpy.ndarray = frame.raw_data["lidar"]
    outputs = model(pointcloud)
    # create a list of estimated objects with your model's outputs
    estimated_objects = [DynamicObject(unix_time=unix_time, ...) for out in outputs]
    # add frame result
    evaluator.add_frame_result(
        unix_time=unix_time,
        ground_truth_now_frame=frame,
        estimated_objects=estimated_objects,
        ros_critical_ground_truth_objects=frame.objects,
        critical_object_filter_config=critical_object_filter_config,
        frame_pass_fail_config=pass_fail_config,
    )

scene_score = evaluator.get_scene_result()
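
After the loop, get_scene_result() returns the aggregated metrics for the whole scene. As a minimal check you can simply print it; the exact contents depend on the metrics configured in evaluation_config_dict.

# Print the aggregated scene metrics (e.g. mAP for a detection task).
print(scene_score)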

autoware_perception_evaluation's People

Contributors

boczekbartek, dependabot[bot], hayato-m126, ktro2828, miursh, pawel-kotowski, scepter914, shin-kyoto, shmpwk, wep21


autoware_perception_evaluation's Issues

[TODO]: Add support of multiple FrameIDs

Description

For cameras, multiple camera frames are needed, for example frame_id = [CAM_FRONT, CAM_BACK, ...].

Purpose

Support evaluating multiple outputs, one from each camera.

Possible approaches

  • Allow specifying multiple FrameID instances in the config
  • In object matching, check whether the two objects' FrameIDs are the same (see the sketch below)
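
A rough sketch of the idea (all names below are hypothetical; the current config accepts only a single frame_id):

# Hypothetical sketch: accept multiple frame IDs in the config and skip matching
# for objects coming from different camera frames.
from typing import List

def is_same_frame_id(est_frame_id: str, gt_frame_id: str) -> bool:
    """Only allow matching when both objects share the same camera frame."""
    return est_frame_id == gt_frame_id

# Config side (hypothetical): pass a list instead of a single value.
frame_ids: List[str] = ["CAM_FRONT", "CAM_BACK"]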

Definition of done

[IDEA]: Update frame config and critical objects

Description

Currently, the configuration for filtering out critical objects and for determining pass/fail results is divided into two classes, CriticalObjectFilterConfig and PerceptionPassFailConfig.
I would like to merge them into one class named PerceptionFrameConfig, similar to SensingFrameConfig.

Also, update PerceptionFrameConfig and ros_critical_objects so that they are optional in add_frame_result().

Purpose

Possible approaches

I'm planning the following directory tree related to frame result.

├── perception_eval
│   ├── evaluation
│   │   ├── result
│   │   │   ├── perception
│   │   │   └── sensing
....

Then, I want to make it possible to load them with:

from perception_eval.evaluation import PerceptionFrameConfig
from perception_eval.evaluation import SensingFrameConfig
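
A hypothetical sketch of how the merged, optional config might then be used (this is the proposal, not the current API):

# Hypothetical usage after the merge: one optional frame config replaces
# CriticalObjectFilterConfig and PerceptionPassFailConfig, and the critical
# ground-truth objects become optional as well.
frame_config = PerceptionFrameConfig(
    target_labels=["car", "pedestrian"],
)

frame_result = evaluator.add_frame_result(
    unix_time=unix_time,
    ground_truth_now_frame=frame,
    estimated_objects=estimated_objects,
    frame_config=frame_config,  # optional
    critical_ground_truth_objects=None,  # optional
)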

Definition of done

[TODO]: Add support of 2D evaluation

Description

Add support for 2D evaluation tasks with image input.

Currently, this package only supports evaluation of 3D objects.

Purpose

Possible approaches

Definition of done

[BUG]: undefined symbol: _PyGen_Send caused by scipy

Category

  • Perception
    • Detection
    • Tracking
    • Prediction
  • Sensing
  • Dependencies

Description

I got the error shown below. It seems to be caused by scipy. When I updated the scipy version to 3.10.0, the error was fixed.

My environment

  • Ubuntu-22.04
  • Python3.10.6
Traceback (most recent call last):
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/sample.py", line 1, in <module>
    from perception_eval.tool import Gmm
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/tool/__init__.py", line 1, in <module>
    from .gmm import Gmm
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/tool/gmm.py", line 25, in <module>
    from scipy.stats import multivariate_normal
  File "/home/kotarouetake/.cache/pypoetry/virtualenvs/perception-eval-q9eGsR36-py3.10/lib/python3.10/site-packages/scipy/stats/__init__.py", line 391, in <module>
    from .stats import *
  File "/home/kotarouetake/.cache/pypoetry/virtualenvs/perception-eval-q9eGsR36-py3.10/lib/python3.10/site-packages/scipy/stats/stats.py", line 174, in <module>
    from scipy.spatial.distance import cdist
  File "/home/kotarouetake/.cache/pypoetry/virtualenvs/perception-eval-q9eGsR36-py3.10/lib/python3.10/site-packages/scipy/spatial/__init__.py", line 107, in <module>
    from . import distance, transform
  File "/home/kotarouetake/.cache/pypoetry/virtualenvs/perception-eval-q9eGsR36-py3.10/lib/python3.10/site-packages/scipy/spatial/transform/__init__.py", line 19, in <module>
    from .rotation import Rotation, Slerp
ImportError: /home/kotarouetake/.cache/pypoetry/virtualenvs/perception-eval-q9eGsR36-py3.10/lib/python3.10/site-packages/scipy/spatial/transform/rotation.cpython-310-x86_64-linux-gnu.so: undefined symbol: _PyGen_Send

Expected behavior

Actual behavior

Screenshots

To Reproduce

Additional context

[BUG]: yaw difference should concern circulation

Category

  • Perception
    • Detection
    • Tracking
    • Prediction
    • Classification
  • Sensing

Description

From the V&V team's test results, the yaw error is expected to be within [-pi, pi]. But:

(two screenshots attached)
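
A minimal sketch of a wrapped yaw difference, which keeps the error within [-pi, pi]:

import numpy as np

def wrapped_yaw_error(yaw_est: float, yaw_gt: float) -> float:
    """Return the signed yaw difference wrapped into [-pi, pi]."""
    diff = yaw_est - yaw_gt
    return float(np.arctan2(np.sin(diff), np.cos(diff)))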

Expected behavior

(Thinking...

Actual behavior

Screenshots

To Reproduce

Additional context

[TODO]: Update support of label name

Description

The following labels have not been registered:

AutowareLabel.CAR

  • vehicle.police
  • vehicle.fire
  • vehicle.ambulance

AutowareLabel.UNKNOWN

  • static_object.bollard
  • forklift
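
A sketch of the intended name-to-label assignments (assuming AutowareLabel is importable from perception_eval.common.label; the actual registration lives in the label conversion code):

from perception_eval.common.label import AutowareLabel

# (label, original name) pairs that should be added to the label conversion.
ADDITIONAL_LABEL_PAIRS = [
    (AutowareLabel.CAR, "vehicle.police"),
    (AutowareLabel.CAR, "vehicle.fire"),
    (AutowareLabel.CAR, "vehicle.ambulance"),
    (AutowareLabel.UNKNOWN, "static_object.bollard"),
    (AutowareLabel.UNKNOWN, "forklift"),
]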

Purpose

Possible approaches

Definition of done

[IDEA]: Add type aliases

Description

Add type aliases in perception_eval/typing.py.

Purpose

In perception_eval, we write code with type hints.
Although this is very useful, it is sometimes redundant and confusing.
Therefore, I would like to add type aliases specific to perception_eval to avoid these problems.

Possible approaches

typing.py would be placed in the root directory of perception_eval:

├── perception_eval
│      ├── typing.py
....

The following aliases are what I want to add.
Note that some of these classes have not been added to perception_eval yet.

# object
ObjectType = Union[DynamicObject, DynamicObject2D]
ObjectResultType = Union[PerceptionObjectResult, SensingObjectResult]
FrameResultType = Union[PerceptionFrameResult, SensingFrameResult]

# config
EvaluationConfigType = Union[PerceptionEvaluationConfig, SensingEvaluationConfig]
FrameConfigType = Union[PerceptionFrameConfig, SensingFrameConfig]

# manager
EvaluationManagerType = Union[PerceptionEvaluationManager, SensingEvaluationManager]

# label
LabelType = Union[AutowareLabel, TrafficLightLabel, BlinkerLabel, BrakeLampLabel]

# common
Vector2f = Tuple[float, float]
Vector3f = Tuple[float, float, float]
Vector2i = Tuple[int, int]
Vector3i = Tuple[int, int, int]
Vector4i = Tuple[int, int, int, int]

We should use names like XXXType to avoid confusing type aliases with class names.
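
For example, a signature that currently spells out the union could be shortened like this (perception_eval.typing is the proposed module and does not exist yet):

from typing import List

from perception_eval.typing import ObjectType  # proposed module

# Before: def filter_objects(objects: List[Union[DynamicObject, DynamicObject2D]]) -> ...
def filter_objects(objects: List[ObjectType]) -> List[ObjectType]:
    ...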

Definition of done

[IDEA]: Implement grid based statistics

Description

Perception R&D proposal

Implement grid-based statistics as a database-based (bottom-up) approach for use-case fulfillment assessment.
This grid-based perception statistics approach is called "perception field analysis".

Purpose

Expand current perception evaluation to process statistics on grid.

Possible approaches

Use the existing DataFrame parser. Extend perception_analyzer3d.py for grid statistics, as sketched below.
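
A rough sketch (not the proposed implementation) of what a grid statistic could look like: bin per-object errors onto an x/y grid and aggregate a mean per cell, assuming a DataFrame with x, y and error columns.

import numpy as np
import pandas as pd

def grid_mean_error(df: pd.DataFrame, cell_size: float = 10.0) -> pd.DataFrame:
    """Aggregate the mean 'error' per (x, y) grid cell of the given size [m]."""
    df = df.copy()
    df["x_bin"] = np.floor(df["x"] / cell_size) * cell_size
    df["y_bin"] = np.floor(df["y"] / cell_size) * cell_size
    return df.groupby(["x_bin", "y_bin"])["error"].mean().reset_index()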

Definition of done

Process a sample dataset and create plots by this analysis approach.

[BUG]: For TLR evaluation instance_name should be used as uuid

Category

  • Perception
    • Detection
    • Tracking
    • Classification

Description

In TLR, the lane ID is saved as part of instance_name in instance.json, as shown below:
"instance_name": "tlr_test::green:2133"

Although instance_token is used as the uuid for all other evaluation tasks, this lane ID must be used only in TLR evaluation.
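
A minimal sketch of extracting the lane ID from such an instance_name (assuming the trailing field is the lane ID):

def parse_lane_id(instance_name: str) -> str:
    """Return the trailing field of an instance_name like 'tlr_test::green:2133'."""
    return instance_name.rsplit(":", 1)[-1]

assert parse_lane_id("tlr_test::green:2133") == "2133"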

Expected behavior

Actual behavior

Screenshots

To Reproduce

Additional context

[TODO]: add support of FP validation

Description

Purpose

To validate FP recognition results.

Possible approaches

  • Add FP validation to EvaluationTask
  • Add support for label names for FP validation
  • Evaluation result format (sketched below)
    • The number of FP frames for each object
    • The ratio of FP frames
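
A rough sketch of the proposed result format (names are illustrative): count FP frames per ground-truth object and report the ratio over all frames in which the object appears.

from collections import defaultdict
from typing import Dict, List, Tuple

def fp_frame_ratio(frame_flags: List[Tuple[str, bool]]) -> Dict[str, float]:
    """frame_flags holds one (object uuid, is FP in this frame) entry per frame."""
    total: Dict[str, int] = defaultdict(int)
    fp: Dict[str, int] = defaultdict(int)
    for uuid, is_fp in frame_flags:
        total[uuid] += 1
        fp[uuid] += int(is_fp)
    return {uuid: fp[uuid] / total[uuid] for uuid in total}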

Definition of done

[BUG]: AttributeError when matching thresholds list is specified

Category

  • Perception
    • Classification

Description

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/test/perception_lsim2d.py", line 272, in <module>
    classification_lsim.callback(
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/test/perception_lsim2d.py", line 117, in callback
    frame_result = self.evaluator.add_frame_result(
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/manager/perception_evaluation_manager.py", line 124, in add_frame_result
    result.evaluate_frame(
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/evaluation/result/perception_frame_result.py", line 125, in evaluate_frame
    self.pass_fail_result.evaluate(
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/evaluation/result/perception_pass_fail_result.py", line 85, in evaluate
    self.tp_objects, self.fp_objects_result = self.get_tp_fp_objects_result(
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/evaluation/result/perception_pass_fail_result.py", line 127, in get_tp_fp_objects_result
    tp_object_results, fp_object_results = divide_tp_fp_objects(
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/evaluation/matching/objects_filter.py", line 253, in divide_tp_fp_objects
    is_correct = object_result.is_result_correct(
  File "/home/kotarouetake/workspace/autoware_perception_evaluation/perception_eval/perception_eval/evaluation/result/object_result.py", line 118, in is_result_correct
    is_matching_: bool = matching.is_better_than(matching_threshold)
AttributeError: 'NoneType' object has no attribute 'is_better_than'
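
The crash happens because matching can be None when only a label-based check applies (e.g. classification with a matching-thresholds list). A sketch of a guard that avoids calling is_better_than() on None (not the actual fix; whether None should count as matched depends on the task):

if matching is None:
    is_matching_ = True  # e.g. classification: no spatial threshold to check
else:
    is_matching_ = matching.is_better_than(matching_threshold)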

Expected behavior

Actual behavior

Screenshots

To Reproduce

Additional context

[TODO]: Add support of label's attributes

Description

Currently, we convert the label name into LabelType: Union[AutowareLabel, TrafficLightLabel], which is defined as an enum, and the attribute information is removed.
Therefore, we cannot filter out objects by their attributes.

Purpose

Add attribute information to the Label instance so that objects can be filtered by attribute.

Possible approaches

Define a Label class as shown below.
UPDATE: 2023/04/17

from __future__ import annotations

from typing import List, Optional


class Label:
    """
    Args:
        label (LabelType): LabelType instance, e.g. AutowareLabel.CAR.
        name (str): Original name of the label, e.g. 'vehicle.car'.
        attributes (List[str]): List of attributes. Defaults to [].
    """

    def __init__(self, label: LabelType, name: str, attributes: Optional[List[str]] = None) -> None:
        self.label: LabelType = label
        self.name: str = name
        # Normalize None to an empty list to avoid a mutable default argument.
        self.attributes: List[str] = attributes if attributes is not None else []

    def contains(self, key: str) -> bool:
        """Check whether the name or attributes contain the given key.
        Args:
            key (str): Target name or attribute.
        Returns:
            bool: True if self.name or self.attributes contains the key.
        """
        return key in self.name or key in self.attributes

    def __eq__(self, other: Label) -> bool:
        # NOTE: Attributes are not compared; that would be too strict.
        return self.label == other.label

TODO: How should we specify the attributes to be filtered out in the config? For example:

# labels to be evaluated
target_labels: ["car", "pedestrian"]
# label names or attributes to be ignored
ignored_attributes: ["police", "fire", "child", "occlusion_state.partial", "pedestrian_state.siting_lying_down"]
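
For example, the config above could be applied roughly like this (hypothetical, using the proposed Label.contains()):

def is_ignored(label: Label, ignored: List[str]) -> bool:
    """Return True if the label's name or attributes match any ignored key."""
    return any(label.contains(key) for key in ignored)

ignored = ["police", "fire", "child", "occlusion_state.partial", "pedestrian_state.siting_lying_down"]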

Definition of done

[BUG]: eda tool - fp_results_with_high_confidence calculation

Category

  • [x] Perception
    • [x] Detection

Description

In the eda tool, TP results are filtered instead of FP results when obtaining FP results with a high confidence score for visualization.

I do not have permissions to push branches to this repo, so I've pushed the fix to a forked repo for reference here.
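
A sketch of the intended calculation (attribute names are assumptions): filter the FP results, not the TP results, by confidence when collecting high-confidence false positives.

def get_fp_results_with_high_confidence(fp_results, confidence_threshold: float = 0.5):
    """Keep only FP results whose estimated object has a high confidence score."""
    return [r for r in fp_results if r.estimated_object.semantic_score > confidence_threshold]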
