AUTO: A Parameterized Decision-making Framework with Multi-modality Perception for Autonomous Driving

(Figure: overall architecture of the AUTO framework.)

This repo is a decision-making framework based on the CARLA simulator and multi-modal sensors, and is the implementation of the following paper:


AUTO: A Parameterized Decision-making Framework with Multi-modality Perception for Autonomous Driving

The figure shows the architecture of our framework, which consists of five components: data preprocessing, state representation, actor-critic, hybrid reward function, and multi-worker training.

  • Data preprocessing. We take data from HD maps and multiple sensors (i.e., camera, LiDAR, GNSS, and IMU) as input, extract feature vectors of lanes, vehicles, and traffic lights from it, and generate a multi-modality state for the agent.
  • State representation. We propose a lane-wise cross attention model (LCA) to learn a latent representation of the state features. It organizes the state as multiple agent-centric star graphs and applies cross attention to aggregate the multi-modality features on each lane; the aggregated results of the lanes are then fused into a single state representation.
  • Actor-critic. We instantiate an LCA for the actor and for the critic, and compute an action $a_t$ using a hierarchical action structure that first decides whether to perform a lane change (high level) and then computes the exact action to execute (low level).
  • Hybrid reward function. We calculate a reward value for each state-action pair, which serves as the signal that guides the agent toward an optimal action policy.
  • Multi-worker training. We speed up actor-critic training and improve convergence using distributed computation.
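To make the state-representation step concrete, the sketch below shows one way lane-wise cross attention can aggregate per-lane entity features, assuming PyTorch. The module name, feature dimensions, and fusion layer are illustrative assumptions, not a replica of the repo's "lane_wise_cross_attention_encoder" class.

```python
import torch
import torch.nn as nn

class LaneWiseCrossAttentionSketch(nn.Module):
    """Illustrative sketch: each lane feature is the center of a star graph
    and queries the entity features (vehicles, traffic lights, waypoints)
    attached to that lane; per-lane results are fused into one state vector."""

    def __init__(self, dim: int = 64, n_heads: int = 4, n_lanes: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.fuse = nn.Linear(n_lanes * dim, dim)

    def forward(self, lane_feats: torch.Tensor, entity_feats: torch.Tensor) -> torch.Tensor:
        # lane_feats:   (B, n_lanes, dim)              one query per lane
        # entity_feats: (B, n_lanes, n_entities, dim)  entities attached to each lane
        per_lane = []
        for i in range(lane_feats.shape[1]):
            q = lane_feats[:, i : i + 1, :]     # (B, 1, dim) star-graph center
            kv = entity_feats[:, i]             # (B, n_entities, dim)
            out, _ = self.attn(q, kv, kv)       # aggregate this lane's entities
            per_lane.append(out.squeeze(1))
        # fuse the aggregated lane results into a single state representation
        return self.fuse(torch.cat(per_lane, dim=1))
```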

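The hierarchical action structure can likewise be sketched in P-DQN style: an actor proposes continuous parameters for every discrete decision, and a critic scores each discrete decision given those parameters; the high level picks the best-scoring discrete decision and the low level executes its parameters. This is a hedged sketch with assumed network sizes, not the repo's "PolicyNet_multi" / "QValueNet_multi" classes.

```python
import torch
import torch.nn as nn

class ParamActor(nn.Module):
    """Low level: one continuous parameter vector per discrete decision."""
    def __init__(self, state_dim: int, n_discrete: int, param_dim: int):
        super().__init__()
        self.n_discrete, self.param_dim = n_discrete, param_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_discrete * param_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).view(-1, self.n_discrete, self.param_dim)

class ParamCritic(nn.Module):
    """High level: Q(s, x) for every discrete decision, given all parameters."""
    def __init__(self, state_dim: int, n_discrete: int, param_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_discrete * param_dim, 128), nn.ReLU(),
            nn.Linear(128, n_discrete),
        )

    def forward(self, state: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, params.flatten(start_dim=1)], dim=1))

def select_action(actor: ParamActor, critic: ParamCritic, state: torch.Tensor):
    """Greedy hierarchical action: a discrete decision (e.g., change lane or
    not) plus the continuous parameters of the chosen decision."""
    with torch.no_grad():
        params = actor(state)          # (1, n_discrete, param_dim)
        q = critic(state, params)      # (1, n_discrete)
        k = int(q.argmax(dim=1))
        return k, params[0, k]         # high-level choice + low-level parameters
```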

Code Structure

To reduce the required computing resources, this code obtains environmental features directly from CARLA's built-in functions instead of from sensor data.

  • algs

    • pdqn
      Implementation of our reinforcement learning algorithm, including the lane-wise cross attention model (LCA), the actor-critic model, and the hierarchical action. Class "lane_wise_cross_attention_encoder" is the LCA network, Class "PolicyNet_multi" is the actor network, and Class "QValueNet_multi" is the critic network. Class "P_DQN" includes the whole reinforcement learning workflow, including action selection and gradient updates.
    • replay_buffer
      Replay buffer for our reinforcement learning algorithm, used to store and sample experiences.
  • gym_carla
    Gym-like CARLA environment for a vehicle agent controlled by reinforcement learning.

    • carla_env.py
      Main module for the Gym-like CARLA environment, which shares the same APIs as classical Gym. Function "reset" initializes the environment at the beginning of an episode, and Function "step" performs state generation and reward calculation (see the usage sketch after this list).
    • settings.py
      This module contains the environment parameter list for carla_env. For example,
      the detection range $D_{tl}$ of traffic lights by the camera: $50m$,
      the detection range $D_v$ of conventional vehicles by LiDAR: $70m$,
      the number $n$ of waypoints observed by the autonomous vehicle: $10$,
      the time interval $\Delta t$ between two decisions: $0.1s$,
      the TTC threshold $\mathcal{G}$ in the safety reward: $4s$,
      the acceleration threshold $acc_{thr}$ in the comfort reward: $3m/s^2$,
      the velocity change threshold $vel_{thr}$ in the impact reward: $0.1m/s$.
      (A hypothetical sketch of the TTC-based safety reward appears after this list.)
    • agent
      • basic_agent.py
        BasicAgent implements an algorithm that navigates the scene.
      • basic_lane_change_agent.py
        A basic rule-based lane-changing model for the agent.
      • behavior_types.py
        This module contains different parameter sets for each behavior.
      • global_planner.py
        This module provides a high-level route plan, setting a global map route for each vehicle.
      • pid_controller.py
        This module contains PID controllers to perform lateral and longitudinal control.
    • util
      • misc.py
        This file contains auxiliary functions used in Carla. For example, a route calculation function, a distance calculation, a waypoint selection function, and so on.
      • render.py
        This file renders the front camera view of the ego vehicle in a pygame window.
      • bridge_functions.py
        This file includes data conversion (bridge) functions for onboard sensors.
      • extended_kalman_filter.py
        This file implements an extended Kalman filter.
      • geometric_functions.py
        This file implements the orientation transformation of vehicles.
      • sensor.py
        This file implements sensor-related functions and classes.
      • wrapper.py
        This file contains auxiliary functions and classes for carla_env. It includes specific state collection functions and reward calculation functions.
  • main

    • tester
      Code for testing our reinforcement learning model.
    • trainer
      Code for training our reinforcement learning model.
    • process.py
      Two functions used to start or kill a process.
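Putting the environment pieces together, a typical interaction with the Gym-like API of carla_env.py (referenced above) might look like the following. The constructor arguments and the agent object are illustrative assumptions; only the reset/step contract is taken from the description above.

```python
# Hedged usage sketch: the import path and constructor arguments are assumed,
# and `agent` stands in for any policy with a take_action(state) method.
from gym_carla.carla_env import CarlaEnv  # assumed import path

env = CarlaEnv()                 # actual constructor arguments may differ
state = env.reset()              # initialize the environment for a new episode
done = False
while not done:
    action = agent.take_action(state)             # hypothetical agent API
    state, reward, done, info = env.step(action)  # state generation + reward
```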

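The TTC threshold $\mathcal{G} = 4s$ listed in settings.py feeds the safety term of the hybrid reward. Below is a minimal, hypothetical sketch of such a TTC-based penalty; the repo's actual reward functions live in gym_carla/util/wrapper.py and may differ.

```python
TTC_THRESHOLD = 4.0  # seconds; the threshold G from settings.py

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC to the leading vehicle; infinite when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return gap_m / closing_speed_mps

def safety_reward(gap_m: float, closing_speed_mps: float) -> float:
    """0 when TTC >= G; increasingly negative as TTC shrinks below G.
    Hypothetical shape, not the paper's exact formula."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc >= TTC_THRESHOLD:
        return 0.0
    return -(TTC_THRESHOLD - ttc) / TTC_THRESHOLD  # in [-1, 0)
```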
Getting started

  1. Install and set up the CARLA simulator (0.9.14).

  2. Set up a conda environment with CUDA 11.7:

$ conda create -n env_name python=3.7
$ conda activate env_name

  3. Clone the repo and install the dependencies:

$ git clone https://github.com/xiayuyang/AUTO.git
$ pip install -r requirements.txt

  4. Update CARLA_PATH in gym_carla/settings.py to the path of your CARLA executable.

  5. Train the RL agent in the multi-lane scenario:

$ python ./main/trainer/pdqn_multi_lane.py

  6. Test the RL agent in the multi-lane scenario:

$ python ./main/tester/multi_lane_test.py

Reference

AUTO: A Parameterized Decision-making Framework with Multi-modality Perception for Autonomous Driving. ICDE 2024.

License

All code within this repository is under Apache License 2.0.

Acknowledgements

Our code is based on several open-source repositories.

