luismarzo / self-driving-golf-cart

This project forked from sigmaai/self-driving-golf-cart


Be Driven 🚘

Home Page: https://neilnie.com/the-self-driving-golf-cart-project/

License: MIT License

CMake 20.16% C 0.72% C++ 23.27% Makefile 39.67% Python 13.74% Shell 0.52% Common Lisp 1.39% JavaScript 0.53%

self-driving-golf-cart's Introduction


Introduction

Welcome! This is an open-source self-driving development platform aimed at rapid prototyping, deep learning, and robotics research. The system currently runs on a modified electric golf cart, but the code could work on a real car as well.

Goals:

  1. Research and develop a deep learning-driven self-driving car.
  2. The vehicle should be able to navigate from point A to point B autonomously within a geofenced area.

Here are the modules in this project.

  1. End-to-End steering (Behavioral cloning)
  2. Semantic Segmentation
  3. Drive by Wire (DBW)
  4. Object Detection 🚙
  5. Traffic Light Detection 🚦
  6. Lane Detection 🛣
  7. Localization 🛰️ (currently with GPS)

Path planning is coming soon...

For the full documentation of the development process, please visit: neilnie.com

Try it out

  1. Download/clone the repository.
  2. Make sure you have all the dependencies installed.
  3. Make sure that you have ROS installed on your computer.
  4. cd PROJECT_DIRECTORY/ros
  5. catkin_make
  6. source devel/setup.bash
  7. roslaunch driver drive.launch

You should see this screen pop up.


Bon Voyage 😀

Simulation

Building a self-driving car is hard. Not everyone has access to expensive hardware. If you want to run the code inside the CARLA self-driving simulator, please refer to this documentation. The ROS system in this project can run on the CARLA simulator.


ROS

This project is being developed using ROS. The launch files will launch the necessary nodes, as well as rviz for visualization. For more information on ROS, nodes, topics, and more, please refer to the README in the ./src directory.

The Autopilot System

TAS, found here in the autopilot node, uses deep learning to predict the steering and acceleration commands for the vehicle, using only data collected by the front-facing camera.

Behavioral cloning with 3D ConvNets

Motives

Several years ago, NVIDIA proposed a novel deep learning approach that allowed their car to accurately perform real-time end-to-end steering command prediction. Around the same time, Udacity held a challenge that asked researchers to create the best end-to-end steering prediction model. This component is inspired by that competition, and the goal is to further the work in behavioral cloning for self-driving vehicles.

NVIDIA's paper used a convolutional neural network with a single-frame input. I believe that a single-frame-input CNN doesn't provide any temporal information, which is critical in self-driving. This is the motive behind choosing the i3d architecture, which is rich in spatio-temporal information.

Model

The input of the network is a 3D convolutional block with the shape n * width * height * 3, where n is the length of the input sequence. Furthermore, the network uses nine Inception modules. The output layers are modified to accommodate this regression problem: a flatten layer and a dense layer are added to the back of the network.
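To illustrate why a 3D convolution captures the temporal information a single-frame CNN misses, here is a minimal NumPy sketch of one spatio-temporal filter sliding over a clip of n frames. The shapes and the random kernel are illustrative assumptions; the actual model uses the full i3d architecture, not this loop.

```python
import numpy as np

def conv3d_single(clip, kernel):
    """Naive valid-mode 3D convolution of one clip with one kernel.

    clip:   (n, height, width, 3) stack of n RGB frames
    kernel: (kt, kh, kw, 3) spatio-temporal filter
    Returns a (n-kt+1, height-kh+1, width-kw+1) feature map that mixes
    information across neighbouring frames as well as across pixels.
    """
    n, height, width, channels = clip.shape
    kt, kh, kw, kc = kernel.shape
    assert channels == kc, "clip and kernel must have the same channel count"
    out = np.zeros((n - kt + 1, height - kh + 1, width - kw + 1))
    for t in range(out.shape[0]):           # slide over time
        for y in range(out.shape[1]):       # slide over rows
            for x in range(out.shape[2]):   # slide over columns
                patch = clip[t:t + kt, y:y + kh, x:x + kw, :]
                out[t, y, x] = np.sum(patch * kernel)
    return out
```

For example, an 8-frame 16x16 RGB clip convolved with a 3x3x3 kernel yields a 6x14x14 feature map, so each output value depends on three consecutive frames.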


Here is a video demo of the deep learning model running on the autonomous golf cart.


Semantic Segmentation


The cart understands its surroundings through semantic segmentation, a computer vision technique that classifies each pixel in an image. The vehicle can also make decisions based on the semantic segmentation results; for example, the cart can change its speed based on its proximity to nearby obstacles.
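One way such a proximity-based speed policy can be sketched is to treat the bottom of the segmentation mask as the near field and slow down as obstacle pixels fill it. The class ids, the bottom-half heuristic, and the 4x gain below are all illustrative assumptions, not the project's actual control logic.

```python
import numpy as np

# Hypothetical obstacle class ids (e.g. person, rider, car) in the mask.
OBSTACLE_IDS = [11, 12, 13]

def target_speed(seg_mask, max_speed=5.0):
    """Scale the commanded speed by how much of the near field (bottom
    half of the per-pixel class mask) is occupied by obstacle classes."""
    near = seg_mask[seg_mask.shape[0] // 2:, :]          # bottom half ~ closest ground
    occupancy = np.isin(near, OBSTACLE_IDS).mean()       # fraction of obstacle pixels
    return max_speed * max(0.0, 1.0 - 4.0 * occupancy)   # full stop once ~25% occupied
```

With a free mask this returns the full speed; as obstacle pixels approach a quarter of the near field, the commanded speed drops to zero.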

We deployed the ENet architecture for segmentation. ENet is designed to work well in real-time applications. For more information, please refer to the paper. We used the Cityscapes dataset for training; the Python code for training and inference is located in the ./src/segmentation/scripts directory.


Localization

Currently, the localization module uses GPS (Global Positioning System) to find the precise location of the vehicle. However, GPS alone is far from enough. Localization using lidar and radar (sensor fusion and particle filters) is currently under development.


Furthermore, we are relying on OSM (OpenStreetMap) data for navigation. OSM maps provide detailed information about the paths, buildings and other landmarks in the surrounding area. Currently, navigation is limited to a geofenced area.
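Enforcing the geofence amounts to a point-in-polygon test against the boundary of the allowed area in local coordinates. A standard ray-casting sketch (the polygon and function name are illustrative, not the project's actual geofence check):

```python
def inside_geofence(x, y, polygon):
    """Ray-casting point-in-polygon test. `polygon` is a list of (x, y)
    vertices of the geofenced area in the local metric frame; returns
    True when the point lies inside the boundary."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a horizontal ray cast to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

The vehicle would refuse navigation goals for which this check fails.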


Path Planning

Coming soon...

What's next?

Phase 1

We have completed phase 1 of the development process, which mainly includes:

  • Drive-by-wire system
  • Autonomous steering system with deep learning
  • Basic obstacle avoidance system using segmentation & detection

As you might have realized, all of the above focuses on computer vision and deep learning. Currently, the vehicle can navigate autonomously in a controlled outdoor environment for about 1000 feet, swiftly avoiding obstacles and stopping for pedestrians.

Phase 2

For the second phase of the development process, we will focus on making the system safer and more reliable. Basic plans include:

  • Implement a localization system.
  • Write a path planner.
  • Collect more data in our geofenced environment. ✅
  • Improve the computer hardware. ✅
  • Improve the sensor system.

We are keeping track of all of our progress in the CHECKLIST.

Contact / Info

If you are interested in the detailed development process of this project, you can visit Neil's blog at neilnie.com to find out more. Neil will make sure to keep you posted about all of the latest developments on the project.

Developers:


Neil (Yongyang) Nie | Email | Github | Website | Linkedin


Michael Meng | Email | Github

self-driving-golf-cart's People

Contributors: neilnie, xmeng17

Watchers: James Cloos
