
# Deep learning to mimic human driving behaviour

When humans learn to drive, we use our experience to drive the same route under new circumstances and also to drive new routes. In this project we use a simulator to teach a model how we would drive around a track. Afterwards, the model should mimic this human driving behaviour (behavioural cloning) and drive around the same track as well as a new track that was not used for training. The agent that runs the model only controls the steering.

## Model Architecture

The model architecture used for this project is inspired by the paper 'End to End Learning for Self-Driving Cars' by NVIDIA (see resources) and Comma.ai's implementation (see resources). The final model has 7 convolutional layers and 4 fully connected layers. To avoid overfitting, max pooling layers (after the 3rd, 5th, and 7th convolutional layers) and dropout have been added. Dropout with a rate of 0.5 is applied after each max pooling layer and each fully connected layer. Exponential linear units (ELUs) are used as the activation function between layers. ELUs speed up learning in deep neural networks and avoid vanishing gradients via the identity for positive values (https://arxiv.org/pdf/1511.07289v1.pdf).


```
Total params: 1,369,997
Trainable params: 1,369,997
Non-trainable params: 0
```
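
The layer layout described above can be sketched in Keras roughly as follows. The filter counts and kernel sizes below are illustrative assumptions, so the parameter count will not match the summary above exactly; only the overall structure (7 convolutional layers, 4 fully connected layers, pooling after conv layers 3, 5, and 7, dropout of 0.5, ELU activations) follows the description.

```python
# Sketch of the described layout (Keras 1.x-style API).
# Filter counts and kernel sizes are assumptions for illustration only.
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten, Dense, Lambda

model = Sequential()
# Normalise pixel values to [-1, 1]
model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(64, 64, 3)))

# Conv layers 1-3, then max pooling + dropout
model.add(Convolution2D(32, 3, 3, activation='elu'))
model.add(Convolution2D(32, 3, 3, activation='elu'))
model.add(Convolution2D(64, 3, 3, activation='elu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

# Conv layers 4-5, then max pooling + dropout
model.add(Convolution2D(64, 3, 3, activation='elu'))
model.add(Convolution2D(128, 3, 3, activation='elu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

# Conv layers 6-7, then max pooling + dropout
model.add(Convolution2D(128, 3, 3, activation='elu'))
model.add(Convolution2D(128, 3, 3, activation='elu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

# Four fully connected layers, ending in a single steering-angle output
model.add(Flatten())
model.add(Dense(512, activation='elu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='elu'))
model.add(Dropout(0.5))
model.add(Dense(16, activation='elu'))
model.add(Dropout(0.5))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mse')
```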

## Training data

### Udacity

To train our model, we've used the training data provided by Udacity. The training data include images from the left, center, and right camera, plus the recorded steering angle. Note: during testing only the center camera is used as input. In total, Udacity's dataset contains 8,036 frames; if we include the left and right cameras, the total number of training images is 24,108.


The image size is 160x320x3. First we remove the top 40 pixels and the bottom 20 pixels from each image. The top 40 pixels contain the sky and some background, which should not be relevant for the steering angle. The bottom 20 pixels contain the front of the car.


To train our model faster, we have resized the images to 64x64x3. The same preprocessing steps are applied to incoming camera images during autonomous simulation.
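
A minimal sketch of such a preprocessing helper, assuming OpenCV is used for the resize (the project's actual implementation may differ):

```python
import cv2

def preprocess(image):
    """Crop sky/car hood and resize to the model input size.

    `image` is expected to be a 160x320x3 simulator frame.
    """
    # Remove the top 40 pixels (sky/background) and bottom 20 pixels (car hood)
    cropped = image[40:140, :, :]        # -> 100x320x3
    # Resize to the model's 64x64x3 input
    return cv2.resize(cropped, (64, 64))
```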


### Left and right cameras

It's important to teach our model what to do if the car moves away from the center of the road (even for a model that drives Track 1 perfectly this is advisable, because it helps the model generalise to other tracks as well). There are two ways to do this:

  1. Add recovery data.
  2. Use the left and right cameras with an adjusted steering angle.

We chose option 2, because it is more appropriate for real-world scenarios. This method is also used in the NVIDIA paper mentioned above. Our final model uses an offset of 0.25 to account for the side cameras.
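
In the training pipeline this boils down to adding or subtracting the offset when a side-camera frame is used. A minimal sketch (the function and variable names are illustrative, not the project's exact code):

```python
CAMERA_OFFSET = 0.25  # steering correction for the side cameras

def adjust_steering(camera, steering):
    """Return the steering label to use for a given camera frame.

    camera: one of 'left', 'center', 'right'
    steering: the recorded steering angle for the center camera
    """
    if camera == 'left':
        # The left camera sees the road as if the car drifted left,
        # so the label steers a bit more to the right.
        return steering + CAMERA_OFFSET
    if camera == 'right':
        return steering - CAMERA_OFFSET
    return steering
```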

## Transformation and augmentation

To make our model more robust, we've added transformations and augmentations to our images. We do this randomly in the batch generator, so that we don't have to keep all transformed and augmented images in memory.

### Flip image

Track 1 contains mostly left turns and only one right turn. To remove the bias from our model, we randomly flip every image and adjust the steering angle accordingly.

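A minimal sketch of this augmentation flips the image horizontally and negates the steering angle (the flip probability is an assumption):

```python
import numpy as np

def random_flip(image, steering, prob=0.5):
    """Randomly mirror the image left-right and negate the steering angle."""
    if np.random.rand() < prob:
        return np.fliplr(image), -steering
    return image, steering
```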

### Adjust brightness

To account for lighting differences on this track and future tracks, we've randomly changed the brightness of our training images. This helped our model a lot in driving around Track 2 as well.
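
One common way to do this, and what we assume here, is to convert the image to HSV and scale the V channel; the scaling range below is an assumption, not the project's exact value:

```python
import cv2
import numpy as np

def random_brightness(image):
    """Randomly scale the brightness of an RGB image."""
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    scale = np.random.uniform(0.4, 1.2)                 # illustrative range
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * scale, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```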

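Putting these pieces together, the batch generator mentioned at the start of this section can be sketched as follows. This is a simplified sketch: the real generator also picks among the three cameras and applies the steering offset described earlier, and the file-reading details here are assumptions. `preprocess`, `random_flip`, and `random_brightness` are the helpers sketched above.

```python
import cv2
import numpy as np

def batch_generator(image_paths, steering_angles, batch_size=64):
    """Yield (images, angles) batches with random augmentation applied on the fly,
    so the augmented copies never all sit in memory at once."""
    n = len(image_paths)
    while True:
        idx = np.random.choice(n, batch_size)
        images, angles = [], []
        for i in idx:
            image = cv2.cvtColor(cv2.imread(image_paths[i]), cv2.COLOR_BGR2RGB)
            image = preprocess(image)                 # crop + resize (see above)
            angle = steering_angles[i]
            image, angle = random_flip(image, angle)  # remove the left-turn bias
            image = random_brightness(image)          # lighting robustness
            images.append(image)
            angles.append(angle)
        yield np.array(images), np.array(angles)
```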

## Building our model

First, we started with the suggested NVIDIA model and built on top of that, testing progress at each step. All parameters have been empirically tested, and we tried many of the improvements suggested on Slack by fellow students (thanks!). We've fine-tuned all parameters until the model was able to drive multiple laps without crossing any lines.

## Running the model

The model has been tested on Track 1 and Track 2 with a screen resolution of 640x480 and the fastest graphics quality. The trained model is able to drive around both tracks (with a custom throttle: slower for bigger steering angles). On Track 1 it drives around without crossing any lane lines or bumping into objects. On Track 2 it is able to drive until the end of the track without bumping into objects most of the time. To run the simulator use: `python drive.py model.json`.
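
The custom throttle mentioned above simply reduces speed for larger steering angles. A minimal sketch of such a rule (the constants below are assumptions, not the project's exact values):

```python
def throttle_for(steering_angle, base=0.3, min_throttle=0.1, k=0.5):
    """Lower the throttle as the steering angle gets larger (sharper turn)."""
    return max(min_throttle, base - k * abs(steering_angle))
```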

## Resources

  • 'End to End Learning for Self-Driving Cars', NVIDIA: https://arxiv.org/abs/1604.07316
  • Comma.ai steering model implementation: https://github.com/commaai/research
  • 'Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)': https://arxiv.org/abs/1511.07289
