carnd-behavioral-cloning-p3's Issues

import socketio

I cannot import socketio in drive.py. I installed it, but I still get a "No module named socketio" error.
How can I solve this problem?
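If it helps (my suggestion, not from the thread): the PyPI package that provides the socketio module is python-socketio, so running pip install python-socketio inside the activated carnd-term1 environment is the usual fix.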

python-socketio IndexError: too many indices for array message handler error

('connect ', 'c15c52397cdb4e5eafa5da78993a3b66')
message handler error
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/engineio/server.py", line 398, in _trigger_event
return self.handlers[event](*args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/socketio/server.py", line 520, in _handle_eio_message
self._handle_event(sid, pkt.namespace, pkt.id, pkt.data)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/socketio/server.py", line 456, in _handle_event
self._handle_event_internal(self, sid, data, namespace, id)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/socketio/server.py", line 459, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/socketio/server.py", line 488, in _trigger_event
return self.handlers[namespace][event](*args)
File "drive.py", line 64, in telemetry
steering_angle = float(model.predict(image_array[None, :, :, :], batch_size=1))
IndexError: too many indices for array
message handler error

Please help me.
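Not from the original report, just a hedged sketch of what the failing line in telemetry does, with a defensive shape check (assuming image_array and model are the objects used in the stock drive.py):

import numpy as np

# image_array should be height x width x 3; the IndexError above means it has
# fewer dimensions (e.g. the frame failed to decode into an image)
if image_array.ndim == 3:
    steering_angle = float(model.predict(image_array[None, :, :, :], batch_size=1))
else:
    print("Unexpected image shape:", getattr(image_array, 'shape', None))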

MoviePy couldn't find the codec associated with the filename

$ python video.py --fps 30 first_working/

Creating video first_working/.mp4, FPS=30
Traceback (most recent call last):
  File "/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/moviepy/video/VideoClip.py", line 276, in write_videofile
    codec = extensions_dict[ext]['codec'][0]
KeyError: ''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "../video.py", line 27, in <module>
    main()
  File "../video.py", line 23, in main
    clip.write_videofile(video_file)
  File "<decorator-gen-51>", line 2, in write_videofile
  File "/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/moviepy/decorators.py", line 54, in requires_duration
    return f(clip, *a, **k)
  File "<decorator-gen-50>", line 2, in write_videofile
  File "/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/moviepy/decorators.py", line 137, in use_clip_fps_by_default
    return f(clip, *new_a, **new_kw)
  File "<decorator-gen-49>", line 2, in write_videofile
  File "/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
    return f(clip, *a, **k)
  File "/root/miniconda3/envs/carnd-term1/lib/python3.5/site-packages/moviepy/video/VideoClip.py", line 278, in write_videofile
    raise ValueError("MoviePy couldn't find the codec associated "
ValueError: MoviePy couldn't find the codec associated with the filename. Provide the 'codec' parameter in write_videofile.
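A hedged workaround, not from the original report: the trailing slash likely makes the derived output name first_working/.mp4, whose basename starts with a dot and so has no extension MoviePy can map to a codec. Dropping the slash (python video.py --fps 30 first_working) or passing the codec explicitly, as the error suggests, should avoid it:

# assuming `clip` is the ImageSequenceClip that video.py builds;
# libx264 is the usual codec choice for .mp4 output
clip.write_videofile("first_working.mp4", codec="libx264")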

Video error: AttributeError: 'numpy.ndarray' object has no attribute 'save'

I wanted to record a video of my finished project.
When I run: python drive.py model.h5

everything runs with no errors.

When I run:
python drive.py model.h5 run1

I get the following, and no images are recorded:

Using TensorFlow backend.
Creating image folder at run1
RECORDING THIS RUN ...
(13220) wsgi starting up on http://0.0.0.0:4567
(13220) accepted ('127.0.0.1', 56919)
connect 76398ec1054e40a585d1dbb386eafad8
-0.12403745949268341 0.2
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\eventlet\wsgi.py", line 481, in handle_one_response
result = self.application(self.environ, start_response)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\middleware.py", line 47, in call
return self.engineio_app.handle_request(environ, start_response)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\socketio\server.py", line 332, in handle_request
return self.eio.handle_request(environ, start_response)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\server.py", line 230, in handle_request
transport, b64)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\server.py", line 334, in _handle_connect
return s.handle_get_request(environ, start_response)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\socket.py", line 80, in handle_get_request
start_response)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\socket.py", line 119, in _upgrade_websocket
return ws(environ, start_response)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\async_eventlet.py", line 14, in call
return super(WebSocketWSGI, self).call(environ, start_response)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\eventlet\websocket.py", line 127, in call
self.handler(ws)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\socket.py", line 185, in _websocket_handler
self.receive(pkt)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\socket.py", line 49, in receive
async=self.server.async_handlers)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\engineio\server.py", line 357, in _trigger_event
return self.handlers[event](*args)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\socketio\server.py", line 489, in _handle_eio_message
self._handle_event(sid, pkt.namespace, pkt.id, pkt.data)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\socketio\server.py", line 428, in _handle_event
self._handle_event_internal(self, sid, data, namespace, id)
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\socketio\server.py", line 431, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
File "C:\ProgramData\Anaconda3\envs\carnd-term1\lib\site-packages\socketio\server.py", line 460, in _trigger_event
return self.handlers[namespace][event](*args)
File "drive.py", line 60, in telemetry
image.save('{}.jpg'.format(image_filename))
AttributeError: 'numpy.ndarray' object has no attribute 'save'
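Not from the original report, but the error says the object being saved is a NumPy array rather than a PIL Image. A hedged sketch of a save step that works when only the array is available (the stock drive.py keeps the original PIL image object around instead):

from PIL import Image

# convert the NumPy array back into a PIL Image before saving
Image.fromarray(image_array).save('{}.jpg'.format(image_filename))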

Autonomous Mode Recording Error

OS: Ubuntu 16.04
GPU: NVIDIA GTX 1080
Relevant Libraries: TensorFlow 1.5, CUDA 9, cuDNN 7

CUDA appears to throw an error when I try to record my model running in "Autonomous Mode".

RECORDING THIS RUN ...
(19527) wsgi starting up on http://0.0.0.0:4567
(19527) accepted ('127.0.0.1', 45273)
connect  925f45635cb94f989daec38d1c04151e
2018-01-24 16:45:44.319360: E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2018-01-24 16:45:44.319398: E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
2018-01-24 16:45:44.319408: F tensorflow/core/kernels/conv_ops.cc:717] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms) 
Aborted (core dumped)
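This looks like the same root cause as the later issue "drive.py crashes when running both keras model and simulator on local GPU" (my inference, not stated in this report): TensorFlow allocating all remaining VRAM while the simulator also needs it. The allow_growth workaround shown there should apply here as well.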

Wrong code in model.fit_generator()?

In the behavioral cloning project, the last three lines of the sample code in 18. Generators
(https://classroom.udacity.com/nanodegrees/nd013/parts/edf28735-efc1-4b99-8fbb-ba9c432239c8/modules/6b6c37bc-13a5-47c7-88ed-eb1fce9789a0/lessons/3fc8dd70-23b3-4f49-86eb-a8707f71f8dd/concepts/b602658e-8a68-44e5-9f0b-dfa746a0cc1a) are:

model.fit_generator(train_generator, steps_per_epoch=len(train_samples), validation_data=validation_generator, validation_steps=len(validation_samples), epochs=5, verbose=1)

However, I checked the keras documentation:
(https://keras.io/models/sequential/)

steps_per_epoch: Integer. Total number of steps (batches of samples) to yield from generator before declaring one epoch finished and starting the next epoch. It should typically be equal to ceil(num_samples / batch_size). Optional for Sequence: if unspecified, will use the len(generator) as a number of steps.

validation_steps: Only relevant if validation_data is a generator. Total number of steps (batches of samples) to yield from validation_data generator before stopping at the end of every epoch. It should typically be equal to the number of samples of your validation dataset divided by the batch size. Optional for Sequence: if unspecified, will use the len(validation_data) as a number of steps.

So shouldn't steps_per_epoch and validation_steps be

steps_per_epoch = ceil(len(train_samples)/batch_size)

validation_steps = ceil(len(validation_samples)/batch_size)

instead?
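A minimal sketch of the corrected call (my reading of the Keras docs, not an official fix from the course; batch_size must match the batch size used inside the generators):

from math import ceil

batch_size = 32  # must match the generators' batch size
model.fit_generator(train_generator,
                    steps_per_epoch=ceil(len(train_samples) / batch_size),
                    validation_data=validation_generator,
                    validation_steps=ceil(len(validation_samples) / batch_size),
                    epochs=5, verbose=1)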

Simulator Load File error

Dear Manav:
I get a load file error when I run: python drive.py model.h5

(BTW, there is some confusion in the community about whether the .h5 file or the .json file should be loaded.)

I run : python drive.py model.h5

I get on my windows 10 machine:
Using TensorFlow backend.
Traceback (most recent call last):
File "drive.py", line 83, in
model = load_model(args.model)
File "d:\kits\Anaconda3\envs\carnd-term1\lib\site-packages\keras\models.py", line 140, in load_model
raise ValueError('No model found in config file.')
ValueError: No model found in config file.

Here is my save code:
..........
model.summary()
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
history = model.fit(X_train, y_train, nb_epoch=10, batch_size=2000, verbose=1,
                    validation_data=(X_validation, y_validation))
score = model.evaluate(X_validation, y_validation)
print('Test score:', score[0])
print('Test accuracy:', score[1])
print("\nSaving model weights and configuration file.")
with open('model.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('model.h5')
f.close()
print("Saved model to disk")
from keras import backend as K
K.clear_session()

Thanks for any guidance you can provide me.
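A likely explanation (mine, not confirmed in the thread): the stock drive.py calls keras.models.load_model(), which expects a file containing the full model (architecture plus weights), whereas the code above writes only the weights with model.save_weights(). A hedged sketch of a save step that matches drive.py:

# model.save() writes architecture, weights and optimizer state into a single
# HDF5 file, which is what load_model() in drive.py expects to open
model.save('model.h5')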

The project has no license

Hi @domluna, @andrewpaster,
Notice that our project doesn't have a license, which is a little unprofessional for an open-source project, especially one with this much adoption and extension as part of an online education program. "No license terms" means users don't know whether it can be used freely. Another reference is "Can I call my program 'Open Source' even if I don't use an approved license?".

So I suggest the project add a license. I recommend we choose one of the licenses below; they are all permissive licenses, but they carry different redistribution disclaimers:

Regards,
Brandon

UnicodeDecodeError: 'gbk' codec can't decode byte 0xbf in position 2

I tried to run drive.py both with and without a model.h5 file; both attempts give the same error message:

UnicodeDecodeError: 'gbk' codec can't decode byte 0xbf in position 2: illegal multibyte sequence

It looks like the issue occurs when it tries to import socketio.

I already installed all the libraries by following the CarND-Term1-Starter-Kit guide; the previous projects had no issues at all.

I'm running on Windows 10, 64-bit, version 1709.

Console log:

(carnd-term1) λ python drive.py
Traceback (most recent call last):
  File "drive.py", line 8, in <module>
    import socketio
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\socketio\__init__.py", line 8, in <module>
    from .zmq_manager import ZmqManager
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\socketio\zmq_manager.py", line 5, in <module>
    import eventlet.green.zmq as zmq
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\eventlet\__init__.py", line 10, in <module>
    from eventlet import convenience
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\eventlet\convenience.py", line 6, in <module>
    from eventlet.green import socket
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\eventlet\green\socket.py", line 21, in <module>
    from eventlet.support import greendns
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\eventlet\support\greendns.py", line 390, in <module>
    resolver = ResolverProxy(hosts_resolver=HostsResolver())
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\eventlet\support\greendns.py", line 171, in __init__
    self._load()
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\eventlet\support\greendns.py", line 198, in _load
    lines = self._readlines()
  File "C:\Users\Kenneth\Miniconda3\envs\carnd-term1\lib\site-packages\eventlet\support\greendns.py", line 184, in _readlines
    for line in fp:
UnicodeDecodeError: 'gbk' codec can't decode byte 0xbf in position 2: illegal multibyte sequence
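Not from the original issue, but the traceback shows eventlet's greendns module reading the Windows hosts file with the system's default GBK codec and choking on a non-GBK byte. One commonly suggested workaround (an assumption on my part, not a verified fix for this report) is to disable greendns before eventlet is imported, at the very top of drive.py:

import os

# must run before anything imports eventlet (socketio pulls it in)
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'

import socketio  # now imports without touching greendns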

Paths in example data file don't match when generating your own.

The example data file has IMG/center_2017_05_21_19_15_37_960.jpg, but when the recorder generates files it writes the full path /home/[stuff]/CarND-Behavioral-Cloning-P3/data/train_3/... Combine this with the fact that image = cv2.imread(img_path) silently fails on missing files (returning None), and it can cause some headaches. Please just make them all relative to the CSV file.
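Not part of the issue, but a defensive pattern that sidesteps the mismatch (names here are illustrative, not from the repo): keep only the file name from whatever the CSV stored, and fail loudly when the read returns None.

import os
import cv2

def load_image(csv_path, img_dir='./data/IMG'):
    # works whether the CSV stored an absolute Linux/Windows path or a
    # path relative to the CSV file
    filename = os.path.basename(csv_path.strip().replace('\\', '/'))
    image = cv2.imread(os.path.join(img_dir, filename))
    if image is None:
        raise FileNotFoundError('Missing image: ' + filename)
    return image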

drive.py crashes when running both keras model and simulator on local GPU

It appears that the default behaviour of Tensorflow (as of version 1.5) is to allocate all remaining VRAM to a running session. However, this causes VRAM conflicts with the simulator and leads to a memory allocation error when running the computation graph.
The following error message is seen if this error occurs:

E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
F tensorflow/core/kernels/conv_ops.cc:605] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms) 

The error is also reported in tensorflow/tensorflow#6698 and keras-team/keras#8353.
One workaround for this problem is to add the following to drive.py:

import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
# grow GPU memory usage on demand instead of pre-allocating all remaining VRAM
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
K.set_session(sess)

This is also reported in #29 but that thread does not mention the root of the problem.

drive.py runs training?

After saving my model with model.save('model.h5'), I run python drive.py model.h5. I expected it just to start the WSGI server and then drive the car, but instead it runs the training all over again. I must be doing something incorrectly?

drive.py crashes

When I run the command python drive.py model.h5, I get this error:

Using TensorFlow backend.

terminate called after throwing an instance of 'Xbyak::Error'

what():  internal error

I have Linux Mint 18, Python 3.6.4, Anaconda 4.5.0.

Thanks.

Error when checking model input: expected lambda_input_4 to have 4 dimensions, but got array with shape (32, 1)

Hello, I got an error.

Code:

import csv
import cv2
import numpy as np
import sklearn
import os
from random import shuffle


lines = []
with open('./data/driving_log.csv') as csvfile:
    reader = csv.reader(csvfile)
    for line in reader:
        lines.append(line)
images = []
measurements = []
correction = 0.15

from sklearn.model_selection import train_test_split
train_samples, validation_samples = train_test_split(lines, test_size=0.2)

def generator(samples, batch_size=32):
    num_samples = len(lines)
    while 1: # Loop forever so the generator never terminates
        shuffle(samples)
        for offset in range(0, num_samples, batch_size):
            batch_samples = samples[offset:offset+batch_size]

            images = []
            angles = []
            for batch_sample in batch_samples:
                name = './IMG/'+batch_sample[0].split('\\')[-1]
                center_image = cv2.imread(name)
                center_angle = float(batch_sample[3])
                images.append(center_image)
                angles.append(center_angle)              

            # trim image to only see section with road
            X_train = np.array(images)
            y_train = np.array(angles)
            yield sklearn.utils.shuffle(X_train, y_train)

from keras.models import Sequential
from keras.layers import Cropping2D
from keras.layers import Flatten, Dense, Lambda
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D

train_generator = generator(train_samples, batch_size=32)
validation_generator = generator(validation_samples, batch_size=32)

model = Sequential()
model.add(Lambda(lambda x: x / 127.5 - 1, input_shape=(160,320,3),output_shape=(160,320,3)))
model.add(Cropping2D(cropping=((70,25),(0,0))))
model.add(Convolution2D(24,5,5,subsample=(2,2),activation="relu"))
model.add(Convolution2D(36,5,5,subsample=(2,2),activation="relu"))
model.add(Convolution2D(48,5,5,subsample=(2,2),activation="relu"))
model.add(Convolution2D(64,3,3,activation="relu"))
model.add(Convolution2D(64,3,3,activation="relu"))
model.add(Flatten())
model.add(Dense(100))
model.add(Dense(50))
model.add(Dense(10))
model.add(Dense(1))

model.compile(loss='mse',optimizer='adam')
model.fit_generator(train_generator, samples_per_epoch= len(train_samples), validation_data=validation_generator, nb_val_samples=len(validation_samples), nb_epoch=3)

model.save('model.h5')

Error:

Epoch 1/3

ValueError Traceback (most recent call last)
in ()
109 # model.fit(X_train, y_train, validation_split=0.2,shuffle=True,nb_epoch=3)
110
--> 111 model.fit_generator(train_generator, samples_per_epoch= len(train_samples), validation_data=validation_generator, nb_val_samples=len(validation_samples), nb_epoch=3)
112
113 model.save('model.h5')

/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/keras/models.py in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch, **kwargs)
933 nb_worker=nb_worker,
934 pickle_safe=pickle_safe,
--> 935 initial_epoch=initial_epoch)
936
937 def evaluate_generator(self, generator, val_samples,

/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/keras/engine/training.py in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch)
1551 outs = self.train_on_batch(x, y,
1552 sample_weight=sample_weight,
-> 1553 class_weight=class_weight)
1554
1555 if not isinstance(outs, list):

/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
1308 sample_weight=sample_weight,
1309 class_weight=class_weight,
-> 1310 check_batch_axis=True)
1311 if self.uses_learning_phase and not isinstance(K.learning_phase, int):
1312 ins = x + y + sample_weights + [1.]

/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size)
1028 self.internal_input_shapes,
1029 check_batch_axis=False,
-> 1030 exception_prefix='model input')
1031 y = standardize_input_data(y, self.output_names,
1032 output_shapes,

/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/keras/engine/training.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
110 ' to have ' + str(len(shapes[i])) +
111 ' dimensions, but got array with shape ' +
--> 112 str(array.shape))
113 for j, (dim, ref_dim) in enumerate(zip(array.shape, shapes[i])):
114 if not j and not check_batch_axis:

ValueError: Error when checking model input: expected lambda_input_4 to have 4 dimensions, but got array with shape (32, 1)
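Not an answer from the thread, but two things in the generator stand out: num_samples is computed from lines rather than the samples argument, and cv2.imread silently returns None when the reconstructed path doesn't exist, which collapses the batch into the wrong shape. A hedged patch of just the generator (reusing the same imports as the code above):

import cv2
import numpy as np
import sklearn
from random import shuffle

def generator(samples, batch_size=32):
    num_samples = len(samples)  # was len(lines): use the split actually passed in
    while 1:  # loop forever so the generator never terminates
        shuffle(samples)
        for offset in range(0, num_samples, batch_size):
            batch_samples = samples[offset:offset + batch_size]
            images, angles = [], []
            for batch_sample in batch_samples:
                # handle both Windows ("\") and Linux ("/") separators in the CSV
                name = './IMG/' + batch_sample[0].replace('\\', '/').split('/')[-1]
                center_image = cv2.imread(name)
                if center_image is None:
                    continue  # missing file: skip it rather than poison the batch
                images.append(center_image)
                angles.append(float(batch_sample[3]))
            X_train = np.array(images)
            y_train = np.array(angles)
            yield sklearn.utils.shuffle(X_train, y_train)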

Need help understanding throttle

The drive.py seems to have changed since the time I worked on this project. Can anyone help me understand what the following line of code does:

throttle = controller.update(float(speed))

Earlier we could set throttle to a constant numerical value, but it seems we cannot do that now.
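For context (my paraphrase of the updated drive.py, not the issue author's): controller is a simple PI speed controller, and update() compares the car's current speed with a set point and returns a throttle value that tracks it. A minimal sketch of the idea:

class SimplePIController:
    def __init__(self, Kp, Ki):
        self.Kp, self.Ki = Kp, Ki
        self.set_point = 0.0
        self.error = 0.0
        self.integral = 0.0

    def set_desired(self, desired):
        self.set_point = desired

    def update(self, measurement):
        # proportional-integral control on the speed error
        self.error = self.set_point - measurement
        self.integral += self.error
        return self.Kp * self.error + self.Ki * self.integral

controller = SimplePIController(0.1, 0.002)
controller.set_desired(9)                 # target speed; raise or lower to taste
throttle = controller.update(float(speed))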

README file

In the README file section about how to save a video, this line:

python drive.py model.json run1

should be

python drive.py model.h5 run1

because of how drive.py was changed.

Locale

Hello,
I faced an issue when running drive.py to autonomously drive the car.
In practice, the steering and throttle data sent to the simulator contained the dot "." character instead of the comma ",".
This caused strange behaviour, but no errors.

To fix it, I just added a "replace" call on the strings before sending them to the simulator, but I do not know whether this can be the final solution.

I thought it could be caused by the locale settings, because I live in Italy and my PC is set to the Italian locale.

I hope this post can be of some help.
Daniele
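A guess at what the workaround looks like (not Daniele's actual patch): swap the decimal separator in send_control before emitting, so the simulator parses the numbers under a comma-decimal locale. Whether "," or "." is needed depends on the machine's locale, so treat this as a sketch:

def send_control(steering_angle, throttle):
    sio.emit(
        "steer",
        data={
            # swap the decimal separator for locales that expect "," (e.g. it_IT)
            'steering_angle': str(steering_angle).replace('.', ','),
            'throttle': str(throttle).replace('.', ','),
        },
        skip_sid=True)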

Throttle issue with "1.0"

Some folks in Slack reported having trouble with the default 1.0 throttle. I suggest reviewing it, or at least changing the default setting to something else like ".9".

