
edjeelectronics / tensorflow-object-detection-api-tutorial-train-multiple-objects-windows-10

How to train a TensorFlow Object Detection Classifier for multiple object detection on Windows

License: Apache License 2.0

Python 100.00%

tensorflow-object-detection-api-tutorial-train-multiple-objects-windows-10's People

Contributors

azylinski, dev-hjyoo, edjeelectronics, hicraigchen, marco-ray, mgh3326, moiseslodeiro, winter2897


tensorflow-object-detection-api-tutorial-train-multiple-objects-windows-10's Issues

Error when running the Jupyter object detection tutorial script

Hi,

I have downloaded the following:

  • CUDA Version 9.0.176
  • cuDNN v7.1.2 (Mar 21, 2018), for CUDA 9.0

but I still get the following error when running the very last chunk of code for the object detection demo:

object detection error

Can anyone help me solve this issue? I'm very new to coding in general and don't know how to solve it, or even how to go about troubleshooting it.

Thank you,

Object detection webcam video lag issue

I have trained my own object classifier using your method and the classification is perfect, but I'm facing an issue with the real-time video feed from the webcam: the video lags a lot. Can you please help me with this issue? Thank you.
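
One workaround that is sometimes used for webcam lag (a minimal sketch, not part of the tutorial) is to run the detector only every Nth frame and reuse the previous boxes in between. It assumes the same setup as Object_detection_webcam.py: sess, image_tensor, detection_boxes, detection_scores, detection_classes, num_detections, category_index, vis_util and the cv2.VideoCapture object video already exist.

import cv2
import numpy as np

DETECT_EVERY_N_FRAMES = 5          # tune: higher = smoother video, staler boxes
frame_count = 0
boxes = scores = classes = None

while True:
    ret, frame = video.read()
    if not ret:
        break
    frame_count += 1
    # Only run the (slow) detection graph on every Nth frame
    if boxes is None or frame_count % DETECT_EVERY_N_FRAMES == 1:
        frame_expanded = np.expand_dims(frame, axis=0)
        (boxes, scores, classes, num) = sess.run(
            [detection_boxes, detection_scores, detection_classes, num_detections],
            feed_dict={image_tensor: frame_expanded})
    # Draw the most recent results on every frame so the video stays fluid
    vis_util.visualize_boxes_and_labels_on_image_array(
        frame, np.squeeze(boxes), np.squeeze(classes).astype(np.int32),
        np.squeeze(scores), category_index,
        use_normalized_coordinates=True, line_thickness=8, min_score_thresh=0.80)
    cv2.imshow('Object detector', frame)
    if cv2.waitKey(1) == ord('q'):
        break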

How to skip the detected object by class

In the part of the code below, I want to skip some of the objects detected in the current frame; for example, I don't want to draw objects of class number 2.

# Perform the actual detection by running the model with the image as input
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: frame_expanded})

    # Draw the results of the detection (aka 'visulaize the results')
    vis_util.visualize_boxes_and_labels_on_image_array(
        frame,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=8,
        min_score_thresh=0.80)

How is it possible?
Thanks!!
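
One possible approach (a hedged sketch, not from the tutorial itself) is to zero out the scores of the classes you do not want drawn before calling the visualization function; anything below min_score_thresh is then skipped. The class IDs in SKIP_CLASSES are whatever your labelmap assigns; frame, vis_util and category_index are assumed to be the ones already set up in the script.

import numpy as np

boxes = np.squeeze(boxes)
scores = np.squeeze(scores)
classes = np.squeeze(classes).astype(np.int32)

SKIP_CLASSES = {2}  # labelmap class IDs that should not be drawn
scores[np.isin(classes, list(SKIP_CLASSES))] = 0.0  # push them below min_score_thresh

vis_util.visualize_boxes_and_labels_on_image_array(
    frame, boxes, classes, scores, category_index,
    use_normalized_coordinates=True,
    line_thickness=8,
    min_score_thresh=0.80)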

Configuration

For the model to train on 500 images, the default configuration runs 200,000 training steps!

What configuration (RAM, NVIDIA graphics card) is required to finish it in 2 days?

Can anyone please suggest?

batch_size>1 causes error

This repository works well with batch_size=1. I have read that increasing batch_size gives better accuracy due to the manner in which the weight updates are averaged. When I modify faster_rcnn_inception_v2_pets.config to this...

train_config: {
batch_size: 10

I get this error near the very beginning of training
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,1024,576,3] vs. shape[7] = [1,576,1024,3]

Thoughts?
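
The mismatched shapes ([1,1024,576,3] vs. [1,576,1024,3]) suggest images of different orientations: with a keep_aspect_ratio_resizer each image can end up a different size, so they cannot be concatenated into one batch. One common workaround (a hedged suggestion, not an official fix) is to switch the model's image_resizer in faster_rcnn_inception_v2_pets.config to a fixed-size resizer so every image in the batch has the same shape; the height/width values below are only illustrative.

model {
  faster_rcnn {
    image_resizer {
      fixed_shape_resizer {
        height: 600
        width: 1024
      }
    }
  }
}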

error when running Object_detection_image.py

I am trying to run Object_detection_image.py with the frozen inference graph of your trained Pinochle Deck card detector. I have installed TensorFlow 1.4 and it works with the demo Jupyter notebook script, "object_detection_tutorial.ipynb". But when I run Object_detection_image.py, I encounter the error shown in the attached screenshot.
error-messages-4-21-18
I really appreciate it if you could take a look and help me out.
Thanks very much for your time!

Issue on Step 4

I have an issue on Step 4. After running generate_tfrecord.py I receive the following error (KeyError: 'filename'). What should I do?

Traceback (most recent call last):
File "generate_tfrecord.py", line 102, in
tf.app.run()
File "C:\Users\Matheus Freire\AppData\Local\conda\conda\envs\tf15\lib\site- packages\tensorflow\python\platform\app.py", line 124, in run _sys.exit(main(argv))
File "generate_tfrecord.py", line 91, in main
grouped = split(examples, 'filename')
File "generate_tfrecord.py", line 42, in split
gb = df.groupby(group)
File "C:\Users\Matheus Freire\AppData\Local\conda\conda\envs\tf15\lib\site- packages\pandas\core\generic.py", line 5162, in groupby
**kwargs)
File "C:\Users\Matheus Freire\AppData\Local\conda\conda\envs\tf15\lib\site- packages\pandas\core\groupby.py", line 1848, in groupby
return klass(obj, by, **kwds)
File "C:\Users\Matheus Freire\AppData\Local\conda\conda\envs\tf15\lib\site- packages\pandas\core\groupby.py", line 516, in init
mutated=self.mutated)
File "C:\Users\Matheus Freire\AppData\Local\conda\conda\envs\tf15\lib\site- packages\pandas\core\groupby.py", line 2934, in _get_grouper
raise KeyError(gpr)
KeyError: 'filename'

Thanks!
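
This usually means the CSV written in the previous step has no 'filename' column, for example because no .xml files were found and the CSV came out empty. A quick way to check (a hedged sketch assuming the default images/train_labels.csv path from the tutorial):

import pandas as pd

df = pd.read_csv('images/train_labels.csv')
print(len(df), 'rows')
print(df.columns.tolist())  # expected: ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']

If the column list or the row count is empty, re-run xml_to_csv.py and confirm the .xml files are actually in images\train and images\test.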

Error while training ...

Hi,

I followed your tutorial up to the training step, but I get the error below.

Somebody said this is related to Python 3, but you also used Python 3 (3.6). Maybe the Object Detection API repo has changed since you prepared your own repo. Anyway, I'm confused and cannot move forward.

WARNING:tensorflow:From D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\trainer.py:228: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
  File "D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\utils\label_map_util.py", line 132, in load_labelmap
    text_format.Merge(label_map_string, label_map)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 525, in Merge
    descriptor_pool=descriptor_pool)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 579, in MergeLines
    return parser.MergeLines(lines, message)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 612, in MergeLines
    self._ParseOrMerge(lines, message)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 627, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 727, in _MergeField
    merger(tokenizer, message, field)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 815, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 727, in _MergeField
    merger(tokenizer, message, field)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 866, in _MergeScalarField
    value = tokenizer.ConsumeString()
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 1229, in ConsumeString
    the_bytes = self.ConsumeByteString()
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 1244, in ConsumeByteString
    the_list = [self._ConsumeSingleByteString()]
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\text_format.py", line 1263, in _ConsumeSingleByteString
    raise self.ParseError('Expected string but found: %r' % (text,))
google.protobuf.text_format.ParseError: 3:8 : Expected string but found: '‘'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\internal\python_message.py", line 1069, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\internal\python_message.py", line 1091, in InternalParse
    (tag_bytes, new_pos) = local_ReadTag(buffer, pos)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\internal\decoder.py", line 181, in ReadTag
    while six.indexbytes(buffer, pos) & 0x80:
TypeError: unsupported operand type(s) for &: 'str' and 'int'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 169, in <module>
    tf.app.run()
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\tensorflow\python\platform\app.py", line 124, in run
    _sys.exit(main(argv))
  File "train.py", line 165, in main
    worker_job_name, is_chief, FLAGS.train_dir)
  File "D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\trainer.py", line 235, in train
    train_config.prefetch_queue_capacity, data_augmentation_options)
  File "D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\trainer.py", line 59, in create_input_queue
    tensor_dict = create_tensor_dict_fn()
  File "train.py", line 122, in get_next
    worker_index=FLAGS.task)).get_next()
  File "D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\builders\dataset_builder.py", line 140, in build
    label_map_proto_file=label_map_proto_file)
  File "D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\data_decoders\tf_example_decoder.py", line 143, in __init__
    use_display_name)
  File "D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\utils\label_map_util.py", line 149, in get_label_map_dict
    label_map = load_labelmap(label_map_path)
  File "D:\Image_Processing\TFOD\models-master\models-master\research\object_detection\utils\label_map_util.py", line 134, in load_labelmap
    label_map.ParseFromString(label_map_string)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\message.py", line 185, in ParseFromString
    self.MergeFromString(serialized)
  File "C:\Users\Hesam\AppData\Local\conda\conda\envs\tfod\lib\site-packages\google\protobuf\internal\python_message.py", line 1075, in MergeFromString
    raise message_mod.DecodeError('Truncated message.')
google.protobuf.message.DecodeError: Truncated message.
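
The first ParseError ("Expected string but found: '‘'") points at a curly (typographic) quote in labelmap.pbtxt, which word processors often insert automatically. A small check for such characters (a hedged sketch assuming the labelmap is at training/labelmap.pbtxt):

with open('training/labelmap.pbtxt', encoding='utf-8') as f:
    for lineno, line in enumerate(f, start=1):
        curly = [ch for ch in line if ch in '‘’“”']
        if curly:
            print('line', lineno, 'contains curly quotes', curly, '- replace them with straight quotes')

Re-saving the file from a plain-text editor with straight quotes usually clears this error.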

"N/A" on some results of testing

Hi there, we have been training our model on different articles of clothing (shirts, pants, jackets, etc.). It identifies them correctly in most cases, but in some cases it will identify a piece of clothing and label it "N/A". Any solution or explanation for this? It is identifying the different articles of clothing as separate entities but does not label them correctly. I have attached an example screenshot.

Training abruptly stops

Hi,

First of all, thank you for this amazing tutorial, by far one of the best that I have come across.

Unfortunately for me, there is an issue which I am not sure why it is occurring and I am hoping you can help me out.

When I am running the training, it stops abruptly after some time. The first time, it ran until step 500 with a checkpoint at step 319. When I ran it again, it stopped at step 876. Why is this happening?

Here is the message that I get.

INFO:tensorflow:global step 875: loss = 0.2293 (0.922 sec/step)
INFO:tensorflow:global step 876: loss = 0.8595 (1.031 sec/step)
INFO:tensorflow:global step 876: loss = 0.8595 (1.031 sec/step)
2018-04-07 10:50:16.610307: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.h:245 : Resource exhausted: OOM when allocating tensor with shape[3,3,576,512] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.ResourceExhaustedError'>, OOM when allocating tensor with shape[3,3,576,512] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[Node: ArithmeticOptimizer/clip_grads/clip_by_norm_151/mul_square = SquareT=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: ToInt32_1/_2424 = _Send[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3425_ToInt32_1", _device="/job:localhost/replica:0/task:0/device:GPU:0"](ToInt32_1)]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.ResourceExhaustedError'>, OOM when allocating tensor with shape[3,3,576,512] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[Node: ArithmeticOptimizer/clip_grads/clip_by_norm_151/mul_square = SquareT=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: ToInt32_1/_2424 = _Send[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3425_ToInt32_1", _device="/job:localhost/replica:0/task:0/device:GPU:0"](ToInt32_1)]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Traceback (most recent call last):
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
return fn(*args)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
status, run_metadata)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,576,512] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[Node: ArithmeticOptimizer/clip_grads/clip_by_norm_151/mul_square = SquareT=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: ToInt32_1/_2424 = _Send[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3425_ToInt32_1", _device="/job:localhost/replica:0/task:0/device:GPU:0"](ToInt32_1)]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 167, in
tf.app.run()
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 163, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\tensorflow1\models\research\object_detection\trainer.py", line 370, in train
saver=saver)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 769, in train
sess, train_op, global_step, train_step_kwargs)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 487, in train_step
run_metadata=run_metadata)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
run_metadata_ptr)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
run_metadata)
File "C:\Users\User\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,576,512] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[Node: ArithmeticOptimizer/clip_grads/clip_by_norm_151/mul_square = SquareT=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: ToInt32_1/_2424 = _Send[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3425_ToInt32_1", _device="/job:localhost/replica:0/task:0/device:GPU:0"](ToInt32_1)]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

(tensorflow1) C:\tensorflow1\models\research\object_detection>
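
The ResourceExhaustedError indicates the graph plus the input queues do not fit in memory. Two settings that are often reduced in the pipeline config are the train_config queue capacities (and, for Faster R-CNN, the image resizer dimensions); this is a hedged example, and the values shown are only illustrative:

train_config: {
  batch_size: 1
  batch_queue_capacity: 2       # smaller queues hold fewer decoded images in memory
  prefetch_queue_capacity: 2
}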

Iteration steps needed for SSD_Mobilenet_V1_coco

I am using ssd_mobilenet_v1_coco for training. The loss is below 1 (number of iterations: 18,695), but the model is not able to detect any objects. What loss value is required before the model can detect objects?

ValueError: Tried to convert 't' to a tensor and failed. Error: Argument must be a dense tensor: range(0, 3) - got shape [3], but wanted [].

(tensorflow1) C:\tensorflow1\models\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:Scale of 0 disables regularizer.
WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\trainer.py:228: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:depth of additional conv before box predictor: 0
WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\core\box_predictor.py:396: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\core\losses.py:316: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Traceback (most recent call last):
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 510, in _apply_op_helper
preferred_dtype=default_dtype)
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\ops.py", line 1036, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\constant_op.py", line 235, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\constant_op.py", line 214, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 442, in make_tensor_proto
_GetDenseDimensions(values)))
ValueError: Argument must be a dense tensor: range(0, 3) - got shape [3], but wanted [].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 524, in _apply_op_helper
values, as_ref=input_arg.is_ref).dtype.name
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\ops.py", line 1036, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\constant_op.py", line 235, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\constant_op.py", line 214, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 442, in make_tensor_proto
_GetDenseDimensions(values)))
ValueError: Argument must be a dense tensor: range(0, 3) - got shape [3], but wanted [].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 167, in
tf.app.run()
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 163, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\tensorflow1\models\research\object_detection\trainer.py", line 255, in train
train_config.optimizer)
File "C:\tensorflow1\models\research\object_detection\builders\optimizer_builder.py", line 50, in build
learning_rate = _create_learning_rate(config.learning_rate)
File "C:\tensorflow1\models\research\object_detection\builders\optimizer_builder.py", line 109, in _create_learning_rate
learning_rate_sequence, config.warmup)
File "C:\tensorflow1\models\research\object_detection\utils\learning_schedules.py", line 169, in manual_stepping
[0] * num_boundaries))
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2619, in where
return gen_math_ops._select(condition=condition, x=x, y=y, name=name)
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 4503, in _select
"Select", condition=condition, t=x, e=y, name=name)
File "C:\anaconda\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 528, in _apply_op_helper
(input_name, err))
ValueError: Tried to convert 't' to a tensor and failed. Error: Argument must be a dense tensor: range(0, 3) - got shape [3], but wanted [].

(tensorflow1) C:\tensorflow1\models\research\object_detection>

I got this error when running the train.py command. Can you help?

Evaluation

Hello.
I ran your tutorial and am wondering about the evaluation process. Could you make a video for it?

Count the number of objects

I have to count the number of objects in each frame of the video, and if the number of objects is less than the previous count, I have to raise a notification that objects are missing. Can you help me do this? A sketch is shown below.
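
A minimal sketch of the counting part (hedged; it assumes the scores array produced by sess.run inside the per-frame loop of Object_detection_video.py):

import numpy as np

MIN_SCORE = 0.80
previous_count = None

# inside the per-frame loop, after sess.run(...) has filled `scores`:
current_count = int(np.sum(np.squeeze(scores) >= MIN_SCORE))
if previous_count is not None and current_count < previous_count:
    print('Warning: object count dropped from', previous_count, 'to', current_count)
previous_count = current_count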

tensorboard embedding

Hello,

In TensorBoard I see the Projector tab with the PCA and t-SNE visualizations (1024 points, 36 dimensions).

How can I create the .tsv and sprite files to visualize my labels and images there?

I don't know how to extract the features from the Faster R-CNN Inception V2 model, so I can't create the .tsv and sprite files for those 1024 points, or whatever else is needed for label and image visualization.

Thank you!

predict error

I trained on my own dataset for about 30K steps; then, when I ran prediction, it detected nothing. I changed the threshold to 0.01 and still got nothing. Then I printed the class scores and found that all scores are 0. So I tested the official SSD COCO model to see whether the detection file works; it works okay, but the official Faster R-CNN COCO model does not, and even fails with an error:

Traceback (most recent call last):
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\client\session.py", line 1323, in _do_call
    return fn(*args)
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\client\session.py", line 1302, in _run_fn
    status, run_metadata)
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=true, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[Node: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=true, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "G:/__TF_examples/models/research/object_detection/demo1.py", line 98, in <module>
    feed_dict={image_tensor: image_expanded})
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
    run_metadata_ptr)
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
    options, run_metadata)
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=true, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[Node: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=true, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice)]]

Caused by op 'Preprocessor/map/TensorArray', defined at:
  File "<string>", line 1, in <module>
  File "D:\Anaconda35\lib\idlelib\run.py", line 124, in main
    ret = method(*args, **kwargs)
  File "D:\Anaconda35\lib\idlelib\run.py", line 351, in runcode
    exec(code, self.locals)
  File "G:/__TF_examples/models/research/object_detection/demo1.py", line 68, in <module>
    tf.import_graph_def(od_graph_def, name='')
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\framework\importer.py", line 313, in import_graph_def
    op_def=op_def)
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
    op_def=op_def)
  File "D:\Anaconda35\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=true, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[Node: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=true, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice)]]

The main issue is: why are all the scores 0? Thanks for any answer.

No module named 'deployment'

Hi, I have been getting this problem when I try to train the AI. Could somebody please help me?
Thanks
Error:

(tensorflow1) C:\Users\owner\Documents\tensorflow1\models-master\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_resnet50_pets.config
Traceback (most recent call last):
File "train.py", line 49, in
from object_detection import trainer
File "C:\Users\owner\Anaconda3\envs\tensorflow1\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\trainer.py", line 33, in
from deployment import model_deploy
ModuleNotFoundError: No module named 'deployment'

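model_deploy lives under research/slim/deployment, so this error usually means the slim folder is not on PYTHONPATH in the current Anaconda prompt. Step 2e of the tutorial sets it with a command along these lines (adjust the paths to wherever the models repo actually lives on your machine), and it has to be re-issued every time a new prompt is opened:

set PYTHONPATH=C:\tensorflow1\models;C:\tensorflow1\models\research;C:\tensorflow1\models\research\slim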

Training breaks halfway; although restartable, it breaks again frequently

Training starts, but breaks partway through. I had to restart several times from where it last broke. Although it looks like a hardware limitation, I'm using a powerful laptop: Windows 10, GTX 1060 GPU, 16 GB DDR3, Intel i7-7700HQ 2.8 GHz, with CUDA 9.0 and cuDNN 7.0.5 installed. I hope someone can give me advice on this. Thank you.

Below shows training in progress, and break error.

INFO:tensorflow:global step 345: loss = 0.2802 (0.166 sec/step)
INFO:tensorflow:global step 346: loss = 0.8893 (0.191 sec/step)
INFO:tensorflow:global step 346: loss = 0.8893 (0.191 sec/step)
2018-05-29 12:27:57.173433: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1318] OP_REQUIRES failed at queue_ops.cc:105 : Invalid argument: Shape mismatch in tuple component 16. Expected [1,?,?,3], got [1,1,591,500,3]
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, Shape mismatch in tuple component 16. Expected [1,?,?,3], got [1,1,591,500,3]
[[Node: batch/padding_fifo_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_FLOAT, ..., DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch/padding_fifo_queue, IteratorGetNext, Shape_9, IteratorGetNext:1, Shape_5, RandomHorizontalFlip/cond_1/Merge, Shape_1, IteratorGetNext:3, Shape_10, IteratorGetNext:4, Shape_8, IteratorGetNext:5, Shape_6, IteratorGetNext:6, Shape, IteratorGetNext:7, Shape_2, ExpandDims_1, Shape_4, IteratorGetNext:9, Shape_9, IteratorGetNext:10, Shape_9, IteratorGetNext:11, Shape_9)]]
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, Shape mismatch in tuple component 16. Expected [1,?,?,3], got [1,1,591,500,3]
[[Node: batch/padding_fifo_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_FLOAT, ..., DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch/padding_fifo_queue, IteratorGetNext, Shape_9, IteratorGetNext:1, Shape_5, RandomHorizontalFlip/cond_1/Merge, Shape_1, IteratorGetNext:3, Shape_10, IteratorGetNext:4, Shape_8, IteratorGetNext:5, Shape_6, IteratorGetNext:6, Shape, IteratorGetNext:7, Shape_2, ExpandDims_1, Shape_4, IteratorGetNext:9, Shape_9, IteratorGetNext:10, Shape_9, IteratorGetNext:11, Shape_9)]]
INFO:tensorflow:global step 347: loss = 0.4380 (0.185 sec/step)
INFO:tensorflow:global step 347: loss = 0.4380 (0.185 sec/step)
INFO:tensorflow:Finished training! Saving model to disk.
INFO:tensorflow:Finished training! Saving model to disk.
Traceback (most recent call last):
File "train.py", line 184, in
tf.app.run()
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "C:\tensorflow1\models\research\object_detection\trainer.py", line 399, in train
saver=saver)
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 784, in train
ignore_live_threads=ignore_live_threads)
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\training\supervisor.py", line 828, in stop
ignore_live_threads=ignore_live_threads)
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\training\coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\six.py", line 693, in reraise
raise value
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\training\queue_runner_impl.py", line 252, in _run
enqueue_callable()
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1244, in _single_operation_run
self._call_tf_sessionrun(None, {}, [], target_list, None)
File "C:\Users\default.LAPTOP-2CI68M4P\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\client\session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape mismatch in tuple component 16. Expected [1,?,?,3], got [1,1,591,500,3]
[[Node: batch/padding_fifo_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_FLOAT, ..., DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch/padding_fifo_queue, IteratorGetNext, Shape_9, IteratorGetNext:1, Shape_5, RandomHorizontalFlip/cond_1/Merge, Shape_1, IteratorGetNext:3, Shape_10, IteratorGetNext:4, Shape_8, IteratorGetNext:5, Shape_6, IteratorGetNext:6, Shape, IteratorGetNext:7, Shape_2, ExpandDims_1, Shape_4, IteratorGetNext:9, Shape_9, IteratorGetNext:10, Shape_9, IteratorGetNext:11, Shape_9)]]

(tensorflow1) C:\tensorflow1\models\research\object_detection>

To decrease FPS

Can anyone suggest a method to decrease the frames per second? My FPS is 29 and I want to reduce it to 1 FPS.
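
A hedged sketch of one way to do this with OpenCV: keep reading frames at the source rate but only process roughly one per second, based on the FPS the capture reports (the file name and fallback value are placeholders):

import cv2

video = cv2.VideoCapture('test.mp4')            # or 0 for a webcam
source_fps = video.get(cv2.CAP_PROP_FPS) or 29  # fall back if the source reports 0
step = max(int(round(source_fps)), 1)           # process roughly 1 frame per second

frame_index = 0
while True:
    ret, frame = video.read()
    if not ret:
        break
    if frame_index % step == 0:
        pass  # run detection / drawing on this frame only
    frame_index += 1
video.release()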

Not sure about that

Excellent tutorial!
One issue I see is here: "There should be some images where the desired object is partially obscured, overlapped with something else, or only halfway in the picture."
Training on partially obscured objects does not help with detecting partially obscured objects, except in the case where the same object, partially obscured in the same way, appears both during training and during detection.

How to cite you?

Hello! I'd like to thank you a lot for such a precise guide! I largely used it for my research project and I would like to cite you in whatever way you prefer. Can you help me? Thank you.

ValueError: Tried to convert 't' to a tensor and failed.

ValueError: Tried to convert 't' to a tensor and failed. Error: Argument must be a dense tensor: range(0, 3) - got shape [3], but wanted [].
I have this problem, but I don't know how to solve it. Can you help me? Thanks a lot!

inference_graph in c++

How can I use the inference graph in C++? Do you have any idea? I want to load the trained files with TensorFlow's C++ API.

I see no object detected after ~10000 training steps

I'm trying to detect teeth from x-ray images, but after training about 10k steps, the loss is still around 1 and doesn't seem to be decreasing. Sometimes, it jumped up to 100000 and decreased after that. And after 10000 steps I tried to run image detection on some photos but it could not detect anything. Is there something that I have to configure? Thank you very much.

training Error: could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED

Your tutorial is excellent, and I was able to modify it to work completely with custom training without a GPU. Now I have a new computer with a GPU, and I am getting the error shown in the subject line and in the excerpt at the bottom. I have upgraded CUDA to 9.0 and cuDNN to 7.1.2.

I found this link regarding the error: tensorflow/tensorflow#6698 (strickon's comments). It details how to fix this issue, which is due to an apparent GPU memory problem. The fix is supposed to keep TensorFlow from allocating too much GPU memory by adding the following:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)

My problem is that I don't know how to implement this change within the framework of your GitHub tutorial.

Any help is appreciated. Thanks in advance.

excerpt of output from train.py

2018-04-09 10:19:22.033369: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1344] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 8.00GiB freeMemory: 6.63GiB
2018-04-09 10:19:22.033539: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1423] Adding visible gpu devices: 0
2018-04-09 10:20:33.854527: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-04-09 10:20:33.854633: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:917] 0
2018-04-09 10:20:33.855370: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:930] 0: N
2018-04-09 10:20:33.856016: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6403 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from training/model.ckpt-0
INFO:tensorflow:Restoring parameters from training/model.ckpt-0
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Starting Session.
INFO:tensorflow:Starting Session.
INFO:tensorflow:Saving checkpoint to path training/model.ckpt
INFO:tensorflow:Saving checkpoint to path training/model.ckpt
INFO:tensorflow:Starting Queues.
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global_step/sec: 0
INFO:tensorflow:global_step/sec: 0
2018-04-09 10:20:48.489995: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:403] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2018-04-09 10:20:48.494500: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:407] error retrieving driver version: Unimplemented: kernel reported driver version not implemented on Windows
2018-04-09 10:20:48.510150: F T:\src\github\tensorflow\tensorflow\core\kernels\conv_ops.cc:712] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)
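
In this version of the API the training session is created inside object_detection/trainer.py rather than train.py, so one place the allow_growth option can be applied is the session_config that trainer.py builds and hands to slim.learning.train. A hedged sketch (the variable name and exact location may differ between API revisions):

# In object_detection/trainer.py, where the ConfigProto for the training
# session is created (assumed name: session_config):
session_config = tf.ConfigProto(allow_soft_placement=True,
                                log_device_placement=False)
session_config.gpu_options.allow_growth = True  # grow GPU memory as needed instead of grabbing it all
# session_config is then passed to slim.learning.train(...) unchanged.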

Download step 2c

Hello,

first of all I really enjoyed working with your OpenCV project - thank you for sharing your knowledge.

As I want to dig in deeper, e.g. with cards overlapping each other, my guess is that this approach will fit my needs better. However, I cannot find the files you are referring to in Step 2c. Am I too early, or is there a problem getting these files downloaded?

tia
Spaceman918

error while xml_to_csv

I have 150 .xml files in the test directory and 563 .xml files in the train directory. When I run xml_to_csv.py, I get an error like this:

$ python xml_to_csv.py 
Traceback (most recent call last):
  File "xml_to_csv.py", line 35, in <module>
    main()
  File "xml_to_csv.py", line 31, in main
    xml_df = xml_to_csv(image_path)
  File "xml_to_csv.py", line 10, in xml_to_csv
    tree = ET.parse(xml_file)
  File "/usr/lib/python3.5/xml/etree/ElementTree.py", line 1195, in parse
    tree.parse(source, parser)
  File "/usr/lib/python3.5/xml/etree/ElementTree.py", line 596, in parse
    self._root = parser._parse_whole(source)
xml.etree.ElementTree.ParseError: junk after document element: line 3, column 1

Can anyone help me fix this?
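
"junk after document element" means one of the .xml files has extra content after its closing root tag (often a hand-edited or concatenated file). A small script to find the offending file (a hedged sketch assuming the images/train and images/test layout used by xml_to_csv.py):

import glob
import xml.etree.ElementTree as ET

for folder in ('images/train', 'images/test'):
    for xml_file in sorted(glob.glob(folder + '/*.xml')):
        try:
            ET.parse(xml_file)
        except ET.ParseError as err:
            print(xml_file, '->', err)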

Can multiple GPUs be used for model training?

I use Google Cloud for model training. With 1 GPU the time is about 0.5 s/step, but with 2 GPUs the time is still the same. What should I do to achieve the best training time?

The project is great, thank you for sharing. Thanks very much

Running xml_to_csv.py

Hi,

When I run the xml_to_csv.py script to create the CSV files containing the filename, width, height, class, xmin, ymin, xmax and ymax, it puts all of the information into column "A" instead of creating a column for each variable.
test_labels one column

I suspect the issue lies in how the script is written, but I don't know how to fix it. Maybe someone here can help me?

xml to csv script code

Thank you,

Gpu memory issue

My GPU's total memory is 8 GB, but when I run the program it says the total available memory is 1 GB. What may be the reason? There are no issues during training, but when I run the program (Object_detection_video.py) the video hardly moves to the next frame. Can you please help me with this?

Error, please help urgently!

I got this error; then I did as (epratheeban) suggested and changed line 167, and then I got another error:

Traceback (most recent call last):
File "train.py", line 49, in
from object_detection import trainer
File "C:\tensorflow1\models\research\object_detection\trainer.py", line 26, in
from object_detection.builders import optimizer_builder
File "C:\tensorflow1\models\research\object_detection\builders\optimizer_builder.py", line 19, in
from object_detection.utils import learning_schedules
File "C:\tensorflow1\models\research\object_detection\utils\learning_schedules.py", line 167
rate_index = tf.reduce_max(tf.where(tf.greater_equal(global_step, boundaries),
^
IndentationError: unindent does not match any outer indentation level

Please help me, I really need this.

Getting a type error when running train.py

getting this error when running train.py

(tensorflow1) C:\tensorflow1\models\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
WARNING:tensorflow:From C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:Scale of 0 disables regularizer.
WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\trainer.py:228: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
File "C:\tensorflow1\models\research\object_detection\utils\label_map_util.py", line 134, in load_labelmap
text_format.Merge(label_map_string, label_map)
File "C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\google\protobuf\text_format.py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\google\protobuf\text_format.py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\google\protobuf\text_format.py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\google\protobuf\text_format.py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\google\protobuf\text_format.py", line 735, in _MergeField
merger(tokenizer, message, field)
File "C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\google\protobuf\text_format.py", line 822, in _MergeMessageField
raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token,))
google.protobuf.text_format.ParseError: 13:9 : Expected "}".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 167, in
tf.app.run()
File "C:\Users\Grant\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 163, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "C:\tensorflow1\models\research\object_detection\trainer.py", line 235, in train
train_config.prefetch_queue_capacity, data_augmentation_options)
File "C:\tensorflow1\models\research\object_detection\trainer.py", line 59, in create_input_queue
tensor_dict = create_tensor_dict_fn()
File "train.py", line 120, in get_next
dataset_builder.build(config)).get_next()
File "C:\tensorflow1\models\research\object_detection\builders\dataset_builder.py", line 153, in build
label_map_proto_file=label_map_proto_file)
File "C:\tensorflow1\models\research\object_detection\data_decoders\tf_example_decoder.py", line 233, in __init__
use_display_name)
File "C:\tensorflow1\models\research\object_detection\utils\label_map_util.py", line 151, in get_label_map_dict
label_map = load_labelmap(label_map_path)
File "C:\tensorflow1\models\research\object_detection\utils\label_map_util.py", line 136, in load_labelmap
label_map.ParseFromString(label_map_string)
TypeError: a bytes-like object is required, not 'str'

Restart Training

After training for a long time, about 4 hours, I terminated the training process.
If I want to restart the training, should I train based on the previous state? If so, how do I "restart"?
If I add more images into the "images" folder, should I train my model from scratch, from the first step?

error First step cannot be zero when running train.py

I tried to use the same images (cards) provided; I just deleted all the processed files (CSV, etc.) and followed all the steps.
When I tried to issue python train.py, I got this error:

Traceback (most recent call last):
  File "train.py", line 184, in <module>
    tf.app.run()
  File "C:\Users\MRCPP-Fablab\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "train.py", line 180, in main
    graph_hook_fn=graph_rewriter_fn)
  File "E:\tensor\models\research\object_detection\trainer.py", line 288, in train
    train_config.optimizer)
  File "E:\tensor\models\research\object_detection\builders\optimizer_builder.py", line 50, in build
    learning_rate = _create_learning_rate(config.learning_rate)
  File "E:\tensor\models\research\object_detection\builders\optimizer_builder.py", line 109, in _create_learning_rate
    learning_rate_sequence, config.warmup)
  File "E:\tensor\models\research\object_detection\utils\learning_schedules.py", line 156, in manual_stepping
    raise ValueError('First step cannot be zero.')
ValueError: First step cannot be zero.

Any clues why this happens?
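
At the time of this tutorial, a common cause was a schedule entry with step: 0 in the manual_step_learning_rate block of the pipeline config, which newer versions of learning_schedules.manual_stepping reject. One hedged fix is to delete that first entry so every step value is greater than zero, for example:

learning_rate: {
  manual_step_learning_rate {
    initial_learning_rate: 0.0002
    schedule {
      step: 900000
      learning_rate: .00002
    }
    schedule {
      step: 1200000
      learning_rate: .000002
    }
  }
}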

AssertionError: `eval_dir` is missing.

Hi!
When I try to evaluate the model like this:

python object_detection/eval.py
--logtostderr
--pipeline_config_path=object_detection/VOC_car/ssd_mobilenet_v1_voc2012.config
--checkpoint_dir=object_detection/VOC_car/ssd_mobilenet_train_logs
--eval_dir=object_detection/VOC_car/ssd_mobilenet_val_logs \

then report errors

Traceback (most recent call last):
  File "object_detection/eval.py", line 135, in <module>
    tf.app.run()
  File "D:\Anaconda3\envs\python3\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "object_detection/eval.py", line 85, in main
    assert FLAGS.eval_dir, '`eval_dir` is missing.'
AssertionError: `eval_dir` is missing.

I can't understand it, so I'm searching for help!

setting the path

I think you missed a % around PYTHONPATH in the line: (tensorflow1) C:\> set PATH=%PATH%;PYTHONPATH

It should be: (tensorflow1) C:\> set PATH=%PATH%;%PYTHONPATH%

Restart training error "No module named 'deployment'"; setting %PYTHONPATH% does not work

I issued the command "python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config" to restart training and got the error "No module named 'deployment'". I issued "activate tensorflow1" to re-enter the environment and then issued the commands given in Step 2e, but it still does not work. I have set %PYTHONPATH%, but how do I set %PATH%?

error when training data in the example

Hi,

Firstly, thanks for your detailed explanation. However, when I tried to train the model I got the following error, and I am not sure where it comes from:

(tensorflow1) D:\tensorflow1\models\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
WARNING:tensorflow:From D:\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:Scale of 0 disables regularizer.
WARNING:tensorflow:From D:\tensorflow1\models\research\object_detection\trainer.py:228: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:Scale of 0 disables regularizer.
INFO:tensorflow:depth of additional conv before box predictor: 0
WARNING:tensorflow:From D:\tensorflow1\models\research\object_detection\core\box_predictor.py:396: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From D:\tensorflow1\models\research\object_detection\core\losses.py:316: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Traceback (most recent call last):
File "train.py", line 167, in
tf.app.run()
File "D:\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 163, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "D:\tensorflow1\models\research\object_detection\trainer.py", line 255, in train
train_config.optimizer)
File "D:\tensorflow1\models\research\object_detection\builders\optimizer_builder.py", line 50, in build
learning_rate = _create_learning_rate(config.learning_rate)
File "D:\tensorflow1\models\research\object_detection\builders\optimizer_builder.py", line 109, in _create_learning_rate
learning_rate_sequence, config.warmup)
File "D:\tensorflow1\models\research\object_detection\utils\learning_schedules.py", line 156, in manual_stepping
raise ValueError('First step cannot be zero.')
ValueError: First step cannot be zero.

Thanks!

Object_detection_webcam.py error

Dear colleagues,

I'm getting an error running Object_detection_webcam.py on both Python 2 and Python 3.
Could you please advise what the solution might be?

RESTART: /home/nikolay/tensorflow1/models/research/object_detection/Object_detection_webcam.py

Traceback (most recent call last):
File "/home/nikolay/tensorflow1/models/research/object_detection/Object_detection_webcam.py", line 103, in
feed_dict={image_tensor: frame_expanded})
File "/home/nikolay/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/home/nikolay/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1106, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "/home/nikolay/.local/lib/python2.7/site-packages/numpy/core/numeric.py", line 492, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: long() argument must be a string or a number, not 'NoneType'
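
This TypeError usually means the frame handed to the graph is None, i.e. OpenCV never actually delivered an image (wrong camera index, or the capture failed to open). A quick check before feeding frames (a hedged sketch; the device index is a guess):

import cv2

video = cv2.VideoCapture(0)  # try 1, 2, ... if index 0 is not the right camera
if not video.isOpened():
    raise RuntimeError('Could not open the webcam')

ret, frame = video.read()
if not ret or frame is None:
    raise RuntimeError('Webcam opened but returned no frame')
print('Frame shape:', frame.shape)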

Protoc issues. Can't generate name_pb2.py files.

First, I found there is no protoc.exe on my machine, so I downloaded the latest release from here and added its location to my PATH environment variable.
Then I changed to the "models\research" directory in the command prompt and ran the command:

protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto

But I don't get any name_pb2.py files, and the protoc command doesn't output any message either.
Can someone help me fix this issue? Thanks.
