
east's Introduction

EAST: An Efficient and Accurate Scene Text Detector

This is a Keras implementation of EAST based on a TensorFlow implementation made by argman.

The original paper by Zhou et al. is available on arXiv.

  • Only RBOX geometry is implemented
  • Differences from the original paper
    • Uses ResNet-50 instead of PVANet
    • Uses a dice loss function for the score map instead of balanced binary cross-entropy (a minimal sketch follows this list)
    • Uses AdamW optimizer instead of the original Adam
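
For reference, a minimal sketch of a dice loss for the score map. This is not necessarily this repository's exact implementation; the training-mask argument is an assumption based on the masks the training code uses.

import keras.backend as K

def dice_loss(y_true, y_pred, training_mask, eps=1e-5):
    # Ignore pixels that the mask excludes from training.
    y_true = y_true * training_mask
    y_pred = y_pred * training_mask
    intersection = K.sum(y_true * y_pred)
    union = K.sum(y_true) + K.sum(y_pred) + eps
    # Dice coefficient = 2*|A∩B| / (|A| + |B|); the loss is its complement.
    return 1.0 - 2.0 * intersection / union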

The implementation of AdamW optimizer is borrowed from this repository.

The code should run under both Python 2 and Python 3.

Requirements

Keras 2.0 or higher, and TensorFlow 1.0 or higher should be enough.

The code should run with Keras 2.1.5. If you use Keras 2.2 or higher, you have to remove ZeroPadding2D from the model.py file. Specifically, replace the line containing ZeroPadding2D with x = concatenate([x, resnet.get_layer('activation_10').output], axis=3).

A list of packages and their versions under which no errors should occur will be added later.

Data

You can use your own data, but the annotation files need to conform to the ICDAR 2015 format.
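
In this format, each annotation file lists one text region per line: the four corner points of the quadrilateral in clockwise order, followed by the transcription, with "###" marking unreadable regions that are excluded from training. For example:

886,144,934,141,932,157,884,160,smrt
869,67,920,61,923,85,872,91,citi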

The ICDAR 2015 dataset can be downloaded from this site. You need the data from Task 4.1 Text Localization.
You can also download the MLT dataset, which uses the same annotation style as ICDAR 2015, from the same site.

Alternatively, you can download a training dataset consisting of all training images from ICDAR 2015 and ICDAR 2013 datasets with annotation files in ICDAR 2015 format here.
You can also get a validation subset of the MLT 2017 dataset, containing only images with text in the Latin alphabet, here.
The original datasets are distributed by the organizers of the Robust Reading Competition and are licensed under the CC BY 4.0 license.

Training

You need to put all of your training images and their corresponding annotation files in one directory. The annotation files have to be named gt_IMAGENAME.txt.
You also need a directory for validation data, which requires the same structure as the directory with training images.
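
For example, a training directory might look like this (the image names are illustrative):

train_data/
    img_1.jpg
    gt_img_1.txt
    img_2.jpg
    gt_img_2.txt
    ...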

Training is started by running train.py. It accepts several arguments, including the paths to the training and validation data and the path where you want to save trained checkpoint models. You can see all of the arguments you can specify in the train.py file.

Execution example

python train.py --gpu_list=0,1 --input_size=512 --batch_size=12 --nb_workers=6 --training_data_path=../data/ICDAR2015/train_data/ --validation_data_path=../data/MLT/val_data_latin/ --checkpoint_path=tmp/icdar2015_east_resnet50/

You can download a model trained on ICDAR 2015 and 2013 here. It achieves an F-score of 0.802 on the ICDAR 2015 test set. You also need to download this JSON file of the model to be able to use it.
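
A minimal sketch of loading the downloaded model, following the pattern used in eval.py; the file names are assumptions based on the download links, and the Lambda resize layers require tf and RESIZE_FACTOR to be passed as custom objects:

import tensorflow as tf
from keras.models import model_from_json
from model import RESIZE_FACTOR  # defined in this repository's model.py

with open('EAST_IC15_13_model.json') as f:  # assumed name of the downloaded JSON file
    loaded_model_json = f.read()
model = model_from_json(loaded_model_json, custom_objects={'tf': tf, 'RESIZE_FACTOR': RESIZE_FACTOR})
model.load_weights('EAST_IC15_13_model.h5')  # assumed name of the downloaded weights file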

Test

The images you want to run detection on have to be in one directory, whose path you have to pass as an argument. Detection is started by running eval.py with arguments specifying the path to the input images, the trained model, and the directory where you want to save the output.

Execution example

python eval.py --gpu_list=0 --test_data_path=../data/ICDAR2015/test/ --model_path=tmp/icdar2015_east_resnet50/model_XXX.h5 --output_dir=tmp/icdar2015_east_resnet50/eval/

Detection examples

[Nine example images with detected text regions omitted.]

east's People

Contributors

dependabot[bot], janzd


east's Issues

Syntax Error

Hi
Thanks for sharing the code.
I am getting a syntax error on line 101 in train.py:
pred_score_maps, pred_geo_maps = self.model.predict([data[0][0], data[0][1], data[0][2], data[0][3])
I am using Keras 2.1.5
with TensorFlow 1.9.0
and Python 3.5.4 on Windows 10.

loss is too low

I am training on my own dataset, and the output shows: 723/1625 [============>.................] - ETA: 11:46 - loss: 0.0012 - pred_score_map_loss: 2.9373e-04 - pred_geo_map_loss: 1.8351e-04. I want to know whether this is right.

Issue : Could not load pretrained model

Hi @kurapan,
When I try to load the pretrained model, it gives the error "ValueError: No model found in config file.".
Can you please provide an updated model file, or help in rectifying this error?

Traceback (most recent call last):
File "eval.py", line 195, in
main()
File "eval.py", line 142, in main
model = load_model('EAST_IC15_13_model.h5')
File "/usr/local/lib/python3.6/site-packages/keras/engine/saving.py", line 258, in load_model
raise ValueError('No model found in config file.')
ValueError: No model found in config file.

Thank you.

Fine-tuning the model

I would like to fine-tune the model with my own data, so I am wondering what the best way to do that is. Should I just train the whole model with my data, or should I train just the last layers?
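
With a small dataset, a common approach is to load the trained weights, freeze the backbone, and train only the head. A minimal sketch, assuming the EAST_model class from this repository, a trained checkpoint, and head layer-name prefixes inferred from the model summary (layers without weights are unaffected by the trainable flag):

from model import EAST_model  # this repository's model definition

east = EAST_model(input_size=512)
east.model.load_weights('model_XXX.h5')  # path to a trained checkpoint
# Freeze the ResNet-50 backbone and keep the EAST feature-merging head
# trainable; the prefixes below are an assumption based on the layer
# names that model.summary() prints for this model.
head_prefixes = ('conv2d', 'batch_normalization', 'resize_', 'concatenate',
                 'lambda', 'rbox_', 'pred_')
for layer in east.model.layers:
    layer.trainable = layer.name.startswith(head_prefixes)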

tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match:

Hello. I am trying to implement this in TensorFlow 2, so I have converted all the code to TF 2.1.0.
But during training in the first epoch itself I get this error:
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [16,128,128,128] vs. shape[1] = [16,64,64,128] [Op:ConcatV2] name: concat

I am not able to figure out what the issue is. It might be with the EAST model I am implementing. Since ResNet is now imported from tensorflow.keras, I have changed the layers used in the concatenations.

Code for models.py:
#import keras
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, concatenate, BatchNormalization, Lambda, Input, multiply, add, ZeroPadding2D, Activation, Layer, MaxPooling2D, Dropout
from tensorflow.keras import regularizers
#import keras.backend as K
import tensorflow as tf
import numpy as np

RESIZE_FACTOR = 2

def resize_bilinear(x):
    return tf.image.resize(x, size=(tf.shape(x)[1]*RESIZE_FACTOR, tf.shape(x)[2]*RESIZE_FACTOR))


class EAST_model:

    def __init__(self, input_size=512):
        # Network inputs: the image plus the masks and target map consumed by the losses.
        input_image = Input(shape=(None, None, 3), name='input_image')
        overly_small_text_region_training_mask = Input(shape=(None, None, 1), name='overly_small_text_region_training_mask')
        text_region_boundary_training_mask = Input(shape=(None, None, 1), name='text_region_boundary_training_mask')
        target_score_map = Input(shape=(None, None, 1), name='target_score_map')
        resnet = ResNet50(input_tensor=input_image, weights='imagenet', include_top=False, pooling=None)

        # Feature-merging branch: repeatedly upsample and concatenate with earlier ResNet features.
        x = resnet.get_layer('conv3_block4_2_relu').output
        x = Lambda(resize_bilinear, name='resize_1')(x)
        x = concatenate([x, resnet.get_layer('conv3_block1_2_relu').output], axis=3)
        x = Conv2D(128, (1, 1), padding='same', kernel_regularizer=regularizers.l2(1e-5))(x)
        x = BatchNormalization(momentum=0.997, epsilon=1e-5, scale=True)(x)
        x = Activation('relu')(x)
        x = Conv2D(128, (3, 3), padding='same', kernel_regularizer=regularizers.l2(1e-5))(x)
        x = BatchNormalization(momentum=0.997, epsilon=1e-5, scale=True)(x)
        x = Activation('relu')(x)

        x = Lambda(resize_bilinear, name='resize_2')(x)
        x = concatenate([x, resnet.get_layer('conv3_block2_1_relu').output], axis=3)
        x = Conv2D(64, (1, 1), padding='same', kernel_regularizer=regularizers.l2(1e-5))(x)
        x = BatchNormalization(momentum=0.997, epsilon=1e-5, scale=True)(x)
        x = Activation('relu')(x)
        x = Conv2D(64, (3, 3), padding='same', kernel_regularizer=regularizers.l2(1e-5))(x)
        x = BatchNormalization(momentum=0.997, epsilon=1e-5, scale=True)(x)
        x = Activation('relu')(x)

        x = Lambda(resize_bilinear, name='resize_3')(x)
        x = concatenate([x, resnet.get_layer('conv1_relu').output], axis=3)
        x = Conv2D(32, (1, 1), padding='same', kernel_regularizer=regularizers.l2(1e-5))(x)
        x = BatchNormalization(momentum=0.997, epsilon=1e-5, scale=True)(x)
        x = Activation('relu')(x)
        x = Conv2D(32, (3, 3), padding='same', kernel_regularizer=regularizers.l2(1e-5))(x)
        x = BatchNormalization(momentum=0.997, epsilon=1e-5, scale=True)(x)
        x = Activation('relu')(x)

        x = Conv2D(32, (3, 3), padding='same', kernel_regularizer=regularizers.l2(1e-5))(x)
        x = BatchNormalization(momentum=0.997, epsilon=1e-5, scale=True)(x)
        x = Activation('relu')(x)

        # Output heads: per-pixel text score plus RBOX geometry (4 edge distances and an angle).
        pred_score_map = Conv2D(1, (1, 1), activation=tf.nn.sigmoid, name='pred_score_map')(x)
        rbox_geo_map = Conv2D(4, (1, 1), activation=tf.nn.sigmoid, name='rbox_geo_map')(x)
        rbox_geo_map = Lambda(lambda x: x * input_size)(rbox_geo_map)
        angle_map = Conv2D(1, (1, 1), activation=tf.nn.sigmoid, name='rbox_angle_map')(x)
        angle_map = Lambda(lambda x: (x - 0.5) * np.pi / 2)(angle_map)
        pred_geo_map = concatenate([rbox_geo_map, angle_map], axis=3, name='pred_geo_map')

        model = Model(inputs=[input_image, overly_small_text_region_training_mask, text_region_boundary_training_mask, target_score_map], outputs=[pred_score_map, pred_geo_map])

        self.model = model
        self.input_image = input_image
        self.overly_small_text_region_training_mask = overly_small_text_region_training_mask
        self.text_region_boundary_training_mask = text_region_boundary_training_mask
        self.target_score_map = target_score_map
        self.pred_score_map = pred_score_map
        self.pred_geo_map = pred_geo_map

Please let me know. I have been trying to solve this issue for a while now.
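
The error means that one of the concatenations receives tensors at different spatial resolutions, i.e. the chosen skip-connection layer does not match the upsampled feature map. A sketch of a quick check, assuming the tf.keras ResNet-50 layer naming: print the output shapes of the stage outputs and pair each 2x resize with the stage whose spatial size matches.

from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.layers import Input

# A fixed input size makes the printed shapes concrete.
resnet = ResNet50(input_tensor=Input(shape=(512, 512, 3)), weights=None, include_top=False)
for name in ['conv1_relu', 'conv2_block3_out', 'conv3_block4_out',
             'conv4_block6_out', 'conv5_block3_out']:
    print(name, resnet.get_layer(name).output.shape)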

NotImplementedError: Cannot convert a symbolic Tensor (truediv_4:0) to a numpy array.

Hello. While training in the first epoch itself, I keep getting this error:

File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 767, in on_epoch
yield epoch_logs
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 342, in fit
total_epochs=epochs)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 181, in run_one_epoch
step += 1
File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py", line 119, in exit
next(self.gen)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 789, in on_batch
self.progbar.on_batch_end(step, batch_logs)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/callbacks.py", line 781, in on_batch_end
self.progbar.update(self.seen, self.log_values)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 559, in update
avg = np.mean(self._values[k][0] / max(1, self._values[k][1]))
File "<array_function internals>", line 6, in mean
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3335, in mean
out=out, **kwargs)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/numpy/core/_methods.py", line 135, in _mean
arr = asanyarray(a)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/numpy/core/_asarray.py", line 138, in asanyarray
return array(a, dtype, copy=False, order=order, subok=True)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 728, in array
" array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (truediv_2:0) to a numpy array.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 279, in
main()
File "train.py", line 276, in main
history = parallel_model.fit(train_data_generator, epochs=FLAGS.max_epochs, steps_per_epoch=train_samples_count/FLAGS.batch_size, callbacks=callbacks, verbose=1)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 397, in fit
prefix='val_')
File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py", line 130, in exit
self.gen.throw(type, value, traceback)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 772, in on_epoch
self.progbar.on_epoch_end(epoch, epoch_logs)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/callbacks.py", line 789, in on_epoch_end
self.progbar.update(self.seen, self.log_values)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 559, in update
avg = np.mean(self._values[k][0] / max(1, self._values[k][1]))
File "<array_function internals>", line 6, in mean
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3335, in mean
out=out, **kwargs)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/numpy/core/_methods.py", line 135, in _mean
arr = asanyarray(a)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/numpy/core/_asarray.py", line 138, in asanyarray
return array(a, dtype, copy=False, order=order, subok=True)
File "/Users/nesaraf/Desktop/tf2Env/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 728, in array
" array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (truediv_4:0) to a numpy array.

I am not able to figure out why. Please let me know.

Locality Aware NMS as separate package

The bit where you compile the NMS package is ... brilliant ... if a bit scary. It might be a bit cleaner if that was its own separate package on PyPI and this package were purely python.

I can help get that done and build self contained wheels for it as well (at least for Linux). Interested?

error

/home/saju/.local/lib/python3.6/site-packages/keras_applications/resnet50.py:265: UserWarning: The output shape of ResNet50(include_top=False) has been changed since Keras 2.2.0.
warnings.warn('The output shape of ResNet50(include_top=False) '

Ran out of input

When I run with use_multiprocessing = True, I get an error:
[screenshot omitted]

and when use_multiprocessing = False, I get another error:
[screenshot omitted]

killed

I downloaded your code afresh, as well as the datasets ICDAR 2013+2015 for training and val_data_latin for validation. Training starts and then abruptly ends with "Killed".

The last part of the terminal output:
Total params: 24,237,478
Trainable params: 24,183,398
Non-trainable params: 54,080
________________________________________________________________________________
Epoch 1/3
12 training images in /home/johan/Documents/agellis_projects/temp/ICDAR2013+2015_12/train_data/
Killed

Error: timeout value too large in multiprocessor

Hi,
I am getting this error.
Did you ever get such an error?
If so, can you please suggest how to resolve it?

  File "D:/Documents/PythonScripts/Keras_EAST/train.py", line 213, in main
    val_data = data_processor.load_data(FLAGS)

  File "D:\Documents\PythonScripts\Keras_EAST\data_processor.py", line 864, in load_data
    loaded_data = pool.map_async(load_data_process, zip(image_files, itertools.repeat(FLAGS), itertools.repeat(is_train))).get(9999999)

  File "c:\winpython\python-3.5.4.amd64\lib\multiprocessing\pool.py", line 638, in get
    self.wait(timeout)

  File "c:\winpython\python-3.5.4.amd64\lib\multiprocessing\pool.py", line 635, in wait
    self._event.wait(timeout)

  File "c:\winpython\python-3.5.4.amd64\lib\threading.py", line 549, in wait
    signaled = self._cond.wait(timeout)

Terminology

Could you explain the following terms:

geo_maps
score_maps
image_fns
geo_map_channels
overly_small_text_region_training_mask
text_region_boundary_training_mask

Query about Transfer Learning?

Hi, firstly this is a great library and great work on porting it over.

I have a query regarding using the pretrained EAST model for transfer learning. I am able to download and run the pretrained model, but since my dataset is small, I thought the best approach would be to apply transfer learning using the model as a feature extractor.

However, I am trying to get the output of lower layers in the network rather than the higher layers.

Are there any approaches to this? Are there any specific layers I should get the output from?

Thank you for any advice / help.

Error loading pretrained model.json

Hi. Thank you very much for your EAST implementation in Keras.
When I download the pretrained model from the link in the README and run eval.py, it raises an error:
File "eval.py", line 194, in
main()
File "eval.py", line 140, in main
model = model_from_json(loaded_model_json, custom_objects={'tf': tf, 'RESIZE_FACTOR': RESIZE_FACTOR})
File "/usr/local/lib/python3.5/dist-packages/keras/engine/saving.py", line 492, in model_from_json
return deserialize(config, custom_objects=custom_objects)
File "/usr/local/lib/python3.5/dist-packages/keras/layers/init.py", line 55, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.5/dist-packages/keras/engine/network.py", line 1032, in from_config
process_node(layer, node_data)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/network.py", line 991, in process_node
layer(unpack_singleton(input_tensors), **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/base_layer.py", line 457, in call
output = self.call(inputs, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/layers/core.py", line 687, in call
return self.function(inputs, **arguments)
File "/home/u00012/code/Text-Detection/EAST-keras/model.py", line 13, in resize_bilinear
return tf.image.resize_bilinear(x, size=[K.shape(x)[1]*RESIZE_FACTOR, K.shape(x)[2]*RESIZE_FACTOR])

error

Exception in thread Thread-6:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/saju/.local/lib/python3.6/site-packages/keras/utils/data_utils.py", line 666, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "/home/saju/.local/lib/python3.6/site-packages/keras/utils/data_utils.py", line 661, in
initargs=(seqs, self.random_seed))
File "/usr/lib/python3.6/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/usr/lib/python3.6/multiprocessing/pool.py", line 175, in init
self._repopulate_pool()
File "/usr/lib/python3.6/multiprocessing/pool.py", line 236, in _repopulate_pool
self._wrap_exception)
File "/usr/lib/python3.6/multiprocessing/pool.py", line 255, in _repopulate_pool_static
w.start()
File "/usr/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/usr/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
return Popen(process_obj)
File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 19, in init
self._launch(process_obj)
File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 66, in _launch
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory

data_processor: Undefined variable

On lines 479 and 481 in data_processor you have "overly_small_text_mask", which doesn't exist.
It should be "overly_small_text_region_training_mask".
This will generate errors when loading validation data for some reason.

Then on line 693, still in data_processor, you have "im_padded", which doesn't exist either. It should be "im", considering the line above.

Performance of Kurapan EAST

I tried to execute the prediction code of argman's EAST and kurapan's EAST. The accuracy doesn't vary much, but the performance varies a lot. For example, with an "X" CPU configuration, argman's EAST took on average 0.4 to 0.5 seconds per prediction, whereas kurapan's EAST takes on average 60 seconds or more per image on the same machine.
Is this because of tf.slim vs tf.keras, or is there anything else that delays the prediction?
Please give me some clarity.

TypeError: can't pickle generator objects

I get the following error when setting "use_multiprocessing=True". Why, and how should I solve this?

Traceback (most recent call last):
  File "train.py", line 264, in <module>
    main()
  File "train.py", line 260, in main
    history = parallel_model.fit_generator(train_data_generator, epochs=FLAGS.max_epochs, steps_per_epoch=train_samples_count/FLAGS.batch_size, workers=FLAGS.nb_workers, use_multiprocessing=True, max_queue_size=10, callbacks=callbacks, verbose=1)
  File "/home/johan/virtualenvs/east/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/johan/virtualenvs/east/lib/python3.6/site-packages/keras/engine/training.py", line 2176, in fit_generator
    enqueuer.start(workers=workers, max_queue_size=max_queue_size)
  File "/home/johan/virtualenvs/east/lib/python3.6/site-packages/keras/utils/data_utils.py", line 726, in start
    thread.start()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle generator objects
swig/python detected a memory leak of type 'int64_t *', no destructor found.

How should I annotate my images for training my own model

Hi Kurapan,
Thanks very much for the implementation in Keras. I am planning to train the whole model on my custom dataset. How should I annotate my images? I have already annotated them with x1,x2,y1,y2, but I don't have the detected word for each box. Also, I used just rectangular bounding boxes. Should I use polygons instead? How should I do it? Is there a specific tool with which I can get annotations in the gt text file format?

for example:
886,144,934,141,932,157,884,160,smrt
869,67,920,61,923,85,872,91,citi
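
Those two example lines are already in the expected format: eight comma-separated coordinates (four corners, clockwise from the top-left) followed by the word. If you only have axis-aligned rectangles (x1, y1, x2, y2), a sketch of the conversion; note that in this implementation the transcription appears to matter mainly as a flag, with "###" marking regions to exclude from training:

def rect_to_icdar(x1, y1, x2, y2, text):
    # Expand an axis-aligned box into four corners, clockwise from the
    # top-left, and append the transcription (use "###" to ignore a region).
    return '{},{},{},{},{},{},{},{},{}'.format(x1, y1, x2, y1, x2, y2, x1, y2, text)

print(rect_to_icdar(886, 144, 934, 160, 'smrt'))
# -> 886,144,934,144,934,160,886,160,smrt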

error2

WARNING:tensorflow:From /home/saju/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-03-14 02:22:16.307659: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-14 02:22:17.039148: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2195050000 Hz
2019-03-14 02:22:17.105185: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x1ce8d10 executing computations on platform Host. Devices:
2019-03-14 02:22:17.105277: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,

Training the model using my own data

Hi kurapan,

I have built my own dataset, which consists of 200 images, and I trained the model for 80 iterations. However, the model's performance has not improved; it became worse. Should I train the model more or collect more data?

Thanks

Accuracy comparison

Have you ever compared the accuracy of the TensorFlow implementation of EAST with this one?

EOFError when training

Starting the training I get the EOFError shown below.
What am I doing wrong?

(keras2) johan@johan-VirtualBox:~/Documents/school_projects/temp/EAST-master$ python3 train.py --gpu_list=1 --input_size=512 --batch_size=12 --nb_workers=6 --training_data_path=/home/johan/Documents/school_projects/temp/train_data40 --validation_data_path=/home/johan/Documents/school_projects/temp/train_data40 --checkpoint_path=/home/johan/Documents/school_projects/temp/EAST-master/temp --max_epochs=3
/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:474: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:475: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
Using TensorFlow backend.
Number of validation images : 40
Training with 1 GPU
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.


[Full Keras model summary elided: the ResNet-50 backbone (conv/bn/activation/add blocks) followed by the EAST feature-merging branch (resize_1-3, concatenate_1-3, conv2d_1-7 with batch normalization) and the output heads rbox_geo_map, rbox_angle_map, pred_score_map, and pred_geo_map.]

Total params: 24,237,478
Trainable params: 24,183,398
Non-trainable params: 54,080


/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/keras/engine/training.py:2087: UserWarning: Using a generator with use_multiprocessing=True and multiple workers may duplicate your data. Please consider using the keras.utils.Sequence class.
Traceback (most recent call last):
  File "/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/keras/utils/data_utils.py", line 678, in _data_generator_task
    self.queue.put((True, generator_output))
  File "<string>", line 2, in put
  File "/usr/lib/python3.6/multiprocessing/managers.py", line 757, in _callmethod
    kind, result = conn.recv()
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 383, in _recv
    raise EOFError
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/johan/virtualenvs/keras2/lib/python3.6/site-packages/keras/utils/data_utils.py", line 688, in _data_generator_task
    self.queue.put((False, e))
  File "<string>", line 2, in put
  File "/usr/lib/python3.6/multiprocessing/managers.py", line 756, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

[The same EOFError/BrokenPipeError pair is raised by each worker process (Process-8 through Process-13); the interleaved duplicate tracebacks are elided.]

40 training images in /home/johan/Documents/school_projects/temp/train_data40
Epoch 1/3
Killed

IndexError: index 1 is out of bounds for axis 0 with size 1

Traceback (most recent call last):
File "train.py", line 249, in
main()
File "train.py", line 246, in main
history = parallel_model.fit_generator(train_data_generator, epochs=FLAGS.max_epochs, steps_per_epoch=train_samples_count/FLAGS.batch_size, use_multiprocessing=False, callbacks=callbacks, verbose=1)
File "C:\Users\VINOTH KUMAR S\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\VINOTH KUMAR S\Anaconda3\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\VINOTH KUMAR S\Anaconda3\lib\site-packages\keras\engine\training_generator.py", line 251, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "C:\Users\VINOTH KUMAR S\Anaconda3\lib\site-packages\keras\callbacks.py", line 79, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "train.py", line 104, in on_epoch_end
input_image_summary = make_image_summary(((data[0][0][i] + 1) * 127.5).astype('uint8'))
IndexError: index 1 is out of bounds for axis 0 with size 1

overly_small_text_region_training_mask

If some of the text in an image is too small, it is excluded from training by this piece of code:

if min(poly_h, poly_w) < FLAGS.min_text_size:
    cv2.fillPoly(overly_small_text_region_training_mask, poly.astype(np.int32)[np.newaxis, :, :], 0)

However, the same text is also excluded if "tag" is true. Why?

if tag:
    cv2.fillPoly(overly_small_text_region_training_mask, poly.astype(np.int32)[np.newaxis, :, :], 0)

How can I use the JSON model

Is there any code in your GitHub repository for using the JSON model? I have trained the model and the model parameters have been saved in the JSON file. How can I use it?

Model difference to original code

Thank you for the great implementation. Did you by any chance do some comparison of the model output with the argman code you used as a template? I mean anything other than checking for similar precision & recall performance?

I tried to convert argman's tf.slim based models to your Keras style, but the topology is very different (mostly because the ResNet implementations with slim's arg_scope are defined differently). So now I want to check whether the models "work" the same as with the argman approach if I retrain them with Keras. Did you do any layer-by-layer comparisons or reproducibility checks between your implementation and the argman implementation to verify your approach?

ValueError: Error when checking input: expected input_image to have 4 dimensions, but got array with shape (0, 1)

When running for more than 5 epochs, I get the error "ValueError: Error when checking input: expected input_image to have 4 dimensions, but got array with shape (0, 1)".
Why?

Epoch 1/10
4 training images in /home/ims
2/1 [=============================================] - 69s 35s/step - loss: 0.0168 - pred_score_map_loss: 0.0100 - pred_geo_map_loss: 0.0000e+00
Epoch 2/10
2/1 [=============================================] - 47s 24s/step - loss: 0.0167 - pred_score_map_loss: 0.0100 - pred_geo_map_loss: 0.0000e+00
Epoch 3/10
2/1 [=============================================] - 38s 19s/step - loss: 0.0167 - pred_score_map_loss: 0.0100 - pred_geo_map_loss: 0.0000e+00
Epoch 4/10
2/1 [=============================================] - 37s 19s/step - loss: 0.0166 - pred_score_map_loss: 0.0100 - pred_geo_map_loss: 0.0000e+00
Epoch 5/10
2/1 [=============================================] - 37s 19s/step - loss: 0.0166 - pred_score_map_loss: 0.0100 - pred_geo_map_loss: 0.0000e+00
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/jj/.vscode/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/__main__.py", line 45, in <module>
cli.main()
File "/home/jj/.vscode/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 429, in main
run()
File "/home/jj/.vscode/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 266, in run_file
runpy.run_path(options.target, run_name=compat.force_str("__main__"))
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/jjEAST-master/train.py", line 264, in <module>
main()
File "/home/jj/EAST-maste/train.py", line 260, in main
history = parallel_model.fit_generator(train_data_generator, epochs=FLAGS.max_epochs, steps_per_epoch=train_samples_count/FLAGS.batch_size, workers=FLAGS.nb_workers, use_multiprocessing=False, max_queue_size=10, callbacks=callbacks, verbose=1)
File "/jj/virtualenvs/east_keras/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/jj/virtualenvs/east_keras/lib/python3.6/site-packages/keras/engine/training.py", line 2262, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/home/jj/Documents/virtualenvs/east_keras/lib/python3.6/site-packages/keras/callbacks.py", line 77, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/home/jj/Documents//EAST-master/train.py", line 157, in on_epoch_end
batch_size=FLAGS.batch_size)
File "/home/jj/Documents/virtualenvs/east_keras/lib/python3.6/site-packages/keras/engine/training.py", line 1768, in evaluate
batch_size=batch_size)
File "/home/jj/Documents/virtualenvs/east_keras/lib/python3.6/site-packages/keras/engine/training.py", line 1476, in _standardize_user_data
exception_prefix='input')
File "/home/jj/Documents/virtualenvs/east_keras/lib/python3.6/site-packages/keras/engine/training.py", line 113, in _standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input_image to have 4 dimensions, but got array with shape (0, 1)

Regarding model giving 0 bounding boxes

Hi @kurapan
I trained the model on 500 images for 800 iterations to test the architecture. However, even after training, it produces 0 bounding boxes, even for the images it was trained on. Any idea why this is happening?
Thanks for your help!!

Regards
Nidhi

InvalidArgumentError

Thank you for your work.
I faced the following error while training the model:
Traceback (most recent call last):
File "train.py", line 249, in <module>
main()
File "train.py", line 246, in main
history = parallel_model.fit_generator(train_data_generator, epochs=FLAGS.max_epochs, steps_per_epoch=train_samples_count/FLAGS.batch_size, workers=FLAGS.nb_workers, use_multiprocessing=True, max_queue_size=10, callbacks=callbacks, verbose=1)
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training_generator.py", line 217, in fit_generator
class_weight=class_weight)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1382, in __call__
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [4,64,128,128] vs. shape[1] = [4,256,129,129]
[[Node: concatenate_3/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _class=["loc:@training/AdamW/gradients/concatenate_3/concat_grad/Slice"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](concatenate_3/concat-0-TransposeNHWCToNCHW-LayoutOptimizer, zero_padding2d_1/Pad, loss/pred_geo_map_loss/split_1-0-LayoutOptimizer)]]
[[Node: loss/pred_score_map_loss/Mean_2/_3637 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_23550_loss/pred_score_map_loss/Mean_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

I use Keras 2.2.0, and it displays this warning:
/usr/local/lib/python3.5/dist-packages/keras_applications/resnet50.py:265: UserWarning: The output shape of ResNet50(include_top=False) has been changed since Keras 2.2.0.

Is this the cause of this issue?
Best regards.

OSError: Unable to open file h5py.h5f

I have been able to train the model, but when trying to test the model, I got this error message:
File "/home/paperspace/.local/lib/python3.6/site-packages/h5py/_hl/files.py", line 170, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 85, in h5py.h5f.open
OSError: Unable to open file (file read failed: time = Thu Feb 21 01:07:23 2019

Any suggestion?

Getting an error when starting training

UserWarning('Using a generator with use_multiprocessing=True'
Epoch 1/800
Exception in thread Thread-6:
Traceback (most recent call last):
File "C:\Users\Aquib\Anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\Aquib\Anaconda3\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Aquib\Anaconda3\lib\site-packages\keras\utils\data_utils.py", line 666, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "C:\Users\Aquib\Anaconda3\lib\site-packages\keras\utils\data_utils.py", line 661, in
initargs=(seqs, self.random_seed))
File "C:\Users\Aquib\Anaconda3\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())
File "C:\Users\Aquib\Anaconda3\lib\multiprocessing\pool.py", line 174, in init
self._repopulate_pool()
File "C:\Users\Aquib\Anaconda3\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
w.start()
File "C:\Users\Aquib\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\Aquib\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\Aquib\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "C:\Users\Aquib\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle generator objects
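This TypeError occurs because, with use_multiprocessing=True on Windows, worker processes are spawned and everything passed to them must be picklable, which plain Python generators are not. A general workaround sketch (not code from this repository; all names are illustrative) is to feed batches through a keras.utils.Sequence, which can be serialized to worker processes because batches are fetched by index rather than by iterating generator state:

import numpy as np
from keras.utils import Sequence

class BatchSequence(Sequence):
    def __init__(self, images, targets, batch_size):
        self.images = images        # preprocessed inputs, e.g. a NumPy array
        self.targets = targets      # matching training targets
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.images) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.images[batch], self.targets[batch]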

Input params

Hi Jan
Can you briefly explain the input parameters for training, such as:
input_size: Is this the minimum size of the input image? What if images of variable sizes are used as input?
nb_workers?

error

/home/saju/EAST-master/data_processor.py:236: RuntimeWarning: invalid value encountered in float_scalars
return np.linalg.norm(np.cross(p2 - p1, p1 - p3)) / np.linalg.norm(p2 - p1)
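The warning is raised when the denominator np.linalg.norm(p2 - p1) evaluates to zero, i.e. when a polygon edge is degenerate (p1 == p2). A guarded sketch of such a point-to-line distance (the function name is illustrative, not necessarily the repository's):

import numpy as np

def point_dist_to_line(p1, p2, p3):
    # distance from p3 to the line through p1 and p2
    denom = np.linalg.norm(p2 - p1)
    if denom == 0:
        # degenerate edge: fall back to point-to-point distance
        return np.linalg.norm(p3 - p1)
    return np.linalg.norm(np.cross(p2 - p1, p1 - p3)) / denom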

AssertionError: can only join a child process

Running the code gives this error:
File "/usr/lib/python3.6/multiprocessing/process.py", line 122, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process

The error is raised at line 862 of data_processor.py:
pool = Pool(FLAGS.nb_workers)
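As a general precaution (an assumption, not a confirmed fix for this repository): a multiprocessing pool should be created and joined by the process that owns it, typically under an entry-point guard, since joining a pool from a different process raises exactly this kind of assertion. A minimal sketch:

from multiprocessing import Pool

def work(x):
    return x * x

if __name__ == '__main__':
    # create, use, and clean up the pool in the main process only
    with Pool(4) as pool:
        print(pool.map(work, range(8)))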

Can you explain the model for training?

I read the paper "EAST: An Efficient and Accurate Scene Text Detector", but I don't understand some points.
In your code:
model = Model(inputs=[input_image, overly_small_text_region_training_mask, text_region_boundary_training_mask, target_score_map], outputs=[pred_score_map, pred_geo_map])

What are overly_small_text_region_training_mask, text_region_boundary_training_mask, and target_score_map? Where do they come from, and how are they calculated?
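For illustration only (a sketch assuming a dice-style loss on the score map, not the repository's exact code): such masks are typically built by the data generator from the ground-truth polygons and multiplied into the loss, so that overly small text and region boundaries contribute nothing to the gradient.

import keras.backend as K

def masked_dice_loss(y_true, y_pred, small_text_mask, boundary_mask, eps=1e-5):
    # mask == 1 where a pixel counts towards the loss, 0 where it is ignored
    mask = small_text_mask * boundary_mask
    intersection = K.sum(y_true * y_pred * mask)
    union = K.sum(y_true * mask) + K.sum(y_pred * mask) + eps
    return 1.0 - 2.0 * intersection / union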

How can I change the maximum width of the detection boxes?

Hello, thank you for providing the Keras code. Is there a way to adjust the maximum width of the detected boxes during detection? Also, the detector does not mark whole lines of text, even though I labeled entire lines in the training data. Do you have any suggestions or comments?
