glassywing / text-detection-ocr

Chinese text detection and recognition based on CTPN + DenseNet, implemented with Keras and TensorFlow

License: Apache License 2.0

Python 99.74% Shell 0.26%

text-detection-ocr's Introduction

text-detection

Introduction

To recognize multiple lines of text in an image, the task can be broken into the following steps:

  1. detect the position of each line of text in the image
  2. crop a set of sub-images out of the original image at those positions
  3. recognize the text in each sub-image, then sort and combine the results

Two kinds of models are therefore used:

  1. text detection: CTPN
  2. text recognition: DenseNet + CTC
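The overall flow (detect boxes, crop in reading order, then recognize each crop) can be sketched with plain NumPy. The helper below is illustrative only, not part of dlocr:

```python
import numpy as np

def crop_and_order(image, bboxes):
    """Crop one sub-image per detected text box, in reading order.

    image  -- an H x W (x C) numpy array
    bboxes -- (x_min, y_min, x_max, y_max) tuples from the detector
    """
    # Reading order: top-to-bottom first, left-to-right second.
    ordered = sorted(bboxes, key=lambda b: (b[1], b[0]))
    crops = [image[y0:y1, x0:x1] for (x0, y0, x1, y1) in ordered]
    return ordered, crops
```

Each crop would then be fed to the recognition model, and the decoded strings joined in the same order.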

Installation

Environment

OS: Windows 10, Python: 3.6

Installation steps

  1. Install TensorFlow 1.9.0. If your machine has a GPU environment set up, choose the GPU build; otherwise choose the CPU build
pip install tensorflow==1.9.0     # 1. for cpu
pip install tensorflow-gpu==1.9.0 # 2. for gpu
  2. Install this package
python setup.py sdist
cd dist/
pip install dlocr-0.1.tar.gz

Inference speed

Image size   Processor     Text lines   Time
500 KB       GTX 1070 Ti   20           420 ms
500 KB       Tesla K80     20           1 s

Usage

OCR

Recognizes the text in an image.

  • Programmatic

    import time
    import dlocr
    
    if __name__ == '__main__':
        ocr = dlocr.get_or_create()
        start = time.time()
    
        bboxes, texts = ocr.detect("../asset/demo_ctpn.png")
        print('\n'.join(texts))
        print(f"cost: {(time.time() - start) * 1000}ms")

    get_or_create() accepts the following parameters, so you can plug in your own trained models:

    • ctpn_weight_path
    • ctpn_config_path
    • densenet_weight_path
    • densenet_config_path
    • dict_path

    For what each parameter means, see the command-line options below.

  • Command line

    > python -m dlocr -h
    
    usage: text_detection_app.py [-h] [--image_path IMAGE_PATH]
                                [--dict_file_path DICT_FILE_PATH]
                                [--densenet_config_path DENSENET_CONFIG_PATH]
                                [--ctpn_config_path CTPN_CONFIG_PATH]
                                [--ctpn_weight_path CTPN_WEIGHT_PATH]
                                [--densenet_weight_path DENSENET_WEIGHT_PATH]
                                [--adjust ADJUST]
    
    optional arguments:
      -h, --help            show this help message and exit
      --image_path IMAGE_PATH
                            path to the image
      --dict_file_path DICT_FILE_PATH
                            path to the dictionary file
      --densenet_config_path DENSENET_CONFIG_PATH
                            path to the DenseNet model config file
      --ctpn_config_path CTPN_CONFIG_PATH
                            path to the CTPN model config file
      --ctpn_weight_path CTPN_WEIGHT_PATH
                            path to the CTPN model weights file
      --densenet_weight_path DENSENET_WEIGHT_PATH
                            path to the DenseNet model weights file
      --adjust ADJUST       whether to rotate skewed text lines
    
  1. If --ctpn_weight_path is not given, weights/weights-ctpnlstm-init.hdf5 is used by default
  2. If --ctpn_config_path is not given, config/ctpn-default.json is used by default
  3. If --densenet_weight_path is not given, weights/weights-densent-init.hdf5 is used by default
  4. If --densenet_config_path is not given, config/densent-default.json is used by default
  5. If --dict_file_path is not given, dictionary/char_std_5990.txt is used by default

Example:

python -m dlocr  --image_path asset/demo_ctpn.png
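The five defaults listed above amount to a simple "use the default unless overridden" lookup. A minimal sketch of that behaviour (the resolve_paths helper is hypothetical, not dlocr API; the paths are the repository defaults):

```python
# Default paths listed above, relative to the package root.
DEFAULTS = {
    "ctpn_weight_path": "weights/weights-ctpnlstm-init.hdf5",
    "ctpn_config_path": "config/ctpn-default.json",
    "densenet_weight_path": "weights/weights-densent-init.hdf5",
    "densenet_config_path": "config/densent-default.json",
    "dict_path": "dictionary/char_std_5990.txt",
}

def resolve_paths(**overrides):
    """Return the default paths, with any explicitly given ones replacing them."""
    paths = dict(DEFAULTS)
    paths.update({k: v for k, v in overrides.items() if v is not None})
    return paths
```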

CTPN

Locates the text lines in an image.

  • Programmatic

    from dlocr import ctpn

    if __name__ == '__main__':
        # Avoid shadowing the imported module name with the model instance.
        ctpn_model = ctpn.get_or_create()
        ctpn_model.predict("asset/demo_ctpn.png", "asset/demo_ctpn_labeled.jpg")
  • Command line

    > python -m dlocr.ctpn_predict -h
    
    usage: ctpn_predict.py [-h] [--image_path IMAGE_PATH]
                          [--config_file_path CONFIG_FILE_PATH]
                          [--weights_file_path WEIGHTS_FILE_PATH]
                          [--output_file_path OUTPUT_FILE_PATH]
    
    optional arguments:
      -h, --help            show this help message and exit
      --image_path IMAGE_PATH
                            path to the image
      --config_file_path CONFIG_FILE_PATH
                            path to the model config file
      --weights_file_path WEIGHTS_FILE_PATH
                            path to the model weights file
      --output_file_path OUTPUT_FILE_PATH
                            where to save the labeled image
    1. If --weights_file_path is not given, weights/weights-ctpnlstm-init.hdf5 is used by default
    2. If --config_file_path is not given, config/ctpn-default.json is used by default

    Example:

    python -m dlocr.ctpn_predict --image_path asset/demo_ctpn.png --output_file_path asset/demo_ctpn_labeled.jpg

DenseNet

Recognizes the text in an image of fixed height; the default height is 32 pixels.

  • Programmatic

    from dlocr.densenet import load_dict, default_dict_path
    from dlocr import densenet

    if __name__ == '__main__':
        # Avoid shadowing the imported module name with the model instance.
        densenet_model = densenet.get_or_create()
        text, img = densenet_model.predict("asset/demo_densenet.jpg", load_dict(default_dict_path))
        print(text)
  • Command line

    > python -m dlocr.densenet_predict -h
    
    usage: densenetocr_predict.py [-h] [--image_path IMAGE_PATH]
                                  [--dict_file_path DICT_FILE_PATH]
                                  [--config_file_path CONFIG_FILE_PATH]
                                  [--weights_file_path WEIGHTS_FILE_PATH]
    
    optional arguments:
      -h, --help            show this help message and exit
      --image_path IMAGE_PATH
                            path to the image
      --dict_file_path DICT_FILE_PATH
                            path to the dictionary file
      --config_file_path CONFIG_FILE_PATH
                            path to the model config file
      --weights_file_path WEIGHTS_FILE_PATH
                            path to the model weights file
  1. If --weights_file_path is not given, weights/weights-densent-init.hdf5 is used by default
  2. If --config_file_path is not given, config/densent-default.json is used by default
  3. If --dict_file_path is not given, dictionary/char_std_5990.txt is used by default

Example:

python -m dlocr.densenet_predict --image_path asset/demo_densenet.jpg
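Since the recognizer works at a fixed input height of 32 pixels, an arbitrary text-line crop must first be rescaled to that height while preserving its aspect ratio. A minimal nearest-neighbour sketch of that normalisation (dlocr's actual preprocessing may differ):

```python
import numpy as np

def resize_to_height(img, target_h=32):
    """Rescale a grayscale image to a fixed height, preserving aspect ratio,
    using nearest-neighbour sampling."""
    h, w = img.shape[:2]
    target_w = max(1, round(w * target_h / h))
    # Map each output pixel back to its nearest source pixel.
    rows = (np.arange(target_h) * h // target_h).astype(int)
    cols = (np.arange(target_w) * w // target_w).astype(int)
    return img[rows][:, cols]
```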

Training

Dataset notes

  • CTPN training uses data in the same format as the VOC dataset; the directory layout is:

    - VOCdevkit
        - VOC2007
            - Annotations
            - ImageSets
            - JPEGImages
  • The DenseNet + CTC dataset has three parts:

    • text-line images
    • annotation files: image paths with their corresponding text labels (train.txt, test.txt)
    • a dictionary file containing every character in the dataset (char_std_5990.txt)

Dataset links:

For building your own text-recognition dataset, see: https://github.com/Sanster/text_renderer
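For reference, the repository's data loader converts each label token with int(i) - 1, which suggests a train.txt/test.txt line holds an image name followed by space-separated, 1-based indices into the dictionary file. A sketch of parsing such a line (the helper names are illustrative, not dlocr API):

```python
def parse_annotation_line(line):
    """Split one train.txt/test.txt line into (image_name, char_indices)."""
    parts = line.strip().split()
    return parts[0], [int(i) for i in parts[1:]]

def indices_to_text(indices, charset):
    """Map 1-based dictionary indices to their characters."""
    return "".join(charset[i - 1] for i in indices)
```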

CTPN training

> python -m dlocr.ctpn_train -h

usage: ctpn_train.py [-h] [-ie INITIAL_EPOCH] [--epochs EPOCHS] [--gpus GPUS]
                     [--images_dir IMAGES_DIR] [--anno_dir ANNO_DIR]
                     [--config_file_path CONFIG_FILE_PATH]
                     [--weights_file_path WEIGHTS_FILE_PATH]
                     [--save_weights_file_path SAVE_WEIGHTS_FILE_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -ie INITIAL_EPOCH, --initial_epoch INITIAL_EPOCH
                        epoch to resume from
  --epochs EPOCHS       number of epochs
  --gpus GPUS           number of GPUs
  --images_dir IMAGES_DIR
                        path to the images
  --anno_dir ANNO_DIR   path to the annotation files
  --config_file_path CONFIG_FILE_PATH
                        path to the model config file
  --weights_file_path WEIGHTS_FILE_PATH
                        path to the initial model weights
  --save_weights_file_path SAVE_WEIGHTS_FILE_PATH
                        where to save the trained weights

CTPN training requires two arguments:

  1. the images directory
  2. the annotations directory
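Each file under Annotations/ is a standard VOC XML annotation, so the bounding boxes can be read with the standard library alone. A sketch (independent of dlocr's own loader):

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    """Return (xmin, ymin, xmax, ymax) for every <object> in a VOC annotation."""
    root = ET.fromstring(xml_text)
    return [
        tuple(int(obj.find(f"bndbox/{t}").text)
              for t in ("xmin", "ymin", "xmax", "ymax"))
        for obj in root.iter("object")
    ]
```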

--config_file_path specifies some model parameters; if omitted, the default configuration is used:

{
  "image_channels": 3,    // number of image channels
  "vgg_trainable": true,  // whether the VGG backbone is trainable
  "lr": 1e-05             // initial learning rate
}

If --save_weights_file_path is not given, weights are saved to the model folder under the current directory.

Training log:

...

Epoch 17/20
6000/6000 [==============================] - 4036s 673ms/step - loss: 0.0895 - rpn_class_loss: 0.0360 - rpn_regress_loss: 0.0534
Epoch 18/20
6000/6000 [==============================] - 4075s 679ms/step - loss: 0.0857 - rpn_class_loss: 0.0341 - rpn_regress_loss: 0.0516
Epoch 19/20
6000/6000 [==============================] - 4035s 673ms/step - loss: 0.0822 - rpn_class_loss: 0.0324 - rpn_regress_loss: 0.0498
Epoch 20/20
6000/6000 [==============================] - 4165s 694ms/step - loss: 0.0792 - rpn_class_loss: 0.0308 - rpn_regress_loss: 0.0484

DenseNet training

> python -m dlocr.densenet_train -h
usage: densenet_train.py [-h] [-ie INITIAL_EPOCH] [-bs BATCH_SIZE]
                         [--epochs EPOCHS] [--gpus GPUS]
                         [--images_dir IMAGES_DIR]
                         [--dict_file_path DICT_FILE_PATH]
                         [--train_file_path TRAIN_FILE_PATH]
                         [--test_file_path TEST_FILE_PATH]
                         [--config_file_path CONFIG_FILE_PATH]
                         [--weights_file_path WEIGHTS_FILE_PATH]
                         [--save_weights_file_path SAVE_WEIGHTS_FILE_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -ie INITIAL_EPOCH, --initial_epoch INITIAL_EPOCH
                        epoch to resume from
  -bs BATCH_SIZE, --batch_size BATCH_SIZE
                        mini-batch size
  --epochs EPOCHS       number of epochs
  --gpus GPUS           number of GPUs
  --images_dir IMAGES_DIR
                        path to the images
  --dict_file_path DICT_FILE_PATH
                        path to the dictionary file
  --train_file_path TRAIN_FILE_PATH
                        path to the training list
  --test_file_path TEST_FILE_PATH
                        path to the test list
  --config_file_path CONFIG_FILE_PATH
                        path to the model config file
  --weights_file_path WEIGHTS_FILE_PATH
                        path to the initial model weights
  --save_weights_file_path SAVE_WEIGHTS_FILE_PATH
                        where to save the trained weights

DenseNet training requires four arguments:

  1. the training images directory
  2. the dictionary file
  3. the training list file
  4. the test list file

--config_file_path specifies the model configuration file; if omitted, the defaults are:

{
  "lr": 0.0005,           // initial learning rate
  "num_classes": 5990,    // dictionary size
  "image_height": 32,     // image height
  "image_channels": 1,    // number of image channels
  "maxlen": 50,           // maximum text length
  "dropout_rate": 0.2,    // dropout rate
  "weight_decay": 0.0001, // weight decay
  "filters": 64           // number of filters in the model's first layer
}

If --save_weights_file_path is not given, weights are saved to the model folder under the current directory.

Training log:

Epoch 3/100
25621/25621 [==============================] - 15856s 619ms/step - loss: 0.1035 - acc: 0.9816 - val_loss: 0.1060 - val_acc: 0.9823
Epoch 4/100
25621/25621 [==============================] - 15651s 611ms/step - loss: 0.0798 - acc: 0.9879 - val_loss: 0.0848 - val_acc: 0.9878
Epoch 5/100
25621/25621 [==============================] - 16510s 644ms/step - loss: 0.0732 - acc: 0.9889 - val_loss: 0.0815 - val_acc: 0.9881
Epoch 6/100
25621/25621 [==============================] - 15621s 610ms/step - loss: 0.0691 - acc: 0.9895 - val_loss: 0.0791 - val_acc: 0.9886
Epoch 7/100
25621/25621 [==============================] - 15782s 616ms/step - loss: 0.0666 - acc: 0.9899 - val_loss: 0.0787 - val_acc: 0.9887
Epoch 8/100
25621/25621 [==============================] - 15560s 607ms/step - loss: 0.0645 - acc: 0.9903 - val_loss: 0.0771 - val_acc: 0.9888

Miscellaneous

Pre-trained weights

Link: https://pan.baidu.com/s/1HaeLO-fV_WCtTZl4DQvrzw  extraction code: ihdx

References

  1. https://github.com/YCG09/chinese_ocr
  2. https://github.com/xiaomaxiao/keras_ocr

text-detection-ocr's People

Contributors

glassywing

text-detection-ocr's Issues

Collaboration inquiry

Hello, we are an AI company under a state-owned enterprise (中译语通科技股份有限公司, Global Tone Communication Technology Co., Ltd.) working on big data, smart cities, machine translation, knowledge graphs, speech recognition, OCR, and related technologies. I am the technical lead here; I came across your open-source project on GitHub, found it very interesting, and would like to get in touch to discuss it further.

You can add me on WeChat: 18611586751

can only join an iterable

with open('train_semantic.txt', 'w', encoding='utf-8') as f:
    for i in range(len(X_train)):
        str1 = " ".join(X_train[i]) + "\t" + "label" + str(y_train[i]) + '\n'
        f.write(str1)

TypeError                                 Traceback (most recent call last)
in
      1 with open('train_semantic.txt','w',encoding='utf-8') as f:
      2     for i in range(len(X_train)):
----> 3         str1 = " ".join(X_train[i])+"\t"+"label"+str(y_train[i])+'\n'
      4         f.write(str1)
TypeError: can only join an iterable

How do I fix this TypeError: can only join an iterable?

I set a default image_path in densenet_predict.py:

parser.add_argument("--image_path", help="path to the image",
                    default='E:\\Python\\Recognition\\dlocr\\picture\\1.jpg')

Running it then raised the error below. What should I do?

Traceback (most recent call last):
  File "E:/Python/Recognition/dlocr/densenet_predict.py", line 37, in
    print('\n'.join(densenet.predict(image_path, id_to_char)[1]))
TypeError: can only join an iterable

labels

For a custom dataset, is it enough to annotate with labelImg the way Faster R-CNN labels are made, or is additional processing required?

Dropped characters when characters repeat

When the text contains repeated characters, some get dropped from the result: "2003" is recognized as "203", and "共和国国家" as "共和国家". Is there a known way to address this? Thanks for open-sourcing this project!

Error when training the DenseNet CRNN

Training raises:
ValueError: Error when checking input: expected the_input to have 4 dimensions, but got array with shape (64, 1)
How can this be solved?

CTPN detection

Why does a model I trained myself detect nothing at test time?

Data labels

How are the labels for the DenseNet recognition data produced?

GPU runs out of memory at inference time

Recognition works on the CPU. I wanted to see whether the GPU build of TensorFlow is faster, but my GPU has little memory and it failed.

ResourceExhaustedError: OOM when allocating tensor with shape[1,64,710,896] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: block1_conv2/convolution = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](block1_conv1/Relu, block1_conv2/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[Node: rpn_class/Reshape/_1549 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_633_rpn_class/Reshape", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

@GlassyWing, what is the minimum GPU memory required, and is there a way to tune it?

CTPN training quickly exhausts memory, with a warning

D:\python3\lib\site-packages\tensorflow\python\ops\gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
The warning is shown above. @GlassyWing

training error

hi @GlassyWing

Thanks for your excellent work!

I trained the model and the following error occurred:

ValueError: Error when checking target: expected rpn_class to have shape (None, None, 2) but got array with shape (1, 1, 7130)

What can I do to proceed with training?

Thanks

Loading the label files under the Chinese_dataset training set fails

The error is below, and each run points to a different label txt. Any idea how to fix this?

  File "F:\Python Project\text-detection-ocr\dlocr\densenet\data_loader.py", line 111, in
    for img, label_len, input_len, label in executor.map(lambda t: load_single_example(*t), image_labels):
  File "F:\Python Project\text-detection-ocr\dlocr\densenet\data_loader.py", line 96, in load_single_example
    label[0: len(image_label)] = [int(i) - 1 for i in image_label]  # int changed to float by chenz
  File "F:\Python Project\text-detection-ocr\dlocr\densenet\data_loader.py", line 96, in
    label[0: len(image_label)] = [int(i) - 1 for i in image_label]  # int changed to float by chenz
ValueError: invalid literal for int() with base 10: '也受到了牵连,老是嘟着嘴,无'

Questions about the DenseNet CTC training setup

Hello, and thank you for open-sourcing such an excellent project.

I would like to ask how to modify the code to:

1. do transfer learning from an existing model;
2. interrupt training, record a checkpoint, and resume from that checkpoint next time instead of training from scratch.

Thanks for sharing; looking forward to your answer.

Time cost unacceptable: too high

Hi There!
OS:

 win 10
 anaconda 4.8.3
 CUDA 9.0 Cudnn7.6.5
 Nvidia  driver 445.87

Demo:

import time
import dlocr

if __name__ == '__main__':
    ocr = dlocr.get_or_create()
    start = time.time()

    bboxes, texts = ocr.detect("C:\\PycharmProjects\\text-detection-ocr-master\\asset\\demo_ctpn.png")
    print('\n'.join(texts))
    print(f"cost: {(time.time() - start) * 1000}ms")

log:

Using TensorFlow backend.
2020-05-31 14:59:01.218166: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-05-31 14:59:01.311524: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 0 with properties: 
name: GeForce GTX 1070 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:09:00.0
totalMemory: 8.00GiB freeMemory: 6.62GiB
2020-05-31 14:59:01.311826: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1471] Adding visible gpu devices: 0
2020-05-31 14:59:52.947413: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-31 14:59:52.947594: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958]      0 
2020-05-31 14:59:52.947706: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   N 
2020-05-31 14:59:52.951287: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6386 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:09:00.0, compute capability: 6.1)
156/卞之琳代表作
儿翻动了一下。她还不敢断定不是在梦里,弯起了臂,把乳头边的孩子拢紧了一
点儿,一只手轻轻地抚摩他的小身体,爱的想把他一口吞在肚子里,免被什么魔
鬼见了,一把抓了去,于是她又愁了,于是她又悲了,可是没有忘了哭声,便不
觉哼起来了:
妈在这儿呢,心肝儿睡罢,
别闹啦,别闹哭了妈妈。……
她真也要哭了。可是又觉得好笑:怎么她这样孩子气的,不,简直变成孩子
了——把自己的孩子当作自己的母亲了!不,有什么可笑呢!她反过来一想,这
倒也有道理,她仿佛从不曾有过-一个母亲,至少不记得有过。小时候从小邻居们
的母亲身上看出来,母亲好比一个鸟巢:不论逢到落荫,下雪,或是刮风,小鸟
有这个地方好躲。你觉得闷损得很罢?来!做个好梦去罢,向这含笑的巢门里-一
钻!不错,她小时候,什么叫做挨饿,什么叫做受寒,她是很懂得的。没事的时
候,一个人坐在门槛上,等天黑,也是常有的事。她也想找个巢儿,可是在哪儿
呢?坏脾气的父亲给她的常是打,托寄养的伯父给她的常是骂,伯母给她的常常
是白眼。要是巢儿,她自然得不到这些东西的。后来人大了,她却还常有觅巢的
痴念。她的丈夫待她还不算坏,可是他终年难得有几次在家,就是回来的时候,
对于酒似乎比对于她要关心的多。然而他却给了她一个小巢了。现在她有时候觉
得日子过得太慢,那再不要独自坐在门槛上,瞪着眼看天了,只要看凯儿的小口
cost: 4948.700428009033ms

Process finished with exit code 0

cost: 4948.700428009033ms. The time cost is far too high; have you encountered this situation?

AttributeError: 'NoneType' object has no attribute 'shape' when training CTPN

Your pre-trained model runs fine for me, but when I downloaded the dataset and tried to retrain, the error below appeared. Could you take a look when you have time?
Epoch 1/20
Traceback (most recent call last):
  File "dlocr\ctpn_train.py", line 71, in
    initial_epoch=args.initial_epoch)
  File "D:\anaconda3\envs\carla\lib\site-packages\dlocr\ctpn\core.py", line 148, in train
    self.parallel_model.fit_generator(train_data_generator, epochs=epochs, **kwargs)
  File "D:\anaconda3\envs\carla\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "D:\anaconda3\envs\carla\lib\site-packages\keras\engine\training.py", line 1658, in fit_generator
    initial_epoch=initial_epoch)
  File "D:\anaconda3\envs\carla\lib\site-packages\keras\engine\training_generator.py", line 181, in fit_generator
    generator_output = next(output_generator)
  File "D:\anaconda3\envs\carla\lib\site-packages\keras\utils\data_utils.py", line 733, in get
    six.reraise(*sys.exc_info())
  File "D:\anaconda3\envs\carla\lib\site-packages\six.py", line 703, in reraise
    raise value
  File "D:\anaconda3\envs\carla\lib\site-packages\keras\utils\data_utils.py", line 702, in get
    inputs = future.get(timeout=30)
  File "D:\anaconda3\envs\carla\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
  File "D:\anaconda3\envs\carla\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "D:\anaconda3\envs\carla\lib\site-packages\keras\utils\data_utils.py", line 641, in next_sample
    return six.next(_SHARED_SEQUENCES[uid])
  File "D:\anaconda3\envs\carla\lib\site-packages\dlocr\ctpn\data_loader.py", line 47, in load_data
    h, w, c = img.shape
AttributeError: 'NoneType' object has no attribute 'shape'
How can this be solved?
