
attention-ocr's People

Contributors

brishtiteveja, da03, imoonkey, linjm, mgaitan, sivanke


attention-ocr's Issues

warning in train - tf 0.9

tf - 0.9

WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.BasicLSTMCell object at 0x11db4fb50>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
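
A hedged aside, not from the repo: in the TF 0.9-era API the warning can be silenced by building the LSTM cell with tuple states, though the repo's decoder may rely on the concatenated-state layout, so treat this only as an illustration of the flag the message refers to.

import tensorflow as tf

# Builds the cell with tuple states, which is what the deprecation warning asks for.
cell = tf.nn.rnn_cell.BasicLSTMCell(128, state_is_tuple=True)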

Training on the Synth 90K data set

Hi,

I am trying to train the model using the Synth 90K data set. I downloaded the data set and shuffled the transcriptions (annotation.txt, with about 9M records) before training. After 40k steps I get a perplexity of about 5.0, and when testing on the SVT data set I get much worse results than with the model you provided.

I therefore just wonder if you could share the setup you used to train your model. Did you use the whole data set, or was some additional filtering applied (I see a file mnt/train_shuffled_words.txt in the config)? Or did you use additional options besides those stated in the README?

Thanks in advance
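
A minimal sketch of the shuffling step described above, assuming the annotation file fits in memory; the file names are examples rather than the repo's exact paths.

import random

with open('90kDICT32px/annotation_train.txt') as f:
    lines = f.readlines()
random.shuffle(lines)
with open('90kDICT32px/annotation_train_shuffled.txt', 'w') as f:
    f.writelines(lines)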

Could you give me some advice on how to set the resize height to 64, 128, or another value?

Thank you for your help and this project.

I think a bigger image could give better results, so I tried to train with a bigger image, e.g. a resize height of 64 or 128 instead of 32. But I haven't found how to do this; I couldn't figure out how to change the CNN and the buckets.

Could you give me some advice, or recommend some papers or theses?

Looking forward to your help. Thank you.
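
A hedged back-of-the-envelope check rather than the repo's actual code: the logs elsewhere in this thread show the CNN collapsing the input height to 1 before the encoder ("CNN outdim before squeeze: (?, 1, ?, 512)"), so a taller input needs extra height reduction in cnn.py, assuming that reduction comes from stride-2 pooling.

def pools_needed(height, stride=2):
    # number of height-halving steps required to reach height 1
    n = 0
    while height > 1:
        height //= stride
        n += 1
    return n

print(pools_needed(32))   # 5 -> the default 32-pixel setup
print(pools_needed(64))   # 6 -> one extra halving for height 64
print(pools_needed(128))  # 7 -> two extra halvings for height 128

The bucket sizes may also need adjusting, since they depend on the encoder length derived from the downsampled image, but that detail is specific to the repo's configuration.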

I got this problem when testing the model. Did anyone get the same? How can I fix it? Thanks.

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [768,1024] rhs shape= [1024,2048]
[[Node: save/Assign_2 = Assign[T=DT_FLOAT, _class=["loc:@bidirectional_rnn/bw/basic_lstm_cell/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](bidirectional_rnn/bw/basic_lstm_cell/kernel, save/RestoreV2_2/_23)]]
[[Node: save/RestoreV2_9/_54 = _SendT=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_131_save/RestoreV2_9", _device="/job:localhost/replica:0/task:0/cpu:0"]]
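
A hedged debugging sketch, not from the repo: a mismatch like lhs [768,1024] vs rhs [1024,2048] usually means the graph being built (for example a different attn_num_hidden, GRU vs LSTM, or state_is_tuple setting) does not match the checkpoint being restored. Listing the variable shapes stored in the checkpoint makes the mismatch visible; the path below is an example.

import tensorflow as tf

reader = tf.train.NewCheckpointReader('model/translate.ckpt-47200')
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)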

floydhub & tensorflow & keras

I trained on the sample dataset on FloydHub (TF 1.0, Keras 2.0.6), then downloaded the weights to my local machine and tried to test with the evaluation data, but it does not work.
I got this:

InvalidArgumentError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to get matching files on /output/translate.ckpt-100: Not found: /output
[[Node: save/RestoreV2_9 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_9/tensor_names, save/RestoreV2_9/shape_and_slices)]]

When I train and run locally it works. What could be the issue?

Thanks in advance
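
A hedged guess at the cause: the restore presumably goes through ckpt.model_checkpoint_path, which is read from the "checkpoint" index file saved next to the weights, and that file still records the absolute FloydHub path /output/translate.ckpt-100. Stripping the absolute prefix so the entries resolve against the local model directory (file names below are assumptions) is one workaround.

import re

with open('output/checkpoint') as f:        # the downloaded index file
    text = f.read()
with open('output/checkpoint', 'w') as f:
    f.write(re.sub(r'/output/', '', text))  # relative entries resolve against the model dir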

Training fails

Hello, I tried to train on your data with your code, using TensorFlow as the backend, and I ran into a problem. How can I solve it? I'd appreciate your answer, thanks!

File "~/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 496, in __iter__
 raise TypeError("'Tensor' object is not iterable.")
TypeError: 'Tensor' object is not iterable.

When running src/launcher.py, it fails with the error: Tensor name "conv_conv7/BatchNorm/moving_mean" not found in checkpoint files model/translate.ckpt-47200

I want to test and visualize this model, but when running src/launcher.py, I get this error:

2017-06-13 14:02:31,749 root INFO loading data
2017-06-13 14:02:31,750 root INFO phase: test
2017-06-13 14:02:31,750 root INFO model_dir: model
2017-06-13 14:02:31,751 root INFO load_model: True
2017-06-13 14:02:31,751 root INFO output_dir: results
2017-06-13 14:02:31,751 root INFO steps_per_checkpoint: 500
2017-06-13 14:02:31,751 root INFO batch_size: 1
2017-06-13 14:02:31,751 root INFO num_epoch: 1000
2017-06-13 14:02:31,751 root INFO learning_rate: 1
2017-06-13 14:02:31,751 root INFO reg_val: 0
2017-06-13 14:02:31,751 root INFO max_gradient_norm: 5.000000
2017-06-13 14:02:31,751 root INFO clip_gradients: True
2017-06-13 14:02:31,751 root INFO valid_target_length inf
2017-06-13 14:02:31,751 root INFO target_vocab_size: 39
2017-06-13 14:02:31,751 root INFO target_embedding_size: 10.000000
2017-06-13 14:02:31,751 root INFO attn_num_hidden: 128
2017-06-13 14:02:31,752 root INFO attn_num_layers: 2
2017-06-13 14:02:31,752 root INFO visualize: True
2017-06-13 14:02:31,752 root INFO buckets
2017-06-13 14:02:31,752 root INFO [(16, 32), (27, 32), (35, 32), (64, 32), (80, 32)]
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
2017-06-13 14:03:10,971 root INFO Reading model parameters from model/translate.ckpt-47200
W tensorflow/core/framework/op_kernel.cc:993] Not found: Tensor name "conv_conv7/BatchNorm/moving_mean" not found in checkpoint files model/translate.ckpt-47200
[[Node: save/RestoreV2_21 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_21/tensor_names, save/RestoreV2_21/shape_and_slices)]]
W tensorflow/core/framework/op_kernel.cc:993] Not found: Tensor name "conv_conv7/BatchNorm/moving_mean" not found in checkpoint files model/translate.ckpt-47200
[[Node: save/RestoreV2_21 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_21/tensor_names, save/RestoreV2_21/shape_and_slices)]]
W tensorflow/core/framework/op_kernel.cc:993] Not found: Tensor name "conv_conv7/BatchNorm/moving_mean" not found in checkpoint files model/translate.ckpt-47200
[[Node: save/RestoreV2_21 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_21/tensor_names, save/RestoreV2_21/shape_and_slices)]]
W tensorflow/core/framework/op_kernel.cc:993] Not found: Tensor name "conv_conv7/BatchNorm/moving_mean" not found in checkpoint files model/translate.ckpt-47200
[[Node: save/RestoreV2_21 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_21/tensor_names, save/RestoreV2_21/shape_and_slices)]]
Traceback (most recent call last):
File "src/launcher.py", line 156, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 152, in main
session = sess)
File "/home/amax/hexx/text_rcnn_test/Attention-OCR/src/model/model.py", line 204, in init
self.saver_all.restore(self.sess, ckpt.model_checkpoint_path)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1439, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Tensor name "conv_conv7/BatchNorm/moving_mean" not found in checkpoint files model/translate.ckpt-47200
[[Node: save/RestoreV2_21 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_21/tensor_names, save/RestoreV2_21/shape_and_slices)]]
[[Node: save/RestoreV2_29/_23 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_250_save/RestoreV2_29", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]

Caused by op u'save/RestoreV2_21', defined at:
File "src/launcher.py", line 156, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 152, in main
session = sess)
File "/home/amax/hexx/text_rcnn_test/Attention-OCR/src/model/model.py", line 198, in init
self.saver_all = tf.train.Saver(tf.all_variables())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1051, in init
self.build()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1081, in build
restore_sequentially=self._restore_sequentially)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 675, in build
restore_sequentially, reshape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 242, in restore_op
[spec.tensor.dtype])[0])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
dtypes=dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1264, in init
self._traceback = _extract_stack()

NotFoundError (see above for traceback): Tensor name "conv_conv7/BatchNorm/moving_mean" not found in checkpoint files model/translate.ckpt-47200
[[Node: save/RestoreV2_21 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_21/tensor_names, save/RestoreV2_21/shape_and_slices)]]
[[Node: save/RestoreV2_29/_23 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_250_save/RestoreV2_29", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]

If anyone knows the reason, please help me solve this problem. Thanks.

error with tf 0.10.0

Hi, thanks for your work.
I've got an error with TF 0.10.0 like this:
" File ".../Attention-OCR/src/model/model.py", line 164, in init
for old_value, new_value in cnn_model.model.updates:
File ".../site-packages/tensorflow/python/framework/ops.py", line 499, in iter
raise TypeError("'Tensor' object is not iterable.")
TypeError: 'Tensor' object is not iterable.

Is this perhaps due to the wrong version of TF or Keras? Which versions of TF and Keras are supported?
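
A hedged reading of this traceback: model.py line 164 unpacks cnn_model.model.updates as (old_value, new_value) pairs, which matches older Keras; in newer Keras releases model.updates holds assign ops instead, and iterating one of those ops is exactly what raises "'Tensor' object is not iterable". A tolerant loop, purely as an illustration and not the repo's fix, could look like the sketch below; matching the Keras 1.1.1 recommended elsewhere in this thread may be the simpler route.

def iter_update_pairs(keras_model):
    # yields only old-style (variable, new_value) pairs;
    # new-style entries are already assign ops and need no unpacking
    for update in getattr(keras_model, 'updates', []):
        if isinstance(update, (list, tuple)) and len(update) == 2:
            yield update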

Error while using the pretrained Synth 90K model

Hey please help,
I am trying to run the code on the pretrained model as provided by you. But I get the following error when I run:
python src/launcher.py --phase=test --visualize --data-path=my_img/image_path.txt --data-base-dir=my_img --log-path=log.txt --load-model --model-dir=model --output-dir=results

Error:
NotFoundError (see above for traceback): Tensor name "bidirectional_rnn/bw/basic
_lstm_cell/bias" not found in checkpoint files model\translate.ckpt-47200
[[Node: save/RestoreV2_1 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:l
ocalhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_1/tensor_n
ames, save/RestoreV2_1/shape_and_slices)]]

I am running this on Windows using Tensorflow 1.2.1

Please help

Run error: TypeError("'Tensor' object is not iterable.")

hi guys,

When I run "python src/launcher.py --phase=train --data-path=sample/sample.txt --data-base-dir=sample --log-path=log.txt --no-load-model", the following errors came out:

Traceback (most recent call last):
File "src/launcher.py", line 129, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 125, in main
session = sess)
File "/home/aaron/projects/Attention-OCR/src/model/model.py", line 164, in init
for old_value, new_value in cnn_model.model.updates:
File "/home/aaron/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 495, in iter
raise TypeError("'Tensor' object is not iterable.")
TypeError: 'Tensor' object is not iterable.

AttributeError: 'InputLayer' object has no attribute 'set_input'

python -V
Python 2.7.10

python src/launcher.py --phase=train --data-path=sample/sample.txt --data-base-dir=sample --log-path=log.txt --no-load-model
Using TensorFlow backend.
2017-01-09 11:52:13,224 root INFO loading data
2017-01-09 11:52:13,225 root INFO data_path: sample/sample.txt
2017-01-09 11:52:13,225 root INFO phase: train
2017-01-09 11:52:13,225 root INFO batch_size: 64
2017-01-09 11:52:13,225 root INFO num_epoch: 1000
2017-01-09 11:52:13,225 root INFO steps_per_checkpoint 500
2017-01-09 11:52:13,225 root INFO target_vocab_size: 39
2017-01-09 11:52:13,225 root INFO model_dir: train
2017-01-09 11:52:13,225 root INFO target_embedding_size: 10
2017-01-09 11:52:13,226 root INFO attn_num_hidden: 128
2017-01-09 11:52:13,226 root INFO attn_num_layers: 2
2017-01-09 11:52:13,226 root INFO buckets
2017-01-09 11:52:13,226 root INFO [(16, 11), (27, 17), (35, 19), (64, 22), (80, 32)]
Traceback (most recent call last):
File "src/launcher.py", line 142, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 138, in main
old_model_version = parameters.old_model_version)
File "/Users/logiz/www/test/Attention-OCR/src/model/model.py", line 119, in init
cnn_model = CNN(self.img_data)
File "/Users/logiz/www/test/Attention-OCR/src/model/cnn.py", line 30, in init
self._build_network(input_tensor)
File "/Users/logiz/www/test/Attention-OCR/src/model/cnn.py", line 43, in _build_network
input_layer.set_input(input_tensor=input_tensor)
AttributeError: 'InputLayer' object has no attribute 'set_input'

How can I fix this? Thank you.
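
A hedged sketch, assuming Keras 2.x: InputLayer.set_input was removed, and feeding an existing TF tensor into the functional API is normally done with Input(tensor=...). The layers below are placeholders rather than cnn.py's actual architecture; pinning Keras back to the version the repo targets is the other option.

from keras.layers import Input, Conv2D, MaxPooling2D

def build_network(input_tensor):
    inp = Input(tensor=input_tensor)   # wraps the existing TF tensor
    x = Conv2D(64, (3, 3), padding='same', activation='relu')(inp)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    return x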

Open license

What is the license of this software? Can I use it as Open Source Software?

Visualize not working

I'm using this code to train on the Washington Database, which contains binarized and normalized cropped images like this:

image

which has a ground-truth label of 'williamsburgh'.

With default settings, I do well during training (loss 0.001056, perplexity 1.00) but at test time get a very low character accuracy (11%).

When I visualize on the test set, I get blank images returned

image_3

even when the translation is correct
word.txt

Accuracy on SVT data set

I was trying to evaluate the performance of your tool on SVT test data

As suggested by README, I downloaded the model from http://www.cs.cmu.edu/~yuntiand/model.tgz and the data from http://www.cs.cmu.edu/~yuntiand/evaluation_data.tgz

After running

python src/launcher.py --phase=test --visualize --data-path=evaluation_data/svt/test.txt --data-base-dir=evaluation_data/svt --log-path=log.txt --load-model --model-dir=model --output-dir=results

I get

INFO     3.166667 out of 6 correct

...

INFO     437.192016 out of 647 correct

I was expecting to get about 80% accuracy, but the results seem to be worse.

Do you know what the expected accuracy of this tool is with the provided model, and how it could possibly be improved to achieve state-of-the-art results?

Thanks
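
Quick arithmetic on the numbers quoted above (no new measurement):

print(437.192016 / 647)   # ~0.676, i.e. roughly 68% by this script's scoring
print(3.166667 / 6)       # ~0.53 on the first batch shown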

failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED

I get the following error when trying to run the model in the "train" phase:

2017-05-30 05:39:23,518 root  INFO     max_gradient_norm: 5.000000
2017-05-30 05:39:23,518 root  INFO     clip_gradients: True
2017-05-30 05:39:23,518 root  INFO     valid_target_length inf
2017-05-30 05:39:23,518 root  INFO     target_vocab_size: 39
2017-05-30 05:39:23,518 root  INFO     target_embedding_size: 10.000000
2017-05-30 05:39:23,518 root  INFO     attn_num_hidden: 128
2017-05-30 05:39:23,518 root  INFO     attn_num_layers: 2
2017-05-30 05:39:23,519 root  INFO     visualize: True
2017-05-30 05:39:23,519 root  INFO     buckets
2017-05-30 05:39:23,519 root  INFO     [(16, 11), (27, 17), (35, 19), (64, 22), (80, 32)]
2017-05-30 05:41:51,137 root  INFO     Created model with fresh parameters.
Train: :   0%|          | 0/156 [00:00<?, ?it/s]2017-05-30 05:46:19,134 root  INFO     Generating first batch)
E tensorflow/stream_executor/cuda/cuda_blas.cc:472] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED

input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
  File "src/launcher.py", line 148, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 145, in main
    model.launch()
  File "/home/sprabh6/Attention-OCR/src/model/model.py", line 300, in launch
    summaries, step_loss, step_logits, _ = self.step(encoder_masks, img_data, zero_paddings, decoder_inputs, target_weights, bucket_id, self.forward_only)
  File "/home/sprabh6/Attention-OCR/src/model/model.py", line 411, in step
    outputs = self.sess.run(output_feed, input_feed)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : a.shape=(64, 522), b.shape=(522, 128), m=64, n=128, k=522
	 [[Node: model_with_buckets/embedding_attention_decoder_1/attention_decoder/attention_decoder/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](model_with_buckets/embedding_attention_decoder_1/attention_decoder/attention_decoder/concat, embedding_attention_decoder/attention_decoder/weights/read)]]
	 [[Node: conv_conv5/BatchNorm/AssignMovingAvg/_270 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_28061_conv_conv5/BatchNorm/AssignMovingAvg", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op u'model_with_buckets/embedding_attention_decoder_1/attention_decoder/attention_decoder/MatMul', defined at:
  File "src/launcher.py", line 148, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 144, in main
    session = sess)
  File "/home/sprabh6/Attention-OCR/src/model/model.py", line 151, in __init__
    use_gru = use_gru)
  File "/home/sprabh6/Attention-OCR/src/model/seq2seq_model.py", line 141, in __init__
    softmax_loss_function=softmax_loss_function)
  File "/home/sprabh6/Attention-OCR/src/model/seq2seq.py", line 993, in model_with_buckets
    decoder_inputs[:int(bucket[1])], int(bucket[0]))
  File "/home/sprabh6/Attention-OCR/src/model/seq2seq_model.py", line 140, in <lambda>
    self.target_weights, buckets, lambda x, y, z: seq2seq_f(x, y, z, False),
  File "/home/sprabh6/Attention-OCR/src/model/seq2seq_model.py", line 122, in seq2seq_f
    attn_num_hidden = attn_num_hidden)
  File "/home/sprabh6/Attention-OCR/src/model/seq2seq.py", line 675, in embedding_attention_decoder
    initial_state_attention=initial_state_attention, attn_num_hidden=attn_num_hidden)
  File "/home/sprabh6/Attention-OCR/src/model/seq2seq.py", line 575, in attention_decoder
    x = linear([inp] + attns, input_size, True)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 751, in _linear
    res = math_ops.matmul(array_ops.concat(args, 1), weights)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 1765, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1454, in _mat_mul
    transpose_b=transpose_b, name=name)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/sprabh6/anaconda/envs/tf_1.0_keras_1/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

InternalError (see above for traceback): Blas SGEMM launch failed : a.shape=(64, 522), b.shape=(522, 128), m=64, n=128, k=522
	 [[Node: model_with_buckets/embedding_attention_decoder_1/attention_decoder/attention_decoder/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](model_with_buckets/embedding_attention_decoder_1/attention_decoder/attention_decoder/concat, embedding_attention_decoder/attention_decoder/weights/read)]]
	 [[Node: conv_conv5/BatchNorm/AssignMovingAvg/_270 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_28061_conv_conv5/BatchNorm/AssignMovingAvg", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I figured that my LD_LIBRARY_PATH wasn't set properly, so I added an entry to make it point to libcublas. That still didn't work. Then I figured it could be a memory problem and set GPU options in launcher.py as follows:

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True, gpu_options=gpu_options)) as sess:
        model = Model(
                phase = parameters.phase,
                visualize = parameters.visualize,
                data_path = parameters.data_path,
                data_base_dir = parameters.data_base_dir,
                output_dir = parameters.output_dir,
                batch_size = parameters.batch_size,
                initial_learning_rate = parameters.initial_learning_rate,
                num_epoch = parameters.num_epoch,
                steps_per_checkpoint = parameters.steps_per_checkpoint,
                target_vocab_size = parameters.target_vocab_size,
                model_dir = parameters.model_dir,
                target_embedding_size = parameters.target_embedding_size,
                attn_num_hidden = parameters.attn_num_hidden,
                attn_num_layers = parameters.attn_num_layers,
                clip_gradients = parameters.clip_gradients,
                max_gradient_norm = parameters.max_gradient_norm,
                load_model = parameters.load_model,
                valid_target_length = float('inf'),
                gpu_id=parameters.gpu_id,
                use_gru=parameters.use_gru,
                session = sess)
        model.launch()

It still doesn't work. Can anyone please tell me if I'm missing anything?
Tensorflow version - 1.1.0
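
A hedged alternative to the fixed memory fraction above: letting TensorFlow grow GPU memory on demand is a common first thing to try for CUBLAS launch failures caused by an exhausted GPU (for example, another process already holding most of the memory).

import tensorflow as tf

config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
    pass  # build and launch the model here, as in launcher.py above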

Testing: bad results even on the training sample after convergence

Right now by following the instructions on the Readme:

The training procedure seems to converge (perplexity around 1 on the toy example), but when we test on the same data (the toy example itself) the results are quite bad. Is anyone else experiencing this behavior? I tried to look into the bucketing part of the code; I'm not sure why the bucketing in evaluation and in training differ, but that doesn't seem to be the cause anyway (I tried with the same bucketing and still got bad results).
The versions of Keras and TensorFlow are the recommended ones (Keras 1.1.1 and TF 0.11.0).

Testing the model

Hi, I have just run the new version of your code (TF and Keras) on my dataset.
The perplexity is ~1.3, which is very nice. But when I want to test the model I get ~1000.
After looking into the visualization, it looks like the model always predicts the same sequence of letters, no matter what the input is (both on the training set and on a validation set which the model did not see).

This is my command which I use for testing:

python src/launcher.py --phase=test --data-path=/home/ubuntu/Data/anno_v2/labels_vals.txt --data-base-dir=/home/ubuntu/Data/anno_v2 --log-path=logtest.txt --load-model --visualize

On the other hand, when I continue training, everything is OK. The only thing that varies between these two runs is the 'decoder' input: during testing it is the predicted words from previous steps, while in 'train' it is the ground truth.
Do you know what goes wrong? Or have I overfit to the data, so that the first letter now determines the whole sequence no matter what the input is (my data has 15k examples)?

error when I tried to run the toy example

I downloaded the sample data into the Attention-OCR folder and decompressed it there.
I executed the following command line and encountered this error:
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
Below is the traceback information:
File "src/launcher.py", line 129, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 125, in main
session = sess)
File "/data/clzhai/Attention-OCR-master/src/model/model.py", line 118, in init
cnn_model = CNN(self.img_data)
File "/data/clzhai/Attention-OCR-master/src/model/cnn.py", line 31, in init
self._build_network(input_tensor)
File "/data/clzhai/Attention-OCR-master/src/model/cnn.py", line 51, in _build_network
border_mode='same'))
File "/data/clzhai/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/models.py", line 308, in add
output_tensor = layer(self.outputs[0])
File "/data/clzhai/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 487, in call
self.build(input_shapes[0])
File "/data/clzhai/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/layers/convolutional.py", line 410, in build
self.W = self.init(self.W_shape, name='{}_W'.format(self.name))
File "/data/clzhai/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/initializations.py", line 57, in glorot_uniform
fan_in, fan_out = get_fans(shape, dim_ordering=dim_ordering)
File "/data/clzhai/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/initializations.py", line 15, in get_fans
receptive_field_size = np.prod(shape[2:])
File "/data/clzhai/anaconda2/envs/tensorflow/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 2492, in prod
out=out, keepdims=keepdims)
File "/data/clzhai/anaconda2/envs/tensorflow/lib/python2.7/site-packages/numpy/core/_methods.py", line 35, in _prod
return umr_prod(a, axis, dtype, out, keepdims)
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
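
One plausible, hedged diagnosis: the NoneType * int failure happens while Keras computes the kernel shape, and with this repo's channels-first input of shape (?, 1, 32, ?) the unknown width ends up in that computation when Keras is configured for 'tf' (channels-last) dim ordering. Checking the setting, which lives in ~/.keras/keras.json, is a quick first step; the CNN here appears to expect 'th'.

from keras import backend as K

print(K.image_dim_ordering())   # Keras 1.x API, matching the traceback above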

the version

I use Keras 1.1.1 and TensorFlow 0.10, which do not fit your newly updated program; there is a problem: raise TypeError("List of Tensors when single Tensor expected") TypeError: List of Tensors when single Tensor expected.

train_demo.sh throws ValueError exception

pip3 freeze:

  • appdirs==1.4.3
  • image==1.5.5
  • Keras==2.0.4
  • numpy==1.12.1
  • olefile==0.44
  • packaging==16.8
  • Pillow==4.1.1
  • protobuf==3.3.0
  • pyparsing==2.2.0
  • pytz==2017.2
  • PyYAML==3.12
  • scipy==0.19.0
  • six==1.10.0
  • tensorflow-gpu==1.1.0
  • Theano==0.9.0
  • tqdm==4.11.2
  • Werkzeug==0.12.2

(Maybe I don't have the right version of Keras?)

The error thrown:

2017-05-23 10:03:59,609 root INFO loading data
2017-05-23 10:03:59,611 root INFO phase: train
2017-05-23 10:03:59,611 root INFO model_dir: model_01_16
2017-05-23 10:03:59,611 root INFO load_model: False
2017-05-23 10:03:59,611 root INFO output_dir: results
2017-05-23 10:03:59,612 root INFO steps_per_checkpoint: 2000
2017-05-23 10:03:59,612 root INFO batch_size: 64
2017-05-23 10:03:59,612 root INFO num_epoch: 3
2017-05-23 10:03:59,612 root INFO learning_rate: 1
2017-05-23 10:03:59,612 root INFO reg_val: 0
2017-05-23 10:03:59,612 root INFO max_gradient_norm: 5.000000
2017-05-23 10:03:59,613 root INFO clip_gradients: True
2017-05-23 10:03:59,613 root INFO valid_target_length inf
2017-05-23 10:03:59,613 root INFO target_vocab_size: 39
2017-05-23 10:03:59,613 root INFO target_embedding_size: 10.000000
2017-05-23 10:03:59,613 root INFO attn_num_hidden: 256
2017-05-23 10:03:59,614 root INFO attn_num_layers: 2
2017-05-23 10:03:59,614 root INFO visualize: True
2017-05-23 10:03:59,614 root INFO buckets
2017-05-23 10:03:59,614 root INFO [(16, 11), (27, 17), (35, 19), (64, 22), (80, 32)]
2017-05-23 10:03:59,614 root INFO ues GRU in the decoder.
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
using GRU CELL in decoder
Traceback (most recent call last):
File "src/launcher.py", line 146, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 142, in main
session = sess)
File "/home/kankroc/AttentionOCR/Attention-OCR/src/model/model.py", line 151, in init
use_gru = use_gru)
File "/home/kankroc/AttentionOCR/Attention-OCR/src/model/seq2seq_model.py", line 141, in init
softmax_loss_function=softmax_loss_function)
File "/home/kankroc/AttentionOCR/Attention-OCR/src/model/seq2seq.py", line 993, in model_with_buckets
decoder_inputs[:int(bucket[1])], int(bucket[0]))
File "/home/kankroc/AttentionOCR/Attention-OCR/src/model/seq2seq_model.py", line 140, in
self.target_weights, buckets, lambda x, y, z: seq2seq_f(x, y, z, False),
File "/home/kankroc/AttentionOCR/Attention-OCR/src/model/seq2seq_model.py", line 122, in seq2seq_f
attn_num_hidden = attn_num_hidden)
File "/home/kankroc/AttentionOCR/Attention-OCR/src/model/seq2seq.py", line 675, in embedding_attention_decoder
initial_state_attention=initial_state_attention, attn_num_hidden=attn_num_hidden)
File "/home/kankroc/AttentionOCR/Attention-OCR/src/model/seq2seq.py", line 577, in attention_decoder
cell_output, state = cell(x, state)
File "/home/kankroc/AttentionOCR/env/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 953, in call
cur_inp, new_state = cell(cur_inp, cur_state)
File "/home/kankroc/AttentionOCR/env/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 146, in call
with _checked_scope(self, scope or "gru_cell", reuse=self._reuse):
File "/usr/lib/python3.5/contextlib.py", line 59, in enter
return next(self.gen)
File "/home/kankroc/AttentionOCR/env/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 77, in _checked_scope
type(cell).name))
ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.GRUCell object at 0x7f9a3c88cf98> with a different variable scope than its first use. First use of cell was with scope 'embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_0/gru_cell', this attempt is with scope 'embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_1/gru_cell'. Please create a new instance of the cell if you would like it to use a different set of weights. If before you were using: MultiRNNCell([GRUCell(...)] * num_layers), change to: MultiRNNCell([GRUCell(...) for _ in range(num_layers)]). If before you were using the same cell instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances (one for forward, one for reverse). In May 2017, we will start transitioning this cell's behavior to use existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation, so this error will remain until then.)

The error seems to indicate a version mismatch; what is the recommended (TF + Keras) combo?
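
A hedged sketch of the change the error message itself recommends, applied to the decoder-cell construction in seq2seq_model.py (exact variable names there may differ): on TF 1.1/1.2 each layer of the MultiRNNCell needs its own cell instance.

import tensorflow as tf

def make_decoder_cell(attn_num_hidden, num_layers, use_gru):
    def single_cell():
        if use_gru:
            return tf.contrib.rnn.GRUCell(attn_num_hidden)
        return tf.contrib.rnn.BasicLSTMCell(attn_num_hidden,
                                            forget_bias=0.0,
                                            state_is_tuple=False)
    # one fresh cell per layer instead of reusing a single instance
    return tf.contrib.rnn.MultiRNNCell([single_cell() for _ in range(num_layers)],
                                       state_is_tuple=False)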

model_with_buckets fails because of wildcard input size

Hi, I'm trying to understand and run the training code, but I keep running into an issue at line 998 in seq2seq.py. As far as I can tell, it's because the encoder_inputs_tensor shape is (?, ?, 512).

Here's the error:

  File "src/launcher.py", line 142, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 138, in main
    old_model_version = parameters.old_model_version)
  File "/Users/trimchala/receipt-classification/Attention-OCR/src/model/model.py", line 143, in __init__
    use_gru = use_gru)
  File "/Users/trimchala/receipt-classification/Attention-OCR/src/model/seq2seq_model.py", line 145, in __init__
    softmax_loss_function=softmax_loss_function)
  File "/Users/trimchala/receipt-classification/Attention-OCR/src/model/seq2seq.py", line 998, in model_with_buckets
    encoder_inputs = tf.split(0, bucket[0], encoder_inputs_tensor)
  File "/Users/trimchala/miniconda/envs/tfenvPy35/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1221, in split
    num = size_splits_shape.dims[0]
IndexError: list index out of range

I did change line 74 to linear = tf.contrib.layers.linear. I wonder if it affects anything. Could you help me figure out what the problem might be? Thanks a lot!
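
A hedged sketch, not the repo's code: the call at seq2seq.py line 998 still uses the pre-1.0 argument order tf.split(split_dim, num_split, value), which TF 1.x misinterprets; under TF 1.x the order is (value, num_or_size_splits, axis). The shapes and bucket values below are stand-ins, with the encoder length made static, since the (?, ?, 512) wildcard shape noted above may also need to become static for the per-bucket split to make sense.

import tensorflow as tf

bucket = (16, 11)   # (encoder length, decoder length), as in the configs above
encoder_inputs_tensor = tf.placeholder(tf.float32, shape=[bucket[0], None, 512])

encoder_inputs = tf.split(encoder_inputs_tensor, int(bucket[0]), axis=0)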

error using my own trained model to test.

Hi,
I've trained a model following the toy example and used it to test on the IIIT5K data, but an error happens at
"../Attention-OCR/src/model/model.py", line 182, in init
self.saver_all = tf.train.Saver(tf.all_variables())
An exception is thrown:
tensorflow.python.framework.errors.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape=[1024] rhs shape=[2048]

When using the author's model trained on Synth 90K to test IIIT5K, everything is OK and I finally get "2034.98 out of 3000 correct".

Could anybody explain this? Thanks a lot.

Restoring the Model

Hello, I'm trying out your code, running the "Test and visualize attention results" steps, and I'm getting the error below. Could it be that the sample model is already outdated? Thank you.

Log

2017-05-25 09:05:29,733 root  INFO     loading data
2017-05-25 09:05:29,738 root  INFO     phase: test
2017-05-25 09:05:29,738 root  INFO     model_dir: model
2017-05-25 09:05:29,738 root  INFO     load_model: True
2017-05-25 09:05:29,738 root  INFO     output_dir: results
2017-05-25 09:05:29,738 root  INFO     steps_per_checkpoint: 500
2017-05-25 09:05:29,738 root  INFO     batch_size: 1
2017-05-25 09:05:29,738 root  INFO     num_epoch: 1000
2017-05-25 09:05:29,739 root  INFO     learning_rate: 1
2017-05-25 09:05:29,739 root  INFO     reg_val: 0
2017-05-25 09:05:29,739 root  INFO     max_gradient_norm: 5.000000
2017-05-25 09:05:29,739 root  INFO     clip_gradients: True
2017-05-25 09:05:29,743 root  INFO     valid_target_length inf
2017-05-25 09:05:29,743 root  INFO     target_vocab_size: 39
2017-05-25 09:05:29,743 root  INFO     target_embedding_size: 10.000000
2017-05-25 09:05:29,743 root  INFO     attn_num_hidden: 128
2017-05-25 09:05:29,743 root  INFO     attn_num_layers: 2
2017-05-25 09:05:29,743 root  INFO     visualize: True
2017-05-25 09:05:29,743 root  INFO     buckets
2017-05-25 09:05:29,744 root  INFO     [(16, 32), (27, 32), (35, 32), (64, 32), (80, 32)]
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
2017-05-25 09:07:04,718 root  INFO     Reading model parameters from model/translate.ckpt-47200
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/lrs/Attention-OCR/src/model/model.py", line 204, in __init__
    self.saver_all.restore(self.sess, ckpt.model_checkpoint_path)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1428, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Tensor name "embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_1/basic_lstm_cell/biases" not found in checkpoint files model/translate.ckpt-47200
         [[Node: save/RestoreV2_33 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_33/tensor_names, save/RestoreV2_33/shape_and_slices)]]

Caused by op u'save/RestoreV2_33', defined at:
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/lrs/Attention-OCR/src/model/model.py", line 198, in __init__
    self.saver_all = tf.train.Saver(tf.all_variables())
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
    self.build()
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1070, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 675, in build
    restore_sequentially, reshape)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 242, in restore_op
    [spec.tensor.dtype])[0])
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
    dtypes=dtypes, name=name)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Tensor name "embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_1/basic_lstm_cell/biases" not found in checkpoint files model/translate.ckpt-47200
         [[Node: save/RestoreV2_33 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_33/tensor_names, save/RestoreV2_33/shape_and_slices)]]

How to train the Model on new dataset changing the vocabulary size

Hi,

I am training the model on my own dataset, which contains both uppercase and lowercase letters but no wildcards, so the new vocabulary size is 26+26+10+3=65. The problem is that the code only outputs "Generating first batch" instead of logging the loss and perplexity.

Any help would be appreciated.
thanks.
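
A small sketch of the vocabulary arithmetic described above; the 3 extra symbols are assumed to be the usual padding/GO/EOS specials, and the actual character-to-id mapping lives in the repo's data utilities.

import string

case_insensitive = len(string.ascii_lowercase) + len(string.digits) + 3   # 39, the default in the logs
case_sensitive = len(string.ascii_letters) + len(string.digits) + 3       # 65, as computed above
print(case_insensitive, case_sensitive)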

Question about training procedure

I train the model on my own dataset, which contains 10k plate images.
When I set the batch_size to 256, the step perplexity reaches 1.001; however, the step perplexity of the trained model increases to 10 if the batch_size is set to 2.
If I fix the batch_size at 256, does the model really converge?
I wonder whether such a large batch_size is appropriate.

I want to apply this code to my research, but I don't know how to modify or add categories.

Hi, I need some help.
If anyone can help me, I would be very thankful.

I found this source code while looking for text recognition code, and it is very similar to what I was looking for. However, I need more categories than it currently supports. I thought it would be easier to solve if I asked before analyzing the code, so I'm leaving a question.

I want to add mathematical operators, brackets, commas, and so on. How can I add categories to the source code?

Please help me. Thank you.

Code for validation loss

Looking at the code I only see training loss. Is there any routine that saves both training and validation loss?

Training seems broken with TF 0.11 and Keras 1.1.1

I was checking the latest version, which should support the TensorFlow 0.11 and Keras 1.1.1 that I have on my system.

I ran the sample training on the full Synth90k corpus (without the GRU option):

python src/launcher.py \
	--phase=train \
	--data-path=90kDICT32px/new_annotation_train.txt \
	--data-base-dir=90kDICT32px \
	--log-path=log_sy.txt \
	--attn-num-hidden 256 \
	--batch-size 32 \
	--model-dir=model_x \
	--initial-learning-rate=1.0 \
	--num-epoch=20000 \
	--gpu-id=0 \
--target-embedding-size=10

It seems to converge better than the old version. After 70k iterations I get a pretty nice perplexity:

2016-12-08 13:12:07,393 root  INFO     current_step: 78998
2016-12-08 13:12:07,589 root  INFO     step_time: 0.196019, step_loss: 0.080441, step perplexity: 1.083765
2016-12-08 13:12:08,620 root  INFO     current_step: 78999
2016-12-08 13:12:08,884 root  INFO     step_time: 0.264444, step_loss: 0.071078, step perplexity: 1.073665
2016-12-08 13:12:08,885 root  INFO     global step 79000 step-time 0.26 loss 0.094580  perplexity 1.10
2016-12-08 13:12:08,885 root  INFO     Saving model, current_step: 79000

However, when testing on SVT, I get very bad performance:

2016-12-08 13:35:25,947 root  INFO     step_time: 0.049570, loss: 3.750340, step perplexity: 42.535551
2016-12-08 13:35:25,950 root  INFO     62.324373 out of 647 correct

Interestingly, the model provided by the authors works great with the --old-model option, so it is not a decoder bug but rather some problem in training that does not show up in the training log yet affects the final performance.

I wonder if anyone has tried to train/test the new version using TensorFlow 0.11 and Keras 1.1.1?
Thanks

Why is the accuracy rate from your model so low?

Hi guys,

I've downloaded the evaluation_data and model you provided, and verified the test result with the command
python src/launcher.py --phase=test --visualize --data-path=evaluation_data/svt/test.txt --data-base-dir=evaluation_data/svt --log-path=log.txt --load-model --model-dir=model --output-dir=results --target-embedding-size=10
(BTW, you should add the --target-embedding-size=10 option, or else it fails with "Assign requires shapes of both tensors to match. lhs shape= [39,20] rhs shape= [39,10]".)

But the resulting log.txt shows "188.780794 out of 647 correct", which means only a 29% accuracy rate. Why is it so bad? Or did I do something wrong?

Script to run/test pre trained model on a single image

Hi @da03 ,

I'm trying to write a script to run/test the trained model you provided on a single image and get the output characters. Would it be possible to provide a sample script or list of commands if you already have one? If not, it would be great if you could briefly outline the steps for how to do it.

Thanks in advance.
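
Not an official script, only a hedged sketch: the test phase already takes an annotation file (as in the SVT commands elsewhere in this thread), so a single image can be handled by writing a one-line annotation file and calling launcher.py on it. The directory layout, label column, and separator below are assumptions; check sample/sample.txt for the exact format the loader expects.

import os
import subprocess

base_dir = 'my_img'              # assumed directory containing the image
image_name = 'demo.jpg'
with open(os.path.join(base_dir, 'single.txt'), 'w') as f:
    f.write('%s dummy\n' % image_name)   # placeholder label; only the prediction is wanted

subprocess.check_call([
    'python', 'src/launcher.py', '--phase=test', '--visualize',
    '--data-path=%s' % os.path.join(base_dir, 'single.txt'),
    '--data-base-dir=%s' % base_dir,
    '--log-path=log.txt', '--load-model',
    '--model-dir=model', '--output-dir=results'])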

`tf.contrib.rnn.core_rnn_cell.BasicLSTMCell` should be replaced by `tf.contrib.rnn.BasicLSTMCell`

For Tensorflow 1.2 and Keras 2.0, the line tf.contrib.rnn.core_rnn_cell.BasicLSTMCell should be replaced by tf.contrib.rnn.BasicLSTMCell.

$ ./train_demo.sh
017-06-30 16:09:13,025 root  INFO     ues GRU in the decoder.
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/math/Github/Attention-OCR/src/model/model.py", line 151, in __init__
    use_gru = use_gru)
  File "/home/math/Github/Attention-OCR/src/model/seq2seq_model.py", line 87, in __init__
    single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
AttributeError: 'module' object has no attribute 'core_rnn_cell'

and

$ sh test_demo.sh 
2017-06-30 16:10:13.765890: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765918: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765927: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765933: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765938: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13,766 root  INFO     loading data
2017-06-30 16:10:13,767 root  INFO     phase: test
2017-06-30 16:10:13,767 root  INFO     model_dir: model_01_16
2017-06-30 16:10:13,767 root  INFO     load_model: True
2017-06-30 16:10:13,767 root  INFO     output_dir: model_01_16/synth90
2017-06-30 16:10:13,767 root  INFO     steps_per_checkpoint: 500
2017-06-30 16:10:13,767 root  INFO     batch_size: 1
2017-06-30 16:10:13,767 root  INFO     num_epoch: 3
2017-06-30 16:10:13,767 root  INFO     learning_rate: 1
2017-06-30 16:10:13,768 root  INFO     reg_val: 0
2017-06-30 16:10:13,768 root  INFO     max_gradient_norm: 5.000000
2017-06-30 16:10:13,768 root  INFO     clip_gradients: True
2017-06-30 16:10:13,768 root  INFO     valid_target_length inf
2017-06-30 16:10:13,768 root  INFO     target_vocab_size: 39
2017-06-30 16:10:13,768 root  INFO     target_embedding_size: 10.000000
2017-06-30 16:10:13,768 root  INFO     attn_num_hidden: 256
2017-06-30 16:10:13,768 root  INFO     attn_num_layers: 2
2017-06-30 16:10:13,768 root  INFO     visualize: True
2017-06-30 16:10:13,768 root  INFO     buckets
2017-06-30 16:10:13,768 root  INFO     [(16, 32), (27, 32), (35, 32), (64, 32), (80, 32)]
2017-06-30 16:10:13,768 root  INFO     ues GRU in the decoder.
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/math/Github/Attention-OCR/src/model/model.py", line 151, in __init__
    use_gru = use_gru)
  File "/home/math/Github/Attention-OCR/src/model/seq2seq_model.py", line 87, in __init__
    single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
AttributeError: 'module' object has no attribute 'core_rnn_cell'
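
A minimal sketch of the replacement described at the top of this issue, assuming TF 1.2; attn_num_hidden is the value from the logs above.

import tensorflow as tf

attn_num_hidden = 256
single_cell = tf.contrib.rnn.BasicLSTMCell(attn_num_hidden,
                                           forget_bias=0.0,
                                           state_is_tuple=False)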

a question

When I train on my data, is the accuracy one minus the loss value?
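
A hedged observation from the training logs quoted elsewhere in this thread: the reported step perplexity equals exp(step_loss), and accuracy is a separate word-level count (e.g. "437.192016 out of 647 correct"), not one minus the loss. A quick check against the log above:

import math

print(math.exp(0.080441))   # ~1.083765, matching "step_loss: 0.080441, step perplexity: 1.083765"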

What are your training parameters?

Hi guys,

My trained model can't reach the accuracy of your released model.tar.
My training parameters are here:

python src/launcher.py \
    --phase=train \
    --data-path=${data_path} \
    --data-base-dir=${data_base_dir} \
    --log-path=log_train.txt \
    --load-model \
    --model-dir=$model_dir \
    --num-epoch=800 \
    --target-embedding-size=10 

When the steps reach translate.ckpt-48000, I use it to evaluate SVT, but the accuracy is very poor, 6.1% (39.365199 out of 647 correct), compared with your result of 68% (437.192016 out of 647 correct).

I don't know why. What are your training parameters?

Waiting for your answers.

Load the model Error

Hi, I have run the code you provided. When I test the pre-trained model, the command is:
python src/launcher.py --phase=test --visualize --data-path=evaluation_data/svt/test.txt --data-base-dir=evaluation_data/svt --log-path=log.txt --load-model --model-dir=model --output-dir=results --old-model-version
But the result shows:

NotFoundError (see above for traceback): Tensor name "bidirectional_rnn/bw/basic_lstm_cell/biases" not found in checkpoint files model/translate.ckpt-47200
         [[Node: save/RestoreV2_25 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_25/tensor_names, save/RestoreV2_25/shape_and_slices)]]
         [[Node: save/RestoreV2_33/_99 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_272_save/RestoreV2_33", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

When I test the model that I trained on the dataset you provided, the command is:
python src/launcher.py --phase=test --visualize --data-path=evaluation_data/svt/test.txt --data-base-dir=evaluation_data/svt --log-path=log.txt --load-model --model-dir=train --output-dir=results
But there is another error:

DataLossError (see above for traceback): Unable to open table file train_1/translate.ckpt-71000.data-00000-of-00001: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
         [[Node: save/RestoreV2_4 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_4/tensor_names, save/RestoreV2_4/shape_and_slices)]]
         [[Node: save/RestoreV2_33/_99 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_272_save/RestoreV2_33", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

I found that both errors occur at this line of src/model/model.py: self.saver_all.restore(self.sess, ckpt.model_checkpoint_path). Do you know what the problem is? Thank you.
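
A hedged note on the second (DataLossError) case: V2 checkpoints are restored by prefix, not by the .data-00000-of-00001 shard file named in the error, and tf.train.latest_checkpoint returns that prefix from the model directory (the directory name below is an assumption).

import tensorflow as tf

ckpt_prefix = tf.train.latest_checkpoint('train_1')
print(ckpt_prefix)   # e.g. train_1/translate.ckpt-71000; this is what restore() expects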

The training example can't converge

Hi, thanks for your code! I have tried to train a model following the example using the samples from "sample.tgz", but after training for a long time the perplexity is still very high. Have you successfully trained your model using these samples? Or can you give me some advice about how to make my training converge? Thank you very much!

The log file:
2016-07-12 10:53:03,497 root INFO step_time: 0.689435, step perplexity: 17.029219
2016-07-12 10:53:03,904 root INFO current_step: 54647
2016-07-12 10:53:04,641 root INFO step_time: 0.736557, step perplexity: 16.870074
2016-07-12 10:53:04,734 root INFO current_step: 54648
2016-07-12 10:53:05,529 root INFO step_time: 0.795520, step perplexity: 16.883686
2016-07-12 10:53:05,693 root INFO current_step: 54649
2016-07-12 10:53:06,493 root INFO step_time: 0.799504, step perplexity: 16.677010
2016-07-12 10:53:06,576 root INFO current_step: 54650
2016-07-12 10:53:07,375 root INFO step_time: 0.799053, step perplexity: 16.259812
2016-07-12 10:53:07,387 root INFO current_step: 54651
2016-07-12 10:53:08,095 root INFO step_time: 0.708000, step perplexity: 17.182491

Error when trying to train model on my own dataset

Hi,

I'm trying to run your code to train my own dataset, however I get a ValueError. This happens even with the toy dataset given in the README.

Here is the full output when I try to train:

Attention-OCR mehulshah$ python src/launcher.py --phase=train --data-path=train-path.txt --data-base-dir=/ --log-path=log.txt --no-load-model
2017-06-02 11:23:40.694013: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-02 11:23:40.694034: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-02 11:23:40.694039: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-02 11:23:40.694043: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-02 11:23:40.694066: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-06-02 11:23:40,694 root INFO loading data
2017-06-02 11:23:40,695 root INFO phase: train
2017-06-02 11:23:40,695 root INFO model_dir: train
2017-06-02 11:23:40,695 root INFO load_model: False
2017-06-02 11:23:40,695 root INFO output_dir: results
2017-06-02 11:23:40,696 root INFO steps_per_checkpoint: 500
2017-06-02 11:23:40,696 root INFO batch_size: 64
2017-06-02 11:23:40,696 root INFO num_epoch: 1000
2017-06-02 11:23:40,696 root INFO learning_rate: 1
2017-06-02 11:23:40,696 root INFO reg_val: 0
2017-06-02 11:23:40,696 root INFO max_gradient_norm: 5.000000
2017-06-02 11:23:40,697 root INFO clip_gradients: True
2017-06-02 11:23:40,697 root INFO valid_target_length inf
2017-06-02 11:23:40,697 root INFO target_vocab_size: 39
2017-06-02 11:23:40,697 root INFO target_embedding_size: 10.000000
2017-06-02 11:23:40,697 root INFO attn_num_hidden: 128
2017-06-02 11:23:40,697 root INFO attn_num_layers: 2
2017-06-02 11:23:40,698 root INFO visualize: True
2017-06-02 11:23:40,698 root INFO buckets
2017-06-02 11:23:40,698 root INFO [(16, 11), (27, 17), (35, 19), (64, 22), (80, 32)]
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
File "src/launcher.py", line 146, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 142, in main
session = sess)
File "/Users/mehulshah/Documents/Ongoing/LPR/Attention-OCR/src/model/model.py", line 151, in init
use_gru = use_gru)
File "/Users/mehulshah/Documents/Ongoing/LPR/Attention-OCR/src/model/seq2seq_model.py", line 141, in init
softmax_loss_function=softmax_loss_function)
File "/Users/mehulshah/Documents/Ongoing/LPR/Attention-OCR/src/model/seq2seq.py", line 993, in model_with_buckets
decoder_inputs[:int(bucket[1])], int(bucket[0]))
File "/Users/mehulshah/Documents/Ongoing/LPR/Attention-OCR/src/model/seq2seq_model.py", line 140, in
self.target_weights, buckets, lambda x, y, z: seq2seq_f(x, y, z, False),
File "/Users/mehulshah/Documents/Ongoing/LPR/Attention-OCR/src/model/seq2seq_model.py", line 122, in seq2seq_f
attn_num_hidden = attn_num_hidden)
File "/Users/mehulshah/Documents/Ongoing/LPR/Attention-OCR/src/model/seq2seq.py", line 675, in embedding_attention_decoder
initial_state_attention=initial_state_attention, attn_num_hidden=attn_num_hidden)
File "/Users/mehulshah/Documents/Ongoing/LPR/Attention-OCR/src/model/seq2seq.py", line 577, in attention_decoder
cell_output, state = cell(x, state)
File "/usr/local/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 953, in call
cur_inp, new_state = cell(cur_inp, cur_state)
File "/usr/local/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 235, in call
with _checked_scope(self, scope or "basic_lstm_cell", reuse=self._reuse):
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in enter
return self.gen.next()
File "/usr/local/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 77, in _checked_scope
type(cell).name))

And here is the error:

ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.BasicLSTMCell object at 0x11a4d0d10> with a different variable scope than its first use. First use of cell was with scope 'embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_0/basic_lstm_cell', this attempt is with scope 'embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_1/basic_lstm_cell'. Please create a new instance of the cell if you would like it to use a different set of weights. If before you were using: MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to: MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]). If before you were using the same cell instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances (one for forward, one for reverse). In May 2017, we will start transitioning this cell's behavior to use existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation, so this error will remain until then.)

I tried applying the change suggested by the error message in seq2seq_model.py:

If before you were using: MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to: MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)])

But it still does not work. Any suggestions?
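
For reference, here is a minimal sketch of exactly what I tried (the variable names are my guesses based on the config printed above, not the repo's exact code):

```python
import tensorflow as tf

# Minimal sketch of the change I tried (names are guesses, not the repo's
# exact code): build a fresh BasicLSTMCell per layer instead of reusing a
# single cell instance across layers.
attn_num_hidden = 128
attn_num_layers = 2

cells = [
    tf.contrib.rnn.BasicLSTMCell(attn_num_hidden,
                                 forget_bias=0.0,
                                 state_is_tuple=False)
    for _ in range(attn_num_layers)
]
cell = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=False)
```

Even with separate cell instances per layer, I still hit the same ValueError.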

CNN feature extraction

Hi, I would like to get the feature vector extracted by the CNN. How can I obtain the output of the last conv layer in this source code?
Thank you in advance
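
Just to make the question concrete, this is roughly what I am trying to do; the tensor names below are placeholders I made up, not the actual names in the graph:

```python
import tensorflow as tf

# Rough sketch of the goal (tensor names are made-up placeholders; the real
# names would have to be looked up in the graph, e.g. by printing
# sess.graph.get_operations()).
def get_cnn_features(sess, image_batch,
                     input_name='input_image:0',
                     feature_name='cnn_output:0'):
    graph = sess.graph
    input_tensor = graph.get_tensor_by_name(input_name)
    feature_tensor = graph.get_tensor_by_name(feature_name)
    # feature_tensor should be the (batch, width, 512) output of the last
    # conv layer, i.e. the "CNN outdim" printed at graph-construction time.
    return sess.run(feature_tensor, feed_dict={input_tensor: image_batch})
```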

'module' object has no attribute 'core_rnn_cell'

Hi,

I have tried the sample training with the command listed below, but got the error described in the title.
TensorFlow version is 0.12.1.
python src/launcher.py --phase=train --data-path=sample/sample.txt --data-base-dir=sample --log-path=log.txt --no-load-model
Detailed log info is listed below.

2017-08-07 17:39:12.301997: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-07 17:39:12.302045: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-07 17:39:12.302056: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-07 17:39:12.302065: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-07 17:39:12.302073: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-07 17:39:15.857019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: Tesla P40
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:02:00.0
Total memory: 22.38GiB
Free memory: 22.22GiB
2017-08-07 17:39:16.239217: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x7bf2770 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2017-08-07 17:39:16.240932: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 1 with properties:
name: Tesla P40
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:03:00.0
Total memory: 22.38GiB
Free memory: 22.22GiB
2017-08-07 17:39:16.626551: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x7bf65c0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2017-08-07 17:39:16.628301: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 2 with properties:
name: Tesla P40
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:83:00.0
Total memory: 22.38GiB
Free memory: 22.22GiB
2017-08-07 17:39:17.002842: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x7c1ad70 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2017-08-07 17:39:17.004574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 3 with properties:
name: Tesla P40
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:84:00.0
Total memory: 22.38GiB
Free memory: 22.22GiB
2017-08-07 17:39:17.006291: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 2
2017-08-07 17:39:17.006319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 3
2017-08-07 17:39:17.006346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 2
2017-08-07 17:39:17.006362: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 3
2017-08-07 17:39:17.006398: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 0
2017-08-07 17:39:17.006412: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 1
2017-08-07 17:39:17.007788: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 0
2017-08-07 17:39:17.007814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 1
2017-08-07 17:39:17.007880: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 1 2 3
2017-08-07 17:39:17.007895: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y Y N N
2017-08-07 17:39:17.007908: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1: Y Y N N
2017-08-07 17:39:17.007920: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 2: N N Y Y
2017-08-07 17:39:17.007941: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 3: N N Y Y
2017-08-07 17:39:17.008000: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla P40, pci bus id: 0000:02:00.0)
2017-08-07 17:39:17.008017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:1) -> (device: 1, name: Tesla P40, pci bus id: 0000:03:00.0)
2017-08-07 17:39:17.008030: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:2) -> (device: 2, name: Tesla P40, pci bus id: 0000:83:00.0)
2017-08-07 17:39:17.008039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:3) -> (device: 3, name: Tesla P40, pci bus id: 0000:84:00.0)
2017-08-07 17:39:17,797 root INFO loading data
2017-08-07 17:39:17,806 root INFO phase: train
2017-08-07 17:39:17,806 root INFO model_dir: train
2017-08-07 17:39:17,807 root INFO load_model: False
2017-08-07 17:39:17,807 root INFO output_dir: results
2017-08-07 17:39:17,807 root INFO steps_per_checkpoint: 500
2017-08-07 17:39:17,807 root INFO batch_size: 64
2017-08-07 17:39:17,807 root INFO num_epoch: 1000
2017-08-07 17:39:17,808 root INFO learning_rate: 1
2017-08-07 17:39:17,808 root INFO reg_val: 0
2017-08-07 17:39:17,808 root INFO max_gradient_norm: 5.000000
2017-08-07 17:39:17,808 root INFO clip_gradients: True
2017-08-07 17:39:17,808 root INFO valid_target_length inf
2017-08-07 17:39:17,808 root INFO target_vocab_size: 39
2017-08-07 17:39:17,809 root INFO target_embedding_size: 10.000000
2017-08-07 17:39:17,809 root INFO attn_num_hidden: 128
2017-08-07 17:39:17,809 root INFO attn_num_layers: 2
2017-08-07 17:39:17,809 root INFO visualize: True
2017-08-07 17:39:17,809 root INFO buckets
2017-08-07 17:39:17,809 root INFO [(16, 11), (27, 17), (35, 19), (64, 22), (80, 32)]
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
File "src/launcher.py", line 146, in
main(sys.argv[1:], exp_config.ExpConfig)
File "src/launcher.py", line 142, in main
session = sess)
File "/home/xxx/ocrbench/Attention-OCR/src/model/model.py", line 151, in init
use_gru = use_gru)
File "/home/xxx/ocrbench/Attention-OCR/src/model/seq2seq_model.py", line 87, in init
single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
AttributeError: 'module' object has no attribute 'core_rnn_cell'
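
In case it helps, a version-compatible workaround I am experimenting with looks like the sketch below. This is my own assumption about where BasicLSTMCell lives in different TensorFlow releases, not an official fix from the repo:

```python
import tensorflow as tf

# Hedged workaround sketch: pick up BasicLSTMCell from wherever the installed
# TensorFlow version exposes it, using the same arguments as line 87 of
# seq2seq_model.py.
def make_lstm_cell(num_units):
    if hasattr(tf.contrib.rnn, 'core_rnn_cell'):
        # Releases that still expose tf.contrib.rnn.core_rnn_cell
        return tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(
            num_units, forget_bias=0.0, state_is_tuple=False)
    if hasattr(tf.contrib.rnn, 'BasicLSTMCell'):
        # TF 1.x releases where the cell lives directly under tf.contrib.rnn
        return tf.contrib.rnn.BasicLSTMCell(
            num_units, forget_bias=0.0, state_is_tuple=False)
    # Older releases (e.g. 0.12.x) keep it under tf.nn.rnn_cell
    return tf.nn.rnn_cell.BasicLSTMCell(
        num_units, forget_bias=0.0, state_is_tuple=False)

single_cell = make_lstm_cell(128)  # attn_num_hidden from the config above
```

This only avoids the AttributeError; I have not verified that the rest of the code works with whichever cell class gets picked.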
