question-generation's People

Contributors

dependabot[bot], tomayoola, tomhosking


question-generation's Issues

Input shape for the tensors

I would like to know the input and output tensor names of the model, along with their dimensions and shapes, or please add some references where I can find them.

q,q_len = self.sess.run([self.model.q_hat_beam_string,self.model.q_hat_beam_lens], feed_dict={self.model.context_in: ctxt_dict, self.model.answer_in: ans_dict})

I suppose the first argument lists the tensors being fetched and the second one supplies the inputs here, so what exactly are the dimensions and shapes the model expects for each?
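As a side note, in sess.run the first argument is the list of tensors to fetch (here the beam-decoded question strings and their lengths) and feed_dict supplies the inputs. A quick, hedged way to see the declared shapes, assuming TF 1.x and the attribute names used above, run in the same scope as the call above:

print(self.model.context_in.get_shape())         # declared shape of the context input
print(self.model.answer_in.get_shape())          # declared shape of the answer input
print(self.model.q_hat_beam_string.get_shape())  # declared shape of the generated question strings

Dimensions that print as ? are dynamic and are set by whatever batch is fed in.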

Replicating the RL paper parameters and running

@rpoli40 Were you able to run it and replicate the results given in the RL paper? I tried, but ran into an error:
ValueError: Dimensions must be equal, but are 2269 and 2446 for 'train_loss/mul_4' (op: 'Mul') with input shapes: [?,?,2269], [?,?,2446].

I kept the advanced encoding & context_as_set FLAGS set to TRUE. Am I doing this right, or do I need to change any other parameters in the flags file?

error when trying to run demo

./demo.sh --model_type MALUUBA --context_as_set --glove_vocab

WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
Spinning up AQ demo app
WARNING:tensorflow:From /home/ec2-user/SageMaker/question-generation-master/src/seq2seq_model.py:418: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Modifying Seq2Seq model to incorporate RL rewards
Total number of trainable parameters: 40934753
2018-11-07 18:40:24.744655: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
return fn(*args)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1312, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1420, in _call_tf_sessionrun
status, run_metadata)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3072,768] rhs shape= [1536,768]
[[Node: save/Assign_4 = Assign[T=DT_FLOAT, _class=["loc:@attn_mech/memory_layer/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](attn_mech/memory_layer/kernel, save/RestoreV2:4)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "./src/demo/app.py", line 80, in
init()
File "./src/demo/app.py", line 77, in init
app.generator.load_from_chkpt(chkpt_path)
File "/home/ec2-user/SageMaker/question-generation-master/src/demo/instance.py", line 33, in load_from_chkpt
saver.restore(self.sess, tf.train.latest_checkpoint(path))
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1775, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
run_metadata)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3072,768] rhs shape= [1536,768]
[[Node: save/Assign_4 = Assign[T=DT_FLOAT, _class=["loc:@attn_mech/memory_layer/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](attn_mech/memory_layer/kernel, save/RestoreV2:4)]]

Caused by op 'save/Assign_4', defined at:
File "./src/demo/app.py", line 80, in
init()
File "./src/demo/app.py", line 77, in init
app.generator.load_from_chkpt(chkpt_path)
File "/home/ec2-user/SageMaker/question-generation-master/src/demo/instance.py", line 32, in load_from_chkpt
saver = tf.train.Saver()
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1311, in init
self.build()
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1320, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1357, in _build
build_save=build_save, build_restore=build_restore)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 809, in _build_internal
restore_sequentially, reshape)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 470, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 162, in restore
self.op.get_shape().is_fully_defined())
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 281, in assign
validate_shape=validate_shape)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3290, in create_op
op_def=op_def)
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1654, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [3072,768] rhs shape= [1536,768]
[[Node: save/Assign_4 = Assign[T=DT_FLOAT, _class=["loc:@attn_mech/memory_layer/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](attn_mech/memory_layer/kernel, save/RestoreV2:4)]]
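A mismatch like lhs shape= [3072,768] vs rhs shape= [1536,768] on attn_mech/memory_layer/kernel usually means the demo graph was built with different flags from the ones the checkpoint was trained with, so the attention memory width differs. One way to check which shape the checkpoint actually stores, assuming a standard TF 1.x checkpoint (the path below is a placeholder, not the repo's default):

import tensorflow as tf

ckpt = tf.train.latest_checkpoint('./models/qgen-example')  # placeholder: the directory being restored
reader = tf.train.NewCheckpointReader(ckpt)
for name, shape in reader.get_variable_to_shape_map().items():
    if 'memory_layer' in name:
        print(name, shape)  # the shape stored in the checkpoint file

If the stored shape does not match what the graph expects, the demo needs to be launched with the same model flags that were used when the checkpoint was produced.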

What do the below hyperparameters indicate?

Hi @tomhosking

Can you please elaborate on what the below hyperparameters indicate?

tf.app.flags.DEFINE_integer("filter_window_size_before", 1, "Filter contexts down to the sentences around the answer. Set -1 to disable filtering")
tf.app.flags.DEFINE_integer("filter_window_size_after", 1, "Filter contexts down to the sentences around the answer. Set -1 to disable filtering")
tf.app.flags.DEFINE_integer("filter_max_tokens", 100, "Filter contexts down to at most this many tokens around the answer. Set -1 to disable filtering")

Thanks,
Bhavika
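Going by the docstrings, these flags control a sentence window around the answer: keep filter_window_size_before sentences before and filter_window_size_after sentences after the sentence containing the answer, then cap the result at filter_max_tokens tokens, with -1 disabling each stage. A rough, hypothetical sketch of that behaviour (not the repository's actual implementation):

def filter_context(sentences, answer_sent_ix, before=1, after=1, max_tokens=100):
    # keep only the sentences in a window around the one containing the answer
    if before >= 0 and after >= 0:
        sentences = sentences[max(0, answer_sent_ix - before):answer_sent_ix + after + 1]
    # flatten to tokens and cap the total length (naively from the start of the window here)
    tokens = [tok for sent in sentences for tok in sent]
    if max_tokens >= 0:
        tokens = tokens[:max_tokens]
    return tokens

With the defaults shown (1, 1, 100), the model would presumably see at most three sentences and 100 tokens of context per example.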

Is this a bug?

In the eval.py, there is a piece of code:
if len(dev_data) < FLAGS.num_eval_samples:
    exit('***ERROR*** Eval dataset is smaller than the num_eval_samples flag!')
if len(dev_data) > FLAGS.num_eval_samples:
    print('***WARNING*** Eval dataset is larger than the num_eval_samples flag!')
There is an error here: when I run this code, the message "ERROR Eval dataset is smaller than the num_eval_samples flag!" occurs.

In my understanding, dev_data means the dataset used during development, while num_eval_samples means the number of test samples?
Therefore, I think I should change len(dev_data) to len(test_data)? Am I right?
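For what it's worth, assuming num_eval_samples is defined as an ordinary command-line flag like the others in this repo, the quicker workaround (rather than editing the check) is usually to pass a value no larger than the dataset you are evaluating on, e.g.:

python eval.py --data_path ./data/ --num_eval_samples 1000

with the exact value depending on how many triples your dev or test split actually contains.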

TypeError: NoneType object is not iterable

I have been getting a TypeError ("NoneType object is not iterable") on the line below:
ctxts[i], ans_pos[i] = preprocessing.filter_context(ctxts[i], ans_pos[i], filter_window_size_before, filter_window_size_after, filter_max_tokens)

After populating the answers and passing them to the generation model batch-wise, it shows the error "Couldn't find the char position" in the filter_context block of preprocessing.
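If filter_context returns None when it cannot locate the answer's character position, the tuple unpacking above is what raises the TypeError. Not the repo's code, but a defensive pattern that surfaces the real problem (names as in the line above):

filtered = preprocessing.filter_context(ctxts[i], ans_pos[i],
                                        filter_window_size_before,
                                        filter_window_size_after,
                                        filter_max_tokens)
if filtered is None:
    # the answer position was not found inside this context; log and skip instead of crashing
    print("Skipping example {}: answer position {} not found in context".format(i, ans_pos[i]))
    continue
ctxts[i], ans_pos[i] = filtered

This at least identifies which examples have answer positions that do not line up with the context text being passed in.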

run problem

Hello, the code has a problem when it runs; the issue is "tensorflow.python.framework.errors_impl.InvalidArgumentError: Paddings must be non-negative: 0 -22",
which occurs at the line "alignments_padded = tf.pad(alignments, [[0, 0], [0, self.units-tf.shape(alignments)[-1]]], 'CONSTANT')".

Could you help to solve it? Thanks!
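The negative padding (0 -22) means tf.shape(alignments)[-1] is larger than self.units at that point, i.e. the attention alignments are wider than the size being padded to. A hedged sketch of a guard that keeps the pad amount non-negative (this masks the symptom; the underlying cause is usually an input longer than the configured maximum, which is worth fixing upstream):

# hypothetical guard around the line quoted above
pad_amount = tf.maximum(self.units - tf.shape(alignments)[-1], 0)
alignments_padded = tf.pad(alignments[:, :self.units], [[0, 0], [0, pad_amount]], 'CONSTANT')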

Could you upload these missing files, which are needed for evaluation?

Looks like the repo is missing some files (the folder models/saved... doesn't exist):
File "/home/ec2-user/SageMaker/question-generation-be134175652204f3bf51cb194454d7b72c8b8105/src/langmodel/lm.py", line 101,in load_from_chkpt
with open(path+'/vocab.json') as f:
FileNotFoundError: [Errno 2] No such file or directory: '../models/saved/lmtest/vocab.json'
Exception ignored in: <bound method LstmLmInstance.__del__ of <langmodel.lm.LstmLmInstance object at 0x7f9d8b8cb550>>
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/question-generation-be134175652204f3bf51cb194454d7b72c8b8105/src/langmodel/lm.py", line 98, in del
self.sess.close()
AttributeError: 'LstmLmInstance' object has no attribute 'sess'
Exception ignored in: <bound method QANetInstance.__del__ of <qa.qanet.instance.QANetInstance object at 0x7f9d8b8cb518>>
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/question-generation-be134175652204f3bf51cb194454d7b72c8b8105/src/qa/qanet/instance.py", line53, in del
self.sess.close()
AttributeError: 'QANetInstance' object has no attribute 'sess'

Could you please upload these missing files, which are needed for evaluation?

Unable to save the model

@tomhosking I am unable to save the checkpoints while training the model. I guess I am unable to figure out what should go in the restore path. Can you help me in this regard?
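In case it helps, the standard TF 1.x pattern for writing checkpoints during training (paths and intervals here are placeholders, not this repo's defaults) is:

saver = tf.train.Saver(max_to_keep=3)   # build once, after the graph is constructed
# ... inside the training loop:
if step % 1000 == 0:
    saver.save(sess, './models/my-run/model.checkpoint', global_step=step)

The restore path then just needs to point at the same directory, e.g. via tf.train.latest_checkpoint('./models/my-run').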

Unrecognised model type: MALUUBA

(.venv) gpuws@gpuws32g:/ub16_prj/question-generation$ bash train.sh
WARNING:tensorflow:From /home/gpuws/ub16_prj/question-generation/.venv/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
Run ID is 1544494702
Model type is MALUUBA
Loaded SQuAD with 75722 triples
Unrecognised model type: MALUUBA
(.venv) gpuws@gpuws32g:~/ub16_prj/question-generation$

Cannot join current thread

Got the following error while training the module !


Eval 1000:  39%|###9      | 259/660 [2:45:08<3:45:04, 33.68s/it]
....
Eval 1000:  47%|####6     | 310/660 [3:17:23<3:48:52, 39.24s/it]
Eval 1000:  47%|####7     | 311/660 [3:17:24<2:41:26, 27.76s/it]Run ID is  1541052899
Model type is  MALUUBA
Loaded SQuAD with  87599  triples
Modifying Seq2Seq model to incorporate RL rewards
Total number of trainable parameters: 39165281
Traceback (most recent call last):
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
    return fn(*args)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.OutOfRangeError: End of sequence
         [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,?], [?,?],[?,?], [?], [?], [?,?], [?,?], [?,?,?], [?], [?,?], [?,?], [?], [?,?], [?]], output_types=[DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./src/train.py", line 415, in <module>
    tf.app.run()
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "./src/train.py", line 373, in main
    dev_batch, curr_batch_size = dev_data_source.get_batch()
  File "C:\QG_blooms\src\datasources\squad_streamer.py", line 43, in get_batch
    return self.sess.run([self.batch_as_nested_tuple, self.batch_len])
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
    run_metadata)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: End of sequence
         [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,?], [?,?],[?,?], [?], [?], [?,?], [?,?], [?,?,?], [?], [?,?], [?,?], [?], [?,?], [?]], output_types=[DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]

Caused by op 'IteratorGetNext', defined at:
  File "./src/train.py", line 415, in <module>
    tf.app.run()
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "./src/train.py", line 145, in main
    with SquadStreamer(vocab, FLAGS.batch_size, FLAGS.num_epochs, shuffle=True)as train_data_source, SquadStreamer(vocab, FLAGS.eval_batch_size, 1, shuffle=True) as dev_data_source:
  File "C:\QG_blooms\src\datasources\squad_streamer.py", line 24, in __enter__
    self.build_data_pipeline(self.batch_size)
  File "C:\QG_blooms\src\datasources\squad_streamer.py", line 107, in build_data_pipeline
    self.batch_as_nested_tuple = self.iterator.get_next()
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 366, in get_next
    name=name)), self._output_types,
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_dataset_ops.py", line 1484, in iterator_get_next
    output_shapes=output_shapes, name=name)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): End of sequence
         [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,?], [?,?],[?,?], [?], [?], [?,?], [?,?], [?,?,?], [?], [?,?], [?,?], [?], [?,?], [?]], output_types=[DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]


Exception ignored in: <bound method tqdm.__del__ of Eval 1000:  47%|####7     |311/660 [3:17:24<2:41:26, 27.76s/it]>
Traceback (most recent call last):
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tqdm\_tqdm.py", line 931, in __del__
    self.close()
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tqdm\_tqdm.py", line 1133, in close
    self._decr_instances(self)
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tqdm\_tqdm.py", line 496, in _decr_instances
    cls.monitor.exit()
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\site-packages\tqdm\_monitor.py", line 52, in exit
    self.join()
  File "C:\Users\Akashtyagi\AppData\Local\Programs\Python\Python36\lib\threading.py", line 1053, in join
    raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread


My system config:
TensorFlow 1.7.0
Python 3.6.7
System RAM: 16GB
Processor: i3 3.30GHz
Graphics card: 1GB
but I have not used GPU-enabled TensorFlow.
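The final RuntimeError ("cannot join current thread") comes from tqdm's monitor thread being torn down while the OutOfRangeError above is still being handled; it is not the root cause. A commonly suggested workaround for the tqdm part, applied before any progress bars are created:

from tqdm import tqdm
tqdm.monitor_interval = 0  # disable tqdm's background monitor thread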

is this repo finished now?

Thanks for your contribution.
I was wondering if this repo is finished, and how can I use my own corpus to train this model? Thanks.

error running demo

When I'm trying to run demo.sh after setup.sh, I get this error

$ ./demo.sh
Traceback (most recent call last):
  File "./src/demo/app.py", line 6, in <module>
    from instance import AQInstance
  File "C:\Users\79270\question-generation\src\demo\instance.py", line 5, in <module>
    from seq2seq_model import Seq2SeqModel
ModuleNotFoundError: No module named 'seq2seq_model'

I tried to install this module using
pip install seq2seq_model
but this module wasn't found

I also tried commenting out the import in instance.py,
but got another error about rl_model
Traceback (most recent call last):
  File "./src/demo/app.py", line 6, in <module>
    from instance import AQInstance
  File "C:\Users\79270\question-generation\src\demo\instance.py", line 6, in <module>
    from rl_model import RLModel
ModuleNotFoundError: No module named 'rl_model'
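seq2seq_model and rl_model are modules inside this repository's src directory, not pip packages, so installing them with pip will not work. The usual fix is to make src importable before the demo's imports run; a hedged sketch, added at the top of src/demo/app.py (or equivalently by putting src on PYTHONPATH):

import os, sys
# make question-generation/src importable so 'from seq2seq_model import Seq2SeqModel' resolves
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..'))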

Not installing in the requirements.txt

"en-core-web-sm==2.0.0" contained in requirements.txt file is not installing. I want to run this code to generate question given context and answer for my dataset, please help. It's urgent.

Please provide proper documentation.

Fine-tuning the hyperparameters

Hi Tom,
Did you fine-tune any of the hyperparameters that you have shared in the FLAGS file? I tried with the same settings and got a BLEU score of 14.17 and NLL of 40.33, with the GloVe vocab and PG set to FALSE. Can you let me know the exact split of dev and test set samples that you used for the evaluation, as this high score may be biased by the different split I took?

Debugging the code in VSC

It is not an issue

I just want to debug the code as a Flask application (app.py ~ demo.sh), and I find it hard to configure the launch.json in Visual Studio Code. Can someone help me with this?

Model dropping phrases/words while generating questions

Thanks, @tomhosking for sharing the code.

In the majority of cases, your code is able to generate good-quality questions. However, in some cases, it generates questions that truncate useful phrases.

For example:

sentence :
Narendra Damodardas Modi is an Indian politician serving as the 14th and current Prime Minister of India since 2014.
Selected Answer: Narendra Damodardas Modi
Generated Question: who is an indian politician serving ?

sentence: As of 2017, Ahmedabad's estimated gross domestic product was $68 billion.
selected Answer : 2017
Generated Question: in what year was ahmedabad 's estimated gross domestic product ?

Can you please help me to solve this problem? Why is the model dropping the phrase/word while generating the question?

Thanks
Bhavika

Unable to use tensorflow_hub

I am planning to use the Universal Sentence Encoder in app.py, and when I try to import tensorflow_hub it always throws this error:

absl.flags._exceptions.DuplicateFlagError: The flag 'log_dir' is defined twice. First from absl.logging, Second from flags.  Description from first occurrence: directory to write logfiles into

How do I fix this?
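The clash is between the log_dir flag that absl.logging registers and a log_dir flag defined again by this project (the error message says "Second from flags"); importing tensorflow_hub pulls absl in and triggers the duplicate registration. One hedged workaround is to guard (or rename) the project's definition, assuming tf.app.flags wraps absl's FlagValues, which supports the in operator:

import tensorflow as tf

# only define log_dir if absl.logging has not already registered it
if 'log_dir' not in tf.app.flags.FLAGS:
    tf.app.flags.DEFINE_string('log_dir', './logs/', 'directory to write logfiles into')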

Cannot join current thread

Got the following error while training the module !

Run ID is 1562759522
Model type is RL-S2S
<_io.TextIOWrapper name='./data/train-v1.1.json' mode='r' encoding='ANSI_X3.4-1968'>
<_io.TextIOWrapper name='./data/dev-v1.1.json' mode='r' encoding='ANSI_X3.4-1968'>
Loaded SQuAD with 88825 triples
50131 300

WARNING:tensorflow:From /content/clouderizer/bloomsburyai_question-generation/code/src/seq2seq_model.py:126: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is deprecated, please use tf.nn.rnn_cell.LSTMCell, which supports all the feature this cell currently has. Please replace the existing code with tf.nn.rnn_cell.LSTMCell(name='basic_lstm_cell').
WARNING:tensorflow:From /content/clouderizer/bloomsburyai_question-generation/code/src/seq2seq_model.py:444: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Modifying Seq2Seq model to incorporate RL rewards
Total number of trainable parameters: 34871537
2019-07-10 11:54:30.456140: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
Training: 3%|6 | 1000/34675 [1:39:42<58:27:08, 6.25s/it]

Eval 1000: 0%| | 1/660 [00:06<1:08:28, 6.24s/it]
....
Eval 1000: 100%|##############################| 660/660 [46:26<00:00, 3.96s/it]
New best NLL! 65.91491210731593 Saving...
Training: 3%|6 | 1016/34675 [2:27:42<87:41:03, 9.38s/it]Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 0-th value returned by pyfunc_22 is double, but expects string
[[{{node PyFunc_2}} = PyFunc[Tin=[DT_STRING, DT_INT32, DT_STRING], Tout=[DT_STRING, DT_INT32, DT_INT32, DT_INT32], token="pyfunc_22", _device="/device:CPU:*"](arg2, arg3, arg0)]]
[[{{node IteratorGetNext}} = IteratorGetNextoutput_shapes=[[?,?], [?,?], [?,?], [?], [?], [?,?], [?,?], [?,?,?], [?], [?,?], [?,?], [?], [?,?], [?]], output_types=[DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "./src/train.py", line 486, in
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "./src/train.py", line 204, in main
train_batch, curr_batch_size = train_data_source.get_batch()
File "/content/clouderizer/bloomsburyai_question-generation/code/src/datasources/squad_streamer.py", line 43, in get_batch
return self.sess.run([self.batch_as_nested_tuple, self.batch_len])
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 0-th value returned by pyfunc_22 is double, but expects string
[[{{node PyFunc_2}} = PyFunc[Tin=[DT_STRING, DT_INT32, DT_STRING], Tout=[DT_STRING, DT_INT32, DT_INT32, DT_INT32], token="pyfunc_22", _device="/device:CPU:*"](arg2, arg3, arg0)]]
[[node IteratorGetNext (defined at /content/clouderizer/bloomsburyai_question-generation/code/src/datasources/squad_streamer.py:107) = IteratorGetNextoutput_shapes=[[?,?], [?,?], [?,?], [?], [?], [?,?], [?,?], [?,?,?], [?], [?,?], [?,?], [?], [?,?], [?]], output_types=[DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_STRING, DT_INT32, DT_FLOAT, DT_INT32, DT_STRING, DT_INT32, DT_INT32, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Exception ignored in: <bound method tqdm.__del__ of Training: 3%|6 | 1016/34675 [2:27:43<87:41:03, 9.38s/it]>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 931, in del
self.close()
File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 1133, in close
self._decr_instances(self)
File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 496, in _decr_instances
cls.monitor.exit()
File "/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py", line 52, in exit
self.join()
File "/usr/lib/python3.6/threading.py", line 1053, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
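The underlying failure here ("0-th value returned by pyfunc_22 is double, but expects string") means a Python function wrapped with tf.py_func in the data pipeline returned a floating-point value in a slot declared as tf.string, which usually indicates that a text field arrived as a number or NaN (or that the declared Tout list does not match what the function returns). A generic defensive cast, not the repo's code, that could be applied to such a value before returning it from the py_func:

def to_bytes(value):
    # tf.py_func outputs declared as tf.string must be bytes (or str), never numpy floats
    if isinstance(value, bytes):
        return value
    return str(value).encode('utf-8')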

AttributeError: can't set attribute

While trying to implement the code, I ran into an error when executing train.sh; it shows an AttributeError as follows:

$ bash train.sh
Run ID is  1539597253
Model type is  MALUUBA
Loaded SQuAD with  87599  triples
C:\Users\Akashtyagi\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
WARNING:tensorflow:From C:\Users\Akashtyagi\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py:417: calling reverse_sequence (from tensorflow.python.ops.array_ops) with seq_dim is deprecated and will be removed in a future version.
Instructions for updating:
seq_dim is deprecated, use seq_axis instead
WARNING:tensorflow:From C:\Users\Akashtyagi\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py:432: calling reverse_sequence (from tensorflow.python.ops.array_ops) with batch_dim is deprecated and will be removed in a future version.
Instructions for updating:
batch_dim is deprecated, use batch_axis instead
Traceback (most recent call last):
  File "./src/train.py", line 415, in <module>
    tf.app.run()
  File "C:\Users\Akashtyagi\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "./src/train.py", line 135, in main
    model = MaluubaModel(vocab, training_mode=True, use_embedding_loss=FLAGS.embedding_loss)
  File "D:\Pythonn\blooms_QG\question-generation\src\maluuba_model.py", line 20, in __init__
    super().__init__(vocab, advanced_condition_encoding=True, training_mode=training_mode, use_embedding_loss=use_embedding_loss)
  File "D:\Pythonn\blooms_QG\question-generation\src\seq2seq_model.py", line 36, in __init__
    super().__init__()
  File "D:\Pythonn\blooms_QG\question-generation\src\base_model.py", line 14, in __init__
    self.build_model()
  File "D:\Pythonn\blooms_QG\question-generation\src\seq2seq_model.py", line 298, in build_model
    name="copy_layer")
  File "D:\Pythonn\blooms_QG\question-generation\src\copy_mechanism\copy_layer.py", line 136, in __init__
    self.output_mask=output_mask
AttributeError: can't set attribute

z

Error: Assign requires shapes of both tensors to match. lhs shape= [1536,768] rhs shape= [3072,768]

Used train.py --advanced_condition_encoding --nocontext_as_set to retrain the model after more data was added to an existing SQuAD training dataset. The model was trained successfully.
Redirected the paths to the new model, and when running python ./src/demo/instance.py I am facing the following error:

here
here2 ./models/qgen/RL-MALUUBA/1547154689\model.checkpoint-29000 <tensorflow.python.client.session.Session object at 0x00000292C9ECEA20>
Traceback (most recent call last):
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
return fn(*args)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
status, run_metadata)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1536,768] rhs shape= [3072,768]
[[Node: save/Assign_4 = Assign[T=DT_FLOAT, _class=["loc:@attn_mech/memory_layer/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](attn_mech/memory_layer/kernel, save/RestoreV2:4)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
return fn(*args)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
status, run_metadata)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1536,768] rhs shape= [3072,768]
[[Node: save/Assign_4 = Assign[T=DT_FLOAT, _class=["loc:@attn_mech/memory_layer/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](attn_mech/memory_layer/kernel, save/RestoreV2:4)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "./src/demo/instance.py", line 76, in
tf.app.run()
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "./src/demo/instance.py", line 60, in main
generator.load_from_chkpt(chkpt_path)
File "./src/demo/instance.py", line 31, in load_from_chkpt
saver.restore(self.sess, tf.train.latest_checkpoint(path))
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1775, in restore
{self.saver_def.filename_tensor_name: save_path})
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
run_metadata_ptr)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
run_metadata)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1536,768] rhs shape= [3072,768]
[[Node: save/Assign_4 = Assign[T=DT_FLOAT, _class=["loc:@attn_mech/memory_layer/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](attn_mech/memory_layer/kernel, save/RestoreV2:4)]]

Caused by op 'save/Assign_4', defined at:
File "./src/demo/instance.py", line 76, in
tf.app.run()
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "./src/demo/instance.py", line 60, in main
generator.load_from_chkpt(chkpt_path)
File "./src/demo/instance.py", line 30, in load_from_chkpt
saver = tf.train.Saver()
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1311, in init
self.build()
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1320, in build
self._build(self._filename, build_save=True, build_restore=True)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1357, in _build
build_save=build_save, build_restore=build_restore)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 809, in _build_internal
restore_sequentially, reshape)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 470, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 162, in restore
self.op.get_shape().is_fully_defined())
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\ops\state_ops.py", line 281, in assign
validate_shape=validate_shape)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 60, in assign
use_locking=use_locking, name=name)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
op_def=op_def)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1536,768] rhs shape= [3072,768]
[[Node: save/Assign_4 = Assign[T=DT_FLOAT, _class=["loc:@attn_mech/memory_layer/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](attn_mech/memory_layer/kernel, save/RestoreV2:4)]]

Looks like the error is coming from load_from_chkpt:

def load_from_chkpt(self, path):
    print("here")
    self.chkpt_path = path
    with self.model.graph.as_default():
        saver = tf.train.Saver()
        print("here2", tf.train.latest_checkpoint(path), self.sess)
        saver.restore(self.sess, tf.train.latest_checkpoint(path))
        print("################################Loaded model from "+path)

I can see the prints of "here" and "here2", but not the print("################################Loaded model from "+path).

Any suggestions as to why this error might happen?
Thank you in advance
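This is the reverse of the error reported in the demo issue above: here the graph expects [1536,768] while the checkpoint stores [3072,768], which again suggests the graph was built with different flags from those used in training (the run above used --advanced_condition_encoding --nocontext_as_set, so the restoring script presumably needs matching settings). A quick, hedged way to compare what the checkpoint contains with what the graph expects (TF 1.x sketch, using the same names as in load_from_chkpt above):

with self.model.graph.as_default():
    kernel = [v for v in tf.global_variables() if 'memory_layer/kernel' in v.name][0]
    print('graph expects:', kernel.get_shape().as_list())
reader = tf.train.NewCheckpointReader(tf.train.latest_checkpoint(path))
print('checkpoint has:', reader.get_variable_to_shape_map()['attn_mech/memory_layer/kernel'])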

Unable to find the Discriminator Instance & the saved disc_path

Hi Tom,
I was trying to use the policy-gradient technique. I used the latest commit regarding the LM and QA saved models, but I wasn't able to find the Discriminator instance saved model (i.e. the disc_emb files, etc.) which is used in the calculation of the RL score. Can you help me in this regard?

List Index Out of Range

Hi, I am getting a list index out of range error on the lines below:

qs.extend(self.get_q_batches(ctxts[start_ix:end_ix], ans[start_ix:end_ix], ans_pos[start_ix:end_ix]))
ctxt_feats[0] = np.array(ctxt_feats[0], dtype=bytes)

Could you please help me regarding the same?

error when running eval.py

Used the recommended commit (be13417) and tried to run python eval.py --data_path ./data/.
Got the following error:
sh-4.2$ python eval.py --data_path ./data/
WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
Traceback (most recent call last):
File "eval.py", line 265, in
tf.app.run()
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "eval.py", line 41, in main
dev_data = loader.load_squad_triples(FLAGS.data_path, dev=FLAGS.eval_on_dev, test=FLAGS.eval_on_test)
File "/home/ec2-user/SageMaker/question-generation-be134175652204f3bf51cb194454d7b72c8b8105/src/helpers/loader.py", line 32, in load_squad_triples
raw_data = load_squad_dataset(path, dev=dev, test=test, v2=v2)
File "/home/ec2-user/SageMaker/question-generation-be134175652204f3bf51cb194454d7b72c8b8105/src/helpers/loader.py", line 23, in load_squad_dataset
with open(path+filename) as dataset_file:
FileNotFoundError: [Errno 2] No such file or directory: './data/dev-v1.1.json'
sh-4.2$ python eval.py --data_path ../data/
WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
ERROR Eval dataset is smaller than the num_eval_samples flag!
sh-4.2$
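Two separate things seem to be happening above: the first run fails because ./data/dev-v1.1.json does not exist relative to the working directory (the second run with ../data/ appears to find it), and that second run then hits the same num_eval_samples size check discussed in the "Is this a bug?" issue above. If the dev file is genuinely missing, it can be fetched from the official SQuAD site (URL correct at the time of writing) into the data directory, e.g.:

import urllib.request

url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json'
urllib.request.urlretrieve(url, './data/dev-v1.1.json')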

How can I run the demo without using the Flask app?
Thank you
