
marketvectors's Introduction

MarketVectors

Implementations for my blog post here

marketvectors's People

Contributors

talolard


marketvectors's Issues

ValueError: Trying to share variable rnn/attention_cell_wrapper/multi_rnn_cell/cell_0/gru_cell/gates/kernel, but specified shape (200, 200) and found shape (2428, 200).

Training the RNN

with tf.Graph().as_default():
    model = RNNModel()
    input_ = train[0]
    target = train[1]
    with tf.Session() as sess:
        init = tf.initialize_all_variables()
        sess.run([init])
        loss = 2000

        for e in range(NUM_EPOCHS):
            state = sess.run(model.zero_state)
            epoch_loss = 0
            for batch in range(0, NUM_TRAIN_BATCHES):
                start = batch * BATCH_SIZE
                end = start + BATCH_SIZE
                feed = {
                    model.input_data: input_[start:end],
                    model.target_data: target[start:end],
                    model.dropout_prob: 0.5,
                    model.start_state: state,
                }
                _, loss, acc, state = sess.run(
                    [model.train_op, model.loss, model.accuracy, model.end_state],
                    feed_dict=feed)
                epoch_loss += loss

            print('step - {0} loss - {1} acc - {2}'.format(e, epoch_loss, acc))

        final_preds = np.array([])
        for batch in range(0, NUM_VAL_BATCHES):
            start = batch * BATCH_SIZE
            end = start + BATCH_SIZE
            feed = {
                model.input_data: val[0][start:end],
                model.target_data: val[1][start:end],
                model.dropout_prob: 1,
                model.start_state: state,
            }
            acc, preds, state = sess.run(
                [model.accuracy, model.predictions, model.end_state],
                feed_dict=feed)
            print(acc)
            assert len(preds) == BATCH_SIZE
            final_preds = np.concatenate((final_preds, preds), axis=0)

WARNING:tensorflow:<tensorflow.contrib.rnn.python.ops.rnn_cell.AttentionCellWrapper object at 0x0000021E0CFB02B0>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.


ValueError Traceback (most recent call last)
in ()
1 with tf.Graph().as_default():
----> 2 model = RNNModel()
3 input_ = train[0]
4 target = train[1]
5 with tf.Session() as sess:

in __init__(self)
44 scope.reuse_variables()
45
---> 46 output, state = self.gru_cell(inp, state)
47 states.append(state)
48 outputs.append(output)

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
178 with vs.variable_scope(vs.get_variable_scope(),
179 custom_getter=self._rnn_get_variable):
--> 180 return super(RNNCell, self).__call__(inputs, state)
181
182 def _rnn_get_variable(self, getter, *args, **kwargs):

c:\python35\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
439 # Check input assumptions set after layer building, e.g. input shape.
440 self._assert_input_compatibility(inputs)
--> 441 outputs = self.call(inputs, *args, **kwargs)
442
443 # Apply activity regularization.

c:\python35\lib\site-packages\tensorflow\contrib\rnn\python\ops\rnn_cell.py in call(self, inputs, state)
1111 input_size = inputs.get_shape().as_list()[1]
1112 inputs = _linear([inputs, attns], input_size, True)
-> 1113 lstm_output, new_state = self._cell(inputs, state)
1114 if self._state_is_tuple:
1115 new_state_cat = array_ops.concat(nest.flatten(new_state), 1)

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
178 with vs.variable_scope(vs.get_variable_scope(),
179 custom_getter=self._rnn_get_variable):
--> 180 return super(RNNCell, self).__call__(inputs, state)
181
182 def _rnn_get_variable(self, getter, *args, **kwargs):

c:\python35\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
439 # Check input assumptions set after layer building, e.g. input shape.
440 self._assert_input_compatibility(inputs)
--> 441 outputs = self.call(inputs, *args, **kwargs)
442
443 # Apply activity regularization.

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
914 [-1, cell.state_size])
915 cur_state_pos += cell.state_size
--> 916 cur_inp, new_state = cell(cur_inp, cur_state)
917 new_states.append(new_state)
918

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
178 with vs.variable_scope(vs.get_variable_scope(),
179 custom_getter=self._rnn_get_variable):
--> 180 return super(RNNCell, self).__call__(inputs, state)
181
182 def _rnn_get_variable(self, getter, *args, **kwargs):

c:\python35\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
439 # Check input assumptions set after layer building, e.g. input shape.
440 self._assert_input_compatibility(inputs)
--> 441 outputs = self.call(inputs, *args, **kwargs)
442
443 # Apply activity regularization.

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
293 value = math_ops.sigmoid(
294 _linear([inputs, state], 2 * self._num_units, True, bias_ones,
--> 295 self._kernel_initializer))
296 r, u = array_ops.split(value=value, num_or_size_splits=2, axis=1)
297 with vs.variable_scope("candidate"):

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _linear(args, output_size, bias, bias_initializer, kernel_initializer)
1015 _WEIGHTS_VARIABLE_NAME, [total_arg_size, output_size],
1016 dtype=dtype,
-> 1017 initializer=kernel_initializer)
1018 if len(args) == 1:
1019 res = math_ops.matmul(args[0], weights)

c:\python35\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
1063 collections=collections, caching_device=caching_device,
1064 partitioner=partitioner, validate_shape=validate_shape,
-> 1065 use_resource=use_resource, custom_getter=custom_getter)
1066 get_variable_or_local_docstring = (
1067 """%s

c:\python35\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
960 collections=collections, caching_device=caching_device,
961 partitioner=partitioner, validate_shape=validate_shape,
--> 962 use_resource=use_resource, custom_getter=custom_getter)
963
964 def _get_partitioned_variable(self,

c:\python35\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
358 reuse=reuse, trainable=trainable, collections=collections,
359 caching_device=caching_device, partitioner=partitioner,
--> 360 validate_shape=validate_shape, use_resource=use_resource)
361 else:
362 return _true_getter(

c:\python35\lib\site-packages\tensorflow\python\ops\variable_scope.py in wrapped_custom_getter(getter, *args, **kwargs)
1403 return custom_getter(
1404 functools.partial(old_getter, getter),
-> 1405 *args, **kwargs)
1406 return wrapped_custom_getter
1407

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _rnn_get_variable(self, getter, *args, **kwargs)
181
182 def _rnn_get_variable(self, getter, *args, **kwargs):
--> 183 variable = getter(*args, **kwargs)
184 trainable = (variable in tf_variables.trainable_variables() or
185 (isinstance(variable, tf_variables.PartitionedVariable) and

c:\python35\lib\site-packages\tensorflow\python\ops\variable_scope.py in wrapped_custom_getter(getter, *args, **kwargs)
1403 return custom_getter(
1404 functools.partial(old_getter, getter),
-> 1405 *args, **kwargs)
1406 return wrapped_custom_getter
1407

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _rnn_get_variable(self, getter, *args, **kwargs)
181
182 def _rnn_get_variable(self, getter, *args, **kwargs):
--> 183 variable = getter(*args, **kwargs)
184 trainable = (variable in tf_variables.trainable_variables() or
185 (isinstance(variable, tf_variables.PartitionedVariable) and

c:\python35\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _rnn_get_variable(self, getter, *args, **kwargs)
181
182 def _rnn_get_variable(self, getter, *args, **kwargs):
--> 183 variable = getter(*args, **kwargs)
184 trainable = (variable in tf_variables.trainable_variables() or
185 (isinstance(variable, tf_variables.PartitionedVariable) and

c:\python35\lib\site-packages\tensorflow\python\ops\variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource)
350 trainable=trainable, collections=collections,
351 caching_device=caching_device, validate_shape=validate_shape,
--> 352 use_resource=use_resource)
353
354 if custom_getter is not None:

c:\python35\lib\site-packages\tensorflow\python\ops\variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource)
667 raise ValueError("Trying to share variable %s, but specified shape %s"
668 " and found shape %s." % (name, shape,
--> 669 found_var.get_shape()))
670 if not dtype.is_compatible_with(found_var.dtype):
671 dtype_str = dtype.name

ValueError: Trying to share variable rnn/attention_cell_wrapper/multi_rnn_cell/cell_0/gru_cell/gates/kernel, but specified shape (200, 200) and found shape (2428, 200).
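
A possible workaround, offered as a hedged sketch rather than the repo's actual fix: the mismatch between the specified shape (200, 200) and the found shape (2428, 200) suggests the GRU gate kernel was first built against a wider input than a later call provided, which can happen when a manually unrolled loop reuses variables across calls that see different input widths. Letting tf.nn.dynamic_rnn drive the cell sidesteps the manual reuse bookkeeping. All sizes below are hypothetical:

import tensorflow as tf

NUM_UNITS, NUM_LAYERS, ATTN_LENGTH = 100, 2, 10  # hypothetical sizes
NUM_FEATURES = 2328                              # hypothetical feature width

cells = [tf.contrib.rnn.GRUCell(NUM_UNITS) for _ in range(NUM_LAYERS)]
cell = tf.contrib.rnn.AttentionCellWrapper(
    tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True),
    attn_length=ATTN_LENGTH, state_is_tuple=True)

# inputs are [batch, time, features]; dynamic_rnn creates the cell's
# variables once and reuses them for every timestep internally
inputs = tf.placeholder(tf.float32, [None, None, NUM_FEATURES])
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)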

MarketVectors Error after import from iPython to Python, error not related...

ERROR

('self.logits = ', <tf.Tensor 'ff/fully_connected_2/BiasAdd:0' shape=(?, 11) dtype=float32>)
('self.target_data', <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=int32>)
Traceback (most recent call last):
File "./preparedata-manual-upgraded.py", line 204, in
model = Model()
File "./preparedata-manual-upgraded.py", line 187, in init
self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.logits,logits=self.target_data)
File "/home/steven/Practical-DataScience/DataScience/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1686, in sparse_softmax_cross_entropy_with_logits
(labels_static_shape.ndims, logits.get_shape().ndims))
ValueError: Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 1)

CODE IN QUESTION

class Model():
    def __init__(self):
        global_step = tf.contrib.framework.get_or_create_global_step()
        self.input_data = tf.placeholder(dtype=tf.float32, shape=[None, num_features])
        self.target_data = tf.placeholder(dtype=tf.int32, shape=[None])
        self.dropout_prob = tf.placeholder(dtype=tf.float32, shape=[])
        with tf.variable_scope("ff"):
            droped_input = tf.nn.dropout(self.input_data, keep_prob=self.dropout_prob)

            layer_1 = tf.contrib.layers.fully_connected(
                num_outputs=hidden_1_size,
                inputs=droped_input,
            )
            layer_2 = tf.contrib.layers.fully_connected(
                num_outputs=hidden_2_size,
                inputs=layer_1,
            )
            self.logits = tf.contrib.layers.fully_connected(
                num_outputs=num_classes,
                activation_fn=None,
                inputs=layer_2,
            )
        with tf.variable_scope("loss"):
            print("self.logits = ", self.logits)
            print("self.target_data", self.target_data)

            exit()

            self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.logits, logits=self.target_data)
            mask = (1 - tf.sign(1 - self.target_data))  # Don't give credit for flat days
            mask = tf.cast(mask, tf.float32)
            self.loss = tf.reduce_sum(self.losses)

        with tf.name_scope("train"):
            opt = tf.train.AdamOptimizer(lr)
            gvs = opt.compute_gradients(self.loss)
            self.train_op = opt.apply_gradients(gvs, global_step=global_step)

        with tf.name_scope("predictions"):
            self.probs = tf.nn.softmax(self.logits)
            self.predictions = tf.argmax(self.probs, 1)
            correct_pred = tf.cast(tf.equal(self.predictions, tf.cast(self.target_data, tf.int64)), tf.float64)
            self.accuracy = tf.reduce_mean(correct_pred)

PRINTED OUTPUT OF VARIABLES BEFORE ENTERING THE FUNCTION

[[ 2 3 11 6 1 7 7 3 3 4 5]
[ 2 3 8 7 8 7 6 2 2 2 3]
[ 1 4 9 5 2 13 5 11 5 3 2]
[ 1 6 7 8 5 15 6 1 7 4 2]
[ 1 3 6 2 3 9 10 5 7 4 0]
[ 0 5 11 3 3 6 6 4 5 6 2]
[ 1 3 15 3 3 12 12 1 4 2 4]
[ 0 4 8 3 3 8 12 2 10 3 0]
[ 0 2 16 6 3 9 12 2 3 0 1]
[ 0 7 11 5 2 7 10 4 4 4 2]
[ 3 3 10 6 6 9 6 4 1 4 0]]
[[91 37 75]
[76 35 89]
[92 30 75]]


RELEVANT TENSORFLOW DOC

https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits
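
For reference, a minimal sketch (not the repo's code) of the shapes this function expects: integer class labels whose rank is one less than the logits. In the snippet above the labels and logits arguments appear swapped, which is what produces the rank mismatch. Sizes here are hypothetical:

import numpy as np
import tensorflow as tf

num_classes = 11
logits = tf.placeholder(tf.float32, [None, num_classes])  # rank 2: [batch, classes]
labels = tf.placeholder(tf.int32, [None])                 # rank 1: [batch]

# labels are the integer targets, logits the network output, both passed by name
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    out = sess.run(losses, feed_dict={
        logits: np.random.randn(4, num_classes).astype(np.float32),
        labels: np.array([0, 3, 10, 5], dtype=np.int32),
    })
    print(out.shape)  # (4,)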

pivot_table causes precision loss

https://github.com/pandas-dev/pandas/issues/15091
I fixed it by passing a numpy array. Ugly, but it works.

c_2_o = pd.DataFrame()
h_2_o = pd.DataFrame()
l_2_o = pd.DataFrame()
c_2_h = pd.DataFrame()
h_2_l = pd.DataFrame()
c1_c0 = pd.DataFrame()
vol = pd.DataFrame()

def make_inputs(filepath):
    # Load the dataframe and assign column names
    D = pd.read_csv(filepath, header=None, names=['ticker', 'o', 'h', 'l', 'c', 'v'])
    # Set the index to a datetime
    D.index = pd.to_datetime(D.index, format='%Y%m%d')
    ticker = str(get_ticker(filepath))

    c_2_o[(ticker + '_c_2_o')] = zscore(ret(D.o, D.c))
    h_2_o[(ticker + '_h_2_o')] = zscore(ret(D.o, D.h))
    l_2_o[(ticker + '_l_2_o')] = zscore(ret(D.o, D.l))
    c_2_h[(ticker + '_c_2_h')] = zscore(ret(D.h, D.c))
    h_2_l[(ticker + '_h_2_l')] = zscore(ret(D.h, D.l))
    c1_c0[(ticker + '_c1_c0')] = ret(D.c, D.c.shift(-1)).fillna(0)  # Tomorrow's return
    vol[(ticker + '_vol')] = zscore(D.v)

for f in os.listdir(datapath):
    filepath = os.path.join(datapath, f)
    if filepath.endswith('.csv'):
        make_inputs(filepath)

dates = c_2_o.index
pivot = (list(c_2_o.columns) + list(h_2_o.columns) + list(l_2_o.columns) +
         list(c_2_h.columns) + list(h_2_l.columns) + list(c1_c0.columns) +
         list(vol.columns))
values = np.concatenate((c_2_o.values, h_2_o.values, l_2_o.values, c_2_h.values,
                         h_2_l.values, c1_c0.values, vol.values), axis=1)
flat = pd.DataFrame(values, index=dates, columns=pivot)

tensorflow 1.x update needed for RNN model

To work with the latest TF package, you'll need to make certain modifications to the last part, namely the RNN code.
tf.nn.rnn_cell is no longer there; you'll need to use tf.contrib.rnn.* instead (the function names are the same).
Also, I'm trying to use state_is_tuple=True, since it's recommended now, but failed...

BTW, why is the RNN training loss still 1000+ at the end? Does that mean anything?
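
For anyone attempting the same migration, a minimal sketch of the TF 1.x spelling with state_is_tuple=True; the sizes are hypothetical, and note that the state then becomes a nested tuple of tensors, so the notebook's single start_state placeholder would need one placeholder per state tensor:

import tensorflow as tf

NUM_UNITS, NUM_LAYERS, ATTN_LENGTH = 200, 2, 10  # hypothetical sizes

# Old (TF <= 0.12): tf.nn.rnn_cell.GRUCell / MultiRNNCell / AttentionCellWrapper
# New (TF 1.x):     the same names live under tf.contrib.rnn
cells = [tf.contrib.rnn.GRUCell(NUM_UNITS) for _ in range(NUM_LAYERS)]
stacked = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True)
cell = tf.contrib.rnn.AttentionCellWrapper(stacked, attn_length=ATTN_LENGTH,
                                           state_is_tuple=True)

# zero_state is now a nested tuple, not one concatenated tensor
zero_state = cell.zero_state(32, tf.float32)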

log return to simple return

In [23]: TotalReturn = ((1 - exp(TargetDF)).sum(1)) / num_stocks

Normally, to go from a log return to a simple return, we do:
R = exp(r) - 1
But you have done the opposite:
R = 1 - exp(r)

Is there something I misunderstood?
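
A quick numeric check of the two expressions; they differ only in sign:

from numpy import exp

r = 0.05           # a 5% log return
print(exp(r) - 1)  # 0.0513  -> standard log-to-simple conversion
print(1 - exp(r))  # -0.0513 -> the expression used in the notebook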

class Model()

I am getting the following error messages when I try to run the TensorFlow portion of preparedata.ipynb. Not sure where to go from here.
with tf.Graph().as_default():
    model = Model()
    input_ = train[0]
    target = train[1]
    with tf.Session() as sess:
        init = tf.initialize_all_variables()
        sess.run([init])
        epoch_loss = 0
        for e in range(NUM_EPOCHS):
            if epoch_loss > 0 and epoch_loss < 1:
                break
            epoch_loss = 0
            for batch in range(0, NUM_TRAIN_BATCHES):
                start = batch * BATCH_SIZE
                end = start + BATCH_SIZE
                feed = {
                    model.input_data: input_[start:end],
                    model.target_data: target[start:end],
                    model.dropout_prob: 0.9,
                }
                _, loss, acc = sess.run(
                    [model.train_op, model.loss, model.accuracy],
                    feed_dict=feed)
                epoch_loss += loss
            print('step - {0} loss - {1} acc - {2}'.format((1 + batch + NUM_TRAIN_BATCHES * e), epoch_loss, acc))

        print('done training')
        final_preds = np.array([])
        final_probs = None
        for batch in range(0, NUM_VAL_BATCHES):
            start = batch * BATCH_SIZE
            end = start + BATCH_SIZE
            feed = {
                model.input_data: val[0][start:end],
                model.target_data: val[1][start:end],
                model.dropout_prob: 1,
            }
            acc, preds, probs = sess.run(
                [model.accuracy, model.predictions, model.probs],
                feed_dict=feed)
            print(acc)
            final_preds = np.concatenate((final_preds, preds), axis=0)
            if final_probs is None:
                final_probs = probs
            else:
                final_probs = np.concatenate((final_probs, probs), axis=0)
        prediction_conf = final_probs[np.argmax(final_probs, 1)]

ValueError Traceback (most recent call last)
in ()
1 with tf.Graph().as_default():
----> 2 model = Model()
3 input_ = train[0]
4 target = train[1]
5 with tf.Session() as sess:

in __init__(self)
23 with tf.variable_scope("loss"):
24
---> 25 self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(self.logits,self.target_data)
26 mask = (1-tf.sign(1-self.target_data)) #Don't give credit for flat days
27 mask = tf.cast(mask,tf.float32)

/opt/intel/intelpython3/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py in sparse_softmax_cross_entropy_with_logits(_sentinel, labels, logits, name)
2011 """
2012 _ensure_xent_args("sparse_softmax_cross_entropy_with_logits", _sentinel,
-> 2013 labels, logits)
2014
2015 # TODO(pcmurray) Raise an error when the label is not an index in

/opt/intel/intelpython3/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py in _ensure_xent_args(name, sentinel, labels, logits)
1777 if sentinel is not None:
1778 raise ValueError("Only call %s with "
-> 1779 "named arguments (labels=..., logits=..., ...)" % name)
1780 if labels is None or logits is None:
1781 raise ValueError("Both labels and logits must be provided.")

ValueError: Only call sparse_softmax_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...)
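
The fix is mechanical: since TF 1.0 this function only accepts named arguments, so the call above presumably wants to become (with labels as the integer targets and logits as the network output):

self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=self.target_data, logits=self.logits)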

Tensorflow v1.4 compatibility: ValueError: Dimensions must be equal, but are 200 and 2428 for 'rnn/rnn/attention_cell_wrapper/attention_cell_wrapper_1/multi_rnn_cell/cell_0/cell_0/gru_cell/MatMul_2' (op: 'MatMul') with input shapes: [1,200], [2428,200]

Howdy Tal,

Attempting to run this in Tensorflow v1.4, and running into the issue below when replacing tf.pack/unpack with tf.stack/unstack. Any thoughts on what's going on here? I've attached the full stack trace from the run in question.

Thanks!
Jim

WARNING:tensorflow:<tensorflow.contrib.rnn.python.ops.rnn_cell.AttentionCellWrapper object at 0x0000000048BDA588>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
Tensor("Shape:0", shape=(3,), dtype=int32)
Tensor("Shape_1:0", shape=(3,), dtype=int32)


InvalidArgumentError Traceback (most recent call last)
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
685 graph_def_version, node_def_str, input_shapes, input_tensors,
--> 686 input_tensors_as_shapes, status)
687 except errors.InvalidArgumentError as err:

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
472 compat.as_text(c_api.TF_Message(self.status.status)),
--> 473 c_api.TF_GetCode(self.status.status))
474 # Delete the underlying status object from memory otherwise it stays alive

InvalidArgumentError: Dimensions must be equal, but are 200 and 2428 for 'rnn/rnn/attention_cell_wrapper/attention_cell_wrapper_1/multi_rnn_cell/cell_0/cell_0/gru_cell/MatMul_2' (op: 'MatMul') with input shapes: [1,200], [2428,200].

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
in ()
1 with tf.Graph().as_default():
----> 2 model = RNNModel()
3 input_ = train[0]
4 target = train[1]
5 with tf.Session() as sess:

in __init__(self)
41 scope.reuse_variables()
42
---> 43 output, state = self.gru_cell(inp, state)
44 states.append(state)
45 outputs.append(output)

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
181 with vs.variable_scope(vs.get_variable_scope(),
182 custom_getter=self._rnn_get_variable):
--> 183 return super(RNNCell, self).__call__(inputs, state)
184
185 def _rnn_get_variable(self, getter, *args, **kwargs):

C:\Anaconda3\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
573 if in_graph_mode:
574 self._assert_input_compatibility(inputs)
--> 575 outputs = self.call(inputs, *args, **kwargs)
576
577 if outputs is None:

C:\Anaconda3\lib\site-packages\tensorflow\contrib\rnn\python\ops\rnn_cell.py in call(self, inputs, state)
1117 self._linear1 = _Linear([inputs, attns], input_size, True)
1118 inputs = self._linear1([inputs, attns])
-> 1119 cell_output, new_state = self._cell(inputs, state)
1120 if self._state_is_tuple:
1121 new_state_cat = array_ops.concat(nest.flatten(new_state), 1)

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
181 with vs.variable_scope(vs.get_variable_scope(),
182 custom_getter=self._rnn_get_variable):
--> 183 return super(RNNCell, self).__call__(inputs, state)
184
185 def _rnn_get_variable(self, getter, *args, **kwargs):

C:\Anaconda3\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
573 if in_graph_mode:
574 self._assert_input_compatibility(inputs)
--> 575 outputs = self.call(inputs, *args, **kwargs)
576
577 if outputs is None:

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
1064 [-1, cell.state_size])
1065 cur_state_pos += cell.state_size
-> 1066 cur_inp, new_state = cell(cur_inp, cur_state)
1067 new_states.append(new_state)
1068

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
181 with vs.variable_scope(vs.get_variable_scope(),
182 custom_getter=self._rnn_get_variable):
--> 183 return super(RNNCell, self).__call__(inputs, state)
184
185 def _rnn_get_variable(self, getter, *args, **kwargs):

C:\Anaconda3\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
573 if in_graph_mode:
574 self._assert_input_compatibility(inputs)
--> 575 outputs = self.call(inputs, *args, **kwargs)
576
577 if outputs is None:

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
320 kernel_initializer=self._kernel_initializer)
321
--> 322 value = math_ops.sigmoid(self._gate_linear([inputs, state]))
323 r, u = array_ops.split(value=value, num_or_size_splits=2, axis=1)
324

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, args)
1187 res = math_ops.matmul(args[0], self._weights)
1188 else:
-> 1189 res = math_ops.matmul(array_ops.concat(args, 1), self._weights)
1190 if self._build_bias:
1191 res = nn_ops.bias_add(res, self._biases)

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, name)
1889 else:
1890 return gen_math_ops._mat_mul(
-> 1891 a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
1892
1893

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py in _mat_mul(a, b, transpose_a, transpose_b, name)
2434 _, _, _op = _op_def_lib._apply_op_helper(
2435 "MatMul", a=a, b=b, transpose_a=transpose_a, transpose_b=transpose_b,
-> 2436 name=name)
2437 _result = _op.outputs[:]
2438 _inputs_flat = _op.inputs

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
785 op = g.create_op(op_type_name, inputs, output_types, name=scope,
786 input_types=input_types, attrs=attr_protos,
--> 787 op_def=op_def)
788 return output_structure, op_def.is_stateful, op
789

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
2956 op_def=op_def)
2957 if compute_shapes:
-> 2958 set_shapes_for_outputs(ret)
2959 self._add_op(ret)
2960 self._record_op_seen_by_control_dependencies(ret)

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in set_shapes_for_outputs(op)
2207 shape_func = _call_cpp_shape_fn_and_require_op
2208
-> 2209 shapes = shape_func(op)
2210 if shapes is None:
2211 raise RuntimeError(

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in call_with_requiring(op)
2157
2158 def call_with_requiring(op):
-> 2159 return call_cpp_shape_fn(op, require_shape_fn=True)
2160
2161 _call_cpp_shape_fn_and_require_op = call_with_requiring

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py in call_cpp_shape_fn(op, require_shape_fn)
625 res = _call_cpp_shape_fn_impl(op, input_tensors_needed,
626 input_tensors_as_shapes_needed,
--> 627 require_shape_fn)
628 if not isinstance(res, dict):
629 # Handles the case where _call_cpp_shape_fn_impl calls unknown_shape(op).

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
689 missing_shape_fn = True
690 else:
--> 691 raise ValueError(err.message)
692
693 if missing_shape_fn:

ValueError: Dimensions must be equal, but are 200 and 2428 for 'rnn/rnn/attention_cell_wrapper/attention_cell_wrapper_1/multi_rnn_cell/cell_0/cell_0/gru_cell/MatMul_2' (op: 'MatMul') with input shapes: [1,200], [2428,200].


df.sort deprecated

I am getting an error:

P.columns = new_ind
P = P.sort(axis=1)  # Sort by columns


AttributeError Traceback (most recent call last)
in ()
1 P.columns = new_ind
----> 2 P = P.sort(axis=1) # Sort by columns

c:\python35\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
3079 if name in self._info_axis:
3080 return self[name]
-> 3081 return object.__getattribute__(self, name)
3082
3083 def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'sort'

It seems sort is deprecated; pandas now has sort_values and sort_index.
Can you adapt the code, please?
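
For reference, DataFrame.sort was removed in pandas 0.20; sorting by column labels is now spelled sort_index(axis=1). A minimal sketch:

import pandas as pd

P = pd.DataFrame({'b': [1, 2], 'a': [3, 4], 'c': [5, 6]})
P = P.sort_index(axis=1)   # sort by column labels, replacing P.sort(axis=1)
print(P.columns.tolist())  # ['a', 'b', 'c']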
