tf-fit's People

Contributors

jph00, pendar2


tf-fit's Issues

UnimplementedError: Generic conv implementation only supports NHWC tensor format for now. [Op:Conv2D]

When using the following code I get the error below. Any ideas? I've tried changing to channels_last; however, that doesn't seem to be an option in the databunch.

from fastai.vision import *
from fastai_tf_fit import *
import tensorflow as tf
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
import keras_applications
from keras_applications.resnet import ResNet152

path = Path(path_to_data_dir)

tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)

np.random.seed(42)
src = (ImageList.from_folder(path)
       .split_by_folder(train='train', valid='val')
       .label_from_folder())

data = (src.transform(tfms, size=112)
        .databunch()
        .normalize(imagenet_stats))

def categorical_accuracy(y_pred, y_true):
    return tf.keras.backend.mean(tf.keras.backend.equal(y_true, tf.keras.backend.argmax(y_pred, axis=-1)))

opt_fn = tf.train.AdamOptimizer
loss_fn = tf.losses.softmax_cross_entropy
metrics = [categorical_accuracy]

base_model = ResNet152(weights=path_to_weights,
                       include_top=False,
                       input_shape=(3,112,112),
                       backend=tf.keras.backend, 
                       layers=tf.keras.layers, 
                       models=tf.keras.models, 
                       utils=tf.keras.utils)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(number_of_classes, activation='sigmoid')(x)

model = Model(inputs=base_model.input, outputs=predictions)

learn = TfLearner(data, model, opt_fn, loss_fn, metrics=metrics, true_wd=True, bn_wd=True, wd=defaults.wd, train_bn=True)
---------------------------------------------------------------------------
UnimplementedError                        Traceback (most recent call last)
<ipython-input-30-115701fe1db2> in <module>
----> 1 learn = TfLearner(data, model, opt_fn, loss_fn, metrics=metrics, true_wd=True, bn_wd=True, wd=defaults.wd, train_bn=True)

<string> in __init__(self, data, model, opt_func, loss_func, metrics, true_wd, bn_wd, wd, train_bn, path, model_dir, callback_fns, callbacks, layer_groups)

/opt/conda/lib/python3.6/site-packages/fastai_tf_fit/fastai_tf_fit.py in __post_init__(self)
    161         xb, yb = next(iter(self.data.train_dl))
    162         xb, yb = _pytorch_to_tf(xb), _pytorch_to_tf(yb)
--> 163         tf_loss_batch(self.model, xb, yb)
    164 
    165     def init(self, init): raise NotImplementedError

/opt/conda/lib/python3.6/site-packages/fastai_tf_fit/fastai_tf_fit.py in tf_loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     42 
     43 
---> 44     if not loss_func: return forward(), yb[0]
     45 
     46     loss = None

/opt/conda/lib/python3.6/site-packages/fastai_tf_fit/fastai_tf_fit.py in forward()
     32 
     33     def forward():
---> 34         out = model(*xb)
     35         out = cb_handler.on_loss_begin(out)
     36         return out

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    677           with base_layer_utils.autocast_context_manager(
    678               input_list, self._mixed_precision_policy.should_cast_variables):
--> 679             outputs = self.call(inputs, *args, **kwargs)
    680           self._handle_activity_regularization(inputs, outputs)
    681           self._set_mask_metadata(inputs, outputs, previous_mask)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py in call(self, inputs, training, mask)
    749                                 ' implement a `call` method.')
    750 
--> 751     return self._run_internal_graph(inputs, training=training, mask=mask)
    752 
    753   def compute_output_shape(self, input_shape):

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py in _run_internal_graph(self, inputs, training, mask)
    891 
    892           # Compute outputs.
--> 893           output_tensors = layer(computed_tensors, **kwargs)
    894 
    895           # Update tensor_dict.

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    677           with base_layer_utils.autocast_context_manager(
    678               input_list, self._mixed_precision_policy.should_cast_variables):
--> 679             outputs = self.call(inputs, *args, **kwargs)
    680           self._handle_activity_regularization(inputs, outputs)
    681           self._set_mask_metadata(inputs, outputs, previous_mask)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/layers/convolutional.py in call(self, inputs)
    194 
    195   def call(self, inputs):
--> 196     outputs = self._convolution_op(inputs, self.kernel)
    197 
    198     if self.use_bias:

/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py in __call__(self, inp, filter)
   1077 
   1078   def __call__(self, inp, filter):  # pylint: disable=redefined-builtin
-> 1079     return self.conv_op(inp, filter)
   1080 
   1081 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py in __call__(self, inp, filter)
    633 
    634   def __call__(self, inp, filter):  # pylint: disable=redefined-builtin
--> 635     return self.call(inp, filter)
    636 
    637 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py in __call__(self, inp, filter)
    232         padding=self.padding,
    233         data_format=self.data_format,
--> 234         name=self.name)
    235 
    236 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py in conv2d(input, filter, strides, padding, use_cudnn_on_gpu, data_format, dilations, name, filters)
   1951                            data_format=data_format,
   1952                            dilations=dilations,
-> 1953                            name=name)
   1954 
   1955 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py in conv2d(input, filter, strides, padding, use_cudnn_on_gpu, explicit_paddings, data_format, dilations, name)
   1029             input, filter, strides=strides, use_cudnn_on_gpu=use_cudnn_on_gpu,
   1030             padding=padding, explicit_paddings=explicit_paddings,
-> 1031             data_format=data_format, dilations=dilations, name=name, ctx=_ctx)
   1032       except _core._SymbolicException:
   1033         pass  # Add nodes to the TensorFlow graph.

/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py in conv2d_eager_fallback(input, filter, strides, padding, use_cudnn_on_gpu, explicit_paddings, data_format, dilations, name, ctx)
   1128   explicit_paddings, "data_format", data_format, "dilations", dilations)
   1129   _result = _execute.execute(b"Conv2D", 1, inputs=_inputs_flat, attrs=_attrs,
-> 1130                              ctx=_ctx, name=name)
   1131   _execute.record_gradient(
   1132       "Conv2D", _inputs_flat, _attrs, _result, name)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     if any(ops._is_keras_symbolic_tensor(x) for x in inputs):

/opt/conda/lib/python3.6/site-packages/six.py in raise_from(value, from_value)

UnimplementedError: Generic conv implementation only supports NHWC tensor format for now. [Op:Conv2D]
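One possible workaround, sketched under assumptions rather than tested: since the generic Conv2D kernel only supports NHWC, build the ResNet152 backbone with a channels_last input_shape and permute each incoming batch from NCHW to NHWC with a Lambda layer, assuming the databunch keeps delivering channels-first tensors. path_to_weights and number_of_classes are the same placeholders as in the code above.

import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda, Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from keras_applications.resnet import ResNet152

# Build the backbone in channels_last (NHWC), which the generic Conv2D kernel supports.
base_model = ResNet152(weights=path_to_weights,      # same placeholder as above
                       include_top=False,
                       input_shape=(112, 112, 3),    # HWC instead of CHW
                       backend=tf.keras.backend,
                       layers=tf.keras.layers,
                       models=tf.keras.models,
                       utils=tf.keras.utils)

# The databunch yields NCHW batches, so permute them to NHWC before the backbone.
inp = Input(shape=(3, 112, 112))
x = Lambda(lambda t: tf.transpose(t, [0, 2, 3, 1]))(inp)  # NCHW -> NHWC
x = base_model(x)
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(number_of_classes, activation='sigmoid')(x)

model = Model(inputs=inp, outputs=predictions)

This only addresses the tensor layout; whether the weights at path_to_weights load cleanly into a channels_last backbone would still need to be verified.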

Problem loading .h5 file

I am trying to load a pretrained model in .h5 format because I want to do transfer learning (specifically, learn.fit_one_cycle(4, max_lr=slice(3e-5, 3e-4))).
I tried this:

model = load_model('vggnet5.h5')
learn = TfLearner(data, model, opt_func=Adam, loss_func=keras.losses.categorical_crossentropy, metrics=accuracy, true_wd=True, bn_wd=True, wd=defaults.wd, train_bn=True)

I got the following error.

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1618   try:
-> 1619     c_op = c_api.TF_FinishOperation(op_desc)
   1620   except errors.InvalidArgumentError as e:

InvalidArgumentError: Depth of output (32) is not a multiple of the number of groups (64) for 'VGG5/block1_conv1/convolution' (op: 'Conv2D') with input shapes: [64,3,64,64], [3,3,1,32].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-22-18d78dd88662> in <module>
----> 1 learn = TfLearner(data, model, opt_func=Adam, loss_func=keras.losses.categorical_crossentropy, metrics=accuracy, true_wd=True, bn_wd=True, wd=defaults.wd, train_bn=True)

<string> in __init__(self, data, model, opt_func, loss_func, metrics, true_wd, bn_wd, wd, train_bn, path, model_dir, callback_fns, callbacks, layer_groups)

/kaggle/usr/lib/fastai_tf_fit/fastai_tf_fit.py in __post_init__(self)
    162         xb, yb = next(iter(self.data.train_dl))
    163         xb, yb = _pytorch_to_tf(xb), _pytorch_to_tf(yb)
--> 164         tf_loss_batch(self.model, xb, yb)
    165 
    166     def init(self, init): raise NotImplementedError

/kaggle/usr/lib/fastai_tf_fit/fastai_tf_fit.py in tf_loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     43 
     44 
---> 45     if not loss_func: return forward(), yb[0]
     46 
     47     loss = None

/kaggle/usr/lib/fastai_tf_fit/fastai_tf_fit.py in forward()
     33 
     34     def forward():
---> 35         out = model(*xb)
     36         out = cb_handler.on_loss_begin(out)
     37         return out

/opt/conda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in symbolic_fn_wrapper(*args, **kwargs)
     73         if _SYMBOLIC_SCOPE.value:
     74             with get_graph().as_default():
---> 75                 return func(*args, **kwargs)
     76         else:
     77             return func(*args, **kwargs)

/opt/conda/lib/python3.6/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
    487             # Actually call the layer,
    488             # collecting output(s), mask(s), and shape(s).
--> 489             output = self.call(inputs, **kwargs)
    490             output_mask = self.compute_mask(inputs, previous_mask)
    491 

/opt/conda/lib/python3.6/site-packages/keras/engine/network.py in call(self, inputs, mask)
    581             return self._output_tensor_cache[cache_key]
    582         else:
--> 583             output_tensors, _, _ = self.run_internal_graph(inputs, masks)
    584             return output_tensors
    585 

/opt/conda/lib/python3.6/site-packages/keras/engine/network.py in run_internal_graph(self, inputs, masks)
    738                                     kwargs['mask'] = computed_mask
    739                             output_tensors = to_list(
--> 740                                 layer.call(computed_tensor, **kwargs))
    741                             output_masks = layer.compute_mask(computed_tensor,
    742                                                               computed_mask)

/opt/conda/lib/python3.6/site-packages/keras/layers/convolutional.py in call(self, inputs)
    169                 padding=self.padding,
    170                 data_format=self.data_format,
--> 171                 dilation_rate=self.dilation_rate)
    172         if self.rank == 3:
    173             outputs = K.conv3d(

/opt/conda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in conv2d(x, kernel, strides, padding, data_format, dilation_rate)
   3715         padding=padding,
   3716         data_format=tf_data_format,
-> 3717         **kwargs)
   3718     if data_format == 'channels_first' and tf_data_format == 'NHWC':
   3719         x = tf.transpose(x, (0, 3, 1, 2))  # NHWC -> NCHW

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/ops/nn_ops.py in convolution_v2(input, filters, strides, padding, data_format, dilations, name)
    916       data_format=data_format,
    917       dilations=dilations,
--> 918       name=name)
    919 
    920 

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/ops/nn_ops.py in convolution_internal(input, filters, strides, padding, data_format, dilations, name, call_from_convolution)
   1008           data_format=data_format,
   1009           dilations=dilations,
-> 1010           name=name)
   1011     else:
   1012       if channel_index == 1:

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_nn_ops.py in conv2d(input, filter, strides, padding, use_cudnn_on_gpu, explicit_paddings, data_format, dilations, name)
    967                   padding=padding, use_cudnn_on_gpu=use_cudnn_on_gpu,
    968                   explicit_paddings=explicit_paddings,
--> 969                   data_format=data_format, dilations=dilations, name=name)
    970   _result = _outputs[:]
    971   if _execute.must_record_gradient():

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py in _apply_op_helper(op_type_name, name, **keywords)
    740       op = g._create_op_internal(op_type_name, inputs, dtypes=None,
    741                                  name=scope, input_types=input_types,
--> 742                                  attrs=attr_protos, op_def=op_def)
    743 
    744     # `outputs` is returned as a separate return value so that the output

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device)
    593     return super(FuncGraph, self)._create_op_internal(  # pylint: disable=protected-access
    594         op_type, inputs, dtypes, input_types, name, attrs, op_def,
--> 595         compute_device)
    596 
    597   def capture(self, tensor, name=None, shape=None):

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device)
   3312           input_types=input_types,
   3313           original_op=self._default_original_op,
-> 3314           op_def=op_def)
   3315       self._create_op_helper(ret, compute_device=compute_device)
   3316     return ret

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
   1784           op_def, inputs, node_def.attr)
   1785       self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1786                                 control_input_ops)
   1787       name = compat.as_str(node_def.name)
   1788     # pylint: enable=protected-access

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1620   except errors.InvalidArgumentError as e:
   1621     # Convert to ValueError for backwards compatibility.
-> 1622     raise ValueError(str(e))
   1623 
   1624   return c_op

ValueError: Depth of output (32) is not a multiple of the number of groups (64) for 'VGG5/block1_conv1/convolution' (op: 'Conv2D') with input shapes: [64,3,64,64], [3,3,1,32].
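A minimal sketch of one way to adapt the inputs, based only on what the error message suggests and not a verified fix: the batch reaching the model is NCHW RGB ([64, 3, 64, 64]), while the first conv kernel ([3, 3, 1, 32]) implies the saved VGG expects single-channel, channels_last input. Checking model.input_shape first would confirm this; the Input shape and the grayscale conversion below are assumptions taken from the error.

import keras
from keras import backend as K
from keras.models import load_model, Model
from keras.layers import Input, Lambda

base = load_model('vggnet5.h5')
print(base.input_shape)  # confirm what the saved model actually expects

# Hypothetical adapter: permute the NCHW batch to NHWC and collapse RGB to one channel
# before it reaches the loaded model.
inp = Input(shape=(3, 64, 64))                                    # NCHW, as the databunch yields it
x = Lambda(lambda t: K.permute_dimensions(t, (0, 2, 3, 1)))(inp)  # NCHW -> NHWC
x = Lambda(lambda t: K.mean(t, axis=-1, keepdims=True))(x)        # RGB -> single channel
out = base(x)
model = Model(inputs=inp, outputs=out)

learn = TfLearner(data, model, opt_func=Adam, loss_func=keras.losses.categorical_crossentropy,
                  metrics=accuracy, true_wd=True, bn_wd=True, wd=defaults.wd, train_bn=True)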
