
deep-learning-for-computer-vision's Issues

About the categorical_crossentropy in line 51

Dear author,

In 7_cat_vs_dog_bottleneck.py, I encountered an error when running your code. The issue is on line 51; after changing categorical_crossentropy to binary_crossentropy, the problem was solved.

Could you check this problem?
Thanks very much!

Best,
Helen
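
For reference, the swap works because the cats-vs-dogs task is binary: a single sigmoid output trained on 0/1 labels pairs with binary_crossentropy, while categorical_crossentropy expects one-hot labels and a softmax over two or more units. A minimal sketch of such a classifier head (not the book's exact code; the bottleneck feature shape is illustrative):

from keras.models import Sequential
from keras.layers import Flatten, Dense

# Illustrative bottleneck shape (VGG16 block5 pooling on 150x150 inputs).
model = Sequential([
    Flatten(input_shape=(4, 4, 512)),
    Dense(256, activation='relu'),
    Dense(1, activation='sigmoid'),       # single unit -> 0/1 cat-vs-dog label
])
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',  # matches the sigmoid output
              metrics=['accuracy'])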

3_satellite from Chapter 5

Dear author,

I have encountered an error when running your code for the FCN segmentation model.

Traceback (most recent call last):
  File "d:\git\Deep-Learning-for-Computer-Vision\Chapter05\3_satellite.py", line 14, in <module>
    resnet50_model = ResNet50(include_top=False, weights='imagenet', input_tensor=input_tensor)
  File "C:\Users\gesu270495\AppData\Local\Programs\Python\Python36\lib\site-packages\keras_applications\resnet50.py", line 212, in ResNet50
    x = layers.ZeroPadding2D(padding=(3, 3), name='conv1_pad')(img_input)
  File "C:\Users\gesu270495\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\base_layer.py", line 446, in __call__
    previous_mask = _collect_previous_mask(inputs)
  File "C:\Users\gesu270495\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\base_layer.py", line 1328, in _collect_previous_mask
    mask = node.output_masks[tensor_index]
AttributeError: 'Node' object has no attribute 'output_masks'

So far, I haven't been able to fix it. Any suggestions about what might cause this error would help.

Best Regards,

Shirll
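
For reference, this AttributeError usually comes from mixing two Keras stacks, for example feeding a tensorflow.keras input tensor into standalone-keras layers, or importing ResNet50 from keras_applications directly instead of through keras.applications. A minimal sketch that keeps every import in the standalone keras namespace (the input shape is illustrative, not the chapter's):

from keras.layers import Input
from keras.applications.resnet50 import ResNet50

# Input tensor and application model come from the same Keras namespace.
input_tensor = Input(shape=(224, 224, 3))
resnet50_model = ResNet50(include_top=False, weights='imagenet',
                          input_tensor=input_tensor)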

Nerve Segmentation from Chapter 5

Dear author,
I got this error when running the code:
ValueError: Negative dimension size caused by subtracting 3 from 2 for 'conv2d_93/Conv2D' (op: 'Conv2D') with input shapes: [?,2,2,256], [3,3,256,512].

and also:
TypeError: __init__() missing 1 required positional argument: 'kernel_size'
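
For reference, the ValueError typically means a 3x3 convolution with padding='valid' is being applied to a feature map already downsampled to 2x2, and the TypeError is what Keras 2 raises when Conv2D is constructed without a kernel size. A minimal sketch of a Keras 2 convolution block that avoids both (not the book's code; the filter counts and names are illustrative):

from keras.layers import Conv2D, MaxPooling2D

def conv_block(x, filters):
    # Keras 2 signature is Conv2D(filters, kernel_size, ...); padding='same'
    # keeps the spatial size so repeated pooling does not shrink it below 3x3.
    x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
    return MaxPooling2D(pool_size=(2, 2))(x)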

Chapter 3 Serving Client Solution

1> Conda Environment Configuration
# TensorFlow 2.0 environment configuration (client)
conda create -n keras keras tensorflow
source activate keras
pip install numpy scipy scikit-learn pillow h5py
pip install grpcio
pip install tensorflow-serving-api
pip install tensorflow_datasets
conda deactivate

2> Docker Server Environment Configuration
------------------------- Dockerfile --------------------------
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y software-properties-common && add-apt-repository ppa:deadsnakes/ppa && \
    apt-get update && apt-get install -y python3.6 python3.6-dev python3-pip git

RUN ln -sfn /usr/bin/python3.6 /usr/bin/python3 && ln -sfn /usr/bin/python3 /usr/bin/python && ln -sfn /usr/bin/pip3 /usr/bin/pip

# Build the image and start a container (on the host):
docker build -t docker-ubuntu16-python3.6 .
docker run -it docker-ubuntu16-python3.6 bash

# Inside the container:
apt-get install pkg-config zip g++ zlib1g-dev unzip

apt-get install wget
wget https://github.com/bazelbuild/bazel/releases/download/0.15.0/bazel-0.15.0-installer-linux-x86_64.sh
chmod +x bazel-0.15.0-installer-linux-x86_64.sh
./bazel-0.15.0-installer-linux-x86_64.sh
export PATH="$PATH:$HOME/bin"
apt-get install sudo
sudo apt-get update && sudo apt-get install -y \
  automake \
  build-essential \
  curl \
  libcurl3-dev \
  git \
  libtool \
  libfreetype6-dev \
  libpng12-dev \
  libzmq3-dev \
  pkg-config \
  python-dev \
  python-numpy \
  python-pip \
  software-properties-common \
  swig \
  zip \
  zlib1g-dev
pip install --upgrade pip
pip install grpcio
git clone --recursive https://github.com/tensorflow/serving
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install tensorflow-model-server
pip install tensorflow-serving-api
cd serving/
python tensorflow_serving/example/mnist_saved_model.py /tmp/mnist_model

tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model

3> Source Customization (TensorFlow 2.0)
from grpc.beta import implementations
import numpy
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2
import matplotlib.pyplot as plt
import os

mnist = tfds.load("mnist", shuffle_files=False, as_supervised=False)
mnist.keys()

print(mnist.keys())

concurrency = 1
num_tests = 100
host = ''
port = 9000
work_dir = './tmp/4'

try:
    if not(os.path.isdir(work_dir)):
        os.makedirs(os.path.join(work_dir))
except OSError:
    print('Failed to create directory! ' + work_dir)

def _create_rpc_callback():
    def _callback(result):
        response = numpy.array(
            result.result().outputs['y'].float_val)
        prediction = numpy.argmax(response)
        print(prediction)
    return _callback

train_data, test_data = mnist["train"], mnist["test"]
#test_data_set = mnist["test"]
#test_image = mnist.test.images[0]

train_data, test_data = tfds.load(
    "mnist",
    split=(tfds.Split.TRAIN, tfds.Split.TEST),
    shuffle_files=False, as_supervised=False
)
print(train_data.output_types)
print(train_data.output_shapes)

data = []
iterator = iter(train_data.batch(2))
data.append(next(iterator))
data.append(next(iterator))
data.append(next(iterator))

print(data[0].keys())
print(data[0]["image"].shape)
print(data[0]["label"].shape)

plt.figure(figsize=(10,2))
for i in range(3):
    for j in range(2):
        plt.subplot(1, 6, 2*i + j + 1)
        plt.imshow(data[i]["image"][j, :, :, 0])
        plt.axis("off")
#plt.tight_layout()
#plt.show()

print(data[0]["label"].numpy(), data[1]["label"].numpy(), data[2]["label"].numpy())

test_image = data[0]["image"].numpy()

#print('test_image:'+ test_image)

predict_request = predict_pb2.PredictRequest()
predict_request.model_spec.name = 'mnist'
predict_request.model_spec.signature_name = 'prediction'

predict_channel = implementations.insecure_channel(host, int(port))
predict_stub = prediction_service_pb2.beta_create_PredictionService_stub(predict_channel)

#print(test_image)

predict_request.inputs['x'].CopyFrom(
    tf.make_tensor_proto(test_image, shape=[1, test_image.size]))
result = predict_stub.Predict.future(predict_request, 3.0)
result.add_done_callback(
    _create_rpc_callback())
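
With the tensorflow_model_server from step 2 listening on port 9000, this script issues a Predict RPC with the sampled MNIST data and the callback prints the argmax of the returned 'y' scores. Note that host is left empty above and has to be set to the serving container's address, and the signature name 'prediction' with tensors 'x' and 'y' must match what the exported SavedModel actually declares.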

ValueError in Chapter08/1_style_transfer.py

In Chapter 8, I modified vgg16_avg.py to match Keras 2.0.

However, I get an error on line 100 of 1_style_transfer.py and I do not know how to fix it.

Line 100 is as follows:

style_loss = sum(style_mse_loss(l1[0], l2[0]) for l1, l2 in zip(style_features, style_targets))

And the error is as follows:

ValueError: Dimensions must be equal, but are 64 and 128 for 'add_1' (op: 'Add') with input shapes: [64], [128].

I would appreciate it if you could tell me how to fix the error.
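
For reference, 64 and 128 are the channel counts of the first two VGG blocks, so each style_mse_loss term apparently still returns a per-channel vector, and the sum cannot add vectors of different lengths. One common way to make each term a scalar is a Gram-matrix MSE; a sketch of that idea (an assumption about the intent, not the book's exact code):

from keras import backend as K

def gram_matrix(x):
    # x: feature map of shape (height, width, channels)
    features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    return K.dot(features, K.transpose(features))

def style_mse_loss(layer_features, style_target):
    # Mean over the Gram matrix entries gives a scalar per layer, so losses
    # from layers with different channel counts can be summed.
    return K.mean(K.square(gram_matrix(layer_features) - gram_matrix(style_target)))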
