
keras-blog's Introduction

The Keras blog

Read at: blog.keras.io

Generated with Pelican, hosted on GitHub Pages.

To contribute an article: send a Pull Request to the content branch. We welcome anything that would be of interest to Keras users, from a simple tutorial about a specific use case, to an advanced application demo, and everything in between.

keras-blog's People

Contributors

fchollet · jrheard · lutzroeder


keras-blog's Issues

autoencoder post reproducibility

While reproducing the examples from the blog for a lecture, I realised that the losses don't add up.

The plain 784->32->784 autoencoder indeed reaches 0.10 easily and can go even lower.
The second step is the activity regularization.

from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers

encoding_dim = 32
input_img = Input(shape=(784,))
# add a Dense layer with an L1 activity regularizer
encoded = Dense(encoding_dim, activation='relu',
                activity_regularizer=regularizers.activity_l1(10e-5))(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input=input_img, output=decoded)
# this model maps an input to its encoded representation
encoder = Model(input=input_img, output=encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

Going by the book, I cannot get anything better than 0.26 after 300+ epochs.

Has Keras's underlying code changed since the post, or something?
I'm using Theano with the latest Keras.
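One bookkeeping detail worth ruling out before assuming an API change: with an activity regularizer attached, the loss Keras reports during training is the reconstruction loss plus the L1 penalty on the encoded activations, so the number is not directly comparable to the plain autoencoder's 0.10. A toy NumPy sketch of that bookkeeping (the activation values and the exact reduction are illustrative; the precise reduction Keras applies has varied across versions):

```python
import numpy as np

# Toy stand-in for a batch of the 32-dimensional encoder activations;
# real activations would come from the trained encoder.
rng = np.random.RandomState(0)
encoded = np.abs(rng.randn(256, 32))

reconstruction_loss = 0.10   # ballpark of the plain autoencoder
l1_weight = 10e-5            # the strength used in the post

# The regularization term is added to the loss that gets reported:
penalty = l1_weight * np.abs(encoded).sum(axis=1).mean()
total_loss = reconstruction_loss + penalty

assert total_loss > reconstruction_loss
```

This alone will not account for a large gap at this penalty strength, but it does mean the two reported numbers measure different quantities.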

Use HTTPS not protocol-agnostic URLs in RSS feed

Currently the feed at https://blog.keras.io/feeds/all.rss.xml has links like this:

<link>
//blog.keras.io/user-experience-design-for-apis.html
</link>

However, some platforms (such as Slack, various email clients, and some versions of some mobile browsers) do not handle these protocol-relative links as links.

Therefore, given that the site is served over HTTPS, it would be more robust to simply use the full URL with the protocol (https:) included:

<link>
https://blog.keras.io/user-experience-design-for-apis.html
</link>

See also: getpelican/pelican#1532
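In Pelican this typically comes down to making sure the configured site URL carries the scheme and that relative URLs are disabled for the published build. A minimal sketch, assuming a standard `publishconf.py` (the feed path matches the one cited above; the rest is illustrative, not the blog's actual config):

```python
# publishconf.py -- sketch only, not the blog's actual configuration
SITEURL = "https://blog.keras.io"    # scheme included, so feed links are absolute
RELATIVE_URLS = False                # never emit document-relative URLs
FEED_ALL_RSS = "feeds/all.rss.xml"   # the feed discussed in this issue
```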

Can a Lambda layer be used to manipulate the data and shape of the input?

Hi,

I want to create a new network out of two existing networks (models), A and B, using Keras with TensorFlow. The architecture of the new network is:

Input -> A -> B -> Output

The output of A has shape (15, 500) and the input of B has shape (1000,). I have a conversion method that takes input of shape (15, 100), calls predict on a completely different network (C), and returns output with shape (1000,).

I am assuming that I have to introduce a Lambda layer which will use my conversion method to convert the output of A to the format required by B. But when I tried that, I get the error

"TypeError: unsupported operand type(s) for /: 'Dimension' and 'float'" on calling predict in the conversion method.

This is the conversion method:

from keras.models import load_model

def convert(x):
    C = load_model("path/to/the/network/C.h5")
    return C.predict(x)

I am not sure if this is the right way of doing it; if it is, why this error? If it is not the right way, what should my approach be?

Keras version: 2.0.1
TensorFlow version: 1.0.1

Thanks,
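A hedged sketch of one way to avoid the error: a Keras Model can itself be called as a layer, so the whole graph stays symbolic and no Lambda or predict() is needed. predict() expects concrete NumPy arrays, and feeding it a symbolic tensor is what tends to trigger the Dimension/float TypeError. The sketch below uses current tf.keras with tiny toy stand-ins for A, C, and B (the shapes and models are illustrative; real ones would come from load_model, and the same idea works in the Keras 2.0.x functional API):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Toy stand-ins for A, C and B; real models would come from load_model(...).
a_in = layers.Input(shape=(8,))
A = Model(a_in, layers.Reshape((15, 500))(layers.Dense(15 * 500)(a_in)))

c_in = layers.Input(shape=(15, 500))
C = Model(c_in, layers.Dense(1000)(layers.Flatten()(c_in)))

b_in = layers.Input(shape=(1000,))
B = Model(b_in, layers.Dense(1)(b_in))

# Compose symbolically: call each Model like a layer instead of using
# a Lambda that calls C.predict() on a symbolic tensor.
inp = layers.Input(shape=(8,))
combined = Model(inp, B(C(A(inp))))

print(combined.predict(np.zeros((2, 8))).shape)  # -> (2, 1)
```

If C must stay frozen while A and B train, setting `C.trainable = False` before compiling the combined model is the usual way to express that.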
