rajarsheem / libsdae-autoencoder-tensorflow
A simple TensorFlow-based library for deep and/or denoising AutoEncoders.
License: MIT License
Hi,
How do I save and restore the model?
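The library does not appear to expose a built-in checkpointing call. One workaround, assuming the trained parameters can be pulled out as NumPy arrays (the `weights`/`biases` names below are illustrative, not the library's actual API), is to persist every layer's parameters in a single `.npz` archive:

```python
import numpy as np

# Hypothetical trained parameters; in practice these would come from the
# fitted model (the attribute names here are illustrative only).
weights = [np.random.randn(784, 200), np.random.randn(200, 100)]
biases = [np.random.randn(200), np.random.randn(100)]

# Save every layer's parameters into one .npz archive, keyed by layer index.
np.savez("sdae_params.npz",
         **{f"W{i}": w for i, w in enumerate(weights)},
         **{f"b{i}": b for i, b in enumerate(biases)})

# Restore: rebuild the per-layer lists in the same order.
loaded = np.load("sdae_params.npz")
n_layers = len(weights)
restored_w = [loaded[f"W{i}"] for i in range(n_layers)]
restored_b = [loaded[f"b{i}"] for i in range(n_layers)]
```

The same arrays could then be fed back into a freshly constructed model to reproduce the learned encoding.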
Hi, Rajarshee Mitra!
Thank you very much for your code! But when I tried running your code on my computer, I always got the error shown below:
'''
InternalError: Blas GEMM launch failed : a.shape=(100, 784), b.shape=(784, 200), m=100, n=200, k=784
[[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](_arg_x_0_0/_7, Variable/read)]]
'''
where 100 is batch_size, 784 and 200 are dimensions.
Could you offer me some help?
Thanks a lot!
Phil
Hi,
What do the two files inside data/ represent? I found no description. Thanks.
Gordon
Hi Rajarshee Mitra,
I have a question regarding your code. I am confused about how your code constructs the connections between layers. Specifically, in line 63 of stacked_autoencoder.py, you call the run function for different layers. It seems to me that you construct n different autoencoders instead of one stack with n hidden layers.
Best regards,
Mohammad
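Training n separate autoencoders and then composing their encoders is the standard greedy layer-wise scheme: each layer trains on the *output* of the previous layer, and the stack is the composition of the learned encoders. A minimal sketch of that structure, with `train_one_layer` as a stand-in for the library's per-layer run (here it just draws random weights so the sketch stays self-contained):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_one_layer(x, hidden_dim):
    """Stand-in for training one autoencoder layer; a real implementation
    would fit W and b to reconstruct x. Here we only draw a projection."""
    W = rng.normal(0.0, 0.1, (x.shape[1], hidden_dim))
    b = np.zeros(hidden_dim)
    return W, b

x = rng.normal(size=(100, 784))
dims = [200, 50]

encoded = x
stack = []
for hidden_dim in dims:
    W, b = train_one_layer(encoded, hidden_dim)  # layer i trains on layer i-1's codes
    stack.append((W, b))
    encoded = sigmoid(encoded @ W + b)           # pass the codes to the next layer

# 'encoded' is now the representation produced by the full 784 -> 200 -> 50 stack.
```

So the n autoencoders are trained independently, but chaining their encoders yields one stacked network.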
The utils module seems to be missing.
How do you decode the transform result, and how do you calculate the similarity between the decoded output and the original matrix?
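Once a decoded reconstruction is available (the library does not directly expose the decoder weights, so the `decoded` matrix below is a stand-in), one common similarity measure is row-wise cosine similarity between the reconstruction and the original batch:

```python
import numpy as np

def cosine_similarity(a, b):
    """Row-wise cosine similarity between two equally shaped matrices."""
    num = np.sum(a * b, axis=1)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / denom

# Stand-ins: an original batch and a slightly perturbed "reconstruction".
original = np.random.default_rng(1).normal(size=(5, 784))
decoded = original + 0.01 * np.random.default_rng(2).normal(size=(5, 784))

sims = cosine_similarity(original, decoded)  # one similarity per sample
```

Values near 1.0 indicate faithful reconstructions; mean squared error per row is an equally reasonable alternative.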
Nice library.
How did you evaluate the quality of your feature representation? Is the global loss the overall reconstruction error?
So, for example, the 2nd DAE gets the encoded version of the image; is the loss function of the 2nd DAE then the following: LOSS(decoded(noisy encoded input), decoded(encoded output))?
That way the network doesn't lose sight of the global goal of finding an optimal feature representation, right?
What I did was:
Is that right?
I have just started learning denoising autoencoders, but I want to import a dataset of images other than MNIST. What can be done to do so? I have tried giving the path name and assigning it to a variable, but I am getting a TypeError.
The mini-batch function may generate the same sample more than once: already-used indices are never removed.
There is also an issue with mask noise, which produces the following error:
File "algorithms/deepautoencoder/stacked_autoencoder.py", line 42, in add_noise
frac = float(self.noise.split('-')[1])
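The duplicate-sample problem above can be avoided by shuffling the indices once per epoch and slicing, instead of drawing indices with replacement. A minimal sketch of such a batching function (not the library's actual implementation):

```python
import numpy as np

def get_batches(data, batch_size, rng=None):
    """Yield mini-batches that cover each sample exactly once per epoch,
    instead of drawing indices with replacement."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(data))              # shuffle once per epoch
    for start in range(0, len(data), batch_size):
        yield data[idx[start:start + batch_size]]

data = np.arange(10).reshape(10, 1)
batches = list(get_batches(data, 4, np.random.default_rng(0)))
seen = np.concatenate(batches).ravel()            # every sample appears exactly once
```

The last batch is simply smaller when the dataset size is not a multiple of the batch size.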
Hi,
I'm playing around with this code for a research project, and everything works fine with mean squared error. However, as soon as I switch to cross-entropy (which I really want), it does not converge and the loss grows over time. I tried numerous parameters, but nothing seems to work. I'm using MNIST with the following model:
model = StackedAutoEncoder(
    dims=[100],
    activations=['softmax'],
    noise='gaussian',
    epoch=[1000],
    loss='cross-entropy',
    lr=0.005,
    batch_size=100,
    print_step=100
)
Do you know why this isn't working?
Thanks!
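One possible cause, worth checking: cross-entropy treats each output as a Bernoulli probability, so both the targets and the model outputs must lie in [0, 1] (e.g. MNIST pixels scaled by 1/255), and the loss should be computed in a numerically stable way from the logits. A sketch of the stable elementwise formulation, the same one TensorFlow documents for sigmoid cross-entropy with logits:

```python
import numpy as np

def sigmoid_cross_entropy(logits, targets):
    """Numerically stable elementwise sigmoid cross-entropy:
    max(x, 0) - x*z + log(1 + exp(-|x|)), where x = logits, z = targets."""
    return (np.maximum(logits, 0)
            - logits * targets
            + np.log1p(np.exp(-np.abs(logits))))

# Targets must be in [0, 1]; raw (unscaled) pixel values will blow the loss up.
targets = np.array([[0.0, 0.5, 1.0]])
logits = np.array([[-3.0, 0.0, 3.0]])
loss = sigmoid_cross_entropy(logits, targets).mean()
```

If the inputs are unscaled, or the naive `-z*log(p) - (1-z)*log(1-p)` form is used and `p` saturates to 0 or 1, the loss can diverge exactly as described.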