fdavidcl / ruta
Unsupervised Deep Architectures in R
Home Page: https://deivi.ch/ruta
License: GNU General Public License v3.0
Error in data + term : non-conformable arrays
The `evaluate` functions are always verbose. See the Keras docs.
Hi - your package works really smoothly for image data. I wonder whether you could add KATE: K-Competitive Autoencoder for Text to your models.
Simon
Provide an API or similar which specifies all possible argument values for each parameter.
`object`, `x`, and other parameter names in generics

ValueError: Output of generator should be a tuple `(x, y, sample_weight)` or `(x, y)`. Found: [array([[0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
Allow tying the encoder's and decoder's weights
Implement evaluation methods for performance assessment
Seen in #32 that ruta's encoder does not have all the necessary layers.
library(ruta)
library(keras)

x_train_mat <- matrix(1:200, nrow = 10)
network <-
  input() +
  dense(256, "elu") +
  variational_block(4, seed = 42, epsilon_std = .5) +
  dense(256, "elu") +
  output("sigmoid")
learner <- autoencoder_variational(network, loss = "binary_crossentropy")
model <- learner %>% train(x_train_mat, epochs = 5)

## Incorrect: this encoder is missing layers (see #32)
latent_model <- model$models$encoder

## Correct: rebuild the encoder from the full model's layers
inputs <- keras::get_layer(model$models[[1]], index = 1)$input
latent_space <- keras::get_layer(model$models[[1]], index = 7)$output
latent_model <- keras::keras_model(
  inputs = inputs,
  outputs = latent_space
)
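The latent codes can then be extracted with `predict`; a minimal sketch, assuming the `latent_model` built above has been constructed from a trained model:

```r
# Project the training data onto the 4-dimensional latent space;
# yields one latent vector per row of x_train_mat
latent_codes <- predict(latent_model, x_train_mat)
```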
And `batch_size`
Currently, the evaluated metric shows up as `[[2]]` in the result. It's looking for `$type`.
Hello, there!
I want to use this awesome software to learn CIFAR-10 structured data. Could you support it?
Best,
Seongho
ValueError when calling evaluate_... in a clean conda+TF 1.14 installation
Hi - in general, I'd like to use your package for topic modeling on large-scale text collections. I managed to get document-level vectors with this code.
library(ruta)
library(keras)

network <-
  input() +
  dense(256, "elu") +
  variational_block(4, seed = 42, epsilon_std = .5) +
  dense(256, "elu") +
  output("sigmoid")
learner <- autoencoder_variational(network, loss = "binary_crossentropy")
model <- learner %>% train(x_train_mat, epochs = 5)
# summary(learner$network)

# Build the encoder by hand from the full model's layers
inputs <- get_layer(model$models[[1]], index = 1)$input
latent_space <- get_layer(model$models[[1]], index = 7)$output
latent_model <- keras_model(
  inputs = inputs,
  outputs = latent_space
)
Via PCA the output looks like this:
Now I have two questions:
Do you know whether there is a better way of interpreting the latent space? LDA outputs topic proportions. I thought maybe a sigmoid or (softmax for sparsity) activation at the bottleneck. But this way the model does not learn anything useful.
Do you know how I could get word-level features associated with each latent dimension?
Thanks in advance!
Best Simon
(PS: if you prefer a fully reproducible example, please let me know).
Variational autoencoder tutorial gives the following error when it's run:
stop(structure(list(message = "TypeError: in user code:\n\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/engine/training.py\", line 1021, in train_function *\n return step_function(self, iterator)\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/engine/training.py\", line 1010, in step_function **\n outputs = model.distribute_strategy.run(run_step, args=(data,))\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/engine/training.py\", line 1000, in run_step **\n outputs = model.train_step(data)\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/engine/training.py\", line 860, in train_step\n loss = self.compute_loss(x, y, y_pred, sample_weight)\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/engine/training.py\", line 918, in compute_loss\n return self.compiled_loss(\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/engine/compile_utils.py\", line 239, in __call__\n self._loss_metric.update_state(\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/utils/metrics_utils.py\", line 70, in decorated\n update_op = update_state_fn(*args, **kwargs)\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/metrics.py\", line 178, in update_state_fn\n return ag_update_state(*args, **kwargs)\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/metrics.py\", line 455, in update_state **\n sample_weight = tf.__internal__.ops.broadcast_weights(\n File \"/home/david/.local/share/r-miniconda/envs/r-reticulate/lib/python3.8/site-packages/keras/engine/keras_tensor.py\", line 254, in __array__\n raise TypeError(\n\n TypeError: You are passing 
KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='Placeholder:0', description=\"created by layer 'tf.cast_5'\"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Kerasinputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.\n",
call = py_call_impl(callable, dots$args, dots$keywords),
cppstack = structure(list(file = "", line = -1L, stack = c("/home/david/R/x86_64-pc-linux-gnu-library/4.1/reticulate/libs/reticulate.so(Rcpp::exception::exception(char const*, bool)+0x74) [0x7f06a41c1524]",
"/home/david/R/x86_64-pc-linux-gnu-library/4.1/reticulate/libs/reticulate.so(Rcpp::stop(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x29) [0x7f06a41b0bc4]", ...
Until this is fixed, users should implement variational autoencoders directly in Keras: https://keras.rstudio.com/articles/examples/variational_autoencoder.html. Sorry for any inconvenience.
Use knitr commands like in keras
LookupError: No gradient defined for operation 'loss_186/activation_281_loss/loop_body/gradients/activation_278/Softsign_grad/SoftsignGrad/pfor/SoftsignGrad' (op type: SoftsignGrad)
Maybe with global defaults for a whole network? Regularizers, activations and so on
Current workaround:
deconv <- function(filters, kernel_size, ...) {
  layer_keras("conv_2d_transpose", filters = filters, kernel_size = kernel_size, ...)
}
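A sketch of how this wrapper could slot into a network definition (hypothetical architecture; assumes ruta's `input()`, `conv()` and `output()` operators):

```r
library(ruta)

# Hypothetical convolutional autoencoder using the deconv wrapper above:
# a convolutional encoder stage followed by a transposed-convolution decoder
network <-
  input() +
  conv(16, 3, activation = "relu") +
  deconv(filters = 1, kernel_size = 3) +
  output("sigmoid")
```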
Use fit_generator to dynamically augment data
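A minimal sketch of such a generator in base R (hypothetical `sampling_generator` name; the keras R package's `fit_generator` expects a function returning a `list(x, y)` batch):

```r
# Sketch: a generator that augments each batch with Gaussian noise,
# yielding (noisy input, clean target) pairs as fit_generator expects
sampling_generator <- function(x, batch_size = 32, noise_sd = 0.1) {
  function() {
    idx <- sample(nrow(x), batch_size)
    batch <- x[idx, , drop = FALSE]
    noisy <- batch + matrix(rnorm(length(batch), sd = noise_sd),
                            nrow = batch_size)
    list(noisy, batch)
  }
}
```

Each call to the returned function draws a fresh random batch, so the augmentation differs every epoch.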
I tried to run the code for the VAE - just copy-pasting it in RStudio - and got the following error when executing this command:
model <- learner %>% train(x_train, epochs = 5)
Error in py_call_impl(callable, dots$args, dots$keywords) :
_SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'z_mean_3/Identity:0' shape=(None, 3) dtype=float32>, <tf.Tensor 'z_log_var_3/Identity:0' shape=(None, 3) dtype=float32>]
Any clue on what I am doing wrong? Thanks.
PS: Figured out from more reading. Needed the following lines:
if (tensorflow::tf$executing_eagerly())
  tensorflow::tf$compat$v1$disable_eager_execution()
Working now.
Thanks.
We already have a `filter` class, so why not provide image normalizations?
When defining a network through an integer vector, decide on layer activations instead of leaving all as "linear"
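For context, a sketch of the integer-vector shorthand versus the explicit form (assumes `autoencoder()` coerces an integer vector of hidden-layer sizes, as in the ruta docs):

```r
library(ruta)

# Shorthand: hidden layer sizes as an integer vector;
# activations currently all default to "linear"
learner_short <- autoencoder(c(256, 36, 256))

# Explicit form, where an activation is chosen per layer
network <-
  input() +
  dense(256, "relu") +
  dense(36, "relu") +
  dense(256, "relu") +
  output("linear")
learner_explicit <- autoencoder(network)
```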