waikato / wekadeeplearning4j
Weka package for the Deeplearning4j java library
Home Page: https://deeplearning.cms.waikato.ac.nz/
License: GNU General Public License v3.0
Hi,
I would like to run this in WEKA. I am interested in training a convolutional neural network to classify images. In the CLI, the convolutional instance iterator looks for instances, whereas the image dataset iterator (pointed at a directory with one folder per label and the images inside) gives an error asking for a -t training file (WEKA). How do I set this up properly? If the latter is the right direction, is the training file a list of complete image file paths and classes?
In the Explorer it says "cannot handle binary class" regardless of how I try to set it up.
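For reference, the image iterator in this package is typically driven by a small meta-ARFF that lists each image file name alongside its class, with the image directory configured separately on the iterator. A minimal sketch (file names and labels here are made up):

```
@relation image-meta

@attribute filename string
@attribute class {cat,dog}

@data
cat001.png,cat
dog001.png,dog
```

The file names are then resolved relative to the images directory set on the iterator, so the ARFF itself stays tiny.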
Why not support YOLO?
Implement a convenient way to visualize layer weights and layer activations for specific samples.
Add missing documentation.
All WEKA packages have a lib folder in which dependencies (jar files) for that package are stored. DL4J pulls in many jar files (through the Maven pom.xml script) and right now all of them are naively copied to the lib folder. Because the DL4J classifier only exposes a small part of DL4J itself (just CNNs and MLPs), it seems weird to be carrying around more jars than we should. For example, jars like opencv-3.1.0-1.2-macosx-x86_64.jar or cleartk-opennlp-tools-2.0.0.jar. I'm not sure what the best way is to start culling some of these jars, short of doing some sort of process of elimination or maybe asking the JVM which jars have been loaded at runtime. @agibsonccc do you have any suggestions?
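As a first pass at the "ask the JVM" idea, a minimal sketch (class and method names are made up) that lists the jars visible on the runtime classpath via the java.class.path system property:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: list the jar files on the runtime classpath, as a starting
// point for spotting dependencies that may never actually be needed.
public class LoadedJars {
    static List<String> jarsOnClasspath() {
        String sep = System.getProperty("path.separator");
        return Arrays.stream(System.getProperty("java.class.path").split(sep))
                     .filter(p -> p.endsWith(".jar"))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        jarsOnClasspath().forEach(System.out::println);
    }
}
```

Note this only shows what is on the classpath, not what classes were actually loaded; running the JVM with -verbose:class would give a more precise picture for culling.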
Can't install wekaDeeplearning4j. When I looked at the console when starting Weka 3.8, it showed this:
Skipping package wekaDeeplearning4j the OS/arch (Windows 7 x86) does not meet package OS/arch constraints: 64
It looks like a 32-bit vs. 64-bit mismatch.
I am using the file wekaDeeplearning4j-1.5.3.zip.
How can I fix this issue?
Sorry if I did something wrong, I am a newbie here.
To simulate channels in text data, it is common to provide different embeddings. Currently, only one embedding can provide a mapping. This should be extended to multiple embeddings.
With v1.4.1 the RelationalInstanceIterator was introduced. A proper example of its usage is missing from the documentation.
Implement a way to retrain serialized models on different data (see also: Deeplearning4j Transfer Learning).
On top of that, a Weka filter could use a serialized model as a feature transformation to preprocess a given dataset.
See also #27.
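A rough sketch of the retraining half of this request using standard Weka API calls (file names are placeholders). Note that plain buildClassifier() re-initializes the model rather than transferring learned weights, which is exactly the gap this issue describes:

```java
import weka.classifiers.Classifier;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

// Sketch: load a serialized Weka model and rebuild it on new data.
public class Retrain {
    public static void main(String[] args) throws Exception {
        Classifier model = (Classifier) SerializationHelper.read("model.ser");
        Instances newData = DataSource.read("new-data.arff");
        newData.setClassIndex(newData.numAttributes() - 1);
        model.buildClassifier(newData); // re-initializes; no weight transfer
        SerializationHelper.write("model-retrained.ser", model);
    }
}
```

Requires weka.jar on the classpath; true transfer learning would additionally need DL4J's weight-preserving mechanisms exposed through the wrapper.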
Add support for importing Keras models, either regarding the model architecture without weights, or as a pretrained model to be used for transfer learning (#30) or for Dl4jMlpFilter.
Can you please publish this artifact to the Maven Central repository?
Currently we have to include it in the source code, and this is not best practice for a Maven project.
The default log file path is not set on Windows.
Opening the Dl4jMlpClassifier editor should show a preset log file named network.log (preferably in WEKA_HOME if the environment variable is set).
It shows the directory Weka-3-8.
I tried to change the filter to 3x1 but the input gives an error.
Running "./build.sh -v -b GPU -c -i" on wekaDeeplearning4j-1.2.1 gives me the following error.
rm: /usr/local/Caskroom/weka/3.9.1/weka-3-9-1/packages/wekaDeeplearning4jCPU-dev: No such file or directory
rm: /usr/local/Caskroom/weka/3.9.1/weka-3-9-1/packages/wekaDeeplearning4jGPU-dev: No such file or directory
java.io.FileNotFoundException: dist/wekaDeeplearning4jGPU-1.2.0-dev-macosx-x86_64.zip (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:219)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.zip.ZipFile.<init>(ZipFile.java:163)
at weka.core.packageManagement.DefaultPackageManager.getPackageArchiveInfo(DefaultPackageManager.java:354)
at weka.core.WekaPackageManager.installPackageFromArchive(WekaPackageManager.java:2136)
at weka.core.WekaPackageManager.main(WekaPackageManager.java:2839)
It looks like it tries to extract the zip with the wrong name (1.2.0 instead of 1.2.1).
Describe the bug
After generating the word embeddings with Weka 3.8 (weka.filters.unsupervised.attribute.Dl4jStringToWord2Vec), I've tried to use these embeddings in Meka 1.9.2.
To Reproduce
Expected behavior
You should be able to use Dl4jMlpClassifier in the Binary Relevance method in Meka with embeddings generated in Weka.
Additional Information
Error
[INFO ] 16:21:57.131 [Thread-6] weka.classifiers.functions.Dl4jMlpClassifier - Building on 6296 training instances
meka.gui.explorer.ClassifyTab
Evaluation failed (train/test split):
weka.core.InvalidInputDataException: An ARFF is required with a string attribute and a class attribute
at weka.dl4j.iterators.instance.sequence.text.rnn.RnnTextEmbeddingInstanceIterator.validate(RnnTextEmbeddingInstanceIterator.java:57)
at weka.dl4j.iterators.instance.sequence.text.rnn.RnnTextEmbeddingInstanceIterator.getDataSetIterator(RnnTextEmbeddingInstanceIterator.java:80)
at weka.dl4j.iterators.instance.AbstractInstanceIterator.getDataSetIterator(AbstractInstanceIterator.java:59)
at weka.classifiers.functions.Dl4jMlpClassifier.getDataSetIterator(Dl4jMlpClassifier.java:1069)
at weka.classifiers.functions.Dl4jMlpClassifier.getDataSetIterator(Dl4jMlpClassifier.java:1121)
at weka.classifiers.functions.Dl4jMlpClassifier.getFirstBatchFeatures(Dl4jMlpClassifier.java:1449)
at weka.classifiers.functions.Dl4jMlpClassifier.createModel(Dl4jMlpClassifier.java:1294)
at weka.classifiers.functions.Dl4jMlpClassifier.finishClassifierInitialization(Dl4jMlpClassifier.java:957)
at weka.classifiers.functions.Dl4jMlpClassifier.initializeClassifier(Dl4jMlpClassifier.java:899)
at weka.classifiers.functions.Dl4jMlpClassifier.buildClassifier(Dl4jMlpClassifier.java:816)
at meka.classifiers.multilabel.BR.buildClassifier(BR.java:75)
at meka.classifiers.multilabel.Evaluation.evaluateModel(Evaluation.java:428)
at meka.classifiers.multilabel.Evaluation.evaluateModel(Evaluation.java:326)
at meka.gui.explorer.ClassifyTab$7.run(ClassifyTab.java:414)
at java.lang.Thread.run(Unknown Source)
at meka.gui.explorer.AbstractThreadedExplorerTab$WorkerThread.run(AbstractThreadedExplorerTab.java:78)
Describe the solution you'd like
A solution where we right-click on the produced model and select "visualise layer filters" or "visualise feature maps" by providing an input image.
I am receiving this build error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project deeplearning4j-examples: Compilation failure
[ERROR] /C:/deep4j/wekaDeeplearning4j/src/main/java/weka/dl4j/ShufflingDataSetIterator.java:[11,8] weka.dl4j.ShufflingDataSetIterator is not abstract and does not override abstract method remove() in java.util.Iterator
[ERROR] -> [Help 1]
[ERROR]
Is there a required version of Java? I am using JDK 7.
Thanks,
Jason
Choosing CONJUGATE_GRADIENT or LINE_GRADIENT_DESCENT as the optimization algorithm in NeuralNetworkConfiguration while performing a cross-validation results in the following behavior:
TODO: Inspect weights/updates
Is your feature request related to a problem? Please describe.
ZooModels from Deeplearning4j such as UNet, NASNet etc. have a graph-based, non-linear architecture.
Since the Dl4jMlpClassifier holds all layers in a linear/sequential manner, it is not capable of parsing these models.
Idea
Create a Dl4jZooModel classifier that mimics the Dl4jMlpClassifier behavior and can load arbitrary zoo models, but does not allow changing the model architecture, except for the input and output.
Fully upgrade the Deeplearning4j API to version 1.0.0. See also the 1.0.0-alpha release notes.
Describe the bug
Installation instructions are incorrect for release version on Github and the website.
To Reproduce
Installation instructions are incorrect.
Expected behavior
What are the right installation instructions for Weka for Deep Learning?
Since DL4J added support for FastText, please add the same for wekaDL4J. Otherwise, please give me some insight into how we can adapt the existing code to use a model (bin) generated by fastText. Thanks.
Hi,
I'm finding that the default values of many parameters are NaN (e.g., Convolution Layer, NeuralNet Configuration).
The FileIterationListener is raising an exception.
Cheers,
Felipe
I have installed the wekaDeeplearning4j package using the package manager, but I am not able to choose different types of layers for the network.
Steps to reproduce -
Caused by: java.lang.RuntimeException: org.nd4j.linalg.factory.Nd4jBackend$NoAvailableBackendException: Please ensure that you have an nd4j backend on your classpath. Please see: http://nd4j.org/getstarted.html
at org.nd4j.linalg.factory.Nd4j.initContext(Nd4j.java:5449)
at org.nd4j.linalg.factory.Nd4j.<clinit>(Nd4j.java:213)
... 136 more
Weka version: 3.8.2
wekaDeeplearning4j package version: 1.5.6
Operating System: macOS
Describe the bug
The Dl4jStringToGlove filter deadlocks during training.
The issue stems from the Deeplearning4j Glove model and is fixed in the master branch (deeplearning4j/deeplearning4j@94c811e).
The issue should be gone as soon as Deeplearning4j publishes their new version.
Dl4jStringToGlove and Dl4jWord2Vec filters
Is your feature request related to a problem? Please describe.
I am using wekaDeeplearning4j through the Weka UI. After setting the debug option to True, I do not see any additional logs in weka.log or network.log.
Describe the solution you'd like
DL4J also provides debug logs; it would be helpful if we could view the DL4J logs when the debug option is set to true.
Make the evaluation metrics in EpochListener configurable.
This is not a feature suggestion per se, but a question about this awesome package for Weka.
I was wondering if it is possible to define a configuration for a RNN such as it can be considered a bidirectional RNN. Is there a way?
I have been reading the documentation for DL4J, and some layers specified there are explicitly defined as bidirectional. In Weka DL4J there are the truncated back-propagation-through-time backward and forward parameters. Does setting the backward parameter to 0 make my network unidirectional? Does setting both != 0 make it bidirectional?
Thank you very much.
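For context, in Deeplearning4j itself bidirectionality comes from wrapping a recurrent layer in the Bidirectional wrapper, not from the tBPTT forward/backward lengths. A hedged sketch of what that looks like in raw DL4J (the layer sizes are made up, and whether the Weka wrapper exposes this is exactly the open question here):

```java
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;

public class BiRnnSketch {
    // A bidirectional LSTM layer in plain DL4J: the Bidirectional wrapper
    // runs the wrapped layer forwards and backwards over the sequence and
    // combines the outputs (CONCAT doubles the layer's output size).
    static Bidirectional biLstm(int nIn, int nOut) {
        return new Bidirectional(
                Bidirectional.Mode.CONCAT,
                new LSTM.Builder().nIn(nIn).nOut(nOut).build());
    }
}
```

Requires the deeplearning4j-nn artifact on the classpath.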
Describe the bug
I've noticed that the validation accuracy logged by DL4J is almost an exact inversion of the test-set accuracy reported by Weka for some binary classification datasets I've been using.
To Reproduce
Steps to reproduce the behavior:
Note that this does not happen with all binary classification datasets I have used.
Expected behavior
Accuracy reported by dl4j should be similar to what is reported by weka.
I have been training a model with 244 label classes; however, an unexpected error occurs that states: Cannot do conversion to one hot using batched reader: 244 output classes, but array.size(1) is 203.
BaseImageRecordReader - ImageRecordReader: 244 label classes inferred using label generator ArffMetaDataLabelGenerator
BaseImageRecordReader - ImageRecordReader: 203 label classes inferred using label generator ArffMetaDataLabelGenerator
When using wekadeeplearning4j for time-series forecasting, with an LSTM layer and RnnOutputLayer set up, I get this message:
Problem evaluating forecaster
indexes must be same length as array rank.
What could be the problem here?
I would love support for outputting the weights of each attribute (similar to the MLP classifier) in the output window (or saving them to a file).
Thanks
Add a way to serialize the embeddings generated with implementations of Dl4jStringToWordEmbeddings, so that they can be loaded in TextInstanceIterator.
Dl4j v0.9.0 introduced a new way to set updater parameters with
NeuralNetConfiguration.Builder().updater(new Sgd(0.1))
instead of
NeuralNetConfiguration.Builder().updater(UPDATER.SGD).learningRate(0.1)
The Weka wrappers have already been written in the updater package.
Due to a bug in dl4j v0.9.1 the new API cannot be used yet. Maybe we need to wait until v0.9.2.
I want to know how we can modify output layer settings, such as the activation function and loss function, for pre-trained models such as Dl4JResNet50 using the Java API.
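In raw Deeplearning4j (not necessarily exposed through the Weka wrapper), swapping the output layer of a pretrained network is done with the TransferLearning builder. A hedged sketch, with the feature size, class count, and the loading of the pretrained model left as assumptions:

```java
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ReplaceOutput {
    // Sketch: replace the output layer of a pretrained MultiLayerNetwork
    // with a new one using a different activation and loss function.
    // `pretrained` is assumed to be loaded elsewhere (e.g. from a zoo model).
    static MultiLayerNetwork withNewOutput(MultiLayerNetwork pretrained, int nIn, int nOut) {
        return new TransferLearning.Builder(pretrained)
                .removeOutputLayer()
                .addLayer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .activation(Activation.SOFTMAX)
                        .nIn(nIn)   // assumed feature size of the last hidden layer
                        .nOut(nOut) // assumed number of target classes
                        .build())
                .build();
    }
}
```

Requires the deeplearning4j-nn and nd4j artifacts on the classpath.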
It is not a bug, but I ran into it ...
Hello,
after I installed wekadeeplearning4j, I tested it, with the result:
Installed Repository Loaded Package
========= ========== ====== =======
1.5.14 ----- Yes : Weka wrappers for Deeplearning4j
but when I run
./install-cuda-libs.ps1 ~/Downloads/wekaDeeplearning4j-cuda-10.1-1.5.14-windows-x86_64.zip
or
./install-cuda-libs.ps1
it shows "Could not find users\admin\wekafiles\packages\wekaDeeplearning4j", but the directory really exists.
What should I do? Thank you.
Running both examples from https://deeplearning.cms.waikato.ac.nz/examples/ gives "unable to find class weka.classifiers.functions.Dl4jMlpClassifier".
I am running this in a Maven project; wekaDeeplearning4j was installed via ./build.sh, version 1.2.1.
Add the platform dependency for each package as described in http://weka.wikispaces.com/How+are+packages+structured+for+the+package+management+system%3F
E.g.
# Specify which OS's the package can operate with. Omitting this entry indicates no restrictions on OS. (optional)
OSName=Windows,Mac,Linux
# Specify which architecture the package can operate with. Omitting this entry indicates no restriction. (optional)
OSArch=64
The add-gpu-support page (https://deeplearning.cms.waikato.ac.nz/install/#add-gpu-support) specifies that one needs CUDA v8 or CUDA v9. Please add support for CUDA v10.x.
I am not sure if such a thing exists somewhere, but I had to write my own iterator that can return mini-batches of a dataset and (optionally) shuffle it as well. It seems not to work properly when used in conjunction with the multiple-epochs iterator that comes with DL4J. For example, if ShufflingDataSetIterator is used on its own on iris.arff with a batch size of 50, we expect 3 mini-batches (since there are 150 instances in total). However, if we wrap it in MultipleEpochsIterator and specify 10 epochs, we get 28 mini-batches instead of the expected 10*3 = 30 mini-batches.
I have reproduced this phenomenon in the file IrisTest.java.
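The expected arithmetic from the report can be sketched as follows (numbers taken from the iris.arff example above; the 28 observed mini-batches fall short of this count):

```java
// Sketch: expected mini-batch counts for an epoch-wrapped iterator.
public class BatchCount {
    // Number of mini-batches needed to cover `instances` rows at `batchSize`.
    static int batchesPerEpoch(int instances, int batchSize) {
        return (instances + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        int perEpoch = batchesPerEpoch(150, 50); // iris.arff: 150 instances
        System.out.println(perEpoch);            // 3
        System.out.println(perEpoch * 10);       // 30 over 10 epochs, not 28
    }
}
```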
As discussed with @m-a-hall:
is it possible to populate weka.dl4j.NeuralNetConfiguration with the settings extracted from the org.deeplearning4j.nn.conf.NeuralNetConfiguration attached to a zoo model? It would be quite nice to see (and be able to tweak) the default configuration for a given zoo model.
That should be partly possible. The thing is that the Dl4j model defines a MultiLayerConfiguration (see https://github.com/deeplearning4j/deeplearning4j/blob/308e141fe40bd8ff1a3d1f8cc75615c240089ef7/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/AlexNet.java#L86), which allows for more fine-grained, layer-wise network configuration. Our implementation only has a network-wide (one for all layers) configuration. Though, almost all configuration values are the same for all layers in the ModelZoo and are defined at the beginning of the conf() method (see https://github.com/deeplearning4j/deeplearning4j/blob/308e141fe4/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/AlexNet.java#L89-L102).
It would be possible to try to parse a MultiLayerConfiguration of a ModelZoo into a Weka NeuralNetConfiguration by checking which values are the same over all layers.
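The "check which values are the same over all layers" step might be sketched like this, with plain maps standing in for the per-layer configuration fields (keys and values are made up for illustration):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class CommonSettings {
    // Keep only the keys whose value is identical across every layer;
    // these are the settings that could be lifted into one network-wide
    // configuration, while the rest stay layer-specific.
    static Map<String, Object> common(List<Map<String, Object>> layers) {
        Map<String, Object> result = new HashMap<>(layers.get(0));
        for (Map<String, Object> layer : layers.subList(1, layers.size())) {
            result.entrySet().removeIf(
                e -> !Objects.equals(layer.get(e.getKey()), e.getValue()));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> layers = Arrays.asList(
            new HashMap<>(Map.of("activation", "relu", "nOut", 96)),
            new HashMap<>(Map.of("activation", "relu", "nOut", 256)));
        System.out.println(common(layers)); // {activation=relu}
    }
}
```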
I generate a model A, and then I'd like to generate a model B based on model A and some new data.
I think it would be a good idea to have CNN and RNN classifiers that can act directly on string attributes, as SGDText does: http://weka.sourceforge.net/doc.dev/weka/classifiers/functions/SGDText.html
These classifiers should receive pre-trained word embeddings as parameters.
The following two dl4j examples should help:
The leaky ReLU alpha parameter is currently part of the NeuralNetworkConfiguration class and should be moved to the actual LeakyRelu activation class.