waikato / meka
Multi-label classifiers and evaluation procedures using the Weka machine learning framework.
Home Page: http://waikato.github.io/meka/
License: GNU General Public License v3.0
Hi,
to make it easier to track changes in how Meka operates, would you mind adding a -version parameter that simply prints the version to stdout?
Best,
Piotr
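A flag like this is usually wired in ahead of the normal option parsing. A minimal stdlib-only sketch (the class name and version constant are hypothetical, not Meka's actual code):

```java
// Sketch of a -version flag checked before regular option parsing.
// VERSION is a placeholder; Meka's real version lookup may differ.
public class VersionFlagSketch {
    static final String VERSION = "1.9.2"; // hypothetical constant

    static boolean handleVersionFlag(String[] args) {
        for (String arg : args) {
            if (arg.equals("-version")) {
                System.out.println(VERSION); // print version to stdout and stop
                return true;
            }
        }
        return false; // fall through to normal option parsing
    }

    public static void main(String[] args) {
        if (handleVersionFlag(args)) {
            return;
        }
        // ... normal startup would continue here ...
    }
}
```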
Hi
I tried to use CRUpdateable as a classifier. I build the classifier from the first 100 instances and then add instances one by one to update it, but the update throws a NullPointerException. My code is here:
try {
    ConverterUtils.DataSource dataSource = new ConverterUtils.DataSource(FILE_PATH); // original dataset
    Instances preparedDataSet = dataSource.getDataSet();
    preparedDataSet = filterUnsupervisedAttributes(preparedDataSet);
    preparedDataSet.setClassIndex(7);
    CRUpdateable classifier = new CRUpdateable();
    RandomForest randomForest = createRandomForest(1);
    classifier.setClassifier(randomForest);
    Instances trainingInstances = new Instances(dataSource.getStructure()); // temporary dataset for train
    trainingInstances = filterUnsupervisedAttributes(trainingInstances);
    trainingInstances.setClassIndex(7);
    Instances testInstances = new Instances(dataSource.getStructure()); // temporary dataset for test
    testInstances = filterUnsupervisedAttributes(testInstances);
    testInstances.setClassIndex(7);
    int countTestInstances = 0;
    int countTrainInstances = 0;
    boolean firstTrain = true;
    boolean benchTest = true;
    int numInst = preparedDataSet.numInstances();
    for (int row = 123; row < 5021; row++) {
        Instance trainingInstance = preparedDataSet.instance(row);
        trainingInstances.add(trainingInstance); // collect instances to use as training
        countTrainInstances++;
        if (firstTrain && countTrainInstances % 100 == 0) { // train the classifier with the first 100 instances (without any missing values)
            firstTrain = false;
            classifier.buildClassifier(trainingInstances);
        }
        if (!firstTrain) {
            benchTest = true;
            // classifier.updateClassifier(trainingInstance);
            for (int j = row + 1; j < row + 101; j++) {
                if (benchTest && countTestInstances != 100) { // add next 100 instances to testInstances
                    Instance testInstance = preparedDataSet.instance(j);
                    testInstances.add(testInstance);
                    countTestInstances++;
                    if (countTestInstances % 100 == 0) {
                        System.out.println("Evaluate CRUpdateable classifier on ");
                        String top = "PCut1";
                        String vop = "3";
                        Result result = Evaluation.evaluateModel(classifier, trainingInstances, testInstances, top, vop);
                        System.out.println("Evaluation available metrics: " + result.availableMetrics());
                        System.out.println("Evaluation Info: " + result.toString());
                        System.out.println("Levenshtein distance: " + result.getValue("Levenshtein distance"));
                        System.out.println("Type: " + result.getInfo("Type"));
                        countTestInstances = 0;
                        benchTest = false;
                        testInstances.delete();
                    }
                }
            }
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
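One thing worth checking in loops like the above is that the inner test window (row+1 .. row+100) never runs past the last instance of the dataset as row grows. The window bookkeeping can be sketched with plain indices, independent of any Meka/Weka classes (class and method names here are my own):

```java
import java.util.ArrayList;
import java.util.List;

// Stdlib-only sketch of the window bookkeeping: train on the first block,
// then repeatedly test on the next fixed-size block. Uses plain index
// ranges instead of Weka Instances, and stops before the end of the data.
public class WindowSketch {
    // Returns [start, end) index pairs of test windows that stay in bounds.
    static List<int[]> testWindows(int numInstances, int trainSize, int windowSize) {
        List<int[]> windows = new ArrayList<>();
        int start = trainSize;
        while (start + windowSize <= numInstances) { // never read past the end
            windows.add(new int[]{start, start + windowSize});
            start += windowSize;
        }
        return windows;
    }
}
```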
Thanks,
Mali
Auto-creation of eval.txt
was a bad idea (running/testing time is always different, so the file always shows up as modified in the repository); this should be reverted.
The full posterior distribution should be available (not just the max probability).
CNode.java: use the distribution information, rather than the nominal value, as an attribute.
Range option to (optionally) specify a fixed chain in the options.
Is there a way to add other base classifiers to Meka from Weka?
Should mention in the tutorial that one should add a new folder hierarchy in ./src/main/resources/meka/gui/goe/MekaPropertiesCreator.props to see it in the GUI.
Randomize=true -- randomize?
Test compilation and execution with Java 9.
Hi everyone,
I'm using version 1.9.1 of Meka from Maven.
I tried multi-target using the same code as the multi-label example on
GitHub for training, cross validation, and evaluating the test set.
In multi-label we have access to the evaluation functions below:
//Cross validation
meka.classifiers.multilabel.Evaluation.cvModel(classifier, train, NoFolds,
"PCut3", vop);
//Evaluate test
meka.classifiers.multilabel.Evaluation.evaluateModel(classifier, train,
test, "PCut3", vop);
Question:
Can I cast the multi-target classifier to multi-label and use the same
functions?
MultiTargetClassifier classifier = new meka.classifiers.multitarget.BCC();
result =
meka.classifiers.multilabel.Evaluation.evaluateModel((MultiLabelClassifier)
classifier, train, test, "PCut3", vop);
or?
result =
meka.classifiers.multilabel.Evaluation.cvModel((MultiLabelClassifier)
classifier, train, NoFolds, "PCut3", vop);
If I use the above functions with a cast to MultiLabelClassifier, I get different results to the
Explorer. E.g. below:
== Evaluation Info
Classifier meka.classifiers.multitarget.BCC
Options [-X, Ibf, -S, 0, -W,
weka.classifiers.trees.J48, --, -C, 0.25, -M, 2]
Additional Info [0, 2, 1]
Dataset TrainSet
Number of labels (L) 3
Type MT-CV
Verbosity 6
== Predictive Performance
N(test) 2384
L 3
Hamming score 0.859
Exact match 0.587
Hamming loss 0.141
ZeroOne loss 0.413
Levenshtein distance 0.141
Label indices [ 0 1 2 ]
Accuracy (per label) [ 0.988 0.988 0.599 ]
== Evaluation Info
Classifier meka.classifiers.multitarget.BCC
Options [-X, Ibf, -S, 0, -W,
weka.classifiers.trees.J48, --, -C, 0.25, -M, 2]
Additional Info [0, 2, 1]
Dataset MultiLabel
Number of labels (L) 3
Type MT-CV
Verbosity 3
== Predictive Performance
N(test) 2384
L 3
Hamming score 0.999
Exact match 0.998
Hamming loss 0.001
ZeroOne loss 0.002
Levenshtein distance 0.001
Label indices [ 0 1 2 ]
Accuracy (per label) [ 1.000 1.000 0.998 ]
Are there similar evaluation functions for cross validation and for evaluating test data using training and test instances?
Please show me how I can use those evaluation functions in multi-target
classification.
Thanks
Le
They should work, but there were some complaints/issues on the mailing list
When I launch run.bat, this is the error message:
could not initialize the generic properties creator
java.lang.ClassCastException: java.base/jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to java.base/java.net.URLClassLoader
Could someone help/explain ?
Thanks
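This ClassCastException is characteristic of Java 9+, where the application class loader is no longer a java.net.URLClassLoader, so any code that casts ClassLoader.getSystemClassLoader() to URLClassLoader fails. A version-safe alternative is to build a dedicated URLClassLoader instead of casting the system one; a minimal sketch (class and method names are my own, not Meka's actual fix):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Sketch: rather than casting the system class loader (which throws a
// ClassCastException on Java 9+), construct a dedicated URLClassLoader
// for extra jars; parent delegation still reaches the system class path.
public class LoaderSketch {
    static URLClassLoader dedicatedLoader(URL[] extraJars) {
        return new URLClassLoader(extraJars, LoaderSketch.class.getClassLoader());
    }
}
```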
Hi, I'm testing a single target using BCC with J48.
The result seems to be different compared to Weka.
E.g. Accuracy is 0.525 in Meka while 0.5 in Weka for cross validation.
Is the method implemented differently between the two? Shouldn't I get the same answer if it is a single target?
Thanks
Meka
N(test) 40
L 1
Hamming score 0.525
Exact match 0.525
Hamming loss 0.475
ZeroOne loss 0.475
Levenshtein distance 0.475
Label indices [ 0 ]
Accuracy (per label) [ 0.525 ]
Weka
Correctly Classified Instances 20 50 %
Incorrectly Classified Instances 20 50 %
Kappa statistic 0.2599
Mean absolute error 0.2036
Root mean squared error 0.4321
Relative absolute error 73.8882 %
Root relative squared error 117.4604 %
Coverage of cases (0.95 level) 55 %
Mean rel. region size (0.95 level) 31 %
Total Number of Instances 40
Hello,
I would like to know what is the original reference of the REUTERS-K500-EX2.arff dataset, since I would like to use it in one of my experiments.
Thanks!
Use a 0.5 threshold as default; and force 0.5 (or a pre-set ad-hoc value) whenever the user has not additionally supplied the training set with the load-from-disk option.
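The fixed-threshold behaviour requested here amounts to a per-label cutoff on the confidence outputs, needing no training-set statistics; a stdlib-only sketch (names are my own, not Meka's threshold code):

```java
// Sketch: apply a fixed threshold (default 0.5) to per-label confidences
// to obtain a binary relevance vector.
public class ThresholdSketch {
    static int[] applyThreshold(double[] confidences, double threshold) {
        int[] labels = new int[confidences.length];
        for (int j = 0; j < confidences.length; j++) {
            labels[j] = confidences[j] >= threshold ? 1 : 0; // relevant iff at/above cutoff
        }
        return labels;
    }
}
```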
Add Mulan and Clus as packages that can be added to Meka.
Hi,
in theory, if there is only one label, classifier chains should behave like binary relevance. But they don't; they actually break. The same happens with PMCC:
java.lang.IllegalArgumentException: bound must be positive
at java.util.Random.nextInt(Random.java:388)
at meka.core.A.swap(A.java:268)
at meka.classifiers.multilabel.MCC.buildClassifier(MCC.java:91)
at meka.classifiers.multilabel.Evaluation.runExperiment(Evaluation.java:228)
at meka.classifiers.multilabel.ProblemTransformationMethod.runClassifier(ProblemTransformationMethod.java:172)
at meka.classifiers.multilabel.ProblemTransformationMethod.evaluation(ProblemTransformationMethod.java:152)
at meka.classifiers.multilabel.MCC.main(MCC.java:250)
Could you fix this? I finally have the time to sit down with experiments we talked about with @jmread in Macedonia, and this breaks a lot for me :(
Change the EnsembleML class to `Ensemble` - there's no need for the 'ML' distinction since it's in Meka.
-- should also merge BaggingML into Ensemble -- simply by including a boolean option for 'sampling with replacement'.
Inside some of the Java examples we call
classifier.buildClassifier
Inside Evaluation.evaluateModel(...), classifier.buildClassifier is called again.
It would be good to flag the classifier as having been built; then, inside Evaluation.evaluateModel, check classifier.isBuilt() and avoid rebuilding the classifier.
This way we can reuse the evaluateModel function to evaluate a saved model, or pass in a prebuilt model just to evaluate it.
Regards
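The proposed guard (isBuilt() is a hypothetical name from the suggestion above, not existing Meka API) could look roughly like this:

```java
// Sketch of the proposed build-once guard; isBuilt() and the flag are
// hypothetical, illustrating the suggestion rather than Meka's code.
public class BuildGuardSketch {
    private boolean built = false;

    public boolean isBuilt() {
        return built;
    }

    public void buildClassifier() {
        // ... actual training would happen here ...
        built = true;
    }

    public void evaluate() {
        if (!isBuilt()) { // only (re)build when the model is not yet trained
            buildClassifier();
        }
        // ... evaluation proceeds on the already-built model ...
    }
}
```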
Instances reader for multi-label libSVM datasets: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html
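For reference, lines in that format look like `1,3 12:0.5 40:1.0` - comma-separated label indices followed by sparse index:value feature pairs. A minimal label-parsing sketch, independent of Weka's Instances API (class and method names are my own):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: extract the label list from one multi-label libSVM line,
// e.g. "1,3 12:0.5 40:1.0" -> [1, 3]. Feature parsing is omitted.
public class LibSvmLineSketch {
    static List<Integer> parseLabels(String line) {
        List<Integer> labels = new ArrayList<>();
        String[] tokens = line.trim().split("\\s+");
        if (!tokens[0].contains(":")) { // first token is the label list, not a feature
            for (String l : tokens[0].split(",")) {
                labels.add(Integer.parseInt(l));
            }
        }
        return labels;
    }
}
```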
Generate Markdown from the classifier code (e.g., the globalInfo, tipText and technical info) to produce better documentation, more easily.
I was successful in generating predictions for a single unlabeled data instance by using a non-incrementally trained model, as follows:
java -Xmx6000m -cp "./lib/*" meka.classifiers.multilabel.PS -l dump_ps_nbu_dmoz -t dmoz_computers_2000_175.arff -verbosity 5 -T temp_test.arff
== Individual Errors
|==== PREDICTIONS (N=1.0) =====>
| 1 [] [28, 105]
|==============================<
Then I tried it with an incrementally trained model and I got this error:
java -Xmx6000m -cp "./lib/*" meka.classifiers.multilabel.incremental.PSUpdateable -l gui_dmoz_ps_nb -t dmoz_computers_2000_175.arff -verbosity 5 -T temp_test.arff
Evaluation exception (java.lang.Exception: Illegal options: -T temp_test.arff ); failed to run experiment
java.lang.Exception: Illegal options: -T temp_test.arff
at weka.core.Utils.checkForRemainingOptions(Utils.java:542)
at meka.classifiers.multilabel.incremental.IncrementalEvaluation.evaluateModel(IncrementalEvaluation.java:107)
at meka.classifiers.multilabel.incremental.IncrementalEvaluation.runExperiment(IncrementalEvaluation.java:45)
at meka.classifiers.multilabel.incremental.PSUpdateable.main(PSUpdateable.java:204)
I don't know if I'm making a mistake, because I haven't found any documentation for this. Is there any other way to predict using incremental models?
Hello,
I would like to know what is the difference between the meta multi-label classifiers EnsembleML and BaggingML. Is there a reference for EnsembleML?
Thanks,
Alex de Sá
Hi,
I am trying to use CRUpdateable for multi-target regression. I found that CRUpdateable's default classifier is HoeffdingTree, but when I call buildClassifier, it gives me this error:
weka.core.UnsupportedAttributeTypeException: weka.classifiers.bayes.NaiveBayesUpdateable: Cannot handle numeric class!
My code is here:
CRUpdateable classifier = new CRUpdateable();
HoeffdingTree ht = new HoeffdingTree();
classifier.setClassifier(ht);
classifier.buildClassifier(trainingInstances);
If HoeffdingTree is not for regression, why is it the default classifier in CRUpdateable? And if it is for regression, why doesn't it work?
Thanks
Mali
I have tried to package Meka 1.9.2 with no success.
I am following the command in the tutorial:
mvn clean install
And I get the following stack trace:
[INFO] --- latex-maven-plugin:1.4.1:latex (default) @ meka ---
[ERROR]
java.io.IOException: Cannot run program "pdflatex" (in directory "F:\Projects\meka\target\latex\Tutorial"): CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:620)
…
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
at java.lang.ProcessImpl.start(ProcessImpl.java:137)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:16 min
[INFO] Finished at: 2018-10-12T12:18:50-07:00
[INFO] Final Memory: 387M/1212M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.fracpete:latex-maven-plugin:1.4.1:latex (default) on project meka: Cannot run program "pdflatex" (in directory "F:\Projects\meka\target\latex\Tutorial"): CreateProcess error=2, The system cannot find the file specified -> [Help 1]
I expected the project to compile and provide me with an environment including the BAT file, a lib directory with the needed dependencies, and a successfully built meka-X.X.X.jar file.
I am using Windows 8.1 Pro on an Intel P7 computer with 16GB of RAM. I am compiling with Oracle JDK 1.8.0_181 under Cygwin (I have built many other projects successfully, so I do not think the problem is in my Cygwin setup).
Note that I also tried with Meka snapshot (Latest commit fe5eeb5) with no success.
Any help would be greatly appreciated.
Add to tutorial: new ways to save predictions, both from the GUI (Export Predictions to CSV) and from the CLI (-predictions); note: not yet for cross validation.
At http://meka.sourceforge.net/ the link "The API reference." points to http://meka.sourceforge.net/api-1.7/index.html although 1.9.0 is the current version. Is it possible to host the current javadoc as well?
instead of -P 0, which is basically LP.
I do the following filters on numeric data such that
and when I try to classify I get the error
Evaluation failed: "java.lang.IndexOutOfBoundsException: Index: 0, Size: 0"
I get the same error even with a very small sample, so the sample size is very unlikely to be the issue.
What is causing this IndexOutOfBoundsException in Classify in MEKA?
Hello,
I am now working on multi-label deep learning, and have noticed the paper "Deep Learning for Multi-label Classification", Jesse Read and Fernando Perez-Cruz, 2014. Are the functions related to this paper already included in the Meka package?
Thanks very much
I am trying to train an incremental classifier, and I am planning to update it in the future with new datasets as soon as they become available. For that, I need to save the trained incremental classifier. Saving a trained classifier can be done using the -d option, but the option does not work for incremental classifiers. This is the error I'm getting:
C:\Users\Palash\Dropbox\Project ME\java\meka-release-1.9.0>java -Xmx1024m -cp "./lib/*" meka.classifiers.multilabel.incremental.BRUpdateable -d classifier.dump -verbosity 5 -t train_dmoz_computers_dataset.arff -W weka.classifiers.meta.MOA -- -B moa.classifiers.trees.HoeffdingAdaptiveTree -output-debug-info
Evaluation exception (java.lang.Exception: Illegal options: -d classifier.dump ); failed to run experiment
java.lang.Exception: Illegal options: -d classifier.dump
at weka.core.Utils.checkForRemainingOptions(Utils.java:542)
at meka.classifiers.multilabel.incremental.IncrementalEvaluation.evaluateModel(IncrementalEvaluation.java:83)
at meka.classifiers.multilabel.incremental.IncrementalEvaluation.runExperiment(IncrementalEvaluation.java:45)
at meka.classifiers.multilabel.incremental.BRUpdateable.main(BRUpdateable.java:72)
If saving a classifier is not yet supported by incremental learners in Meka, how can I tweak it to get the job done?
Hi,
I am trying to call Evaluation from the latest version of Meka like this:
Result result = Evaluation.evaluateModel(classifier, testInstances, top, vop);
Eventually I found that this call ends up in the distributionForInstance method of CR.class, but I don't understand why this method creates a double array of twice the class-index length and puts the prediction values at the end of the array:
public double[] distributionForInstance(Instance x) throws Exception {
    int L = x.classIndex();
    double[] y = new double[L * 2];
    for (int j = 0; j < L; ++j) {
        Instance x_j = (Instance) x.copy();
        x_j.setDataset((Instances) null);
        x_j = MLUtils.keepAttributesAt(x_j, new int[]{j}, L);
        x_j.setDataset(this.m_Templates[j]);
        double[] w = this.m_MultiClassifiers[j].distributionForInstance(x_j);
        y[j] = (double) Utils.maxIndex(w);
        y[L + j] = w[(int) y[j]];
    }
    return y;
}
In my case I have 7 class indices. This method creates y[14] and puts the 7 prediction values at the end of y; the first 7 values in y are then zero, and the evaluation result showed zero...
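Reading the quoted code, the first L entries of y hold the argmax class index per label and the last L entries hold that class's probability. A small stdlib-only sketch of splitting the two halves (the helper name is my own, not Meka API):

```java
import java.util.Arrays;

// Sketch: split a double[2L] output of the shape above into the predicted
// class index per label (first half) and its confidence (second half).
public class SplitSketch {
    static double[][] split(double[] y) {
        int L = y.length / 2;
        return new double[][]{
            Arrays.copyOfRange(y, 0, L),     // predicted class index per label
            Arrays.copyOfRange(y, L, 2 * L)  // confidence of that prediction
        };
    }
}
```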
Hello,
I've just downloaded the Meka package, and when trying to start run.sh I get the following error:
Vlads-MacBook:~ vlad$ /Users/vlad/Downloads/meka-release-1.9.0/run.sh ; exit;
Exception in thread "main" java.lang.NoClassDefFoundError: meka/gui/guichooser/GUIChooser
Caused by: java.lang.ClassNotFoundException: meka.gui.guichooser.GUIChooser
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
Deleting expired sessions...40 completed.
[Process completed]
Does anybody know what the problem might be?
The homepage http://meka.sourceforge.net/ is broken.
Use a Matrix class for storing all values in Result
(thus it could be a sparse matrix in the case of multi-label).
Hi,
The result of my evaluation is zero and I don't know why. My code is here:
try {
    ConverterUtils.DataSource dataSource = new ConverterUtils.DataSource(FILE_PATH); // original dataset
    Instances preparedDataSet = dataSource.getDataSet();
    preparedDataSet = filterUnsupervisedAttributes(preparedDataSet);
    preparedDataSet.setClassIndex(7);
    CRUpdateable classifier = new CRUpdateable();
    RandomForest randomForest = createRandomForest(1);
    classifier.setClassifier(randomForest);
    Instances trainingInstances = new Instances(dataSource.getStructure()); // temporary dataset for train
    trainingInstances = filterUnsupervisedAttributes(trainingInstances);
    trainingInstances.setClassIndex(7);
    Instances testInstances = new Instances(dataSource.getStructure()); // temporary dataset for test
    testInstances = filterUnsupervisedAttributes(testInstances);
    testInstances.setClassIndex(7);
    int countTestInstances = 0;
    int countTrainInstances = 0;
    boolean firstTrain = true;
    boolean benchTest = true;
    int numInst = preparedDataSet.numInstances();
    for (int row = 123; row < 5021; row++) {
        Instance trainingInstance = preparedDataSet.instance(row);
        trainingInstances.add(trainingInstance); // collect instances to use as training
        countTrainInstances++;
        if (firstTrain && countTrainInstances % 100 == 0) { // train the classifier with the first 100 instances (without any missing values)
            firstTrain = false;
            classifier.buildClassifier(trainingInstances);
        }
        if (!firstTrain) {
            benchTest = true;
            // classifier.updateClassifier(trainingInstance);
            for (int j = row + 1; j < row + 101; j++) {
                if (benchTest && countTestInstances != 100) { // add next 100 instances to testInstances
                    Instance testInstance = preparedDataSet.instance(j);
                    testInstances.add(testInstance);
                    countTestInstances++;
                    if (countTestInstances % 100 == 0) {
                        System.out.println("Evaluate CRUpdateable classifier on ");
                        String top = "PCut1";
                        String vop = "3";
                        Result result = Evaluation.evaluateModel(classifier, trainingInstances, testInstances, top, vop);
                        System.out.println("Evaluation available metrics: " + result.availableMetrics());
                        System.out.println("Evaluation Info: " + result.toString());
                        System.out.println("Levenshtein distance: " + result.getValue("Levenshtein distance"));
                        System.out.println("Type: " + result.getInfo("Type"));
                        countTestInstances = 0;
                        benchTest = false;
                        testInstances.delete();
                    }
                }
            }
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
The result of Evaluation is here:
Evaluation Info: == Evaluation Info
Classifier meka.classifiers.multitarget.incremental.CRUpdateable
Options [-W, weka.classifiers.trees.RandomForest, --, -P, 100, -I, 1, -num-slots, 1, -K, 0, -M, 1.0, -V, 0.001, -S, 1]
Additional Info
Dataset Missing_values_Predicted-weka.filters.unsupervised.attribute.RemoveType-Tstring
Number of labels (L) 7
Type MT
Verbosity 3
== Predictive Performance
N(test) 100
L 7
Hamming score 0
Exact match 0
Hamming loss 1
ZeroOne loss 1
Levenshtein distance 1
Label indices [ 0 1 2 3 4 5 6 ]
Accuracy (per label) [ 0.000 0.000 0.000 0.000 0.000 0.000 0.000 ]
== Additional Measurements
Number of training instances 154
Number of test instances 100
Label cardinality (train set) 659.407
Label cardinality (test set) 676.757
Build Time 0.061
Test Time 0.006
Total Time 0.067
Hi,
I'm trying to get prediction results after testing from meka. I'm loading a prebuilt classifier:
/usr/bin/java -cp "/home/niedakh/scikit/meka/meka-release-1.9.0/lib/*" meka.classifiers.multilabel.LC -W weka.classifiers.bayes.NaiveBayes -threshold 0 -verbosity 5 -t ~/engine/scikit-multilearn/meka/data/scene-test.arff -T ~/engine/scikit-multilearn/meka/data/scene-test.arff -l classifier.dump
With verbosity 5 I am getting results such as:
| 1196 [5] [0, 1, 2, 3, 4, 5]
What I gather from meka.core.Result is that the left column is the truth from the test set and the right column holds the predictions?
With verbosity 6 I am getting:
| 1196 [ 0 0 0 0 0 1 ] [ 0,0 0,0 0,0 0,0 1,0 0,0 ]
This would suggest that with verbosity 5 Meka predicts all labels, while with verbosity 6 it predicts only the fifth one. Is this a bug in verbosity 5?
I am trying to learn a classifier and store it for the future, without needing to test it at the time of classification. I am using the following command to make sure no splitting for a test set is done (without it, Meka reported a test instance count larger than 0):
/usr/bin/java -cp "/home/niedakh/scikit/meka/meka-release-1.9.0/lib/*" meka.classifiers.multilabel.LC -W weka.classifiers.bayes.NaiveBayes -threshold 0 -verbosity 5 -split-percentage 100 -t ~/engine/scikit-multilearn/meka/data/scene-train.arff -d classifier.dump
I am receiving an error:
java.lang.ArrayIndexOutOfBoundsException: 0
at meka.core.MLEvalUtils.getMLStats(MLEvalUtils.java:57)
at meka.core.Result.getStats(Result.java:289)
at meka.classifiers.multilabel.Evaluation.evaluateModel(Evaluation.java:263)
at meka.classifiers.multilabel.Evaluation.runExperiment(Evaluation.java:187)
at meka.classifiers.multilabel.ProblemTransformationMethod.runClassifier(ProblemTransformationMethod.java:172)
at meka.classifiers.multilabel.ProblemTransformationMethod.evaluation(ProblemTransformationMethod.java:152)
at meka.classifiers.multilabel.LC.main(LC.java:148)
Dear all,
I am using the latest code from this repo and I am getting the following error on the attached CAL dataset (for which it is well known that the exact match is zero).
Any idea?
NDM
CAL500.f0.train.arff.gz
CAL500.f0.test.arff.gz
ndm@ndm-ThinkPad-L420:~/Desktop/MANIAC$ /usr/bin/java -cp "./meka/lib/*" meka.classifiers.multilabel.BR -threshold PCut1 -t ./data/CAL500.f0.train.arff -T ./data/CAL500.f0.test.arff -verbosity 20
java.lang.ArrayIndexOutOfBoundsException: 101
at meka.core.Metrics.P_FmacroAvgL(Metrics.java:658)
at meka.core.MLEvalUtils.getMLStats(MLEvalUtils.java:104)
at meka.core.MLEvalUtils.getMLStats(MLEvalUtils.java:58)
at meka.core.Result.getStats(Result.java:290)
at meka.classifiers.multilabel.Evaluation.evaluateModel(Evaluation.java:334)
at meka.classifiers.multilabel.Evaluation.runExperiment(Evaluation.java:220)
at meka.classifiers.multilabel.ProblemTransformationMethod.runClassifier(ProblemTransformationMethod.java:172)
at meka.classifiers.multilabel.ProblemTransformationMethod.evaluation(ProblemTransformationMethod.java:152)
at meka.classifiers.multilabel.BR.main(BR.java:157)
Hi
Currently, Meka seems to show the model output as a string only for BCC and CC with the J48 method.
Setting verbosity >= 5 does not show the model as a string in the output.
'Show Graph' in the Explorer does display the tree representation of the model.
Question:
For consistency, if possible, please enable showing the model text for CR and other methods.
Thanks
The 'type' (ML, MT, CV) should be rethought - it's not very elegant or flexible, and not necessary.
Hi,
I have three different (conceptual) doubts about the meta-algorithms; I am trying to understand them.
It is said in the documentation that Classification Maximization (CM) is a hard version of Expectation Maximization (EM). What does this statement mean? What are the differences between them?
The description of Meta BR (MBR) says "BR stacked with feature outputs into another BR". I did not understand it. What do you mean by this description?
In the description of Subset Mapper, it is said that this algorithm "maps the output of a multi-label classifier to a known label combination using the hamming distance". The Hamming distance between what?
Thanks in advance!
Alex de Sá
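As I read that Subset Mapper description (my interpretation, not a quote from the Meka docs), the distance is between the classifier's raw 0/1 output vector and each label combination observed in the training data, with the nearest known combination returned. A stdlib-only sketch:

```java
// Sketch: map a predicted 0/1 label vector to the nearest label
// combination seen in training, measured by Hamming distance.
public class SubsetMapSketch {
    static int hamming(int[] a, int[] b) {
        int d = 0;
        for (int j = 0; j < a.length; j++) {
            if (a[j] != b[j]) d++; // count positions where the vectors differ
        }
        return d;
    }

    static int[] mapToKnown(int[] predicted, int[][] knownCombinations) {
        int[] best = knownCombinations[0];
        for (int[] candidate : knownCombinations) {
            if (hamming(predicted, candidate) < hamming(predicted, best)) {
                best = candidate; // keep the closest combination (first wins ties)
            }
        }
        return best;
    }
}
```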