bwaldvogel / liblinear-java

Java version of LIBLINEAR

Home Page: https://liblinear.bwaldvogel.de

License: BSD 3-Clause "New" or "Revised" License

Shell 0.04% Java 99.96%
java svm logistic-regression liblinear

liblinear-java's Introduction


This is the Java version of LIBLINEAR.

The project site of the original C++ version is located at http://www.csie.ntu.edu.tw/~cjlin/liblinear/

The upstream changelog can be found at http://www.csie.ntu.edu.tw/~cjlin/liblinear/log

The upstream GitHub project can be found at https://github.com/cjlin1/liblinear

Dependencies

The only requirement is Java 8 or later.

Usage

<dependency>
    <groupId>de.bwaldvogel</groupId>
    <artifactId>liblinear</artifactId>
    <version>2.44</version>
</dependency>
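For Gradle users, the equivalent dependency declaration would be (Groovy DSL, same coordinates as above):

implementation 'de.bwaldvogel:liblinear:2.44'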

Please be aware that if this were a pure Java project, the code would be written differently in various places, i.e. with

  • Java coding style,
  • fewer static functions and less state,
  • smaller classes and methods.

However, I tried to stay as close as possible to the original C++ source code for the following reasons:

  • Maintainability: Patches for the original C++ version can often be applied easily.

  • Fewer translation errors: Sticking to the original source code makes it less likely to introduce new bugs caused by porting to Java.

  • Code reviews: Code reviews are easier to conduct since the sources can be compared to the original version.

Below follows a slightly modified version of the original README file. Please note that the README refers to the C++ version. As mentioned above, usage of the Java version is almost identical.

The three most important methods for programmatic usage that you might be interested in are (see the sketch after this list):

  • Linear.train(…)
  • Linear.predict(…)
  • Linear.predictProbability(…)
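A minimal end-to-end sketch of these calls (the file names and hyperparameters are placeholders; the classes mirror the example code quoted later in this document):

    import java.io.File;
    import de.bwaldvogel.liblinear.*;

    public class QuickStart {
        public static void main(String[] args) throws Exception {
            // The second argument is the bias; a value >= 0 appends a constant
            // bias feature to every instance (see "train Usage" below).
            Problem train = Train.readProblem(new File("train.libsvm"), 1);
            Problem test = Train.readProblem(new File("test.libsvm"), 1);

            // L2-regularized logistic regression (-s 0) with C = 1 and eps = 0.01
            Parameter parameter = new Parameter(SolverType.L2R_LR, 1.0, 0.01);
            Model model = Linear.train(train, parameter);

            for (Feature[] instance : test.x) {
                double prediction = Linear.predict(model, instance);

                // Probability estimates are supported for logistic regression only
                double[] probabilities = new double[model.getNrClass()];
                double mostLikely = Linear.predictProbability(model, instance, probabilities);
                System.out.println(prediction + " " + mostLikely);
            }
        }
    }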

Contributing

Please read the contributing guidelines if you want to contribute code to the project.

If you want to thank the author for this library or want to support the maintenance work, we are happy to receive a donation.



LIBLINEAR is a simple package for solving large-scale regularized linear classification, regression and outlier detection. It currently supports

  • L2-regularized logistic regression/L2-loss support vector classification/L1-loss support vector classification
  • L1-regularized L2-loss support vector classification/L1-regularized logistic regression
  • L2-regularized L2-loss support vector regression/L1-loss support vector regression
  • one-class support vector machine

This document explains the usage of LIBLINEAR.

To get started, please read the Quick Start section first. For developers, please check the Library Usage section to learn how to integrate LIBLINEAR in your software.

Table of Contents

  • When to use LIBLINEAR but not LIBSVM
  • Quick Start
  • train Usage
  • predict Usage
  • Examples
  • Library Usage
  • Additional Information

When to use LIBLINEAR but not LIBSVM

For some large data sets, training with and without nonlinear mappings gives similar performance. Without using kernels, one can efficiently train a much larger set via linear classification/regression. Such data usually have a large number of features. Document classification is an example.

Warning: While LIBLINEAR is generally very fast, its default solver may be slow in certain situations (e.g., if the data is not scaled or C is large). See Appendix B of our SVM guide on how to handle such cases.

http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf

Warning: If you are a beginner and your data sets are not large, you should consider LIBSVM first.

LIBSVM page: http://www.csie.ntu.edu.tw/~cjlin/libsvm

Quick Start

See the section Installation for installing LIBLINEAR.

After installation, there are programs train and predict for training and testing, respectively.

For the data format, please check the README file of LIBSVM. Note that feature indices must start from 1 (not 0).

A sample classification data included in this package is heart_scale.

Type train heart_scale, and the program will read the training data and output the model file heart_scale.model. If you have a test set called heart_scale.t, then type predict heart_scale.t heart_scale.model output to see the prediction accuracy. The output file contains the predicted class labels.
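With the Java version, the equivalent commands run the Train and Predict main classes (the jar name below is a placeholder for your actual classpath):

> java -cp liblinear.jar de.bwaldvogel.liblinear.Train heart_scale
> java -cp liblinear.jar de.bwaldvogel.liblinear.Predict heart_scale.t heart_scale.model output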

For more information about train and predict, see the sections train Usage and predict Usage.

To obtain good performance, one sometimes needs to scale the data. Please check the program svm-scale of LIBSVM. For large and sparse data, use -l 0 to keep the sparsity.

train Usage

Usage: train [options] training_set_file [model_file]
options:
-s type : set type of solver (default 1)
  for multi-class classification
     0 -- L2-regularized logistic regression (primal)
     1 -- L2-regularized L2-loss support vector classification (dual)
     2 -- L2-regularized L2-loss support vector classification (primal)
     3 -- L2-regularized L1-loss support vector classification (dual)
     4 -- support vector classification by Crammer and Singer
     5 -- L1-regularized L2-loss support vector classification
     6 -- L1-regularized logistic regression
     7 -- L2-regularized logistic regression (dual)
  for regression
    11 -- L2-regularized L2-loss support vector regression (primal)
    12 -- L2-regularized L2-loss support vector regression (dual)
    13 -- L2-regularized L1-loss support vector regression (dual)
  for outlier detection
    21 -- one-class support vector machine (dual)
-c cost : set the parameter C (default 1)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-n nu : set the parameter nu of one-class SVM (default 0.5)
-e epsilon : set tolerance of termination criterion
    -s 0 and 2
        |f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
        where f is the primal function and pos/neg are # of
        positive/negative data (default 0.01)
    -s 11
        |f'(w)|_2 <= eps*|f'(w0)|_2 (default 0.0001)
    -s 1, 3, 4, 7, and 21
        Dual maximal violation <= eps; similar to libsvm (default 0.1 except 0.01 for -s 21)
    -s 5 and 6
        |f'(w)|_1 <= eps*min(pos,neg)/l*|f'(w0)|_1,
        where f is the primal function (default 0.01)
    -s 12 and 13
        |f'(alpha)|_1 <= eps*|f'(alpha0)|_1,
        where f is the dual function (default 0.1)
-B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default -1)
-R : do not regularize the bias; must be used with -B 1 to have the bias; DON'T use this unless you know what it is
	(for -s 0, 2, 5, 6, 11)
-wi weight: weights adjust the parameter C of different classes (see README for details)
-v n: n-fold cross validation mode
-C : find parameters (C for -s 0, 2 and C, p for -s 11)
-q : quiet mode (no outputs)

Option -v randomly splits the data into n parts and calculates cross validation accuracy on them.

Option -C conducts cross validation under different parameters and finds the best one. This option is supported only by -s 0, -s 2 (for finding C) and -s 11 (for finding C, p). If the solver is not specified, -s 2 is used.

Formulations:

For L2-regularized logistic regression (-s 0), we solve

min_w w^Tw/2 + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized L2-loss SVC dual (-s 1), we solve

min_alpha  0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
    s.t.   0 <= alpha_i,

For L2-regularized L2-loss SVC (-s 2), we solve

min_w w^Tw/2 + C \sum max(0, 1- y_i w^Tx_i)^2

For L2-regularized L1-loss SVC dual (-s 3), we solve

min_alpha  0.5(alpha^T Q alpha) - e^T alpha
    s.t.   0 <= alpha_i <= C,

For L1-regularized L2-loss SVC (-s 5), we solve

min_w \sum |w_j| + C \sum max(0, 1- y_i w^Tx_i)^2

For L1-regularized logistic regression (-s 6), we solve

min_w \sum |w_j| + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized logistic regression (-s 7), we solve

min_alpha  0.5(alpha^T Q alpha) + \sum alpha_i*log(alpha_i) + \sum (C-alpha_i)*log(C-alpha_i) - a constant
    s.t.   0 <= alpha_i <= C,

where

Q is a matrix with Q_ij = y_i y_j x_i^T x_j.

For L2-regularized L2-loss SVR (-s 11), we solve

min_w w^Tw/2 + C \sum max(0, |y_i-w^Tx_i|-epsilon)^2

For L2-regularized L2-loss SVR dual (-s 12), we solve

min_beta  0.5(beta^T (Q + lambda I/2/C) beta) - y^T beta + \sum |beta_i|

For L2-regularized L1-loss SVR dual (-s 13), we solve

min_beta  0.5(beta^T Q beta) - y^T beta + \sum |beta_i|
    s.t.   -C <= beta_i <= C,

where

Q is a matrix with Q_ij = x_i^T x_j.

For one-class SVM dual (-s 21), we solve

min_alpha 0.5(alpha^T Q alpha)
    s.t.   0 <= alpha_i <= 1 and \sum alpha_i = nu*l,

where

Q is a matrix with Q_ij = x_i^T x_j.

If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias]. For example, L2-regularized logistic regression (-s 0) becomes

min_w w^Tw/2 + (w_{n+1})^2/2 + C \sum log(1 + exp(-y_i [w; w_{n+1}]^T[x_i; bias]))

Some may prefer not having (w_{n+1})^2/2 (i.e., bias variable not regularized). For primal solvers (-s 0, 2, 5, 6, 11), we provide an option -R to remove (w_{n+1})^2/2. However, -R is generally not needed as for most data with/without (w_{n+1})^2/2 give similar performances.

The primal-dual relationship implies that -s 1 and -s 2 give the same model, -s 0 and -s 7 give the same, and -s 11 and -s 12 give the same.

We implement the one-vs-the-rest multi-class strategy for classification. In training class i vs. non-i, their C parameters are (weight from -wi)*C and C, respectively. If there are only two classes, we train only one model; thus weight1*C vs. weight2*C is used. See the examples below.

We also implement multi-class SVM by Crammer and Singer (-s 4):

min_{w_m, \xi_i}  0.5 \sum_m ||w_m||^2 + C \sum_i \xi_i
    s.t.  w^T_{y_i} x_i - w^T_m x_i >= e^m_i - \xi_i \forall m,i

where e^m_i = 0 if y_i  = m,
      e^m_i = 1 if y_i != m,

Here we solve the dual problem:

min_{\alpha}  0.5 \sum_m ||w_m(\alpha)||^2 + \sum_i \sum_m e^m_i alpha^m_i
    s.t.  \alpha^m_i <= C^m_i \forall m,i , \sum_m \alpha^m_i=0 \forall i

where w_m(\alpha) = \sum_i \alpha^m_i x_i,
and C^m_i = C if m  = y_i,
    C^m_i = 0 if m != y_i.

predict Usage

Usage: predict [options] test_file model_file output_file
options:
-b probability_estimates: whether to output probability estimates, 0 or 1 (default 0); currently for logistic regression only
-q : quiet mode (no outputs)

Note that -b is only needed in the prediction phase. This is different from the setting of LIBSVM.

Examples

> train data_file

Train linear SVM with L2-loss function.

> train -s 0 data_file

Train a logistic regression model.

> train -s 21 -n 0.1 data_file

Train a linear one-class SVM which selects roughly 10% data as outliers.

> train -v 5 -e 0.001 data_file

Do five-fold cross-validation using L2-loss SVM. Use a smaller stopping tolerance 0.001 than the default 0.1 if you want more accurate solutions.

> train -C data_file

Conduct cross validation many times by L2-loss SVM and find the parameter C which achieves the best cross validation accuracy.

> train -C -s 0 -v 3 -c 0.5 -e 0.0001 data_file

For parameter selection by -C, users can specify other solvers (currently -s 0, -s 2 and -s 11 are supported) and a different number of CV folds. Further, users can use the -c option to specify the smallest C value of the search range. This is useful when users want to rerun the parameter selection procedure from a specified C under a different setting, such as the stricter stopping tolerance -e 0.0001 in the above example. Similarly, for -s 11, users can use the -p option to specify the maximal p value of the search range.

> train -c 10 -w1 2 -w2 5 -w3 2 four_class_data_file

Train four classifiers:

    positive    negative       Cp    Cn
    class 1     class 2,3,4    20    10
    class 2     class 1,3,4    50    10
    class 3     class 1,2,4    20    10
    class 4     class 1,2,3    10    10

> train -c 10 -w3 1 -w2 5 two_class_data_file

If there are only two classes, we train ONE model. The C values for the two classes are 10 and 50.

> predict -b 1 test_file data_file.model output_file

Output probability estimates (for logistic regression only).

Library Usage

These functions and structures are declared in the header file linear.h. You can see train.c and predict.c for examples showing how to use them. We define LIBLINEAR_VERSION and declare extern int liblinear_version; in linear.h, so you can check the version number.

  • Function: model* train(const struct problem *prob, const struct parameter *param);

    This function constructs and returns a linear classification or regression model according to the given training data and parameters.

    struct problem describes the problem:

      struct problem
      {
          int l, n;
          double *y;
          struct feature_node **x;
          double bias;
      };
    

    where l is the number of training data. If bias >= 0, we assume that one additional feature is added to the end of each data instance. n is the number of features (including the bias feature if bias >= 0). y is an array containing the target values (integers in classification, real numbers in regression), and x is an array of pointers, each of which points to a sparse representation (an array of feature_node) of one training vector.

    For example, if we have the following training data:

      LABEL       ATTR1   ATTR2   ATTR3   ATTR4   ATTR5
      -----       -----   -----   -----   -----   -----
      1           0       0.1     0.2     0       0
      2           0       0.1     0.3    -1.2     0
      1           0.4     0       0       0       0
      2           0       0.1     0       1.4     0.5
      3          -0.1    -0.2     0.1     1.1     0.1
    

    and bias = 1, then the components of problem are:

      l = 5
      n = 6
    
      y -> 1 2 1 2 3
    
      x -> [ ] -> (2,0.1) (3,0.2) (6,1) (-1,?)
           [ ] -> (2,0.1) (3,0.3) (4,-1.2) (6,1) (-1,?)
           [ ] -> (1,0.4) (6,1) (-1,?)
           [ ] -> (2,0.1) (4,1.4) (5,0.5) (6,1) (-1,?)
           [ ] -> (1,-0.1) (2,-0.2) (3,0.1) (4,1.1) (5,0.1) (6,1) (-1,?)
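
    In the Java version, the same problem could be constructed programmatically roughly as follows (a sketch; Java arrays carry their own length, so no (-1,?) terminator node is needed):

      Problem problem = new Problem();
      problem.l = 5;    // number of training instances
      problem.n = 6;    // number of features, including the bias feature
      problem.bias = 1; // value of the appended bias feature
      problem.y = new double[] { 1, 2, 1, 2, 3 };
      problem.x = new Feature[][] {
          { new FeatureNode(2, 0.1), new FeatureNode(3, 0.2), new FeatureNode(6, 1) },
          { new FeatureNode(2, 0.1), new FeatureNode(3, 0.3), new FeatureNode(4, -1.2), new FeatureNode(6, 1) },
          { new FeatureNode(1, 0.4), new FeatureNode(6, 1) },
          { new FeatureNode(2, 0.1), new FeatureNode(4, 1.4), new FeatureNode(5, 0.5), new FeatureNode(6, 1) },
          { new FeatureNode(1, -0.1), new FeatureNode(2, -0.2), new FeatureNode(3, 0.1), new FeatureNode(4, 1.1), new FeatureNode(5, 0.1), new FeatureNode(6, 1) },
      };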
    

    struct parameter describes the parameters of a linear classification or regression model:

      struct parameter
      {
              int solver_type;
    
              /* these are for training only */
              double eps;             /* stopping tolerance */
              double C;
              double nu;              /* one-class SVM only */
              int nr_weight;
              int *weight_label;
              double* weight;
              double p;
              double *init_sol;
      };
    

    solver_type can be one of L2R_LR, L2R_L2LOSS_SVC_DUAL, L2R_L2LOSS_SVC, L2R_L1LOSS_SVC_DUAL, MCSVM_CS, L1R_L2LOSS_SVC, L1R_LR, L2R_LR_DUAL, L2R_L2LOSS_SVR, L2R_L2LOSS_SVR_DUAL, L2R_L1LOSS_SVR_DUAL, ONECLASS_SVM.

    For classification:

    • L2R_LR L2-regularized logistic regression (primal)
    • L2R_L2LOSS_SVC_DUAL L2-regularized L2-loss support vector classification (dual)
    • L2R_L2LOSS_SVC L2-regularized L2-loss support vector classification (primal)
    • L2R_L1LOSS_SVC_DUAL L2-regularized L1-loss support vector classification (dual)
    • MCSVM_CS support vector classification by Crammer and Singer
    • L1R_L2LOSS_SVC L1-regularized L2-loss support vector classification
    • L1R_LR L1-regularized logistic regression
    • L2R_LR_DUAL L2-regularized logistic regression (dual)

    For regression:

    • L2R_L2LOSS_SVR L2-regularized L2-loss support vector regression (primal)
    • L2R_L2LOSS_SVR_DUAL L2-regularized L2-loss support vector regression (dual)
    • L2R_L1LOSS_SVR_DUAL L2-regularized L1-loss support vector regression (dual)

    For outlier detection:

    • ONECLASS_SVM one-class support vector machine (dual)

    C is the cost of constraint violation. p is the sensitivity of the loss in support vector regression (the epsilon in epsilon-SVR). nu in ONECLASS_SVM approximates the fraction of data regarded as outliers. eps is the stopping criterion.

    nr_weight, weight_label, and weight are used to change the penalty for some classes (if the weight for a class is not changed, it is set to 1). This is useful for training a classifier on unbalanced input data or with asymmetric misclassification costs.

    nr_weight is the number of elements in the array weight_label and weight. Each weight[i] corresponds to weight_label[i], meaning that the penalty of class weight_label[i] is scaled by a factor of weight[i].

    If you do not want to change the penalty for any of the classes, just set nr_weight to 0.
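
    In the Java version, class weights are configured on the Parameter object. A minimal sketch, assuming the port's Parameter.setWeights(double[] weights, int[] weightLabels) setter:

      // Penalize errors on class 1 twice as heavily (command-line equivalent: -w1 2)
      Parameter parameter = new Parameter(SolverType.L2R_L2LOSS_SVC_DUAL, 1.0, 0.1);
      parameter.setWeights(new double[] { 2.0 }, new int[] { 1 });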

    init_sol includes the initial weight vectors (supported for only some solvers). See the explanation of the vector w in the model structure.

    NOTE To avoid wrong parameters, check_parameter() should be called before train().

    struct model stores the model obtained from the training procedure:

      struct model
      {
              struct parameter param;
              int nr_class;           /* number of classes */
              int nr_feature;
              double *w;
              int *label;             /* label of each class */
              double bias;
              double rho;             /* one-class SVM only */
      };
    

    param describes the parameters used to obtain the model.

    nr_class and nr_feature are the number of classes and features, respectively. nr_class = 2 for regression.

    The array w gives feature weights; its size is nr_feature*nr_class but is nr_feature if nr_class = 2. We use one against the rest for multi-class classification, so each feature index corresponds to nr_class weight values. Weights are organized in the following way

        +------------------+------------------+------------+
        | nr_class weights | nr_class weights |  ...
        | for 1st feature  | for 2nd feature  |
        +------------------+------------------+------------+
    

    The array label stores class labels.

    If bias >= 0, x becomes [x; bias]. The number of features is increased by one, so w is a (nr_feature+1)*nr_class array. The value of bias is stored in the variable bias.

    rho is the bias term used in one-class SVM only.

  • Function: void cross_validation(const problem *prob, const parameter *param, int nr_fold, double *target);

    This function conducts cross validation. Data are separated into nr_fold folds. Under the given parameters, each fold is validated in turn using the model trained on the remaining folds. Predicted labels obtained in the validation process are stored in the array target.

    The format of prob is the same as that for train().
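
    In the Java version this corresponds to Linear.crossValidation. A minimal sketch, given a Problem problem and a Parameter parameter as above:

      double[] target = new double[problem.l];
      Linear.crossValidation(problem, parameter, 5, target);
      // target[i] now holds the cross-validated prediction for instance i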

  • Function: void find_parameters(const struct problem *prob, const struct parameter *param, int nr_fold, double start_C, double start_p, double *best_C, double *best_p, double *best_score);

    This function is similar to cross_validation, but instead of conducting cross validation under fixed parameters, it searches over a range of parameters. For -s 0 and 2, it conducts cross validation many times under parameters C = start_C, 2*start_C, 4*start_C, 8*start_C, ..., and finds the one with the highest cross validation accuracy. For -s 11, it conducts cross validation many times with a two-level loop: the outer loop considers a default sequence of p = 19/20*max_p, ..., 1/20*max_p, 0, and under each p value the inner loop considers a sequence of parameters C = start_C, 2*start_C, 4*start_C, ..., and finds the one with the lowest mean squared error.

    If start_C <= 0, then this procedure calculates a small enough C for prob as the start_C. The procedure stops when the models of all folds become stable or C reaches max_C.

    If start_p <= 0, then this procedure calculates a maximal p for prob as start_p. Otherwise, the procedure starts with the first i/20*max_p <= start_p, so the outer sequence is i/20*max_p, (i-1)/20*max_p, ..., 0.

    The best C, the best p, and the corresponding accuracy (or MSE) are assigned to *best_C, *best_p and *best_score, respectively. For classification, *best_p is not used, and the returned value is -1.

  • Function: double predict(const model *model_, const feature_node *x);

    For a classification model, the predicted class for x is returned. For a regression model, the function value of x calculated using the model is returned.

  • Function: double predict_values(const struct model *model_, const struct feature_node *x, double* dec_values);

    This function gives nr_w decision values in the array dec_values. nr_w=1 if regression is applied or the number of classes is two. An exception is multi-class SVM by Crammer and Singer (-s 4), where nr_w = 2 if there are two classes. For all other situations, nr_w is the number of classes.

    We implement the one-vs-the-rest multi-class strategy (-s 0,1,2,3,5,6,7) and the multi-class SVM by Crammer and Singer (-s 4). The class with the highest decision value is returned.

  • Function: double predict_probability(const struct model *model_, const struct feature_node *x, double* prob_estimates);

    This function gives nr_class probability estimates in the array prob_estimates. nr_class can be obtained from the function get_nr_class. The class with the highest probability is returned. Currently, we support only the probability outputs of logistic regression.

  • Function: int get_nr_feature(const model *model_);

    The function gives the number of attributes of the model.

  • Function: int get_nr_class(const model *model_);

    The function gives the number of classes of the model. For a regression model, 2 is returned.

  • Function: void get_labels(const model *model_, int* label);

    This function outputs the names of labels into an array called label. For a regression model, label is unchanged.

  • Function: double get_decfun_coef(const struct model *model_, int feat_idx, int label_idx);

    This function gives the coefficient for the feature with feature index = feat_idx and the class with label index = label_idx. Note that feat_idx starts from 1, while label_idx starts from 0. If feat_idx is not in the valid range (1 to nr_feature), then a zero value will be returned. For classification models, if label_idx is not in the valid range (0 to nr_class-1), then a zero value will be returned; for regression models and one-class SVM models, label_idx is ignored.

  • Function: double get_decfun_bias(const struct model *model_, int label_idx);

    This function gives the bias term corresponding to the class with the label_idx. For classification models, if label_idx is not in a valid range (0 to nr_class-1), then a zero value will be returned; for regression models, label_idx is ignored. This function cannot be called for a one-class SVM model.

  • Function: double get_decfun_rho(const struct model *model_);

    This function gives rho, the bias term used in one-class SVM only. This function can only be called for a one-class SVM model.

  • Function: const char *check_parameter(const struct problem *prob, const struct parameter *param);

    This function checks whether the parameters are within the feasible range of the problem. This function should be called before calling train() and cross_validation(). It returns NULL if the parameters are feasible, otherwise an error message is returned.

  • Function: int check_probability_model(const struct model *model);

    This function returns 1 if the model supports probability output; otherwise, it returns 0.

  • Function: int check_regression_model(const struct model *model);

    This function returns 1 if the model is a regression model; otherwise it returns 0.

  • Function: int check_oneclass_model(const struct model *model);

    This function returns 1 if the model is a one-class SVM model; otherwise it returns 0.

  • Function: int save_model(const char *model_file_name, const struct model *model_);

    This function saves a model to a file; returns 0 on success, or -1 if an error occurs.

  • Function: struct model *load_model(const char *model_file_name);

    This function returns a pointer to the model read from the file, or a null pointer if the model could not be loaded.

  • Function: void free_model_content(struct model *model_ptr);

    This function frees the memory used by the entries in a model structure.

  • Function: void free_and_destroy_model(struct model **model_ptr_ptr);

    This function frees the memory used by a model and destroys the model structure.

  • Function: void destroy_param(struct parameter *param);

    This function frees the memory used by a parameter set.

  • Function: void set_print_string_function(void (*print_func)(const char *));

    Users can specify their output format by a function. Use set_print_string_function(NULL); for default printing to stdout.
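
    In the Java version, console output is controlled on the Linear class instead. A sketch, assuming the port's Linear.disableDebugOutput() and Linear.setDebugOutput(PrintStream) helpers:

      Linear.disableDebugOutput();       // silence training output entirely
      Linear.setDebugOutput(System.err); // or redirect it, e.g. to stderr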

Additional Information

If you find LIBLINEAR helpful, please cite it as

R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin.
LIBLINEAR: A Library for Large Linear Classification, Journal of
Machine Learning Research 9(2008), 1871-1874. Software available at
http://www.csie.ntu.edu.tw/~cjlin/liblinear

For any questions and comments, please send your email to [email protected]

liblinear-java's People

Contributors

bwaldvogel, cheusov, electrum, kzn, numb3r3, salimm, tandronicus, vbogach


liblinear-java's Issues

Performance improvements

This would differ slightly from the original implementation; however, if we replace all instances of

x += y * z;

with

x = Math.fma(y, z, x);

we can get a large speedup on machines with FMA enabled. For example, replacing all occurrences in just the de.bwaldvogel.liblinear.SparseOperator class already yields a 2x speedup.

However, it would require switching to Java 9.
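
To illustrate, a hypothetical sparse dot product using fused multiply-add (a sketch, not the library's actual code; the Feature accessors getIndex()/getValue() appear elsewhere in this document):

// import de.bwaldvogel.liblinear.Feature;
static double dot(double[] w, Feature[] x) {
    double ret = 0;
    for (Feature f : x) {
        // was: ret += w[f.getIndex() - 1] * f.getValue();
        ret = Math.fma(w[f.getIndex() - 1], f.getValue(), ret); // Java 9+
    }
    return ret;
}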

Assertions invalid

Some of the assert statements in the code are not valid. Specifically, the assertions in Train.java on lines 269 and 272 can throw an ArrayIndexOutOfBoundsException if the libsvm file contains rows without any features. Once I turned off the assertions in my unit tests, everything worked fine, so the rest of the code appears to handle rows without features correctly.

Setting

In programmatic usage of the code, it is not possible to set the option for predicting probabilities (the -b option on the command line, which sets flag_predict_probability in Predict.java).
I think you should add a new field (say flag_predict_probability) to the Parameter class and set the option there, so that one can predict with probabilities.
Daniel

Different Results With the Same Experiment

I get slightly different results when running the same experiment (LogReg L1, reg=0.3) each time. Is that possible, or must there be a bug either in the library or in my code?

This is not the case if I use LogReg L2: there I get exactly the same results. I am also testing other LogReg L1 implementations (StanfordNLP and Smile). Both produce deterministic results.

Thread safety problem in predict: flag_predict_probability shouldn't be static

The flag_predict_probability boolean in Predict is declared static. This causes a problem when running two different predict jobs in the same Java process with different probability options: the setting from the second call will overwrite the one from the first call. This may cause a prediction using a non-probability-capable solver type to fail, despite being called with correct parameters, if a simultaneous job runs with probabilities enabled.

No option for no bias feature for Train.readProblem()

I have two options to map my Dataset object to a liblinear Problem object.

1) Convert programmatically. This is what I do normally. I also add a bias=1 feature with this method, and it works just fine.

2) First save the Dataset object as .libsvm and then call Train.readProblem(). However, readProblem() adds a bias feature by default. Even if I pass bias=0, an extra feature is added for all instances. This is why I can't reproduce the exact same accuracy that I get with the first method; I get slightly worse results.

Here you may recommend not adding the bias value while saving as .libsvm, since it is going to be added by readProblem() anyway. But I create training and test sets separately, producing two libsvm files. Imagine a situation like this: the training libsvm file has 100 features, and in the test libsvm file no instance has the 100th feature. Thus, when I call readProblem() to read the training and test problems, the test problem's dimension is 99, not 100. To prevent this, I add the bias feature as the last feature by default. This way both sets have 101 features.

In conclusion, I think the bias feature must be optional for the Train.readProblem() method.

Is a bias term added to the features automatically?

I am maintaining code that I mostly didn't write, which uses liblinear for logistic regression. My understanding from the documentation was that setting bias to a value greater than 0 results in a synthetic feature being added. But I cannot see anywhere in the code where this feature is added, either during training or prediction. Is it required to both set the bias parameter to a value greater than 0 and also manually add the synthetic feature node during training and prediction?

Implement init_sol setter in Parameter

There is currently a field init_sol in the Parameter class that is used to initialize the model weights. However, there is no setter for this field. Could you implement one?

NullPointerException when using sparse data to train Model

Hello everyone,

I am currently working on a project that uses your library to train classification models. I build a Problem object with sparse feature vectors (following some of your examples), and when I start training with such a Problem object, I get an NPE from the train method while it iterates over the feature vectors.
Looking at your code, I can see that you iterate over features in a "for each" style, but when I follow the exception with debugging tools, I see a "for i=0 to n" loop, which will obviously end up pointing at a null feature in a sparse feature vector.

I am on Mac OS X 11.2.3 and use the Oracle JDK 11.0.6.

How to obtain the support vectors

Hi --

I am using Weka's LibLINEAR class and want to obtain the support vectors after the classifier has been trained.

Is there an example showing how this can be done?

Thanks,
Haimonti

Hello, I am using this library

Hello, I am using this library. It works well, but I have a question: how can I get, for each class, a score indicating how strongly an instance belongs to it? My data contains some texts that do not belong to any class, and I would like to apply a threshold to such a score to filter them out. (Of course, if you cannot read Chinese... uh...)

How can I get a "belongs to some class" score?

Bias parameter not used in Linear.predictValues()

I am using a logistic regression model with 2 features and a bias.

I would expect the score to be calculated as

w1*x1 + w2*x2 + bias

but looking at

dec_values[i] += w[(idx - 1) * nr_w + i] * lx.getValue();
it seems like the bias parameter is never added to dec_values.

Am I right to think the bias parameter should contribute to the score or is my understanding incorrect?

NPE when setting -wi parameter

java -cp /home/pebble/.m2/repository/de/bwaldvogel/liblinear/1.8/liblinear-1.8.jar de.bwaldvogel.liblinear.Train -v 10 -c 10 -w1 2 vector
Exception in thread "main" java.lang.NullPointerException
at java.lang.System.arraycopy(Native Method)
at de.bwaldvogel.liblinear.Train.parse_command_line(Train.java:123)
at de.bwaldvogel.liblinear.Train.run(Train.java:287)
at de.bwaldvogel.liblinear.Train.main(Train.java:19)

whereas the native liblinear implementation behaves as expected

Linear's global RNG makes it difficult to reproduce models or track concurrent executions

We use liblinear-java in Tribuo, and it’s working very well. We’re adding a reproducibility package to Tribuo to rebuild Tribuo models from the provenance metadata they carry, and as part of the tests for that package I noticed that liblinear-java has a global RNG that causes some of the algorithms to not produce bit-wise exact reproductions when executed on the same inputs. In general Tribuo tracks all RNG state and manages it to ensure that concurrent training runs use independently tracked streams of random numbers for provenance purposes, and the global shared state in liblinear-java means we can’t effectively track it and so we’ll have to enforce sequential use of liblinear-java via synchronization and consistently reset the RNG to a known state.

Is it possible to move the static random instance in Linear into Problem as an instance field? To preserve the original behaviour it could initialize itself to a Random instance using DEFAULT_RANDOM_SEED, or the code could be modified so it defaults to the global RNG if no instance RNG is present in the Problem. The first option would basically just be a find/replace on random with prob.random, along with adding the extra field to Problem (I think it touches approximately 9 lines). The second option would be a little more involved as it requires guards on the 8 uses of random and thus would slightly increase divergence from the C++ liblinear, so might not be as desirable from a maintainability perspective. However it would preserve the existing behaviour exactly for users who don’t set the random field on Problem. We’d be happy to contribute either patch if you’d accept it.

Linear.loadModel(Reader) should not close the reader

It should be the client's responsibility to decide whether or not to close the reader in Linear.loadModel(Reader).

Reason: I'm storing the model in a zip file with other information. If Linear.loadModel(Reader) closes the reader, this will close the zip input stream, which breaks my code.
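
A possible workaround until this changes: wrap the reader so that close() is ignored (a sketch; NonClosingReader is a hypothetical helper, not part of the library):

import java.io.FilterReader;
import java.io.Reader;

final class NonClosingReader extends FilterReader {
    NonClosingReader(Reader in) { super(in); }
    // Swallow the close() call so the underlying zip stream stays open.
    @Override public void close() { /* intentionally do nothing */ }
}

// usage: Model model = Linear.loadModel(new NonClosingReader(zipEntryReader));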

Documentation For Java API

Hi,

This is a great library. I've been looking for a lightweight, reliable and commercially friendly Java implementation of LogReg, MaxEnt and SVM for a while, and surprisingly it wasn't as easy as I thought. Could you provide some Java code examples for the basic usage of the API? I could figure out the following, but I wonder if there is more to know.

        Problem problem = Train.readProblem(new File("train.libsvm"), 1);
        Problem testProb = Train.readProblem(new File("test.libsvm"), 1);

        SolverType solver = SolverType.L2R_LR; // -s 0
        double C = 1.5;    // cost of constraints violation
        double eps = 0.01; // stopping criteria

        Parameter parameter = new Parameter(solver, C, eps);
        final Model model = Linear.train(problem, parameter);
        File modelFile = new File("model");
        model.save(modelFile);
        for (int i = 0; i < testProb.x.length; i++) {
            Feature[] instance = testProb.x[i];
            double prediction = Linear.predict(model, instance);
        }

predictValues() does not map labels

predictValues() and therefore predictProbability() fill the score array with scores according to the internal label representation, without being mapped by Model.label[].

First, Model.label[] is private, so it is not possible to infer the mapping from the application, and the number of labels in the internal representation might be less than the actual number of labels due to labels not seen in training. Moreover, the size of the scores[] array provided to predictValues() is impossible to guess.

Would it be possible to either expose the mapping or map the scores array?

Affects 32c64ff.

Bias term is added by default

The following code results in a Model object where model.nr_feature = n-1, even if the number of features in the dataset is n excluding the bias term.

problem.l = l
problem.n = n
problem.x = x
problem.y = y
...
Model model = Linear.train(problem, parameter);

This is because in the above code the bias term is added implicitly (the default value for problem.bias is 0), and in Linear.train() there is the line if (prob.bias >= 0) model.nr_feature = n - 1;. To avoid this, we can change the line problem.n = n to problem.n = n+1. But the Java API documentation is misleading, and the CLI documentation says the default value for bias is -1, which makes one think that this is also the case for programmatic access.
