
clustering's Introduction

Clusterfck

A JavaScript cluster analysis library, including hierarchical (agglomerative) clustering and k-means clustering.

Install

For node.js:

npm install clusterfck

Or grab the browser file

K-means

var clusterfck = require("clusterfck");

var colors = [
   [20, 20, 80],
   [22, 22, 90],
   [250, 255, 253],
   [0, 30, 70],
   [200, 0, 23],
   [100, 54, 100],
   [255, 13, 8]
];

// Calculate clusters.
var clusters = clusterfck.kmeans(colors, 3);

The second argument to kmeans is the number of clusters you want (the default is Math.sqrt(n/2), where n is the number of vectors). It returns an array of clusters; for this example:

[
  [[200,0,23], [255,13,8]],
  [[20,20,80], [22,22,90], [0,30,70], [100,54,100]],
  [[250,255,253]]
]
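
To accept the default cluster count, simply omit the second argument:

// With no cluster count given, k defaults to Math.sqrt(n / 2),
// where n is the number of vectors.
var clusters = clusterfck.kmeans(colors);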

Classification

For classification, instantiate a new Kmeans() object.

var kmeans = new clusterfck.Kmeans();

// Calculate clusters.
var clusters = kmeans.cluster(colors, 3);

// Calculate cluster index for a new data point.
var clusterIndex = kmeans.classify([0, 0, 225]);
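
The returned index can be used to look up the matching centroid (see the centroids property below):

// Look up the centroid of the cluster the new point was assigned to.
var centroid = kmeans.centroids[clusterIndex];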

Serialization

The toJSON() and fromJSON() methods are available for serialization.

// Serialize centroids to JSON.
var json = kmeans.toJSON();

// Deserialize centroids from JSON.
kmeans = kmeans.fromJSON(json);

// Calculate cluster index from a previously serialized set of centroids.
var clusterIndex = kmeans.classify([0, 0, 225]);
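
For example, the serialized centroids can be written to disk and restored in a later session (a minimal sketch using Node's fs module, assuming toJSON yields a JSON string):

var fs = require("fs");

// Persist the centroids.
fs.writeFileSync("centroids.json", kmeans.toJSON());

// Later: restore them and classify without re-clustering.
var restored = new clusterfck.Kmeans();
restored = restored.fromJSON(fs.readFileSync("centroids.json", "utf8"));
var clusterIndex = restored.classify([0, 0, 225]);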

Initializing with Existing Centroids

// Take existing centroids, perhaps from a database?
var centroids = [ [ 35.5, 31.5, 85 ], [ 250, 255, 253 ], [ 227.5, 6.5, 15.5 ] ];

// Initialize constructor with centroids.
var kmeans = new clusterfck.Kmeans(centroids);

// Calculate cluster index.
var clusterIndex = kmeans.classify([0, 0, 225]);

Accessing Centroids and K value

After clustering or loading via fromJSON(), the calculated centroids are accessible via the centroids property. Similarly, the k value can be derived from centroids.length.

// Calculate clusters.
var clusters = kmeans.cluster(colors, 3);

// Access centroids, an array of length 3.
var centroids = kmeans.centroids;

// Access k-value.
var k = centroids.length;

Hierarchical

var clusterfck = require("clusterfck");

var colors = [
   [20, 20, 80],
   [22, 22, 90],
   [250, 255, 253],
   [100, 54, 255]
];

var clusters = clusterfck.hcluster(colors);

hcluster returns an object representing the hierarchy of the clusters, with left and right subtrees. Leaf clusters have a value property, which is the vector from the data set.

{
   "left": {
      "left": {
         "left": {
            "value": [22, 22, 90]
         },
         "right": {
            "value": [20, 20, 80]
         }
      },
      "right": {
         "value": [100, 54, 255]
      }
   },
   "right": {
      "value": [250, 255, 253]
   }
}
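
Since only leaf nodes carry a value, the vectors under any subtree can be collected with a short recursive walk (a sketch; the leaves helper is not part of the library):

// Gather every leaf vector beneath a node of the hcluster tree.
function leaves(node) {
   if (node.value) {
      return [node.value];
   }
   return leaves(node.left).concat(leaves(node.right));
}

var vectors = leaves(clusters); // all four color vectors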

Distance metric and linkage

Specify the distance metric as the second argument: one of "euclidean" (default), "manhattan", or "max". The linkage criterion is the third argument: one of "average" (default), "single", or "complete".

var tree = clusterfck.hcluster(colors, "euclidean", "single");

clustering's People

Contributors

blakmatrix, harthur, primaryobjects


clustering's Issues

KMeans on Spark fails with "Number of clusters must be greater than one", but does not respond to changing k

I have this code:

val fileName = """file:///home/user/data/csv/sessions_sample.csv"""
val df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load(fileName)
val input1 = df.select("id", "duration", "ip_dist", "txr1", "txr2", "txr3", "txr4").na.fill(3.0)
val input2 = input1.map(r => (r.getInt(0), Vectors.dense((1 until r.size - 1).map { i => r.getDouble(i) }.toArray[Double])))
val input3 = input2.toDF("id", "features")
input3.count()

val kmeans = new KMeans().setK(100).setSeed(1L).setFeaturesCol("features").setPredictionCol("prediction")
val model = kmeans.fit(input3.select("features"))
// Make predictions
val predictions = model.transform(input3)
val evaluator = new ClusteringEvaluator()
// The error occurs when I run this line:
val silhouette = evaluator.evaluate(predictions)

Error: 

> java.lang.AssertionError: assertion failed: Number of clusters must be greater than one.
>   at scala.Predef$.assert(Predef.scala:170)
>   at org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette$.computeSilhouetteScore(ClusteringEvaluator.scala:416)
>   at org.apache.spark.ml.evaluation.ClusteringEvaluator.evaluate(ClusteringEvaluator.scala:96)
>   ... 49 elided


Of course I tried changing k; it has no effect. On top of that, I get clusters with infinite cluster centers. For absolutely no value of k are my clusters stable, so the silhouette evaluation gives this weird error?

`model.clusterCenters.foreach(println)`

> [3217567.1300936914,145.06533614203505,Infinity,Infinity,Infinity]


Please advise.



Can I have an identification item that is ignored by clusterfck.kmeans?

For example, the first item is an identification item (a user id), so that I can know who went in which cluster.

var colors = [
   ['a1',20, 20, 80],
   ['a2',22, 22, 90],
   ['a3',250, 255, 253],
   ['a4',0, 30, 70],
   ['a5',200, 0, 23],
   ['a6',100, 54, 100],
   ['a7',255, 13, 8]
];
clusterfck.kmeans(colors, 3);

If I run this it just hangs. Is there a way to ignore the first item and just consider the rest?

Thanks
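
One workaround (a sketch, not built-in functionality): cluster only the numeric columns, then use classify to map each id back to a cluster index.

var clusterfck = require("clusterfck");
var kmeans = new clusterfck.Kmeans();

// Cluster only the numeric part of each row.
var vectors = colors.map(function (row) { return row.slice(1); });
kmeans.cluster(vectors, 3);

// Pair each id with the index of the cluster its vector landed in.
var assignments = colors.map(function (row) {
   return { id: row[0], cluster: kmeans.classify(row.slice(1)) };
});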

Add kmeans.distance API

Since kmeans.classify is provided, could you add a kmeans.distance() API for more advanced use?

Thanks!
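
In the meantime, such a helper can be built on top of the documented centroids property (a sketch; the function name and the Euclidean metric are our own choices):

// Euclidean distance from a vector to its nearest centroid.
function distanceToNearestCentroid(kmeans, vector) {
   return Math.min.apply(null, kmeans.centroids.map(function (c) {
      return Math.sqrt(c.reduce(function (sum, x, i) {
         return sum + Math.pow(x - vector[i], 2);
      }, 0));
   }));
}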

Infinite loop occurred while using kmeans on a dataset with 700 data points

I'd love to give more details. movement is always true (inside the loop it of course becomes false, but then changes back to true)...

I'm not sure how to fix it; it's really problematic.
Help is needed.
It might be because of a local minimum. I've added an iteration limit to the while loop, so while (movement && (iterations < max_iterations)) { ... }

This obviously isn't a good solution, but I'm not familiar with the algorithm.
http://blog.endava.com/k-means-clustering-algorithm
"However, in this form, there is a risk to get stuck in a local minima. By local minima I mean the local minima of the cost function:"

A couple of things I'd like to add

I have two features I'd like to add to hcluster, but I thought I'd run them by you before I ran off and built them. They are:

  1. Make it possible to directly provide my own distance matrices, instead of the features and distance measure. One thing I really like about hierarchical clustering is that you can use sets of distances that don't make sense in any space, so I'd like to leverage that like I can in SciPy. I'd love a suggestion on what the interface should look like though.
  2. Add Ward's linkage as a linkage option. The complication there is that it doesn't fit neatly into the existing code because it can't be computed for each cluster independently. (Also it looks hard and I'm bad at math, but I'll get there.)

What do you think?

Record distance value when linking hierarchical clusters

It'd be a nice improvement if there were a way to access the distance between the left and right subtrees of the resulting hierarchical clustering tree without having to re-run the distance function. Perhaps this information could be recorded in the canonical object. This would make it easier (and cheaper) to generate an arbitrary number of clusters from the tree, instead of just descending the tree to some depth to generate 2^depth clusters.

It would be even better if there were a function bound to the result of hcluster that accepted an integer number of clusters and returned those clusters in a list.
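
For illustration, if each internal node recorded its merge distance in a (hypothetical) dist property, cutting the tree into n clusters could repeatedly split the subtree that was merged at the greatest distance (a sketch):

// Split an hcluster tree into n clusters, breaking the loosest link first.
// Assumes internal nodes carry a hypothetical `dist` property.
function cut(tree, n) {
   var clusters = [tree];
   while (clusters.length < n) {
      var i = 0; // index of the internal node with the largest dist
      for (var j = 1; j < clusters.length; j++) {
         if (!clusters[j].value && (clusters[i].value || clusters[j].dist > clusters[i].dist)) {
            i = j;
         }
      }
      if (clusters[i].value) break; // only leaves remain
      clusters.splice(i, 1, clusters[i].left, clusters[i].right);
   }
   return clusters;
}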
