
Possible optimizations about dbscan (open, 2 comments)

mhahsler commented on May 25, 2024

Possible optimizations

Comments (2)

mhahsler commented on May 25, 2024

Hi! Thanks for this request. Parallelizing the kd-tree building process is problematic since we use a fast but somewhat complicated library. Currently, the tree is rebuilt every time you cluster. A while ago, I looked into creating the kd-tree structure once and then performing clustering several times on the same tree. I did not get far with implementing this consistently (without memory leaks) in R. We plan to look into this more soon.


peekxc commented on May 25, 2024

I've added a preliminary branch that allows using a pre-built kd-tree with dbscan. You could try it, although it's worth noting there still seem to be some memory issues when I try to validate, so it's worth saving your environment beforehand.

That being said, usage would look something like:

```r
X_n <- iris[, 1:4]                  # numeric features only
xn_kdtree <- kd_tree(X_n)           # build the kd-tree once
dbscan(X_n, eps = 0.8, minPts = 15, kd_tree = xn_kdtree)  # reuse it
```

You can also do summary(xn_kdtree), print(xn_kdtree), or str(xn_kdtree)
to get more information on the actual structure of the tree.

It's worth noting that, in my experience, building the tree is quite fast and is often not the bottleneck of the search. It depends on the distribution of the data, so your mileage may vary (and I hope it does). I ran some benchmarks which indicated a roughly 5-15% speedup in some cases.

The real bottleneck is the points that cause the traversal to constantly 'miss' the correct branch because they happen to lie close to a couple of the splitting hyperplanes. If the tree is particularly large, backtracking through the recursion stack gets pretty expensive.

You could try building a couple of trees with different splitting strategies, which could shave off some additional time. For example, if you suspect the distribution of the data is quite 'clustered' (i.e., there are several very dense areas of the feature space surrounded by sparse, non-dense areas), the "FAIR" splitting rule might help, as it tries to make balanced partitions in the tree. A more direct way to optimize is to make sure your bucket size isn't too small: if the user gives no input, it defaults to 10, which could be problematic for 290k instances. What setting of minPts are you using?
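As a sketch of that tuning (assuming the CRAN release of the dbscan package, where `splitRule` and `bucketSize` are forwarded through `...` to the ANN-based neighbor search):

```r
library(dbscan)

X_n <- iris[, 1:4]   # numeric features only

# Default kd-tree settings (splitRule = "SUGGEST", bucketSize = 10).
res_default <- dbscan(X_n, eps = 0.8, minPts = 15)

# "FAIR" splits aim for balanced partitions, which can help on clustered
# data; a larger bucket size reduces tree depth on big data sets.
res_fair <- dbscan(X_n, eps = 0.8, minPts = 15,
                   splitRule = "FAIR", bucketSize = 50)

# With exact search, only speed should change, not the neighbor sets,
# so the cluster assignments should typically agree.
table(res_default$cluster, res_fair$cluster)
```

Which combination wins depends on the data distribution, so it is worth benchmarking on a sample of your 290k instances before committing to one setting.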

A parallel version of DBSCAN itself may come in the future, but that is perhaps too far off to be useful here.

