thomasahle / tinyknn

A tiny approximate K-Nearest Neighbour library in Python based on Fast Product Quantization and IVF

License: GNU Affero General Public License v3.0

Languages: Python 68.99%, Cython 31.01%
Topics: simd, product-quantization, python, ivf, nearest-neighbor-search, cython

tinyknn's People

Contributors: tahle, thomasahle

tinyknn's Issues

Add support for signed addition

Currently, we do unsigned addition in the Cython code.
This is fine for distances, which are non-negative.
However, to support inner products and cosine similarity, we also need a signed version.
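
As a rough illustration (NumPy only, not the actual Cython kernel) of why unsigned accumulation breaks down for inner products:

```python
import numpy as np

# Per-subspace lookup-table entries for one query, quantized to 8 bits.
distances = np.array([3, 0, 17, 9], dtype=np.uint8)       # distances are non-negative
dot_products = np.array([3, -5, 17, -9], dtype=np.int8)   # inner products can be negative

# Distances: accumulating unsigned values is safe.
print(distances.astype(np.uint16).sum())                    # 29

# Inner products: an unsigned view misreads the negative entries
# (-5 becomes 251, -9 becomes 247), so the accumulated score is wrong.
print(dot_products.view(np.uint8).astype(np.uint16).sum())  # 518
print(dot_products.astype(np.int16).sum())                  # 6, the value we actually want
```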

Faster building / batch-insert using compression

Currently IVF.fit(...) uses brute force nearest neighbours to find which clusters to insert the points into.
Instead we could use the same PQ.top(...) method that we use for queries to find the relevant cluster centers faster.
However, PQ.top(...) currently doesn't support batch queries, so on its own this likely wouldn't be faster than brute force.

Hence the task: Find a way to do fast batch queries with QuickADC and add it to FastPQ.
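
For reference, here is a rough NumPy sketch of the assignment step that brute force performs today; a batched PQ.top(...) would replace the exact distance matrix with distances estimated from compressed centroids (illustrative only, not the actual IVF.fit code):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((10_000, 64)).astype(np.float32)    # points to insert
centroids = rng.standard_normal((256, 64)).astype(np.float32)  # IVF cluster centers

# Exact squared distances via ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2,
# shape (n_points, n_centroids).
d2 = ((data ** 2).sum(1)[:, None]
      - 2 * data @ centroids.T
      + (centroids ** 2).sum(1)[None, :])
assignments = d2.argmin(axis=1)   # brute-force choice of list per point

# A batch-capable PQ.top(...) would produce (approximate) assignments
# without materializing the full exact distance matrix.
```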

Use AVX-512

AVX-512 has some nice features, such as support for fast float16 operations. This might allow us to do rescoring very fast.
The Quicker ADC paper (https://arxiv.org/pdf/1812.09162.pdf) also mentions some uses of AVX-512, such as {5, 6, 7}-bit lookup tables, though I don't think any of the top libraries, like ScaNN or Faiss, actually use them.

Support multi-ivf

A classical way to make index building faster, cheaper memory-wise, and potentially better (bigger, but lower quality) is to use a top-level product code.
Instead of just "hashing" each point to the closest centroid, hash it to the pair of centroids whose sum is closest to the point.
My image here shows how multi-indexing in this way reduces the mean squared error: https://twitter.com/thomasahle/status/1583582672906952705?s=20
Some of the code for doing this is here: https://gist.github.com/thomasahle/4f16b19aa395f25e8fee882e3a82a4d9
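
A toy sketch of the pair-of-centroids assignment (illustrative only; the gist linked above has the real code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
C1 = rng.standard_normal((16, 32))   # first top-level codebook
C2 = rng.standard_normal((16, 32))   # second top-level codebook

# Score all 16 * 16 candidate sums; a real implementation would be smarter.
sums = C1[:, None, :] + C2[None, :, :]        # shape (16, 16, 32)
d2 = ((x - sums) ** 2).sum(-1)                # shape (16, 16)
i, j = np.unravel_index(d2.argmin(), d2.shape)
print(f"assign x to list ({i}, {j})")         # the pair whose sum is closest to x
```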

Use separate PQs in each cluster

Currently the same product quantizer is used for every cluster in IVF.
However, the PQ doesn't use a lot of space (it's just 16 center points), so we might as well train a separate one for the data in each cluster.

The main disadvantage is that queries would have to compute a distance table for each PQ.
It's unclear how much of a bottleneck that currently is compared to the actual pass 1 and pass 2 filtering.

An advantage is that we could quantize data[mask] - center instead of data[mask], which is what we do now.
I believe this is what QuickADC actually does.
By subtracting the "main component" of the points we gain the ability to scale up the scalars before we map to [-128, 127], allowing higher precision.
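
A small sketch of what the per-cluster residual quantization could look like (names are illustrative, not the existing API):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 64)).astype(np.float32)
center = data.mean(axis=0)                # the cluster's centroid
mask = rng.random(1000) < 0.1             # points assigned to this cluster

residuals = data[mask] - center           # remove the "main component"
scale = 127.0 / np.abs(residuals).max()   # residuals are small, so the scale can be larger
codes = np.clip(np.round(residuals * scale), -128, 127).astype(np.int8)
# A separate PQ would then be trained on `residuals` for this cluster only.
```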

Missing convert.py

Hello! I'm unable to run examples/glove/prepare-dataset.sh because the repo appears to be missing convert.py.

Nearest Neighbour Search

Product Quantization should be combined with space partitioning to get a good score on ann-benchmarks.
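
A minimal, self-contained sketch of the combination (illustrative only; exact distances stand in for the PQ estimates, and none of the names below are tinyknn's API):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((2000, 8)).astype(np.float32)
query = rng.standard_normal(8).astype(np.float32)

# Space partitioning: assign every point to its nearest coarse centroid.
centroids = data[rng.choice(len(data), 16, replace=False)]
assign = ((data[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)

# Probe only the lists whose centroids are closest to the query.
probe = ((query - centroids) ** 2).sum(-1).argsort()[:2]
candidates = np.flatnonzero(np.isin(assign, probe))

# Within the probed lists, estimate distances; exact ones stand in for PQ here.
dists = ((data[candidates] - query) ** 2).sum(-1)
top10 = candidates[dists.argsort()[:10]]
```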

Build pip package

We should have a pip package that allows people to quickly install and try the library.

Support estimating distance between two compressed datasets

Often we use PQ to estimate the distance from a full precision vector to a bunch of compressed points.
However, we can also try to compute the distance between all pairs of points in two compressed datasets (even possibly with distinct FastPQ instances).

This is relevant, for example, when inserting a batch of points into the data structure and we quickly want to find the relevant nearby cluster centers.
Currently we compute this using full precision distance computations.

Edit: Maybe #13 is more relevant for speeding up building the index. However, supporting estimating distances between compressed datasets is still interesting and worthwhile.
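
A sketch of how a symmetric (code-to-code) estimate could work, using per-subspace centroid-to-centroid distance tables (illustrative only; not the existing FastPQ API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_centroids, sub_dim = 4, 16, 8
C_a = rng.standard_normal((n_sub, n_centroids, sub_dim))   # codebooks of dataset A's PQ
C_b = rng.standard_normal((n_sub, n_centroids, sub_dim))   # codebooks of dataset B's PQ

# Per-subspace tables: tables[s, i, j] = ||C_a[s, i] - C_b[s, j]||^2.
tables = ((C_a[:, :, None, :] - C_b[:, None, :, :]) ** 2).sum(-1)

codes_a = rng.integers(0, n_centroids, size=(100, n_sub))  # PQ codes of dataset A
codes_b = rng.integers(0, n_centroids, size=(200, n_sub))  # PQ codes of dataset B

# Estimated squared distance between point 0 of A and point 5 of B,
# computed from the codes alone -- no full-precision vectors needed.
est = sum(tables[s, codes_a[0, s], codes_b[5, s]] for s in range(n_sub))
```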

Better support for storing points in multiple lists

Since 2df6a42 it is possible to store every datapoint in n lists by building with ivf.build(n_probes=n).
This improves the recall/QPS trade-off quite a lot, but only when going from n=1 to n=2, as seen in the figure below.
[Figure 1: recall vs. QPS for different values of n]

The problem is probably that duplicate matches aren't handled well.
When calling ctop from ivf, we should somehow tell it about the indices we have already collected so the distance table can focus on telling us about alternative interesting candidates.

One option is even to reuse the query_pq_sse(transformed_data, self.tables, indices, values, True) call by invoking it on multiple (transformed_data, tables) pairs while keeping (indices, values) fixed. That way we would also only do rescoring/pass 2 a single time on all the candidates retrieved from different lists.

The issue is that
(1) the binary heap data structure we use can't recognize duplicates, and
(2) the query_pq_sse function only knows the "local" id of a point in a list, not the global id.

To solve (2) we could pass a list with the global ids of all the points considered. This would add some overhead for query_pq_sse to pass around, but perhaps not much, and we wouldn't have to "relabel" the returned ids afterwards.

For (1) we could switch back to using insertion sort, or just try heuristically to remove some of the duplicates the heap is able to find.
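
A small sketch of the kind of merge step this is about: collapse candidates from multiple probed lists by their global ids before pass-2 rescoring (illustrative only, not the ctop/query_pq_sse code):

```python
def merge_candidates(per_list_results):
    """per_list_results: iterable of (global_ids, estimated_distances) pairs,
    one pair per probed list."""
    best = {}
    for ids, dists in per_list_results:
        for i, d in zip(ids, dists):
            # Keep only the best estimate per global id, so duplicates collapse.
            if i not in best or d < best[i]:
                best[i] = d
    return sorted(best.items(), key=lambda kv: kv[1])

# Example: id 7 appears in two lists but would be rescored only once afterwards.
print(merge_candidates([([3, 7, 9], [0.5, 0.2, 0.9]), ([7, 1], [0.1, 0.4])]))
# [(7, 0.1), (1, 0.4), (3, 0.5), (9, 0.9)]
```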

Add typing

Currently we are not using Python's typing functionality.
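
A hypothetical sketch of the style of hints this would add (the method names and parameters here are illustrative, not the current API):

```python
import numpy as np
import numpy.typing as npt


class IVF:
    def fit(self, data: npt.NDArray[np.float32], verbose: bool = False) -> "IVF":
        ...

    def query(
        self, q: npt.NDArray[np.float32], k: int, n_probes: int = 1
    ) -> npt.NDArray[np.int64]:
        ...
```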
