Comments (10)

nikhilmakan02 commented on June 7, 2024

After further digging into the traceback I am starting to think this may be something to do with the way the scores are being processed after the query is made to hnswlib. The traceback appears to be pointing to this method:

def from_protobuf(cls: Type[T], pb_msg: 'NdArrayProto') -> 'T':
    """
    Read ndarray from a proto msg
    :param pb_msg:
    :return: a numpy array
    """
    source = pb_msg.dense
    if source.buffer:
        x = np.frombuffer(bytearray(source.buffer), dtype=source.dtype)
        return cls._docarray_from_native(x.reshape(source.shape))
    elif len(source.shape) > 0:
        return cls._docarray_from_native(np.zeros(source.shape))
    else:
        raise ValueError(f'proto message {pb_msg} cannot be cast to a NdArray')

JoanFM commented on June 7, 2024

Can you explain how you built the index?

nikhilmakan02 commented on June 7, 2024

It's a large index of 3 million plus documents. I have a set of parquet files that contain about 100,000 rows each. These files are read in as DataFrames, converted to dicts, and then the same function def build_doc_list() as above is used to create a doc list.

The db is then initialized and indexed as below.

doc_index = HnswDocumentIndex[AddressDoc](work_dir=work_dir)
doc_index.index(dl)

The whole thing runs in a loop, processing one file at a time.

Is the fact that I initialise the doc index on each loop iteration potentially the problem? Should that sit outside the loop? It's worth mentioning that this process did crash at one point and I restarted it from where it left off. I may just try rebuilding the whole db index and see where that gets me.

I see you have labeled this as a bug; can you provide a bit more detail on what your initial thinking is?

JoanFM commented on June 7, 2024

I just labeled it as a bug because it seems like one, but I have no thinking other than suspecting that something went wrong at indexing time.

I would indeed not initialize the index every time, but keep only the indexing call inside the loop.
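
Roughly like this, assuming the parquet layout and the build_doc_list helper you described (the file pattern and helper name are just placeholders):

from docarray.index import HnswDocumentIndex
import glob
import pandas as pd

# open/create the index once, outside the loop
doc_index = HnswDocumentIndex[AddressDoc](work_dir=work_dir)

for path in glob.glob("parquet_files/*.parquet"):  # illustrative file layout
    records = pd.read_parquet(path).to_dict(orient="records")
    dl = build_doc_list(records)  # your existing helper
    doc_index.index(dl)  # only the indexing call runs per file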

JoanFM commented on June 7, 2024

Please let us know what the result is when you reindex again.

nikhilmakan02 commented on June 7, 2024

@JoanFM thanks for the response. After some debugging over the weekend I managed to nail down the issue. I don't believe it is a bug, as it was a subtle error on my part.

I had two scripts one to create the index and another to search through it.

The script to create the index used the following document class

class AddressDoc(BaseDoc):
    ELID: int
    FULL_ADDRESS: str
    EMBEDDINGS: NdArray[768] = Field(
        max_elements=3500000, space="cosine", num_threads=20
    )

The script to search the index used this document class, and as a result it seemed to throw the error.

class AddressDoc(BaseDoc):
    ELID: int
    FULL_ADDRESS: str
    EMBEDDINGS: NdArray[768]

Maybe as a feature request, I wonder if it's possible for this line to throw an error about an inconsistent class being used to initialise an already existing index:

HnswDocumentIndex[AddressDoc](work_dir=work_dir)
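
Something along these lines is what I have in mind; the sidecar file name and the check itself are purely illustrative, since I don't know how the backend stores its configuration internally:

import json
import os

CONFIG_FILE = "schema_config.json"  # hypothetical file written when the index is first created

def check_schema_config(work_dir: str, current_config: dict) -> None:
    """Raise if the config used to build the index differs from the current class config."""
    path = os.path.join(work_dir, CONFIG_FILE)
    if os.path.exists(path):
        with open(path) as f:
            stored = json.load(f)
        if stored != current_config:
            raise ValueError(
                f"Index in {work_dir} was built with {stored}, "
                f"but the current document class declares {current_config}"
            )
    else:
        with open(path, "w") as f:
            json.dump(current_config, f)

# for AddressDoc above this might be called with something like:
# check_schema_config(work_dir, {"EMBEDDINGS": {"space": "cosine", "max_elements": 3500000}})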

I did have a few follow-on questions about this which hopefully you can shed some light on.

  1. I was hoping that by setting the number of threads I could improve the performance of writing to the hnswlib backend, as it takes quite a long time to index; however, from reviewing my PC resources it's clear it's not really parallelised. Do you have any tips on how to improve this?
  2. When using the hnswlib backend you create two files: a .bin file (for the embeddings, I assume) and a SQLite db file for everything else. I have noticed that if I set the space property, Field(space="cosine"), the size of the SQLite db file blows up quite drastically. Do you have any thoughts on why this is?

If it is of value, the example below can be used to demonstrate both the reshape error and the blown-up SQLite db by uncommenting the relevant lines in class Person.

from docarray import BaseDoc, DocList
from docarray.index import HnswDocumentIndex
from docarray.typing import NdArray
from pydantic import Field
import numpy as np
import os

work_dir = "hnsw_index"  # placeholder added so the snippet runs; any writable directory works

###### Build db ######


class Person(BaseDoc):
    name: str
    follower: int
    # embeddings: NdArray[32]
    embeddings: NdArray[32] = Field(space="cosine")


data = [
    {"name": f"Maria_{i}", "follower": 12345, "embeddings": np.zeros(32)}
    for i in range(20)
]

db_dl = DocList[Person](
    [
        Person(
            name=d["name"],
            follower=d["follower"],
            embeddings=d["embeddings"],
        )
        for d in data
    ]
)
print("db data complete")

doc_index = HnswDocumentIndex[Person](
    work_dir=os.path.join(work_dir)
)
doc_index.index(db_dl)
print("db indexed")
print(f"num docs: {doc_index.num_docs()}")


###### Search db ######


class Person(BaseDoc):
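    # note: this class is re-declared to mimic the separate search script; switching which
    # embeddings line is commented out here reproduces the class mismatch described above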
    name: str
    follower: int
    # embeddings: NdArray[32]
    embeddings: NdArray[32] = Field(space="cosine")


doc_index = HnswDocumentIndex[Person](
    work_dir=os.path.join(work_dir)
)

se_dl = DocList[Person](
    [
        Person(
            name=d["name"],
            follower=d["follower"],
            embeddings=d["embeddings"],
        )
        for d in data[:1]
    ]
)
print("search data complete")

doc_index.find_batched(se_dl, search_field="embeddings", limit=1)
print("db search complete")

JoanFM commented on June 7, 2024
  1. I have a question: I do not understand why it blew up. Is it because the configuration was not respected? The fields of the class are the same. Without understanding this, it is hard to know how to validate the incompatibility.

  2. I will review, but this is handled by HNSW; I would first check that the flag is properly passed to the hnswlib dependency.

  3. The reason having "cosine" blows up the size is that in that case hnswlib normalizes the vectors, and therefore we cannot rely on hnswlib to reconstruct the original vector to return as a result, so the originals have to be stored as well. For this, we may be able to add a configuration value for users who do not mind losing the originals. (Not sure how this should be named; maybe you could easily provide a PR for this.)
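
A small standalone hnswlib snippet, independent of docarray, should show what I mean (assuming get_items returns the internally stored vectors):

import hnswlib
import numpy as np

index = hnswlib.Index(space="cosine", dim=4)
index.init_index(max_elements=10)
index.add_items(np.array([[3.0, 0.0, 4.0, 0.0]], dtype=np.float32), np.array([0]))

# with space="cosine" the stored vector is the normalized one (roughly [0.6, 0.0, 0.8, 0.0]),
# so the original [3, 0, 4, 0] cannot be recovered from hnswlib and has to be kept elsewhere
print(index.get_items([0]))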

JoanFM commented on June 7, 2024

@nikhilmakan02, may I ask how you configured num_threads?

nikhilmakan02 commented on June 7, 2024

@JoanFM apologies for the late reply on this.

may I ask how you configured num_threads?

I did it in the class as below; is that not correct?

class AddressDoc(BaseDoc):
    ELID: int
    FULL_ADDRESS: str
    EMBEDDINGS: NdArray[768] = Field(
        max_elements=3500000, space="cosine", num_threads=20
    )

1. I have a question: I do not understand why it blew up. Is it because the configuration was not respected? The fields of the class are the same. Without understanding this, it is hard to know how to validate the incompatibility.

Not sure if I understand what you are asking here when you say 'why it blew up'. The class has the same name and the fields have the same names, correct, but the class used to build the index had the custom configuration Field(max_elements=3500000, space="cosine", num_threads=20), which was omitted from the class used to initialise the index when searching, and that seemed to cause the reshape error. After reading your answer to question 3 and also reading this thread, I am starting to think this reshape error is purely because the index was built using the cosine distance (and therefore the vectors were normalised). Reading that index in again and initialising it without this config would cause an issue, since technically you can't change the distance metric once the index has been created with the cosine distance.

3. The reason having "cosine" blows up the size is that in that case hnswlib normalizes the vectors, and therefore we cannot rely on hnswlib to reconstruct the original vector to return as a result, so the originals have to be stored as well. For this, we may be able to add a configuration value for users who do not mind losing the originals. (Not sure how this should be named; maybe you could easily provide a PR for this.)

I definitely think this would be worth doing to bring the DB size down. Also, in some vector DBs you have the option to not return the embeddings in the search results to improve performance; this is preferable, as ultimately I don't need the embeddings after the search is complete. I have never submitted a PR on GitHub before, so let me know if there is something I can do to help here and how I go about doing it.

JoanFM commented on June 7, 2024

Hey @nikhilmakan02 ,

There is always a first time contributing to an open source project, and I can assure you it is very rewarding.

You just need to make your changes in a fork of your own, and then you can open a PR from that branch via the GitHub UI itself.

Just make sure to follow the CONTRIBUTING guidelines (https://github.com/docarray/docarray/blob/main/CONTRIBUTING.md) and to join our Discord server (https://discord.com/invite/DRa9JpGT) to get assistance and communicate with the community.
