Comments (6)

okhat commented on June 24, 2024

Thank you for the kind words!

The partitions parameter is important: it is the number of centroids used by FAISS for indexing and search. A higher value means slower FAISS indexing but faster retrieval. You can make the number 2x or 4x smaller and it will still be fine.

The sample parameter dictates how much of the data is used to train the FAISS index, so here it's 30%. If you drop this parameter completely, the default is internally 5%. More is better, but 5--30% is enough.
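To make the two knobs concrete, here is a minimal FAISS sketch. It is not ColBERT's actual faiss_index.py (the names, sizes, and index type are illustrative); it only shows that partitions plays the role of the number of IVF centroids (nlist) and that sample controls how many embeddings the clustering step sees.

```python
import numpy as np
import faiss

dim = 128                                                      # ColBERT embedding dimension
embeddings = np.random.rand(100_000, dim).astype("float32")    # stand-in for the document embeddings

partitions = 1024        # number of centroids (nlist); 2x or 4x smaller still works
sample_fraction = 0.30   # fraction of embeddings used to train (cluster) the index

quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFFlat(quantizer, dim, partitions)

# Training (clustering) only sees the sampled subset; this is the step that sample controls.
sample_ids = np.random.choice(len(embeddings), int(sample_fraction * len(embeddings)), replace=False)
index.train(embeddings[sample_ids])

# Every embedding is still added and stored, regardless of the sample fraction.
index.add(embeddings)
```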

okhat commented on June 24, 2024

Yes, all documents will be indexed! Irrespective of what you choose for sample, all embeddings are going to be stored.

Sampling only affects the not-so-critical step of how the internal representations are built, and keeps that step cheap.

littlewine commented on June 24, 2024

So, as far as I understand, sample is used when building the FAISS index, but it does not mean that a sample value of 0.3 will index only 30% of the documents. Correct?

littlewine commented on June 24, 2024

Another semi-related indexing parameter question:

What does the --doc_maxlen 180 parameter do? From my understanding, it truncates passages to 180 tokens, but what happens if a passage/document is longer than 180? Does it throw away the rest of the document (FirstP), or does it split the document into multiple passages and search across all of them (MaxP)?

Also, was 180 the value used in the original work? I am planning to use ColBERT to index other datasets besides MSMarco-passage, so I am not sure whether I would in fact have to retrain everything from scratch.

After trying to index a collection using the checkpoint from the original work, transformers gave me this warning, which I guess should alarm me:

Some weights of ColBERT were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['linear.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing ColBERT: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing ColBERT from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing ColBERT from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
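From what I can tell, this is the standard transformers notice whenever the model class being loaded defines a layer that has no weights in the checkpoint it loads from. A minimal illustration of how it arises (the ColBERTLike class below is a hypothetical stand-in, not ColBERT's actual class):

```python
import torch.nn as nn
from transformers import BertModel, BertPreTrainedModel

class ColBERTLike(BertPreTrainedModel):
    """Hypothetical stand-in: BERT plus an extra projection layer, like ColBERT's `linear`."""
    def __init__(self, config, dim=128):
        super().__init__(config)
        self.bert = BertModel(config)
        self.linear = nn.Linear(config.hidden_size, dim, bias=False)
        self.init_weights()

# bert-base-uncased contains no `linear.weight`, so transformers reports it as newly
# initialized, and the unused `cls.*` pretraining heads are reported as not used.
model = ColBERTLike.from_pretrained("bert-base-uncased")
```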

If I understand correctly, trying to index using a checkpoint created with a different --doc_maxlen would likely create inconsistencies and result in a worse representation of the corpus.

Thank you again for your help! :)

okhat commented on June 24, 2024

By default it's FirstP. You'll have to split the documents up if you want to implement MaxP on top of this.

You don't have to retrain. Just split the documents up into passages of 100--150 words (with a plain Python whitespace split) and select an appropriate --doc_maxlen in the range 180--256. It should work fine.
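The preprocessing can be as simple as this (the function name and passage length are illustrative, not part of ColBERT):

```python
def split_into_passages(document: str, words_per_passage: int = 120) -> list[str]:
    """Chop a long document into ~120-word passages using a plain whitespace split."""
    words = document.split()
    return [" ".join(words[i:i + words_per_passage])
            for i in range(0, len(words), words_per_passage)]

# Each passage then becomes its own entry in the collection you index, and a
# --doc_maxlen of 180-256 comfortably covers ~120 words after tokenization.
```

If you want MaxP-style behaviour on top, keep a passage-to-document mapping and take the maximum passage score per original document at search time.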

Lim-Sung-Jun commented on June 24, 2024

> The sample parameter dictates how much of the data is used to train the FAISS index, so here it's 30%. If you drop this parameter completely, the default is internally 5%. More is better, but 5--30% is enough.

> Yes, all documents will be indexed! Irrespective of what you choose for sample, all embeddings are going to be stored.

> Sampling only affects the not-so-critical step of how the internal representations are built, and keeps that step cheap.

hello,
I have 2 questions, about sampling and index.add.

  1. Sampling
    What I've understood so far is that sampling is only used to analyse the distribution of the documents (the collection) so that FAISS can build its index. Is that right?
    If so, do we use only 30% just to reduce the cost, because sampling 100% would make index.train() take too much time?

  2. index.add
    Why do we feed only three ".pt" files to the index.add function, i.e. index.add(sub_collection)?
    Can't we just feed all the files to the function at once? (A rough sketch of the pattern I mean is at the end of this comment.)

Thank you!

+) I would also like to know about the following:

  • the role of slice in faiss_index.py
  • the role of chunk_size in encoder.py
  • how the .pt files are made: with what batch size and subset size?
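For reference, this is roughly the pattern I am asking about: the encoder writes the embeddings out as several .pt shards, and they get added to the trained index a few files at a time rather than all at once (paths and sizes below are illustrative, not ColBERT's actual code):

```python
import glob
import numpy as np
import torch
import faiss

dim, partitions = 128, 1024
quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFFlat(quantizer, dim, partitions)

# Train once on a sampled subset of embeddings (stand-in data here), as discussed above.
index.train(np.random.rand(50_000, dim).astype("float32"))

# Then add the saved embedding shards one (or a few) at a time, so the whole
# collection never has to sit in memory at once.
for path in sorted(glob.glob("index_dir/*.pt")):   # illustrative location of the .pt shards
    shard = torch.load(path)                       # tensor of shape (num_embeddings, dim)
    index.add(shard.float().numpy())
```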
