
Comments (13)

bclavie commented on July 1, 2024

My bad, seems like poetry silently crashed during publish... Live on PyPI now!

from ragatouille.

bclavie commented on July 1, 2024

Oh, this is interesting, thanks for flagging it! The indexing part is fully deferred to ColBERT itself (Stanford's colbert-ai lab), but I'll add it to my to-do list to dig in and make sure the full GPU settings are properly passed through.

Overall, sadly indexing can be quite slow (it's by far the slowest part of ColBERT).


bclavie commented on July 1, 2024

> @bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, but the CPU was the bottleneck, not the GPU (even with only one running).
> Do you have any QPS benchmarks and memory footprint compared to the number of vectors indexed?

cc @okhat

> For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker.

That's fair! I'm planning on looking at building RAGPretrainedModel.rerank(query: str, documents: list[str]) soon to support index-free re-ranking, just pass a query + a list of strings (as suggested in #6). If you're interested, I'll ping you when it ships.
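For readers curious what such index-free re-ranking does per (query, document) pair, here is a minimal sketch of ColBERT-style late-interaction (MaxSim) scoring. The 2-D token vectors below are made-up stand-ins for real model embeddings, and `rerank` here is a toy function, not the RAGatouille API:

```python
# Toy sketch of ColBERT-style late-interaction (MaxSim) scoring,
# the mechanism an index-free reranker applies per (query, document) pair.
# The embeddings are hypothetical 2-D vectors; a real model produces them.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_embs, doc_embs):
    # For each query token, take its best match among document tokens,
    # then sum those maxima: ColBERT's late-interaction score.
    return sum(max(dot(q, d) for d in doc_embs) for q in query_embs)

def rerank(query_embs, docs):
    # docs: list of (doc_id, token_embeddings); returns ids best-first.
    scored = [(doc_id, maxsim_score(query_embs, embs)) for doc_id, embs in docs]
    return sorted(scored, key=lambda item: item[1], reverse=True)

query = [[1.0, 0.0], [0.0, 1.0]]
docs = [
    ("a", [[1.0, 0.0], [0.9, 0.1]]),  # strong match for the first query token
    ("b", [[0.1, 0.1]]),              # weak match for both tokens
]
print(rerank(query, docs))  # "a" outranks "b"
```

Because scoring is per-pair and needs no index, this is exactly the shape of workload that suits a reranker over a candidate list.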

> Given the number of vectors, I wouldn't be surprised if queries were slower than more traditional methods.

I'm pretty sure once indexed (that is indeed a challenging task), ColBERT would still query super fast, but would be worth double checking!

> Thanks for all your hard work, ColBERT has always been challenging to use!

Thank you, I'm glad this has been useful to you!


timothepearce commented on July 1, 2024

> If you're interested, I'll ping you when it ships.

Please yes!

> I'm pretty sure once indexed (that is indeed a challenging task), ColBERT would still query super fast, but would be worth double checking!

I'm still working on RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.


bclavie commented on July 1, 2024

> I'm still working on RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.

Would love that, yes! All early feedback is more than welcome, thank you!

I'll close the issue for now (to keep track of bugs), but feel free to post them here (I'll ping you on the reranker issue once that's live).


bclavie commented on July 1, 2024

Hey @timothepearce, I've created the issue here!

I think this is what's going on:

1. The README examples are too short. I'll update them shortly to make sure the doc collections are big enough.

2. I spy in your trace that you're using 2 GPUs (num_gpus = 2). The embedding sample ending up as a NoneType object is probably because upstream ColBERT tries to split the document collection into batches for both GPUs, fails because there aren't enough documents, and still creates an empty batch.

Does it work if you use more examples?
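The suspected failure mode can be illustrated with a toy sharding function (my own sketch, not ColBERT's actual code): with one document split across two ranks, one rank gets an empty batch.

```python
# Toy illustration of the suspected multi-GPU failure mode: a contiguous
# split of a tiny collection across ranks leaves one rank with nothing.
# This is a hypothetical sketch, not ColBERT's real sharding logic.

def shard(collection, nranks, rank):
    # Ceiling-divide so every document lands somewhere, then slice out
    # this rank's contiguous chunk.
    chunk = (len(collection) + nranks - 1) // nranks
    return collection[rank * chunk : (rank + 1) * chunk]

docs = ["a single short document"]
print(shard(docs, nranks=2, rank=0))  # ['a single short document']
print(shard(docs, nranks=2, rank=1))  # [] -- the empty batch that can
                                      # surface downstream as a NoneType error
```

With a collection larger than the number of ranks, every shard is non-empty, which is why using more example documents sidesteps the crash.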


bclavie commented on July 1, 2024

I've just merged #18 (adding a Wikipedia page fetcher) and pushed a fixed version to PyPI; the README example should be a lot more functional now!


timothepearce commented on July 1, 2024

That was quick! I was inspecting the source code while you were fixing it. Nice job!

I'm struggling with another issue (not related to your package), but I'll keep you informed.


bclavie commented on July 1, 2024

Thanks, glad I could fix it for you!


bclavie commented on July 1, 2024

> I'm struggling with another issue (not related to your package), but I'll keep you informed.

Oh sorry, I glossed over that -- let me know if it's something I can assist with!


timothepearce commented on July 1, 2024

@bclavie, the 0.0.2b version isn't available on PyPI, but the code works; I tested it by cloning the repo instead.


timothepearce commented on July 1, 2024

@bclavie not a bug, but while carrying out some benchmarks I indexed 1000 documents and noticed that the library currently uses only one GPU at a time, yet loads the embedding model on both devices.

```
[Jan 06, 15:23:17] #> Creating directory .ragatouille/colbert/indexes/presentation_1000

#> Starting...
#> Starting...
nranks = 2 	 num_gpus = 2 	 device=1
[Jan 06, 15:23:21] [1] 		 #> Encoding 17079 passages..
nranks = 2 	 num_gpus = 2 	 device=0
[Jan 06, 15:23:21] [0] 		 #> Encoding 31537 passages..
[Jan 06, 15:23:52] [0] 		 avg_doclen_est = 99.43394470214844 	 len(local_sample) = 31,537
[Jan 06, 15:23:52] [1] 		 avg_doclen_est = 99.43394470214844 	 len(local_sample) = 17,079
[Jan 06, 15:23:52] [0] 		 Creating 32,768 partitions.
[Jan 06, 15:23:52] [0] 		 *Estimated* 7,650,049 embeddings.
[Jan 06, 15:23:52] [0] 		 #> Saving the indexing plan to .ragatouille/colbert/indexes/presentation_1000/plan.json ..
Clustering 4783720 points in 128D to 32768 clusters, redo 1 times, 20 iterations
  Preprocessing in 0.14 s
  Iteration 0 (696.46 s, search 696.33 s): objective=1.51976e+06 imbalance=1.742 nsplit=0
```

Here is the output of nvidia-smi:

```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0 Off |                  Off |
|  0%   38C    P8              16W / 450W |   1036MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 4090        Off | 00000000:03:00.0 Off |                  Off |
| 30%   31C    P2              67W / 450W |   2616MiB / 24564MiB |    100%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   1654032      C   ...np/miniconda3/envs/np-ml/bin/python     1026MiB |
|    1   N/A  N/A   1654070      C   ...np/miniconda3/envs/np-ml/bin/python     2600MiB |
+---------------------------------------------------------------------------------------+
```

Do you know how I can optimise the embedding/indexing phase?
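One generic CUDA-level workaround for a second GPU sitting idle while still holding model memory (an assumption on my part; nothing in this thread confirms it as the right fix for RAGatouille) is to hide the extra device before any CUDA library initializes:

```python
import os

# Expose only GPU 0 to this process. This must run before torch /
# colbert initialize CUDA, otherwise it has no effect. This is generic
# CUDA behaviour, not a RAGatouille-specific setting.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 0
```

With only one device visible, the distributed indexer should fall back to a single rank instead of loading the model on both cards.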


timothepearce commented on July 1, 2024

@bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, but the CPU was the bottleneck, not the GPU (even with only one running).

Do you have any QPS benchmarks and memory footprint compared to the number of vectors indexed?

For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker. Given the number of vectors, I wouldn't be surprised if queries were slower than more traditional methods.

Thanks for all your hard work, ColBERT has always been challenging to use!

