Comments (13)
My bad, it seems like Poetry silently crashed during publish... Live on PyPI now!
Oh, this is interesting, thanks for flagging it! The indexing part is fully deferred to ColBERT itself (Stanford's colbert-ai lab), but I'll add it to my to-do list to dig in and make sure the full GPU settings are properly passed through.
Overall, indexing is sadly quite slow (it's by far the slowest part of ColBERT).
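In the meantime, if you want to experiment with pinning indexing to a single GPU, the knob lives upstream: here's a minimal sketch of driving ColBERT directly, based on the colbert-ai README (the experiment name, index name, and collection below are placeholders, not something RAGatouille requires):

```python
from colbert import Indexer
from colbert.infra import Run, RunConfig

if __name__ == "__main__":
    # nranks controls how many GPU worker processes ColBERT spawns;
    # nranks=1 keeps encoding and clustering on a single device.
    with Run().context(RunConfig(nranks=1, experiment="sketch")):
        indexer = Indexer(checkpoint="colbert-ir/colbertv2.0")
        indexer.index(name="sketch_index", collection=["a passage", "another passage"])
```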
> @bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, but the CPU was the bottleneck, not the GPU (even with only one running).
> Do you have any QPS benchmarks, or memory footprint numbers relative to the number of vectors indexed?

cc @okhat

> For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker.
That's fair! I'm planning on building `RAGPretrainedModel.rerank(query: str, documents: list[str])` soon to support index-free re-ranking: just pass a query plus a list of strings (as suggested in #6). If you're interested, I'll ping you when it ships.
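To be clear, `rerank` doesn't exist yet; below is a hypothetical sketch of how the planned signature might be called (the model loading is the usual `from_pretrained` path from the README):

```python
from ragatouille import RAGPretrainedModel

RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

# Hypothetical API (not shipped yet): score a small list of candidate
# strings against a query, with no index involved.
results = RAG.rerank(
    query="What does late interaction mean in ColBERT?",
    documents=[
        "ColBERT compares every query token to every document token at search time.",
        "Bi-encoders compress each document into a single dense vector.",
    ],
)
```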
> Given the number of vectors, I wouldn't be surprised if queries were slower than with more traditional methods.
I'm pretty sure that once indexed (which is indeed the challenging part), ColBERT still queries super fast, but it would be worth double-checking!
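For reference, once an index exists, querying it is already just a couple of lines; here's a sketch using the index path from your log (assuming the usual `from_index`/`search` flow):

```python
from ragatouille import RAGPretrainedModel

# Load the index built earlier (path taken from the log in this thread).
RAG = RAGPretrainedModel.from_index(".ragatouille/colbert/indexes/presentation_1000")
results = RAG.search(query="how fast are ColBERT queries at scale?", k=10)
```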
> Thanks for all your hard work, ColBERT has always been challenging to use!
Thank you, I'm glad this has been useful to you!
> If you're interested, I'll ping you when it ships.
Please yes!
> I'm pretty sure that once indexed (which is indeed the challenging part), ColBERT still queries super fast, but it would be worth double-checking!
I'm still working with RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.
> I'm still working with RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.
Would love that, yes! All early feedback is more than welcome, thank you!
I'll close the issue for now (to keep track of bugs), but feel free to keep posting here (I'll ping you on the re-ranker issue once that's live).
Hey @timothepearce, I've created the issue here!
I think this is what's going on:
- The README examples are too short. I'll update them shortly to make sure the doc collections are big enough.
- I spy in your trace that you're using 2 GPUs (`num_gpus = 2`). The embedding sample ending up as a `NoneType` object is probably because upstream ColBERT tries to split the document collection into batches for both GPUs, then fails because there aren't enough documents and still creates an empty batch (toy sketch below).
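A toy illustration of that suspected failure mode; this is not ColBERT's actual splitting code, just the shape of the problem:

```python
def shard(collection: list[str], nranks: int) -> list[list[str]]:
    # Integer division rounds down: 1 document across 2 ranks gives
    # per_rank == 0, so every shard comes out empty.
    per_rank = len(collection) // nranks
    return [collection[r * per_rank:(r + 1) * per_rank] for r in range(nranks)]

print(shard(["a single short document"], nranks=2))
# -> [[], []] -- one rank gets nothing to encode, and the sample it
#    produces downstream ends up as None/empty instead of a tensor.
```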
Does it work if you use more examples?
I've just merged #18 (adding the Wikipedia page fetcher) and pushed a fixed version to PyPI; the README example should be a lot more functional now!
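For reference, the updated README example is along these lines (a sketch; the page titles are placeholders):

```python
from ragatouille import RAGPretrainedModel
from ragatouille.utils import get_wikipedia_page

RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

# Full Wikipedia pages make the collection large enough to split
# across GPUs without producing empty batches.
my_documents = [
    get_wikipedia_page("Hayao_Miyazaki"),
    get_wikipedia_page("Studio_Ghibli"),
]
RAG.index(collection=my_documents, index_name="wiki_demo")
```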
That was quick! I was inspecting the source code while you were fixing it. Nice job!
I'm struggling with another issue (not related to your package), but I'll keep you informed.
Thanks, glad I could fix it for you!
> I'm struggling with another issue (not related to your package), but I'll keep you informed.
Oh sorry, I glossed over that -- let me know if it's something I can assist with!
@bclavie, the `0.0.2b` version isn't available on PyPI, but the code works; I tested it by cloning the repo instead.
@bclavie Not a bug, but while running some benchmarks I indexed 1000 documents and noticed that the library currently only uses one GPU at a time, even though it loads the embedding model on both devices.
```
[Jan 06, 15:23:17] #> Creating directory .ragatouille/colbert/indexes/presentation_1000
#> Starting...
#> Starting...
nranks = 2 num_gpus = 2 device=1
[Jan 06, 15:23:21] [1] #> Encoding 17079 passages..
nranks = 2 num_gpus = 2 device=0
[Jan 06, 15:23:21] [0] #> Encoding 31537 passages..
[Jan 06, 15:23:52] [0] avg_doclen_est = 99.43394470214844 len(local_sample) = 31,537
[Jan 06, 15:23:52] [1] avg_doclen_est = 99.43394470214844 len(local_sample) = 17,079
[Jan 06, 15:23:52] [0] Creating 32,768 partitions.
[Jan 06, 15:23:52] [0] *Estimated* 7,650,049 embeddings.
[Jan 06, 15:23:52] [0] #> Saving the indexing plan to .ragatouille/colbert/indexes/presentation_1000/plan.json ..
Clustering 4783720 points in 128D to 32768 clusters, redo 1 times, 20 iterations
Preprocessing in 0.14 s
Iteration 0 (696.46 s, search 696.33 s): objective=1.51976e+06 imbalance=1.742 nsplit=0
```
Here is the output of `nvidia-smi`:
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03    Driver Version: 535.129.03    CUDA Version: 12.2             |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name          Persistence-M        | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf    Pwr:Usage/Cap        | Memory-Usage         | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0 Off |                  Off |
|  0%   38C    P8             16W / 450W  | 1036MiB / 24564MiB   |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 4090        Off | 00000000:03:00.0 Off |                  Off |
| 30%   31C    P2             67W / 450W  | 2616MiB / 24564MiB   |    100%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|========================================================================================|
|    0   N/A  N/A    1654032      C   ...np/miniconda3/envs/np-ml/bin/python     1026MiB |
|    1   N/A  N/A    1654070      C   ...np/miniconda3/envs/np-ml/bin/python     2600MiB |
+---------------------------------------------------------------------------------------+
```
Do you know how I can optimise the embedding/indexing phase?
@bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, but the CPU was the bottleneck, not the GPU (even with only one running).
Do you have any QPS benchmarks, or memory footprint numbers relative to the number of vectors indexed?
For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker. Given the number of vectors, I wouldn't be surprised if queries were slower than with more traditional methods.
Thanks for all your hard work, ColBERT has always been challenging to use!