Comments (8)
@bclavie Yup, same error.
- tested on version 0.0.6b4
- memory doesn't seem to be a problem (it runs and crashes with 6 GB left to spare)
- the example is running on a new conda env with Python 3.11
from ragatouille.
Hey, thanks for raising the issue, I haven't seen anything like this yet 🤔.
It'd be very helpful if you could:
- There's been quite a lot of activity, and we're currently on version 0.0.6b1. Could you try again with the latest version, to make sure it's not related to the upstream multiprocessing, which is now bypassed?
- Monitor your memory usage while you run this, to make sure it's not an OOM crash?
- Post your dependencies, if neither of the above fixes it or reveals the issue?
Thank you!
I'm trying to investigate. This appears to be a dependency issue, compounded by a problem loading the upstream ColBERT .cpp extensions.
While we figure out exactly what caused this, I've reverted some recent dependency updates and pushed a new version to PyPI. Let me know if it fixes it for you guys!
Hi @bclavie, I tried with the latest version on PyPI, still the same error.
@bclavie @akshaydevml An update on this example: I've tested it on a Windows 11 machine and I'm getting the same error output.
Hello @bclavie, I have the same error on an Intel Mac with 32 GB of RAM and Python 3.11.
(Copy/pasting this message in a few related issues)
Hey guys!
Thanks a lot for bearing with me as I juggle everything while trying to diagnose this. It's complicated to fix with relatively little time to dedicate to it, as the dependencies causing issues don't seem to be the same for everyone, with no clear platform pattern as of yet. Overall, the issues center around the usual suspects: faiss and CUDA.
While this means I can't fix the issue with PLAID-optimised indices just yet, I'm also noticing that most of the bug reports here are about relatively small collections (hundreds to low thousands of documents). To lower the barrier to entry as much as possible, #137 is introducing a second index format, which doesn't actually build an index, but performs an exact search over all documents (as a stepping stone towards #110, which would use an HNSW index as an in-between compromise between PLAID optimisation and exact search).
This approach doesn't scale, but it offers the best possible search accuracy and still completes in a few hundred milliseconds at most for small collections. Ideally, it'll also open up the way to shipping lower-dependency versions (#136).
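To make the trade-off concrete, the exact search described above can be sketched in a few lines of plain Python. This is only an illustration of late-interaction ("MaxSim") scoring over full vectors, not RAGatouille's actual implementation; the function names here are hypothetical.

```python
# Hypothetical sketch of exact late-interaction (MaxSim) search over full
# vectors: no ANN index at all, just a linear scan over every document's
# token embeddings. Not RAGatouille's actual code.

def maxsim_score(query_emb, doc_emb):
    """Sum, over query token vectors, of the max dot product with any doc token vector."""
    return sum(
        max(sum(qi * di for qi, di in zip(q, d)) for d in doc_emb)
        for q in query_emb
    )

def exact_search(query_emb, collection, k=3):
    """Score every document exactly; return the top-k (doc_id, score) pairs."""
    scored = [(doc_id, maxsim_score(query_emb, emb)) for doc_id, emb in collection.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```

The scan is O(documents × tokens), which is exactly why it doesn't scale, but for collections in the hundreds it stays comfortably fast.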
The PR above (#137) is still a work in progress, as it needs CRUD support, tests, documentation, better precision routing (fp32/bfloat16), etc. (and potentially searching only a subset of document ids).
However, it's working in a rough state for me locally. If you'd like to give it a try (with the caveat that it might very well break!), please feel free to install the library directly from the feat/full_vectors_indexing branch and add the following argument to your index() call:
index(
    …,
    index_type="FULL_VECTORS",
)
Any feedback is appreciated, as always, and thanks again!
Hey @alxpez @YossefAboukrat, this was most likely an issue related to faiss, and it should FINALLY be fixed by the new experimental default indexing in 0.0.8, which skips faiss entirely (doing K-means in pure PyTorch) as long as you're indexing fewer than ~100k documents!
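For anyone curious what "K-means without faiss" amounts to, here is a minimal Lloyd's K-means in dependency-free Python. It only illustrates the algorithm that faiss was being used for; the actual 0.0.8 indexer runs it on PyTorch tensors, and none of these names come from the library.

```python
# Minimal Lloyd's K-means in pure Python: the clustering step faiss was
# used for, sketched without any dependency. Illustrative only; the real
# 0.0.8 indexer operates on PyTorch tensors.

def kmeans(points, k, iters=10):
    # Deterministic init for the demo: take the first k points as centroids.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((pi - ci) ** 2 for pi, ci in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean
        # (empty clusters keep their previous centroid).
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids
```

For small collections (the sub-100k regime mentioned above), a handful of such iterations converges quickly even without GPU acceleration.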
Related Issues (20)
- Windows support
- add_to_index uses too much GPU RAM and crashes
- What should I do if I want a blank, untrained ColBERT?
- How to check the centroids and the data in the clusters?
- Feature Request: Please include server search code from official ColBERT repository into this repository for production usages.
- How to do Indexing using from_index() on CPU only?
- Trainer stuck
- How to load a fine-tuned model?
- About Fine-Tuning
- Stuck at "Loading segmented_maxsim_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)..."
- ImportError: cannot import name 'PromptTemplate' from 'llama_index' (unknown location)
- Compatibility with LangChain 0.2.0
- How to extract embeddings generated by ColBERT?
- Idea: Make CorpusProcessor (and splitter_fn / preprocessing_fn) to have access to metadata
- Embedding Model with Existing Index
- How to index collection using generator function?
- Training script is not working as is
- Making deletions will alter the collection.json file, hence the search function unusable because we access the collection using list indices.
- can't access my finetuned model
- Use base model or sentence transformer