Comments (8)
Hey! I believe this is an issue adjacent to #60.
Multiprocessing still seems to be causing some problems upstream. The good news is that this PR by @Anmol6, stanford-futuredata/ColBERT#294, should remove it entirely and solve at least some of those problems. It'll hopefully be merged soon, but if you want to try it out in the meantime, a workaround is to install ColBERT directly from his branch.
from ragatouille.
Hey @GMartin-dev, version 0.0.6b0 now ships with `colbert-ai` 0.2.18, which should eliminate the `mp.Manager()` calls during indexing. Could you check it out and let us know if this solves your issue?
@bclavie thanks for the tip, I just tried it. It seems the original error is gone and a new issue has emerged.
Now I see the plan.json file:
(screenshot: the index directory, showing only a `plan.json` file)
But it's actually missing the index files right?
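For reference, a quick way to check what actually got written is to list the files in the index directory. This is a minimal sketch: the path in the usage line assumes RAGatouille's default index location and an index name of my own choosing, so adjust both to your setup.

```python
from pathlib import Path

def list_index_files(index_dir: str) -> list[str]:
    """Return the names of the files present in an index directory."""
    return sorted(p.name for p in Path(index_dir).glob("*") if p.is_file())

# Hypothetical default location used by RAGatouille; adjust to your setup.
print(list_index_files(".ragatouille/colbert/indexes/my_index"))
```

A completed PLAID index typically contains quite a few files besides `plan.json` (e.g. `metadata.json`, `centroids.pt`, and per-chunk `*.codes.pt` / `*.residuals.pt` files), so a directory holding only `plan.json` suggests indexing stopped partway through.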
`Successfully installed colbert-ai-0.2.18 ragatouille-0.0.6b0`
Same command, same dependencies, just updated to the version you pointed to:
```
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/RAGPretrainedModel.py", line 183, in index
    return self.model.index(
           ^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/models/colbert.py", line 349, in index
    self.indexer.index(
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 78, in index
    self.__launch(collection)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 87, in __launch
    launcher.launch_without_fork(self.config, collection, shared_lists, shared_queues, self.verbose)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/launcher.py", line 93, in launch_without_fork
    return_val = run_process_without_mp(self.callee, new_config, *args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/launcher.py", line 109, in run_process_without_mp
    return_val = callee(config, *args)
                 ^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 33, in encode
    encoder.run(shared_lists)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 68, in run
    self.train(shared_lists) # Trains centroids from selected passages
    ^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 237, in train
    bucket_cutoffs, bucket_weights, avg_residual = self._compute_avg_residual(centroids, heldout)
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 315, in _compute_avg_residual
    compressor = ResidualCodec(config=self.config, centroids=centroids, avg_residual=None)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/codecs/residual.py", line 24, in __init__
    ResidualCodec.try_load_torch_extensions(self.use_gpu)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/codecs/residual.py", line 103, in try_load_torch_extensions
    decompress_residuals_cpp = load(
                               ^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load
    return _jit_compile(
           ^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1535, in _jit_compile
    return _import_module_from_library(name, build_directory, is_python_module)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1929, in _import_module_from_library
    module = importlib.util.module_from_spec(spec)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 573, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 1233, in create_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
```
It seems related to this:
stanford-futuredata/ColBERT#195
But not exactly the same...
Hey, quite interesting, thank you for running it again... It definitely seems like there's a very specific problem for some people on Linux+CUDA when loading the custom extension code, while it's fine for others in very similar (but likely not identical) environments. Is there an actual error raised (EOF again?) at the end of the traceback you posted, or does it just print this and stop?
Could you also run the script after exporting `CUDA_VISIBLE_DEVICES=""`, to try and help narrow it down as 100% a CUDA-related issue?
Could you also post your dependency dump and CUDA version please? cc @Anmol6 so we can try and track down exactly what the upstream compatibility issue is 🤔
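For anyone else trying this: `CUDA_VISIBLE_DEVICES` has to be set before anything initializes CUDA, so either export it in the shell before launching Python, or set it at the very top of the script. A minimal sketch:

```python
import os

# Hide all GPUs from CUDA-based libraries (torch, faiss, ...).
# This only works if it runs before any of them initialize CUDA,
# so keep it at the very top of the script, before the other imports.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```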
Sorry for the delay on this... and thanks for your new tips!
CUDA:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0
```
After exporting `CUDA_VISIBLE_DEVICES=""`, indexing finished correctly!! Search etc. also works.
Now I guess it's using the CPU with CUDA disabled, right? It's really slow, though: working with 500+ text pieces of no more than ~300 tokens each, indexing took about 10 minutes and a search takes over a minute.
Is there any limitation on CPU architectures when running on CPU? I'm using an old Dell server with an Intel Xeon E-2224G; it's normally fast enough for relatively small transformer-based models, and I use the GPU for the bigger LLMs. It's a modest setup for testing ideas.
Might be worth a shot updating CUDA to 12.x. What GPU were you trying to run this on?
In fact I was trying to run it on CPU only; I have an A2000 in the same system, but it's being used for other models. This was finally fixed by:
#120 (comment)
(Copy/pasting this message in a few related issues)
Hey guys!
Thanks a lot for bearing with me as I juggle everything and try to diagnose this. It’s complicated to fix with relatively little time to dedicate to it, as the dependencies causing issues aren’t the same for everyone, with no clear platform pattern as of yet. Overall, the issues center around the usual suspects: `faiss` and CUDA.
While because of this I can’t fix the issue with PLAID optimised indices just yet, I’m also noticing that most of the bug reports here are about relatively small collections (100s-to-low-1000s). To lower the barrier to entry as much as possible, #137 is introducing a second index format, which doesn’t actually build an index, but performs an exact search over all documents (as a stepping stone towards #110, which would use an HNSW index to be an in-between compromise between PLAID optimisation and exact search).
This approach doesn’t scale, but offers the best possible search accuracy & is still performed in a few hundred milliseconds at most for small collections. Ideally, it’ll also open up the way to shipping lower-dependency versions (#136).
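For intuition, exact search boils down to plain MaxSim late-interaction scoring over every stored document matrix, with no index structure at all. A minimal numpy sketch (the function name and shapes are my own, not the PR's actual API):

```python
import numpy as np

def maxsim_scores(query_emb, doc_embs):
    """Score every document by summed per-query-token MaxSim.

    query_emb: (q_tokens, dim); each document: (d_tokens, dim);
    all embeddings assumed L2-normalized, so dot product = cosine.
    """
    scores = []
    for d in doc_embs:
        sim = query_emb @ d.T                 # (q_tokens, d_tokens) similarities
        scores.append(sim.max(axis=1).sum())  # best doc token per query token
    return np.array(scores)
```

Every query scans every document, which is exactly why this stays maximally accurate but doesn't scale beyond small collections.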
The PR above (#137) is still a work in progress, as it needs CRUD support, tests, documentation, better precision routing (fp32/bfloat16), etc… (and potentially searching only a subset of document ids).
However, it’s working in a rough state for me locally. If you’d like to give it a try (with the caveat that it might very well break!), please feel free to install the library directly from the `feat/full_vectors_indexing` branch and add the following argument to your `index()` call:

```python
index(
    ...,
    index_type="FULL_VECTORS",
)
```
Any feedback is appreciated, as always, and thanks again!