
Comments (8)

bclavie commented on July 19, 2024

Hey! I believe this is an adjacent issue to #60

Multiprocessing still seems to be causing some problems upstream. The good news is that this PR by @Anmol6, stanford-futuredata/ColBERT#294, should remove it entirely and solve at least some of those problems. It'll hopefully be merged soon, but if you want to try it out in the meantime, a workaround would be to install ColBERT directly from his branch.

from ragatouille.

bclavie commented on July 19, 2024

Hey @GMartin-dev, version 0.0.6b0 now ships with colbert-ai 0.2.18, which should eliminate the mp.manager() calls on indexing. Could you check it out and let us know if this solves your issue?
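To double-check which versions actually ended up in the active environment, a quick stdlib sketch (assuming the distribution names are `ragatouille` and `colbert-ai`, as shown in the pip output below):

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if it's absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

# Confirm the upgrade took effect in the environment you're running from.
for name in ("ragatouille", "colbert-ai"):
    print(name, installed_version(name))
```

This avoids the classic pitfall of upgrading in one virtualenv and running from another.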


GMartin-dev commented on July 19, 2024

@bclavie thanks for the tip, I just tried it. It seems that the original error is gone, but a new issue has emerged.
Now I see the plan.json file:
[screenshot of the index directory]
But it's actually missing the index files, right?

Successfully installed colbert-ai-0.2.18 ragatouille-0.0.6b0

Same command, same dependencies, just updated to the version you pointed to:

File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/RAGPretrainedModel.py", line 183, in index
    return self.model.index(
           ^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/models/colbert.py", line 349, in index
    self.indexer.index(
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 78, in index
    self.__launch(collection)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 87, in __launch
    launcher.launch_without_fork(self.config, collection, shared_lists, shared_queues, self.verbose)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/launcher.py", line 93, in launch_without_fork
    return_val = run_process_without_mp(self.callee, new_config, *args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/launcher.py", line 109, in run_process_without_mp
    return_val = callee(config, *args)
                 ^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 33, in encode
    encoder.run(shared_lists)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 68, in run
    self.train(shared_lists) # Trains centroids from selected passages
    ^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 237, in train
    bucket_cutoffs, bucket_weights, avg_residual = self._compute_avg_residual(centroids, heldout)
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 315, in _compute_avg_residual
    compressor = ResidualCodec(config=self.config, centroids=centroids, avg_residual=None)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/codecs/residual.py", line 24, in __init__
    ResidualCodec.try_load_torch_extensions(self.use_gpu)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/codecs/residual.py", line 103, in try_load_torch_extensions
    decompress_residuals_cpp = load(
                               ^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load
    return _jit_compile(
           ^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1535, in _jit_compile
    return _import_module_from_library(name, build_directory, is_python_module)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1929, in _import_module_from_library
    module = importlib.util.module_from_spec(spec)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 573, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 1233, in create_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed

It seems related to this:
stanford-futuredata/ColBERT#195

But not exactly the same...


bclavie commented on July 19, 2024

Hey, quite interesting, thank you for running it again... It definitely seems like some people on Linux+CUDA hit a very specific failure when loading the custom extension code, while others in very similar (but likely not identical) environments are fine. Is an actual error raised (EOF again?) at the end of the traceback you posted, or does it just print this and stop?

Could you also run the script after exporting CUDA_VISIBLE_DEVICES="" to help narrow down whether this is 100% a CUDA-related issue?
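For anyone who can't easily export the variable in their shell, the same thing can be done at the top of the script itself; the key detail is setting it before torch (or anything else that initializes CUDA) is imported:

```python
import os

# Hide all GPUs from CUDA-aware libraries. This must run *before* torch is
# imported, since CUDA device discovery happens at import/initialization time.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# import torch               # imported only after the variable is set
# torch.cuda.is_available()  # would now report False
```

Setting the variable after torch has already initialized CUDA has no effect, which is a common source of confusion when this workaround "doesn't work".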

Could you also post your dependency dump and CUDA version, please? cc @Anmol6 so we can try to track down exactly what the upstream compatibility issue is 🤔


GMartin-dev commented on July 19, 2024

Sorry for the delay on this... and thanks for your new tips!
CUDA:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

After exporting CUDA_VISIBLE_DEVICES="" indexing finished correctly!! Searching etc. also works.
I guess it's now running on CPU with CUDA disabled, right? It's really slow, though, really slow.
I'm working with 500+ text pieces of roughly 300 tokens each at most. Indexing took about 10 minutes, and a search takes over a minute.
Is there any limitation on CPU architectures when running on CPU?
I'm using an old Dell server with an Intel Xeon E-2224G; transformer-based models of relatively small size normally run fast enough on it, and I use the GPU for the bigger LLMs. It's a modest setup for testing ideas.


TheMcSebi commented on July 19, 2024

Might be worth a shot updating CUDA to 12.x. Which GPU were you trying to run this on?


GMartin-dev commented on July 19, 2024

In fact, I was trying to run it on CPU only; I have an A2000 in the same system, but it's being used for other models. This was finally fixed by:
#120 (comment)


bclavie commented on July 19, 2024

(Copy/pasting this message in a few related issues)

Hey guys!

Thanks a lot for bearing with me as I juggle everything and try to diagnose this. It's complicated to fix with relatively little time to dedicate to it, as the dependencies causing issues don't seem to be the same for everyone, with no clear platform pattern as of yet. Overall, the issues center around the usual suspects: faiss and CUDA.

While this means I can't fix the issue with PLAID-optimised indices just yet, I'm also noticing that most of the bug reports here concern relatively small collections (hundreds to low thousands of documents). To lower the barrier to entry as much as possible, #137 introduces a second index format, which doesn't actually build an index but performs an exact search over all documents (as a stepping stone towards #110, which would use an HNSW index as a compromise between PLAID optimisation and exact search).
This approach doesn't scale, but it offers the best possible search accuracy and still completes in at most a few hundred milliseconds for small collections. Ideally, it'll also open the way to shipping lower-dependency versions (#136).

The PR above (#137) is still a work in progress, as it needs CRUD support, tests, documentation, better precision routing (fp32/bfloat16), etc. (and potentially searching only a subset of document ids).
However, it's working in a rough state for me locally. If you'd like to give it a try (with the caveat that it might very well break!), please feel free to install the library directly from the feat/full_vectors_indexing branch and add the following argument to your index() call:

index(
    …
    index_type="FULL_VECTORS",
)

Any feedback is appreciated, as always, and thanks again!

