Comments (7)
Thanks for the feedback, Josh!
I am happy to provide checkpoints that you can use---I generally provide them to anyone who asks! I think that maintaining a set of official model releases is also a great idea, but it may take some time to maintain this publicly with instructions. Releasing embeddings, on the other hand, seems really expensive: where can one upload tens of GBs for possibly frequent downloads?
Do you think you can get the checkpoints up with supporting code on the HuggingFace model hub (it's free, and you can even create an org for Stanford)? I see Vespa has theirs up, but it might be nice to have the official one there too. I also noticed somewhere (maybe in a closed issue?) that they had to change the training procedure somehow to make it fit into Transformers. Happy to work with you on this if it helps; a rough upload sketch follows below.
The data hosting might be hard unless your university is willing to host it somewhere. I recall that docTTTTTquery hosts assets on GitLab and Dropbox.
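For concreteness, here is a minimal sketch of what pushing a checkpoint to the model hub could look like with the `huggingface_hub` client; the repo id and local path are hypothetical placeholders, not official names:

```python
# Minimal sketch: pushing a local ColBERT checkpoint to the HuggingFace
# model hub. "stanford-futuredata/ColBERT" and "./checkpoints/colbert"
# are hypothetical placeholders, not official names or paths.
from huggingface_hub import HfApi

api = HfApi()

# Create the model repo (no-op if it already exists).
api.create_repo(repo_id="stanford-futuredata/ColBERT", exist_ok=True)

# Upload the checkpoint directory (weights, config, tokenizer files).
api.upload_folder(
    folder_path="./checkpoints/colbert",
    repo_id="stanford-futuredata/ColBERT",
)
```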
We've handled checkpoint sharing by email.
For HuggingFace, I think it's cool, but I have mixed feelings about it. It would definitely ease experimentation with the model, but I don't know how it would handle things like efficient indexing and retrieval. I'll check out Vespa or DPR on HuggingFace to see. We see this repository continuing to evolve (e.g., it will soon support multiple variants of ColBERT's late interaction paradigm), and I'm not sure how easy it will be to maintain two versions.
Ultimately, I think we can help with whatever you have in mind on this repo itself. Or is there something unique to HuggingFace?
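As background for readers landing here: late interaction scores a query-document pair from token-level embeddings at search time, where each query token takes the maximum similarity over all document tokens and these maxima are summed. A minimal PyTorch sketch of this MaxSim operator follows; tensor shapes are illustrative, and this is not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def maxsim_score(Q: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
    """Late-interaction (MaxSim) scoring.

    Q: (num_query_tokens, dim)  L2-normalized query token embeddings
    D: (num_doc_tokens, dim)    L2-normalized document token embeddings
    Returns a scalar relevance score for the pair.
    """
    sim = Q @ D.T                       # all query-token x doc-token similarities
    return sim.max(dim=1).values.sum()  # best doc match per query token, summed

# Toy usage with random embeddings.
Q = F.normalize(torch.randn(32, 128), dim=-1)
D = F.normalize(torch.randn(180, 128), dim=-1)
print(maxsim_score(Q, D))
```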
HuggingFace Transformers integration would just make it easier for people to experiment outside of your repo and codebase. You would end up hosting your checkpoints on the model hub and implementing the Transformers classes necessary to support ColBERT and its variants. Training, data prep, etc. could of course still live in your repo, as well as evaluation and experimentation, but for people just wanting to run inference from a model checkpoint, Transformers is becoming the de facto method (IMO). Retrieval, indexing, etc. would remain in your repo, or people (like me) would take care of it using different infrastructure (e.g. Elasticsearch or Vespa.ai for indexing and retrieval instead of FAISS). It would just make things more modular, so we can experiment. Think about it, maybe have a look through Transformers a bit more, and see if it makes sense.
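To make that concrete, inference-only usage from the hub could look roughly like the sketch below. The checkpoint id is a placeholder, and the 128-dimensional projection mirrors the design described in the ColBERT paper rather than any official Transformers class; in a real integration its weights would be loaded from the checkpoint:

```python
# Rough sketch of inference-only, ColBERT-style token encoding via
# Transformers. "org/colbert-checkpoint" is a hypothetical id, and in
# practice the projection weights would come from the checkpoint.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("org/colbert-checkpoint")
encoder = AutoModel.from_pretrained("org/colbert-checkpoint")
proj = torch.nn.Linear(encoder.config.hidden_size, 128, bias=False)

def encode(texts):
    # Tokenize a batch of strings and run the underlying BERT encoder.
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (batch, seq, hidden)
    # Project to the low-dimensional space and L2-normalize, yielding the
    # per-token embeddings that late interaction operates on.
    return F.normalize(proj(hidden), dim=-1), inputs["attention_mask"]

q_emb, q_mask = encode(["what is late interaction?"])
```

Indexing and retrieval (FAISS, Vespa, Elasticsearch) would then consume these token embeddings however each stack prefers.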
This seems reasonable, and I will try to set it up. I'm closing this now since no immediate action is needed, but we can continue our email conversation.
Hello, could I obtain the pre-trained ColBERT models? I cannot handle such a huge training dataset with my GPUs, but I want to do some deep research based on your pretrained ColBERT.
Hello, could you provide me with the pretrained ColBERT? 😊😊
Related Issues (20)
- Set batch size when indexing HOT 3
- troubleshooting encoding performance HOT 1
- Pre-filtering the documents based on metadata before late-interaction HOT 5
- What is Colbert v1.9?
- Issue: Training "resume" and "resume_optimizer" implementation was removed
- Irrelevant results returned by the Colbert V2 Model HOT 1
- crypt.h: No such file or directory HOT 7
- Basic Training (ColBERTv1-style) -> ujson.JSONDecodeError: Expected object or value HOT 2
- How can I use "all_mpnet_base_v2" model for colbert indexing and searching?
- GPU not working while training a new model in Colab
- [rank1]:[E ProcessGroupNCCL.cpp:523] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL HOT 2
- How to set chunk_size
- Tokens in `skiplist` are not returned (masked out) but they still affect other tokens embeddings. Is this expected? HOT 2
- How to get the mapping information about doc_id with doc_content. HOT 1
- CollectionEncoder blocking on encoder N passages HOT 1
- Focusing retrieval on list of document ids with doc_ids parameter doesn't work
- type object 'ColBERT' has no attribute 'segmented_maxsim' HOT 1
- Where is the qrels.dev.small.tsv?
- How to get rid of the "Duplicate GPU detected : rank 0 and rank 1 both on CUDA device ca000" error while training the ColBERTv1.9 model? HOT 1
- Request for AMD gpu support