ingest.py - versioning (casalioy, 12 comments, closed)

RHinDFIR commented on June 8, 2024
ingest.py - versioning

from casalioy.

Comments (12)

hippalectryon-0 commented on June 8, 2024

Note for people who have the same issue: the actual problem you had comes from this line:

error loading model: this format is no longer supported (see ggerganov/llama.cpp#1305)

This is because you're using (as the README suggested) the old q4 format instead of q5. We'll adjust the README.
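
If you're not sure which container format a local .bin actually is, the magic bytes at the start of the file tell you. The constants below are from llama.cpp's headers of that era, and the helper itself is only an illustrative sketch:

```python
import struct

# File magics used by llama.cpp at the time (uint32, little-endian on disk);
# constants per llama.cpp's headers of that era, listed for illustration only
MAGICS = {
    0x67676D6C: "ggml (unversioned, oldest)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (mmap-able; a version uint32 follows the magic)",
}

def detect_ggml_magic(path: str) -> str:
    """Describe a model file's container from its first four bytes."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
```

Note that ggjt v1 and the later ggjt revisions share the same magic; for those files the uint32 right after the magic carries the format version, and v1 is what newer llama.cpp builds refuse to load.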

RHinDFIR commented on June 8, 2024

I stumbled on the same thread and re-edited my comment before closing this issue.

Everything is running smoothly and ingestion is now super fast. Keep up the awesome work!

su77ungr commented on June 8, 2024

What version of main are you running? We changed the runners to modules inside ./casalioy, so your env should at least list those. You're also likely missing /casalioy/ask_libgen.py.

We had an issue with a PR, so I had to revert some earlier changes. Aside from the GUI, this version of main should be stable.

RHinDFIR commented on June 8, 2024

> What version of main are you running? We changed the runners to modules inside ./casalioy, so your env should at least list those. You're also likely missing /casalioy/ask_libgen.py.
>
> We had an issue with a PR, so I had to revert some earlier changes. Aside from the GUI, this version of main should be stable.

I've cloned the most recent main commit and I'm running through the setup now. I'll let you know how it goes.

RHinDFIR commented on June 8, 2024

Side note - the .env example in the README.md doesn't reflect the suggested models to download and use:

(screenshot of the README's .env example)

Nothing huge, just thought I should point it out

su77ungr commented on June 8, 2024

Oh feel free to PR such things and I'll commit ASAP.

Are you running fine again?

RHinDFIR commented on June 8, 2024

Will do! I'm just heading home and will take a look at it once I get back.

"python -m pip install --force streamlit sentence_transformers" is taking quite some time to run, so I'll hopefully have some good news once I am home 👍🏻

hippalectryon-0 commented on June 8, 2024

Jumping in late: you were just missing USE_MOCK=false/true in your .env file

RHinDFIR commented on June 8, 2024

Giving this another shot this morning. The installation completed, and I modified my .env to the following before attempting to run ./casalioy/ingest.py:

# Generic
MODEL_N_CTX=1024
TEXT_EMBEDDINGS_MODEL=models/ggjt-v1-vic7b-uncensored-q4_0.bin
TEXT_EMBEDDINGS_MODEL_TYPE=LlamaCpp  # LlamaCpp or HF
USE_MLOCK=true

# Ingestion
PERSIST_DIRECTORY=db
DOCUMENTS_DIRECTORY=source_documents
INGEST_CHUNK_SIZE=500
INGEST_CHUNK_OVERLAP=50

# Generation
MODEL_TYPE=LlamaCpp # GPT4All or LlamaCpp
MODEL_PATH=models/ggml-vic7b-q5_1.bin
MODEL_TEMP=0.8
MODEL_STOP=[STOP]
CHAIN_TYPE=stuff

I was then hit with this error:

(casalioy-py3.10) user@DESKTOP-MPA3RT3:/mnt/h/LLM/CASALIOY$ python casalioy/ingest.py
llama.cpp: loading model from models/ggjt-v1-vic7b-uncensored-q4_0.bin
llama_model_load_internal: format     = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab    = 32001
llama_model_load_internal: n_ctx      = 1024
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 7B
error loading model: this format is no longer supported (see https://github.com/ggerganov/llama.cpp/pull/1305)
llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 78, in validate_environment
    values["client"] = Llama(
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/llama_cpp/llama.py", line 161, in __init__
    assert self.ctx is not None
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/h/LLM/CASALIOY/casalioy/ingest.py", line 148, in <module>
    main(sources_directory, cleandb)
  File "/mnt/h/LLM/CASALIOY/casalioy/ingest.py", line 142, in main
    ingester.ingest_from_directory(sources_directory, chunk_size, chunk_overlap)
  File "/mnt/h/LLM/CASALIOY/casalioy/ingest.py", line 115, in ingest_from_directory
    encode_fun = get_embedding_model()[1]
  File "/mnt/h/LLM/CASALIOY/casalioy/load_env.py", line 44, in get_embedding_model
    model = LlamaCppEmbeddings(model_path=text_embeddings_model, n_ctx=model_n_ctx)
  File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1102, in pydantic.main.validate_model
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 98, in validate_environment
    raise NameError(f"Could not load Llama model from path: {model_path}")
NameError: Could not load Llama model from path: models/ggjt-v1-vic7b-uncensored-q4_0.bin
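
From the traceback, load_env.py picks the embeddings backend off TEXT_EMBEDDINGS_MODEL_TYPE. Something along these lines (the class names are from the traceback; the dispatch logic itself is my guess):

```python
def pick_embeddings_backend(model_type: str) -> str:
    """Guess at load_env.py's get_embedding_model dispatch (illustrative only)."""
    if model_type == "LlamaCpp":
        # Wants a GGML-format file llama.cpp itself can load,
        # which is where the q4_0 format error above comes from
        return "LlamaCppEmbeddings"
    if model_type == "HF":
        # Wants a sentence-transformers model name or local directory
        return "HuggingFaceEmbeddings"
    raise ValueError(f"unknown TEXT_EMBEDDINGS_MODEL_TYPE: {model_type}")
```

So with TEXT_EMBEDDINGS_MODEL_TYPE=LlamaCpp, the old q4_0 file gets handed straight to llama.cpp, which is exactly where it dies.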

I cloned all-MiniLM-L6-v2 to my root CASALIOY directory, matched my .env file to the one in the README.md, and ended up with this:

(casalioy-py3.10) user@DESKTOP-MPA3RT3:/mnt/h/LLM/CASALIOY$ python casalioy/ingest.py
Traceback (most recent call last):
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 446, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/torch/serialization.py", line 815, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/torch/serialization.py", line 1033, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.

During handling of the above exception, another exception occurred:


Traceback (most recent call last):
  File "/mnt/h/LLM/CASALIOY/casalioy/ingest.py", line 148, in <module>
    main(sources_directory, cleandb)
  File "/mnt/h/LLM/CASALIOY/casalioy/ingest.py", line 142, in main
    ingester.ingest_from_directory(sources_directory, chunk_size, chunk_overlap)
  File "/mnt/h/LLM/CASALIOY/casalioy/ingest.py", line 115, in ingest_from_directory
    encode_fun = get_embedding_model()[1]
  File "/mnt/h/LLM/CASALIOY/casalioy/load_env.py", line 41, in get_embedding_model
    model = HuggingFaceEmbeddings(model_name=text_embeddings_model)
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/langchain/embeddings/huggingface.py", line 54, in __init__
    self.client = sentence_transformers.SentenceTransformer(
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 95, in __init__
    modules = self._load_sbert_model(model_path)
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 840, in _load_sbert_model
    module = module_class.load(os.path.join(model_path, module_config['path']))
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 137, in load
    return Transformer(model_name_or_path=input_path, **config)
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 29, in __init__
    self._load_model(model_name_or_path, config, cache_dir)
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 49, in _load_model
    self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 467, in from_pretrained
    return model_class.from_pretrained(
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2542, in from_pretrained
    state_dict = load_state_dict(resolved_archive_file)
  File "/mnt/h/LLM/CASALIOY/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 451, in load_state_dict
    raise OSError(
OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run `git lfs install` followed by `git lfs pull` in the folder you cloned.

Am I missing something super simple here? I ran the git commands suggested in the last error and was hit with:

(casalioy-py3.10) user@DESKTOP-MPA3RT3:/mnt/h/LLM/CASALIOY$ git lfs install
fatal: 'lfs' appears to be a git command, but we were not
able to execute it. Maybe git-lfs is broken?
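
For what it's worth, that "invalid load key, 'v'" is characteristic of torch trying to unpickle a Git LFS pointer stub: the stray 'v' would be the first byte of the pointer's "version https://git-lfs.github.com/spec/v1" header. A quick check (hypothetical helper, not part of casalioy):

```python
LFS_HEADER = b"version https://git-lfs.github.com/spec/"

def is_lfs_pointer(path: str) -> bool:
    """True if the file is a Git LFS pointer stub instead of real weights."""
    with open(path, "rb") as f:
        return f.read(len(LFS_HEADER)) == LFS_HEADER
```

If pytorch_model.bin in the cloned all-MiniLM-L6-v2 folder is only a few hundred bytes and this returns True, the actual weights were never fetched.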

RHinDFIR commented on June 8, 2024

Ok, I think I may have sorted it.

I addressed the following issue by referring to this thread:

(casalioy-py3.10) user@DESKTOP-MPA3RT3:/mnt/h/LLM/CASALIOY$ git lfs install
fatal: 'lfs' appears to be a git command, but we were not
able to execute it. Maybe git-lfs is broken?

I installed git-lfs and then ran "git lfs pull" in the "all-MiniLM-L6-v2" repo I pulled from HuggingFace. Running ingest.py now gives me:

(casalioy-py3.10) user@DESKTOP-MPA3RT3:/mnt/h/LLM/CASALIOY$ python casalioy/ingest.py
Scanning files
regex.txt
Processing 1211 chunks
Creating a new collection, size=384
Saving 1000 chunks
   0.0% [>      

The README.md file definitely needs updating, and I'll see if I can get to it later this afternoon.
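
For anyone following along, the embeddings side of my .env now looks roughly like this (model name per the README; the generation keys are unchanged from my earlier paste):

```ini
# Embeddings via HuggingFace sentence-transformers
TEXT_EMBEDDINGS_MODEL=all-MiniLM-L6-v2
TEXT_EMBEDDINGS_MODEL_TYPE=HF  # LlamaCpp or HF

# Generation stays on the newer q5_1 GGML model
MODEL_TYPE=LlamaCpp # GPT4All or LlamaCpp
MODEL_PATH=models/ggml-vic7b-q5_1.bin
```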

hippalectryon-0 commented on June 8, 2024

I'm not sure I follow; isn't that a git-lfs issue? (git-lfs isn't used in the README, nor required)

PS: the part of the README about downloading models will be gone when #61 is merged

Edit: I just reread your original problem: you didn't actually mention using git lfs anywhere in the first place, so maybe it's just the error message that misled you. There's no need to use it.

RHinDFIR commented on June 8, 2024

> I'm not sure I follow; isn't that a git-lfs issue? (git-lfs isn't used in the README, nor required)
>
> PS: the part of the README about downloading models will be gone when #61 is merged

Beat me to it, I saw your edits as I was typing out my response about the models in the README.
