Comments (8)

rtaori commented on May 19, 2024

Hi @RealTong, just following up on this.
I've fixed the issue with Falcon in #47 and it's now able to run decoding (there's still a small issue, #46, that we're looking into).

Regardless, could you please pull the latest from main and see if your original GPU issue has been resolved? Just to clarify, I've tested alpaca_eval evaluate_from_model --model_configs 'falcon-7b-instruct' on my system and have verified that the GPU is being utilized.
If you're still running into a GPU issue, please feel free to open another issue and post the relevant details.

from alpaca_eval.

rtaori commented on May 19, 2024

Hi @RealTong ,
This seems to be a runtime issue rather than a GPU-not-in-use issue. The stack trace suggests a recursion error during tokenization. What is your transformers version? Could you upgrade to transformers>=4.29.2 and try again?
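
In case it helps, here is a quick way to check an installed version string against that minimum. This is a generic sketch, not part of alpaca_eval, and it assumes plain dotted version strings (no suffixes like `.dev0`):

```python
def meets_minimum(installed: str, required: str = "4.29.2") -> bool:
    """Compare dotted version strings numerically, e.g. '4.30.1' >= '4.29.2'.

    Assumes purely numeric, dot-separated components.
    """
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

print(meets_minimum("4.28.0"))  # False: below the suggested minimum
print(meets_minimum("4.29.2"))  # True: meets it exactly
```

You can get the installed version from `transformers.__version__` and pass it to this helper.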

RealTong commented on May 19, 2024

I've tried transformers>=4.29.2 and it still doesn't work. After about 3 minutes of running, it reports RecursionError: maximum recursion depth exceeded while calling a Python object.

Is there any other useful information I can provide?
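
For context, a RecursionError like this is raised whenever Python's call stack grows past sys.getrecursionlimit(); a minimal reproduction, unrelated to alpaca_eval itself:

```python
import sys

def recurse(n):
    # Unbounded recursion eventually exceeds the interpreter's limit.
    return recurse(n + 1)

print(sys.getrecursionlimit())  # typically 1000 by default

try:
    recurse(0)
except RecursionError as err:
    # The exact message text varies by Python version.
    print(type(err).__name__)  # RecursionError
```

In a tokenizer, this usually points to a method that keeps re-invoking itself rather than to a stack that is simply too small.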

rtaori commented on May 19, 2024

I see. This could be caused by a problem in your local checkpoint, which would be hard for me to debug remotely. As a sanity check, could you try running a model that we've already integrated and validated, for example falcon-7b-instruct?

alpaca_eval evaluate_from_model --model_configs 'falcon-7b-instruct'

RealTong commented on May 19, 2024

falcon-7b-instruct:
  prompt_template: "/root/code/eval/models/falcon-7B/prompt.txt"
  fn_completions: "huggingface_local_completions"
  completions_kwargs:
    model_name: "/root/.cache/LLM-Repo/model/falcon-7b-instruct/"
    model_kwargs:
      torch_dtype: 'bfloat16'
    max_new_tokens: 2048
    temperature: 0.7
    top_p: 1.0
    do_sample: True
    trust_remote_code: True
  pretty_name: "Falcon 7B Instruct"
  link: ""

I ran alpaca_eval evaluate_from_model --model_configs 'falcon-7b-instruct' and got ValueError: Loading /root/.cache/LLM-Repo/model/falcon-7b-instruct/ requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use. Then set the option trust_remote_code=True to remove this error.

Why does loading a local model execute code from the network? I've added trust_remote_code: True to the corresponding configs.yaml, but it still gives this error. How should I fix it?

rtaori commented on May 19, 2024

Ah, good point! Apologies for the oversight. I've just patched the configs with the proper flag. Could you try again?

RealTong commented on May 19, 2024

Not working :).

Reported error: TypeError: transformers.pipelines.base.infer_framework_load_model() got multiple values for keyword argument 'trust_remote_code'

Here is my configuration

falcon-7b-instruct:
  prompt_template: "/root/code/eval/models/falcon-7B/prompt.txt"
  fn_completions: "huggingface_local_completions"
  completions_kwargs:
    model_name: "/root/.cache/LLM-Repo/model/falcon-7b-instruct/"
    model_kwargs:
      torch_dtype: 'bfloat16'
      trust_remote_code: True
    max_new_tokens: 2048
    temperature: 0.7
    top_p: 1.0
    do_sample: True
  pretty_name: "Falcon 7B Instruct"
  link: ""
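
The collision behind this TypeError can be reproduced with a minimal sketch. The scenario is an assumption based on the traceback: the pipeline loader already passes trust_remote_code explicitly, so supplying it again via the unpacked model_kwargs dict makes Python see the same keyword twice (the function name and signature here are illustrative, not the real transformers internals):

```python
# Generic illustration of a keyword argument supplied both explicitly
# and again via an unpacked kwargs dict.
def load_model(model, trust_remote_code=False, **model_kwargs):
    return model

model_kwargs = {"trust_remote_code": True}  # as in the YAML config above

try:
    # The explicit keyword and the dict entry collide at call time.
    load_model("falcon-7b-instruct", trust_remote_code=True, **model_kwargs)
except TypeError as err:
    print(err)  # ... got multiple values for keyword argument 'trust_remote_code'
```

If this is what is happening, the fix on the alpaca_eval side would be to pass the flag in only one place.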

rtaori commented on May 19, 2024

Ok very interesting, I'll need to investigate this issue more when I get my GPU machines back. Currently our cluster is down for the week, so I'll provide an update next week.

Since this issue is specific to Falcon, let's try another model that doesn't require the trust_remote_code flag. Can you try:

alpaca_eval evaluate_from_model --model_configs 'oasst-sft-pythia-12b'

Apologies for the back and forth on resolving this.
