Comments (8)
What tokenizer config from huggingface are you trying to load?
This isn't from Hugging Face, but a configuration output from LoRA finetuning with litgpt finetune lora.
{
"chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content | trim + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content | trim + ' ' + eos_token }}{% endif %}{% endfor %}",
"add_bos_token": true,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"legacy": null,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"tokenizer_class": "CodeLlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}
When you finetune, you load existing Hugging Face hub weights/tokenizer. LitGPT then copies over the tokenizer into your finetuned output so that it can be loaded in subsequent steps.
Did you manually copy over a different tokenizer or modify it yourself?
The tokenizer.py is a tiny shim over Hugging Face's tokenizers, so we haven't tried to support every possible tokenization config, just the ones used by the checkpoints we support. If you are running a "custom" tokenizer, the code will need an update to check these different fields.
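For illustration, here is a minimal sketch (not litgpt's actual code) of the kind of field handling this would require: in a config like the one above, bos_token/eos_token/unk_token are AddedToken dicts rather than plain strings, so a loader has to accept both forms. The helper name and path are hypothetical.

```python
import json
from pathlib import Path

def special_token_content(value):
    # Hypothetical helper: tokenizer_config.json may store a special token either
    # as a plain string ("<s>") or as an AddedToken dict ({"content": "<s>", ...}).
    if isinstance(value, dict):
        return value.get("content")
    return value

# Placeholder path to the finetuned checkpoint directory.
config = json.loads(Path("checkpoint_dir/tokenizer_config.json").read_text())
bos = special_token_content(config.get("bos_token"))  # "<s>"
eos = special_token_content(config.get("eos_token"))  # "</s>"
print(bos, eos, config.get("tokenizer_class"))        # <s> </s> CodeLlamaTokenizer
```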
Just saw your last message. It looks like it's treating it as an HF tokenizer instead of a SentencePiece tokenizer, so this line must be resolving to False: https://github.com/Lightning-AI/litgpt/blob/main/litgpt/tokenizer.py#L21
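Roughly, the linked line is a file-existence check that decides which backend the checkpoint gets: with no tokenizer.model present, the SentencePiece branch is skipped and the Hugging Face tokenizer.json path is taken instead. A simplified sketch of that idea (not the actual litgpt source):

```python
from pathlib import Path

def pick_tokenizer_backend(checkpoint_dir: Path) -> str:
    # Simplified illustration of the backend selection discussed above:
    # a SentencePiece tokenizer.model takes one path, an HF tokenizer.json the other.
    if (checkpoint_dir / "tokenizer.model").is_file():
        return "sentencepiece"
    if (checkpoint_dir / "tokenizer.json").is_file():
        return "huggingface"
    raise NotImplementedError(f"No supported tokenizer files found in {checkpoint_dir}")
```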
@carmocca It is true that there is no tokenizer.model; instead there are tokenizer.json and tokenizer_config.json. To clarify, this is just the output from litgpt; I didn't modify any of it or copy anything from Hugging Face.
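A quick way to confirm which of these files the finetuned output actually contains (the path below is a placeholder):

```python
from pathlib import Path

# Placeholder path to the LoRA finetuning output directory; adjust to your setup.
out_dir = Path("out/finetune/lora/final")
for name in ("tokenizer.model", "tokenizer.json", "tokenizer_config.json"):
    print(name, (out_dir / name).is_file())
```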
Which --checkpoint_dir did you use with LoRA? I can try to follow the same steps you did and see if I end up with the same error.
@carmocca I downloaded codellama/CodeLlama-7b-Instruct-hf with litgpt download and then used that checkpoint.
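For completeness, a hedged reproduction sketch of those steps, driving the CLI from Python; the checkpoint path and flag spellings are assumptions and may differ between litgpt versions:

```python
import subprocess

# Download the base checkpoint, then finetune it with LoRA, mirroring the steps
# described above. Flag names are illustrative and may vary by litgpt version.
subprocess.run(
    ["litgpt", "download", "--repo_id", "codellama/CodeLlama-7b-Instruct-hf"],
    check=True,
)
subprocess.run(
    ["litgpt", "finetune", "lora",
     "--checkpoint_dir", "checkpoints/codellama/CodeLlama-7b-Instruct-hf"],
    check=True,
)
```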