stanleylsx / llms_tool
A training and evaluation toolkit for large language models, built on HuggingFace. It supports web UI and terminal inference for a range of models, parameter-efficient and full-parameter training (pretraining, SFT, RM, PPO, DPO), as well as weight merging and quantization.
License: Apache License 2.0
Does the project currently only support merging LoRA weights? How should prompt tuning and prefix tuning weights be merged and loaded? Running merge_peft_model on a model fine-tuned with prompt tuning or prefix tuning fails with:
AttributeError: 'QWenLMHeadModel' object has no attribute 'merge_and_unload'
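merge_and_unload exists only for adapters that learn weight deltas, such as LoRA; prompt tuning and prefix tuning learn virtual tokens instead, so there is nothing to fold back into the base weights. A minimal sketch of using such an adapter at inference time without merging, with a placeholder checkpoint path (not a path from this repo):

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Attach the prompt/prefix-tuning adapter to the frozen base model and use it directly.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "path/to/prompt_tuning_checkpoint")  # placeholder path
model.eval()  # generate with this PeftModel as-is; do not call merge_and_unload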
What is the difference between instruction and input?
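In the Alpaca-style SFT format that toolkits like this typically consume (the exact schema here is an assumption, not confirmed from the repo), instruction describes the task while input carries the optional, sample-specific content the task applies to; the two are concatenated into one prompt:

# A hypothetical Alpaca-style sample; field names are illustrative.
sample = {
    "instruction": "Translate the following sentence into English.",  # what to do
    "input": "你是谁",  # what to apply it to (may be an empty string)
    "output": "Who are you?",  # the target response
}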
The World model is better than Raven, as its tokenizer is far more efficient.
See also this repo:
https://github.com/StarRing2022/RingRWKV
For technical details, please join QQ group 325154699.
Could you try building a WebUI for training?
The current training tools are all rather hard to use.
Hi, how much GPU resource does pretraining the llama2-7B model require?
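As a rough order-of-magnitude answer (an estimate, not a measurement): full-parameter training with mixed-precision AdamW costs about 16 bytes per parameter (fp16 weights + fp16 grads + fp32 master weights + two fp32 optimizer moments), before activations:

# Back-of-the-envelope memory estimate for full-parameter llama2-7B training.
params = 7e9
bytes_per_param = 2 + 2 + 4 + 8  # fp16 weights, fp16 grads, fp32 master copy, fp32 Adam moments
print(params * bytes_per_param / 2**30, "GiB before activations")  # ~104 GiB

So full pretraining needs multiple GPUs (e.g. sharded with ZeRO), while LoRA-style training of the same model fits far more modest hardware.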
File "/root/anaconda3/envs/python3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
File "/root/anaconda3/envs/python3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
File "/root/anaconda3/envs/python3.10/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 157, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
File "/root/anaconda3/envs/python3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop. 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 55 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging.
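This failure mode usually comes from running DDP together with reentrant gradient checkpointing, which fires the parameter-ready hooks more than once. A minimal sketch of the common workaround, assuming the model is a transformers PreTrainedModel on transformers >= 4.35:

# Non-reentrant checkpointing avoids marking the same parameter ready twice under DDP.
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})

Alternatively, as the error message itself suggests, a static-graph DDP setup (ddp_model._set_static_graph()) can work around it when the module graph does not change across iterations.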
The following error is raised:
ValueError: DeepSpeed Zero-3 is not compatible with low_cpu_mem_usage=True or with passing a device_map.
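Under ZeRO-3 the weights are partitioned across ranks at load time, so from_pretrained must be called without a device_map and without low_cpu_mem_usage=True. A minimal sketch, with a placeholder config filename; constructing TrainingArguments with the deepspeed config before loading lets transformers detect ZeRO-3:

from transformers import AutoModelForCausalLM, TrainingArguments

args = TrainingArguments(output_dir="checkpoints", deepspeed="ds_zero3.json")  # placeholder config
# With ZeRO-3 active, pass neither device_map="auto" nor low_cpu_mem_usage=True:
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")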
Hi, why isn't the eval loss printed during SFT training? Besides adding eval_dataset here, is there anything else to watch out for?
https://github.com/StanleyLsx/llms_tool/blob/main/engines/train.py#L203
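For the HF Trainer to report an eval loss, it needs both an eval_dataset and an evaluation schedule. A minimal sketch, assuming engines/train.py builds a standard transformers Trainer (argument names per transformers ~4.x):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="checkpoints",
    evaluation_strategy="steps",  # run evaluation every eval_steps
    eval_steps=100,
    logging_steps=10,
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)

Without evaluation_strategy set (it defaults to "no"), the Trainer never calls evaluate(), so no eval loss appears even if eval_dataset is passed.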
Traceback (most recent call last):
  File "/root/retrieve/prm/exp0_test0/main.py", line 46, in <module>
    train.supervised_fine_tuning(test=True)
  File "/root/retrieve/prm/exp0_test0/engines/train.py", line 271, in supervised_fine_tuning
    gen_kwargs = self.data_manager.generating_args_preprocess(gen_kwargs)
AttributeError: 'DataManager' object has no attribute 'generating_args_preprocess'
In expand_vocab, add_new_tokens inserts the new tokens into tokens_trie, which only applies to slow tokenizers: with a fast tokenizer, tokenize goes straight through encode_plus into the Rust backend, so the added tokens are ignored even though they were registered. This happens, for example, when you download llama2-xxx-hf:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("d:/models/Llama-2-7b-chat-hf")  # fast tokenizer by default
tokenizer.add_tokens("谁")
print(tokenizer.tokenize("你是谁"))

You will find the added token has no effect.
Suggested changes (either one works):
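The reporter's concrete suggestions were not captured above; as one illustrative workaround (an assumption, not the author's fix), fall back to the slow tokenizer so added tokens participate in tokenization:

from transformers import AutoTokenizer

# use_fast=False selects the Python (slow) tokenizer, whose tokens_trie honors add_tokens.
tokenizer = AutoTokenizer.from_pretrained("d:/models/Llama-2-7b-chat-hf", use_fast=False)
tokenizer.add_tokens("谁")
print(tokenizer.tokenize("你是谁"))  # the added token should now take effect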
Could you set up a WeChat group or share some contact info? I'd like to ask you a few questions.
Thanks for this project's very clean and well-structured code; it is a pleasure to read and modify.
However, during DPO training my loss and rewards/chosen stay like the following the whole time. Is this normal?
{'loss': 0.6931, 'learning_rate': 9.99231529256779e-06, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -445.9979248046875, 'logps/chosen': -30.411256790161133, 'logits/rejected': 2.6535236835479736, 'logits/chosen': 1.1344398260116577, 'epoch': 0.0}
{'loss': 0.6931, 'learning_rate': 9.991217477220333e-06, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -200.39610290527344, 'logps/chosen': -33.55436706542969, 'logits/rejected': 1.5881015062332153, 'logits/chosen': 3.952385187149048, 'epoch': 0.0}
{'loss': 0.6931, 'learning_rate': 9.990119661872873e-06, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -155.58612060546875, 'logps/chosen': -40.69193649291992, 'logits/rejected': 2.194831132888794, 'logits/chosen': 1.8327357769012451, 'epoch': 0.0}
{'loss': 0.6931, 'learning_rate': 9.989021846525415e-06, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -258.0289306640625, 'logps/chosen': -41.872779846191406, 'logits/rejected': 4.575325965881348, 'logits/chosen': 0.9270402789115906, 'epoch': 0.0}
{'loss': 0.6931, 'learning_rate': 9.987924031177957e-06, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -319.283203125, 'logps/chosen': -37.61365509033203, 'logits/rejected': 2.6409215927124023, 'logits/chosen': 2.5549163818359375, 'epoch': 0.0}
{'loss': 0.6931, 'learning_rate': 9.986826215830499e-06, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -215.65521240234375, 'logps/chosen': -34.17694091796875, 'logits/rejected': 5.640789985656738, 'logits/chosen': 0.18884596228599548, 'epoch': 0.0}
{'loss': 0.6931, 'learning_rate': 9.986826215830499e-06, 'rewards/chosen': 0.0, 'rewards/rejected': 0.0, 'rewards/accuracies': 0.0, 'rewards/margins': 0.0, 'logps/rejected': -422.649658203125, 'logps/chosen': -43.95429992675781, 'logits/rejected': 1.0700151920318604, 'logits/chosen': 1.0940637588500977, 'epoch': 0.0}
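For reference, 0.6931 is exactly ln 2, the DPO loss when the policy and the reference assign identical log-probabilities, so every reward margin is zero; that is expected at step 0, but if it never moves, the trainable parameters (e.g. the LoRA adapter) are likely not receiving gradients. A quick check of the constant:

import math

# DPO loss with zero margin: -log(sigmoid(0)) = -log(0.5) = ln 2.
print(-math.log(0.5))  # 0.6931471805599453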