Comments (13)
I ran into this problem too; my GPU is a V100. So far I've traced it to this line in modeling_chatGLM::SelfAttention::forward():
output = self.dense(context_layer)
The output contains inf and -inf values. self.dense is of type <class 'bitsandbytes.nn.modules.Linear8bitLt'>, so at first glance it looks like an int8 quantization issue, but the implementation inside Linear8bitLt is complex and I haven't worked through it yet. The root cause is still unknown; this has had me stuck for two days.
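The debugging step described above (checking a projection layer's output for non-finite values) can be sketched with a PyTorch forward hook. This is a minimal illustration, not the thread author's actual code; a plain nn.Linear stands in for the model's Linear8bitLt dense layer:

```python
import torch
import torch.nn as nn

def make_finite_checker(name, log):
    """Forward hook that records how many inf/-inf/NaN values a module emits."""
    def hook(module, inputs, output):
        bad = (~torch.isfinite(output)).sum().item()
        if bad:
            log.append((name, bad))
    return hook

events = []
# Toy stand-in for SelfAttention.dense; on the real model you would loop over
# model.named_modules() and attach the hook to each Linear8bitLt layer.
layer = nn.Linear(4, 4)
layer.register_forward_hook(make_finite_checker("dense", events))

x = torch.tensor([[1.0, 2.0, float("inf"), 3.0]])
_ = layer(x)  # the inf in the input propagates, so the hook records an event
```

On the real model, the first layer name that appears in `events` tells you where the non-finite values originate, which is how one would confirm the dense projection is the culprit.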
I solved this: enable fp16 in the training script and set load_in_8bit to False when loading the model, and training then proceeds normally. I haven't figured out the underlying cause either.
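As a sketch of the two configurations discussed in this thread (a hypothetical helper; the actual loading call in the repo may differ), the change is confined to the from_pretrained keyword arguments:

```python
import torch

def chatglm_load_kwargs(use_fp16: bool) -> dict:
    """Build from_pretrained kwargs for the two setups from this thread:
    fp16 without 8-bit quantization (reported working), or the
    bitsandbytes int8 path (reported to give loss == 0 on V100/P40)."""
    if use_fp16:
        # Working configuration: half precision, no int8 quantization.
        return {"load_in_8bit": False, "torch_dtype": torch.float16}
    # Problematic configuration on these GPUs.
    return {"load_in_8bit": True}

# Usage (sketch, not run here):
# model = AutoModel.from_pretrained(
#     "THUDM/chatglm-6b", trust_remote_code=True,
#     **chatglm_load_kwargs(use_fp16=True))
```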
from chatglm-tuning.
@phantommlin Hello, have you solved the loss=0 problem?
Not yet.
It may be hardware-related: #19
I trained on a P40, and with batch_size=1 the loss is also 0. Have you solved this?
{"epoch": 0.0, "learning_rate": 1.9980769230769233e-05, "loss": 0.0, "step": 50},
{"epoch": 0.0, "learning_rate": 1.9961538461538464e-05, "loss": 0.0, "step": 100},
{"epoch": 0.0, "learning_rate": 1.9942307692307695e-05, "loss": 0.0, "step": 150}
Update: with batch_size=2, the loss is nonzero at step 50 but 0 for every step afterwards; this feels like a bug.
{"epoch": 0.0, "learning_rate": 1.9980769230769233e-05, "loss": 1.6446, "step": 50},
{"epoch": 0.0, "learning_rate": 1.9961538461538464e-05, "loss": 0.0, "step": 100},
{"epoch": 0.01, "learning_rate": 1.9942307692307695e-05, "loss": 0.0, "step": 150},
{"epoch": 0.01, "learning_rate": 1.9923076923076926e-05, "loss": 0.0, "step": 200}
Enabling fp16 with load_in_8bit=False gives the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
Disabling fp16 with load_in_8bit=True runs normally, but the loss stays at 0.
> Enabling fp16 with load_in_8bit=False gives the following error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
> Disabling fp16 with load_in_8bit=True runs normally, but the loss stays at 0.

Problem solved: after updating peft, training works.
@SizhaoXu Bro, does the "disable fp16, load_in_8bit=True, runs normally but loss stays at 0" setup actually work for you?
> Enabling fp16 with load_in_8bit=False gives the following error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
> Disabling fp16 with load_in_8bit=True runs normally, but the loss stays at 0. Problem solved: after updating peft, training works.

Hi, which version did you update peft to? I'm already on v0.2.0.
> Enabling fp16 with load_in_8bit=False gives the following error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
> Disabling fp16 with load_in_8bit=True runs normally, but the loss stays at 0. Problem solved: after updating peft, training works.

Which peft version did you update to?
Do you mean updating peft to the latest version, 0.3.0.dev0?
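To check whether an installed peft is at least as new as the prerelease mentioned above, a plain release-tuple comparison can be sketched. This deliberately ignores most of PEP 440 (a real check would use packaging.version); the installed version string itself can be read with importlib.metadata.version("peft"):

```python
import re

def release_tuple(v: str) -> tuple:
    """Extract the numeric release segments of a version string,
    dropping dev/rc suffixes: '0.3.0.dev0' -> (0, 3, 0). Crude on
    purpose; not a full PEP 440 parser."""
    parts = []
    for p in v.split("."):
        m = re.match(r"\d+", p)
        if not m:
            break  # stop at the first non-numeric segment, e.g. 'dev0'
        parts.append(int(m.group()))
    return tuple(parts)

# The 0.3.0.dev0 prerelease discussed above sorts after 0.2.0:
assert release_tuple("0.3.0.dev0") > release_tuple("0.2.0")
```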
Enabling fp16 with load_in_8bit=False gives the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
How do I resolve this error?
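The error above means one matmul operand lives on cuda:0 while the other is still on the CPU, typically because the model (or a LoRA adapter's weights) was not moved to the same device as the inputs. A minimal, CPU-only illustration of the general fix, assuming torch:

```python
import torch

def matmul_same_device(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Move b onto a's device before multiplying, mirroring the usual fix
    of calling model.to(device) / batch.to(device) consistently."""
    if b.device != a.device:
        b = b.to(a.device)  # e.g. cpu -> cuda:0 in the reported error
    return a @ b

a = torch.ones(2, 3)
b = torch.ones(3, 2)
out = matmul_same_device(a, b)  # on a GPU setup, a would live on cuda:0
```

In practice the fix is to find the tensor that never left the CPU (inspect `p.device` over `model.parameters()` and over the batch) rather than to move operands ad hoc inside the forward pass.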