
Comments (3)

AHPUymhd commented on June 12, 2024

In the example, the fine-tuned weights end up in the checkpoint-1000 folder under the model output path. After running the fine-tuning as shown in the example, my output path is different: runs/Jan27_01-06-17_autodl-container-049a448514-394ad272/, and the files in it are different too. [screenshots] How should this be handled?

Did you ever solve this in the end? I'm running into the same problem.


CharlieZZss commented on June 12, 2024


Here is how I set my fine-tuning parameters:
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=-100,
    pad_to_multiple_of=None,
    padding=False
)
args = TrainingArguments(
    output_dir="/root/autodl-tmp/huan_dataset/output",  # with a relative path the checkpoint folders were not created
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    logging_steps=5,
    num_train_epochs=1,
    save_strategy='steps',
    save_steps=10,
    learning_rate=1e-4,
    # gradient_checkpointing=True,  # uncommenting this line raises an error
)
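The Trainer call itself is not shown in the thread; a minimal sketch of how these settings would be wired together, assuming a tokenized dataset named tokenized_ds (a hypothetical name) has already been prepared:

trainer = Trainer(
    model=model,                 # the PEFT-wrapped model being fine-tuned
    args=args,                   # the TrainingArguments above
    train_dataset=tokenized_ds,  # hypothetical: a dataset already tokenized for ChatGLM3
    data_collator=data_collator,
)
trainer.train()

With save_strategy='steps' and save_steps=10, Trainer should write checkpoint-10, checkpoint-20, ... folders under output_dir once training reaches those steps.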
And this is the code for loading the fine-tuned model:
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForSeq2Seq, TrainingArguments, Trainer
import torch

tokenizer = AutoTokenizer.from_pretrained("/root/autodl-tmp/ZhipuAI/chatglm3-6b", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("/root/autodl-tmp/ZhipuAI/chatglm3-6b", trust_remote_code=True, low_cpu_mem_usage=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
p_model = PeftModel.from_pretrained(model, model_id="/root/autodl-tmp/huan_dataset/output/checkpoint-400/")  # load the trained LoRA weights
ipt = tokenizer("<|system|>\n现在你要扮演皇帝身边的女人--甄嬛\n<|user|>\n {}\n{}".format("你是谁?", "").strip() + "<|assistant|>\n", return_tensors="pt").to(model.device)
tokenizer.decode(p_model.generate(**ipt, max_length=128, do_sample=True)[0], skip_special_tokens=True)
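As a quick sanity check before loading, the checkpoint folder should contain PEFT adapter files (typically adapter_config.json plus an adapter weight file), which is what PeftModel.from_pretrained expects; a small sketch, assuming the path used above:

import os

ckpt_dir = "/root/autodl-tmp/huan_dataset/output/checkpoint-400/"
print(sorted(os.listdir(ckpt_dir)))  # look for adapter_config.json and the adapter weights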


AHPUymhd commented on June 12, 2024


Thanks, my friend!! One question though: does your run produce checkpoint-style output directly? Why does mine only produce output like events.out.tfevents.1709949130.autodl-container-7d27418359-b92bcb1c.3146.0? Could you please post your full code? I followed the tutorial as well.
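For context, files named events.out.tfevents.* are TensorBoard event logs, not model checkpoints: by default Trainer writes them to output_dir/runs/<datetime>_<hostname>/, which matches the runs/Jan27_... path seen above, while checkpoints only appear as output_dir/checkpoint-<step> folders when a save strategy is configured and training actually reaches a save step. A minimal sketch for telling the two apart, assuming the output path from the settings above:

import os

output_dir = "/root/autodl-tmp/huan_dataset/output"  # path from the settings above
for name in sorted(os.listdir(output_dir)):
    if name.startswith("checkpoint-"):
        print("checkpoint:", name)        # saved adapter/model state, loadable with PeftModel
    elif name == "runs":
        print("TensorBoard logs:", name)  # events.out.tfevents.* files live in here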

