Comments (7)
Setting --data.val_split_fraction 0.1 fixed the issue.
I even updated the dataset to add one more level of indentation to the {} objects so it matches the .json object formatting given in litgpt's example, and I still get the same error.
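For what it's worth, here is a minimal sketch of how I understand the expected layout (assuming the Alpaca-style instruction/input/output keys; the file name and contents below are made up). Since json.load ignores whitespace, the extra indentation shouldn't matter either way:

```python
import json

# Hypothetical sample in the Alpaca-style layout (instruction/input/output);
# the text here is invented, only the key names matter.
samples = [
    {
        "instruction": "Summarize the conversation.",
        "input": "Speaker 0: Hi there. Speaker 1: Hello!",
        "output": "Two speakers greet each other.",
    },
]

# Indentation is purely cosmetic: json.load() returns the same objects
# no matter how the file is indented.
with open("my_dataset.json", "w") as f:
    json.dump(samples, f, indent=4)

with open("my_dataset.json") as f:
    print(list(json.load(f)[0].keys()))  # ['instruction', 'input', 'output']
```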
Running a test load of the default Alpaca dataset with the command litgpt finetune lora --data Alpaca --checkpoint_dir checkpoints/mobiuslabsgmbh/aanaphi2-v0.1
runs fine, so I think the problem is my dataset. I can't find any formatting differences between the default dataset and mine, so I'm kind of at a loss here.
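A sanity check along these lines might help spot a difference (rough sketch only; my_dataset.json is a placeholder path and the expected keys are an assumption based on the Alpaca layout):

```python
import json

EXPECTED = {"instruction", "input", "output"}  # assumed Alpaca-style keys

with open("my_dataset.json") as f:  # placeholder path
    data = json.load(f)

# The file should hold a single top-level list of objects.
assert isinstance(data, list), f"top-level value is {type(data).__name__}, expected a list"

for i, record in enumerate(data):
    missing = EXPECTED - record.keys()
    non_strings = [k for k, v in record.items() if not isinstance(v, str)]
    if missing or non_strings:
        print(f"record {i}: missing keys {sorted(missing)}, non-string values {non_strings}")
```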
I would like to add that my dataset grows exponentially in size because it is a speaker-diarization-style dataset. To keep the full context of the conversation, each time the Speaker 0 and Speaker 1 name tags change, the output is loaded back into the "input" field and the text keeps accumulating until the speakers change again, for example:
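(The snippet below is an invented illustration of that pattern, not my actual data:)

```python
# Invented illustration only: each new record's "input" carries the whole
# conversation so far, and the "output" is the next speaker's turn.
examples = [
    {"instruction": "Continue the conversation.",
     "input": "Speaker 0: hello",
     "output": "Speaker 1: hi, how are you?"},
    {"instruction": "Continue the conversation.",
     "input": "Speaker 0: hello Speaker 1: hi, how are you?",
     "output": "Speaker 0: doing well, thanks"},
    # ...and so on: every speaker change appends the previous output to the next input.
]
```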
As you can imagine, it grows very quickly. This makes me wonder whether the earlier error has to do with context-length limits or something similar when the dataset is first loaded, since the finetuning scripts determine the block size from the longest tokenized sample in the dataset,
and the files do get quite long. I also tried truncating the dataset earlier with --train.max_seq_length 256, but it made no difference.
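In case it helps anyone reproduce this, a rough way to check how long the samples actually get (character counts as a crude stand-in for token counts; the path is a placeholder):

```python
import json

with open("my_dataset.json") as f:  # placeholder path
    data = json.load(f)

# Crude proxy for tokenized length: total characters across a record's fields.
lengths = sorted(
    len(r.get("instruction", "")) + len(r.get("input", "")) + len(r.get("output", ""))
    for r in data
)

print(f"{len(data)} records, shortest {lengths[0]} chars, longest {lengths[-1]} chars")
```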
Same for me. I don't think size matters, as it failed for me with a really small dataset containing only a few records.
@rasbt can you check this out? There might be a bug in the json datamodule.
#1241 improves the messaging