Comments (5)
What are you doing? I need more information.
from finetune_llms.
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py38_cu116/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Using /root/.cache/torch_extensions/py38_cu116 as PyTorch extensions root...
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 3.346489906311035 seconds
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.9891393184661865 seconds
I solved the previous problem; however, I now hit this message: ninja: no work to do.
The script stops here for a long time.
from finetune_llms.
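The warning in the log above points at two environment variables. A minimal sketch of setting both, assuming they are set before `transformers`/`tokenizers` is imported and before the DeepSpeed JIT build is triggered (otherwise they have no effect; the value `4` for MAX_JOBS is an arbitrary example):

```python
import os

# Silence the huggingface/tokenizers fork warning by disabling
# tokenizer parallelism for this process.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

# Cap the number of parallel ninja compile jobs used when PyTorch
# JIT-builds extensions such as cpu_adam (example value).
os.environ["MAX_JOBS"] = "4"
```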
Not sure. Maybe try pip install ninja.
from finetune_llms.
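Before reinstalling, one stdlib-only way to check whether the ninja binary is actually visible to PyTorch's extension builder (torch also ships `torch.utils.cpp_extension.is_ninja_available()` for the same purpose):

```python
import shutil

# shutil.which returns the resolved path if `ninja` is on PATH,
# or None if it is not installed / not visible to this process.
ninja_path = shutil.which("ninja")
print(ninja_path)
```

If this prints None, `pip install ninja` in the same environment should fix it.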
If you still haven't solved it, can you provide your ds_report?
from finetune_llms.
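ds_report is the diagnostic CLI that ships with DeepSpeed and summarizes which ops are installed and compatible. If running it is inconvenient, a rough stdlib-only stand-in (a hypothetical helper, not part of DeepSpeed) that checks the two pieces relevant to this thread:

```python
import importlib.util
import shutil

def diagnose():
    """Minimal local check: is deepspeed importable, and is the
    ninja binary (needed for JIT builds like cpu_adam) on PATH?"""
    return {
        "deepspeed_installed": importlib.util.find_spec("deepspeed") is not None,
        "ninja_on_path": shutil.which("ninja") is not None,
    }

print(diagnose())
```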
This is all JIT now and should be fine if done with the Docker image.
from finetune_llms.
Related Issues (20)
- Error while running convert_model_to_torch script
- How to make the inference of GPT-J run on multiple GPUs?
- DeepSpeedZeRoOffload initialize [end]
- [QUESTION] single_texts vs group_texts
- File: Dockerfile, Line 32
- Sends Kill to process when trying to resume a finetune on LLaMA 7B
- Running super slow on 4 A100 GPUs
- cannot import name 'GPTNeoXForCausalLM' from 'transformers'
- Can't find a valid checkpoint
- Gradient overflow when training 13B Llama model on 7 A100s
- Can't perform example_run, getting an error after DeepSpeed is initialized
- "nvcc fatal : Unsupported gpu architecture 'compute_89'" with Docker image
- Unable to find image 'gpt:latest' locally
- torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 394.00 MiB
- deepspeed>=0.5.7 is required by recent versions of the transformers package
- Incorrect block size?
- Training data format for generating scenario-based MCQs
- `RuntimeError: Error building extension 'cpu_adam'` / `AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'`
- RuntimeError: The expanded size of the tensor (50257) must match the existing size (0) at non-singleton dimension 0. Target sizes: [50257]. Tensor sizes: [0]