Comments (5)
Hi @AlongWY - part of the issue is the wide range of wheels we would need to publish to support different torch and cuda/rocm/etc. versions. We have provided dockerfiles in the past, but the same question of which matrix to test and support applies there too, and it is quite difficult. That's not to say we won't add this in the future, but choosing which combinations to provide wheels for is hard, since everyone will want their own combination supported.
Yes, so I compiled with almost all ops enabled; only evoformer and sparse attention failed to compile. I used the matrix below, which might be a good starting point for auto-building wheels (see the sketch after the build options).
matrix:
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
torch-version: ["1.12.1", "1.13.1", "2.0.1", "2.1.2", "2.2.1"]
cuda-version: ["11.8.0", "12.1.1"]
build options:
TORCH_CUDA_ARCH_LIST="6.1;7.0;7.5;8.0;8.6;8.9;9.0"
DS_BUILD_OPS=1
DS_BUILD_SPARSE_ATTN=0 # not supported on torch 2.0
DS_BUILD_EVOFORMER_ATTN=0 # seems to need a real CUDA device at build time? may be a bug
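For illustration, here is a minimal sketch of driving an auto-build over this matrix. The matrix values and env vars are the ones quoted above; the pip invocation is an assumption, and a real CI job would run each cell in a container with the matching python/torch/CUDA toolchain installed rather than just printing commands:

```python
from itertools import product

# Matrix and build options quoted above.
PYTHONS = ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
TORCHES = ["1.12.1", "1.13.1", "2.0.1", "2.1.2", "2.2.1"]
CUDAS = ["11.8.0", "12.1.1"]
BUILD_ENV = {
    "TORCH_CUDA_ARCH_LIST": "6.1;7.0;7.5;8.0;8.6;8.9;9.0",
    "DS_BUILD_OPS": "1",
    "DS_BUILD_SPARSE_ATTN": "0",     # not supported on torch 2.0
    "DS_BUILD_EVOFORMER_ATTN": "0",  # build probes a physical GPU (see below)
}

for py, torch_ver, cuda in product(PYTHONS, TORCHES, CUDAS):
    # Quote values so shell-significant characters (";") survive.
    env = " ".join(f'{k}="{v}"' for k, v in BUILD_ENV.items())
    # One wheel build per matrix cell; printed here instead of executed.
    print(f"{env} python{py} -m pip wheel deepspeed --no-deps"
          f"  # torch {torch_ver}, CUDA {cuda}")
```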
@AlongWY - for the evoformer attention op, you would need to install cutlass and deepspeed-kernels.
But I'm curious what the advantage of a pre-built wheel with the ops compiled in would be: if the user grabs the wrong wheel, or has other environment issues, they will hit those problems regardless of whether they use a wheel with pre-built ops or the PyPI wheel with JIT-compiled ops.
I installed cutlass and deepspeed-kernels but it still won't compile: the build uses torch.cuda.get_device_properties(0) to detect the physical device, so it fails on a machine without one.
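For what it's worth, a minimal sketch of the kind of fallback that would sidestep this, assuming the build script could honor TORCH_CUDA_ARCH_LIST before probing the device (detect_arch_list is a hypothetical helper, not DeepSpeed's actual code):

```python
import os
import torch

def detect_arch_list():
    # Hypothetical helper: prefer an explicitly configured arch list so
    # builds can run on GPU-less machines, and only fall back to probing
    # the physical device the way the current build does.
    env = os.environ.get("TORCH_CUDA_ARCH_LIST", "")
    if env:
        return [arch for arch in env.split(";") if arch]
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        return [f"{props.major}.{props.minor}"]
    raise RuntimeError(
        "No CUDA device detected and TORCH_CUDA_ARCH_LIST is not set; "
        "cannot pick target architectures."
    )
```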
Advantages of Pre-Built Wheel:
- Convenience: No additional compilation steps required for the user.
- Performance: Saves the time and resources needed for JIT compilation.
- Plug-and-Play: Especially beneficial for users lacking the environment or resources for compilation.
Advantages of JIT-Compiled Wheel (a sketch of the JIT path follows this list):
- Flexibility: Adapts to different users' specific environments.
- Compatibility: Reduces issues due to system mismatches.
- Customization: Tailored for specific system environments, enhancing operational efficiency.
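For concreteness, the JIT path compiles an op the first time it is requested. A minimal sketch using DeepSpeed's op_builder API (FusedAdamBuilder is just one example op, and this assumes a working compiler toolchain and CUDA install):

```python
from deepspeed.ops.op_builder import FusedAdamBuilder

builder = FusedAdamBuilder()
# is_compatible() reports whether this environment can build/run the op;
# load() JIT-compiles the extension on first use and caches the binary,
# so later runs skip the compilation cost.
if builder.is_compatible():
    fused_adam = builder.load()
```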
@AlongWY I mostly agree, though pre-built wheels are not helpful to users who lack the underlying hardware to run on, and they make it more likely that users assume certain features will work and then hit errors later when they cannot. We have also observed little difference in speed between pre-built wheels and JIT compilation.
However, this does require additional testing and build support, since even the matrix listed above amounts to around 50 different wheels (the raw cross product is 6 Pythons × 5 torch versions × 2 CUDA versions = 60, before dropping combinations upstream never shipped). At least for now, it is unlikely that we will have the bandwidth to support this.