Comments (7)
We now suspect NCCL_GRAPH_MIXING_SUPPORT (https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-graph-mixing-support) is at issue here. We had turned it off to get a significant speedup, but we may be misusing that feature.
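For reference, this is roughly how we had been disabling it; a minimal sketch, assuming the variable only needs to be in the environment before the first NCCL communicator is created:

import os

# Must be set before NCCL creates its communicator (i.e. before the
# first collective). As we understand the docs, "0" tells NCCL it does
# not need to support mixing graph-captured and non-captured launches
# on the same communicator, which removes some launch-time
# synchronization (hence the speedup we saw).
os.environ["NCCL_GRAPH_MIXING_SUPPORT"] = "0"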
We were actually still able to reproduce the hang with graph mixing support turned on. Adding a synchronize between usages also doesn't help, somehow. We're working on a more minimal reproducer, but it will take some time.
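To be concrete, by "a synchronize between usages" we mean roughly the following sketch (we read "usages" as the graph-captured collective and the later eager ones, with t as in the reproducer below):

# Drain all outstanding GPU work between the graph-related collective
# and the next eager collective; this did not avoid the hang.
torch.cuda.synchronize()
torch.distributed.all_reduce(t)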
Reproducer (on 2 H100s):
import random

import torch


def _test(rank):
    torch.cuda.set_device(rank)
    torch.distributed.init_process_group(
        backend="nccl", rank=rank, world_size=2, init_method="tcp://localhost:2379"
    )

    size = 100_000
    t = torch.zeros(size, dtype=torch.bfloat16, device="cuda")
    torch.distributed.all_reduce(t)
    torch.distributed.all_reduce(t)

    with torch.cuda.graphs.graph(torch.cuda.graphs.CUDAGraph()):
        torch.distributed.all_reduce(t)
        # Uncommenting this will fix the hang
        # torch.distributed.all_reduce(t)

    random.seed(0)
    for i in range(100_000):
        if i % 100 == 0 and rank == 0:
            print(i)
        size = 49_000
        t = torch.zeros(size, dtype=torch.bfloat16, device="cuda")
        torch.distributed.all_reduce(t)
        torch.cuda.synchronize()


if __name__ == "__main__":
    torch.multiprocessing.start_processes(fn=_test, nprocs=2)
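To run it (assuming the script is saved as, say, repro.py): python repro.py. start_processes spawns one process per rank with the default spawn start method, and each rank binds to its own GPU via set_device(rank).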
Here's a C++ version (thanks, Claude):
#include <iostream>
#include <nccl.h>
#include <mpi.h>

void test(int rank) {
    // setenv("NCCL_WORK_FIFO_DEPTH", "128", 1);
    // if (rank == 0) {
    //     setenv("NCCL_DEBUG", "TRACE", 1);
    //     setenv("NCCL_DEBUG_SUBSYS", "ALL", 1);
    // }
    cudaSetDevice(rank);

    ncclComm_t comm;
    ncclUniqueId id;
    if (rank == 0) {
        ncclGetUniqueId(&id);
    }
    MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, MPI_COMM_WORLD);
    ncclCommInitRank(&comm, 2, id, rank);

    int size = 100000;
    ncclDataType_t dataType = ncclBfloat16;
    size_t elemSize = sizeof(uint16_t);

    uint16_t* d_data;
    cudaMalloc(&d_data, size * elemSize);
    cudaMemset(d_data, 0, size * elemSize);

    ncclAllReduce(d_data, d_data, size, dataType, ncclSum, comm, cudaStreamDefault);
    ncclAllReduce(d_data, d_data, size, dataType, ncclSum, comm, cudaStreamDefault);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Create CUDA graph
    cudaGraph_t graph;
    cudaGraphCreate(&graph, 0);
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    ncclAllReduce(d_data, d_data, size, dataType, ncclSum, comm, stream);
    cudaStreamEndCapture(stream, &graph);
    cudaGraphExec_t graphExec;
    cudaGraphInstantiate(&graphExec, graph, NULL, NULL, 0);

    for (int i = 0; i < 10000; ++i) {
        if (i % 100 == 0 && rank == 0) {
            std::cout << i << std::endl;
        }
        size = 49000;
        cudaMemset(d_data, 0, size * elemSize);
        ncclAllReduce(d_data, d_data, size, dataType, ncclSum, comm, cudaStreamDefault);
        cudaStreamSynchronize(cudaStreamDefault);
    }

    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
    cudaFree(d_data);
    ncclCommDestroy(comm);
}

int main(int argc, char* argv[]) {
    int rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    test(rank);
    MPI_Finalize();
    return 0;
}
nvcc -o nccl_test repro.cc -lnccl -lmpi
mpirun -np 2 ./nccl_test
Resolved by this commit (I assume it will be added to master soon): ee3d92b