Comments (6)
It should not hang. An error should be generated by the network operation and reported to ncclCommGetAsyncError(), which should be checked by the application.
Edit: the application is actually the NCCL tests. Interesting. How did you launch the NCCL tests?
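The check described above can be sketched roughly as follows. This is a minimal, hypothetical fragment, not NCCL's or nccl-tests' actual code: it assumes a communicator and stream already exist, and omits error handling of the CUDA calls.

```c
#include <nccl.h>
#include <cuda_runtime.h>
#include <stdio.h>

/* Poll the stream and the communicator's async error state together,
 * so a network failure surfaces instead of the application hanging. */
int waitForCompletion(ncclComm_t comm, cudaStream_t stream) {
  ncclResult_t asyncErr = ncclSuccess;
  while (cudaStreamQuery(stream) == cudaErrorNotReady) {
    ncclCommGetAsyncError(comm, &asyncErr);
    if (asyncErr != ncclSuccess && asyncErr != ncclInProgress) {
      fprintf(stderr, "NCCL async error: %s\n",
              ncclGetErrorString(asyncErr));
      ncclCommAbort(comm);  /* give up on the communicator */
      return -1;
    }
  }
  return 0;  /* stream drained with no async error reported */
}
```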
from nccl.
It should not hang. An error should be generated by the network operation and reported to ncclCommGetAsyncError(), which should be checked by the application.
Edit: the application is actually the NCCL tests. Interesting. How did you launch the NCCL tests?
Thank you for your answer. I downloaded nccl-tests from https://github.com/NVIDIA/nccl-tests and launched it through mpirun. The command is:
unset CUDA_VISIBLE_DEVICES && \
mpirun -np 8 -H 0.0.0.0:8 -v \
--allow-run-as-root --bind-to none --map-by slot \
--mca btl_tcp_if_include bond1 --mca oob_tcp_if_include bond1 \
-x NCCL_SOCKET_IFNAME=bond1 -x UCX_NET_DEVICES=bond1 \
-x NCCL_IB_DISABLE=0 -x NCCL_IB_GID_INDEX=3 -x NCCL_IB_CUDA_SUPPORT=1 \
-x NCCL_MIN_CTAS=4 -x NCCL_P2P_DISABLE=1 -x NCCL_SHM_DISABLE=1 -x _BOOT_STORAGE=0 \
-x NCCL_IB_HCA=mlx5_bond_1,mlx5_bond_2,mlx5_bond_3,mlx5_bond_4,mlx5_bond_5,mlx5_bond_6,mlx5_bond_7,mlx5_bond_8 \
-x NCCL_COLLNET_ENABLE=0 -x LD_LIBRARY_PATH=$LD_LIBRARY_PATH \
-x CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 -x SHARP_COLL_ENABLE_SAT=0 \
-x NCCL_NET_GDR_LEVEL=2 -x NCCL_IB_QPS_PER_CONNECTION=4 \
-x NCCL_IB_TC=160 -x NCCL_PXN_DISABLE=0 \
-mca plm_rsh_args "-p 12345" all_reduce_perf -b 2G -e 2G -f 2 -g 1 -n 2000 -z 0
The ncclCommGetAsyncError() you mentioned is indeed called in testStreamSynchronize() in nccl-tests. However, since I did not make the NCCL collectives blocking, nccl-tests only enters testStreamSynchronize() after enqueuing the collective operations (ncclAllReduce) for all iterations. Right now it is stuck while enqueuing the collectives, so it never reaches testStreamSynchronize().
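In other words, the test follows roughly this pattern (a simplified sketch of the enqueue-then-synchronize structure, not the actual nccl-tests source; buffer and communicator setup omitted):

```c
/* All iterations are enqueued first, then the stream is synchronized.
 * With a large -n (e.g. 2000), the enqueue loop itself can block once
 * the GPU's launch queue is full, before the synchronize/error-check
 * code is ever reached. */
for (int iter = 0; iter < niters; iter++) {
  /* May block inside cudaLaunchKernelExC when the queue is full */
  ncclAllReduce(sendbuff, recvbuff, count, ncclFloat, ncclSum,
                comm, stream);
}
/* Only reached after every launch has returned; the async error
 * check lives on this side of the loop. */
cudaStreamSynchronize(stream);
```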
from nccl.
Ah, I see. You're running with -n 2000. That's why NCCL eventually gets stuck trying to enqueue a kernel to the GPU (the GPU queue is full) and this is a blocking call; we can't do much about it. We would need cudaLaunchKernelExC to have a non-blocking variant which would allow us to retry later.
from nccl.
Ah, I see. You're running with -n 2000. That's why NCCL eventually gets stuck trying to enqueue a kernel to the GPU (the GPU queue is full) and this is a blocking call; we can't do much about it. We would need cudaLaunchKernelExC to have a non-blocking variant which would allow us to retry later.
Oh, I get it. Now I am trying to resolve this fault with reference:
I created a separate thread to monitor the status of the network card. Once an exception is detected, I call ncclCommAbort from that thread, but it does not solve the problem; instead, it hangs in another function (cudaStreamSynchronize). Is it not possible to call abort this way?
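Roughly, the monitor thread looks like this (a simplified sketch; checkNicHealthy() is a placeholder standing in for the actual NIC health check):

```c
#include <nccl.h>
#include <pthread.h>
#include <unistd.h>

extern int checkNicHealthy(void);  /* placeholder for the real NIC check */

/* Watchdog thread: poll NIC health, abort the communicator on failure. */
static void* nicWatchdog(void* arg) {
  ncclComm_t comm = (ncclComm_t)arg;
  while (checkNicHealthy()) sleep(1);
  /* NIC looks bad: abort the communicator from this thread.
   * Note: this does not unblock a thread that is already stuck
   * inside a blocking CUDA call such as cudaStreamSynchronize. */
  ncclCommAbort(comm);
  return NULL;
}

/* started with: pthread_create(&tid, NULL, nicWatchdog, comm); */
```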
from nccl.
In theory, setting the communicator in non-blocking mode should make NCCL not block in the ncclAllReduce call and return ncclInProgress instead. But I'm not sure we currently handle the case of cudaLaunchKernelExC blocking.
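The non-blocking mode can be sketched as follows (a minimal fragment of the documented pattern, not a complete program; assumes NCCL >= 2.14, which introduced ncclConfig_t and ncclInProgress, and that nRanks, id, myRank, the buffers, and the stream are set up elsewhere):

```c
#include <nccl.h>

/* Create the communicator in non-blocking mode. */
ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
config.blocking = 0;
ncclComm_t comm;
ncclCommInitRankConfig(&comm, nRanks, id, myRank, &config);

/* Collectives may now return ncclInProgress instead of blocking;
 * poll the async error state until the operation is accepted. */
ncclResult_t state = ncclAllReduce(sendbuff, recvbuff, count,
                                   ncclFloat, ncclSum, comm, stream);
while (state == ncclInProgress) {
  ncclCommGetAsyncError(comm, &state);
}
```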
from nccl.
In theory, setting the communicator in non-blocking mode should make NCCL not block in the ncclAllReduce call and return ncclInProgress instead. But I'm not sure we currently handle the case of cudaLaunchKernelExC blocking.
Thank you for your answer. Can I avoid this problem by using ncclCommAbort? I now call this function but it hangs in the cudaStreamSynchronize function. The stack information is as shown in my previous answer.
from nccl.