Comments (4)
Because, algorithm-wise, NVLS reduce_scatter transfers one more chunk (the rank's own chunk, sent to itself through the switch) than Ring does, which causes lower performance compared with Ring.
Thanks for your reply. From my understanding, NVLS can accelerate collective communication, and the message should be sent to the NVSwitch only once. So why does NVLS reduce_scatter transfer one more chunk to itself?
Because the load-reduce involves all ranks on the node, a rank needs to send its own data to the switch as well, so the total data each rank sends is nRanks * count; Ring, however, sends only (nRanks - 1) * count in total, which makes it faster.
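To make that arithmetic concrete, here is a minimal back-of-envelope comparison (illustrative C, not NCCL code; the 8-rank node size and the message size are assumptions):

```c
#include <stdio.h>

int main(void) {
    const long nRanks = 8;        /* GPUs per node (assumption) */
    const long count  = 1 << 20;  /* elements per chunk, i.e. the
                                     ReduceScatter output size per rank */

    /* NVLS: every rank sends all nRanks chunks of its input to the
     * switch, including the chunk destined for itself. */
    long nvlsSent = nRanks * count;

    /* Ring: a rank's own chunk never leaves the GPU, so only
     * (nRanks - 1) chunks cross the links. */
    long ringSent = (nRanks - 1) * count;

    printf("per-rank elements sent: NVLS %ld vs Ring %ld (%.0f%% more)\n",
           nvlsSent, ringSent, 100.0 * (nvlsSent - ringSent) / ringSent);
    return 0;
}
```

With 8 ranks, NVLS moves 8/7 ≈ 14% more data per rank than Ring, consistent with the "one more chunk" explanation above.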
From the kernel function in reduce_scatter.h, the ReduceScatter collective is performed by a scatter (prims.scatter) and a recv (prims.recv). I guess the data on each GPU/rank is scattered and reduced in the intra-node NVSwitches and then sent back to the GPUs/ranks. If my logic is correct, then to complete ReduceScatter, each GPU/rank needs only one send and one receive under NVLS. By the way, could you please let me know how nvls->down and nvls->up are used in the kernel function? It looks like the values of the elements in nvls->down and nvls->up are greater than comm->nranks. Thanks.
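For readers following the question above, here is a minimal host-side sketch of what ReduceScatter computes, independent of transport (sum reduction and contiguous chunks assumed; reduce_scatter_ref is a hypothetical reference function, not NCCL's actual kernel):

```c
#include <stddef.h>

/* Reference semantics: each of nRanks ranks contributes an input of
 * nRanks * count elements; rank `myRank` keeps the element-wise sum of
 * chunk `myRank` taken across all ranks' inputs (count elements). */
void reduce_scatter_ref(const float *const inputs[], float *output,
                        size_t nRanks, size_t count, size_t myRank) {
    for (size_t i = 0; i < count; i++) {
        float acc = 0.0f;
        for (size_t r = 0; r < nRanks; r++)
            acc += inputs[r][myRank * count + i]; /* chunk myRank of rank r */
        output[i] = acc;
    }
}
```

If the commenter's reading is right, under NVLS the switch performs the inner sum, so each rank does one send of its whole input and one receive of its reduced chunk; the "extra" chunk in the traffic count is the rank's own chunk, which still crosses the link on the send side.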