Comments (5)
The order of the function arguments is inherited from the original C bindings, where you specify the buffer (message buffer or receive buffer) first and the communication partner (destination or source) second. This is true for both send and receive functions, and since send_receive_into is just a pair of send and receive_into operations packed into a single function, its argument list is the concatenation of the argument lists of those two functions. I am reluctant to give up this similarity to the C bindings.
Also, note that your reading of the improved argument list is not in fact what send_receive_into does. It performs two operations on the calling process: it sends msg to destination and receives a message from source into buffer, with the added value that these two operations will not cause each other to block, e.g. source and destination could both refer to the calling process itself without causing a deadlock.
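The non-blocking pairing described above can be summarised schematically (this is pseudocode for the semantics, not the exact rsmpi signature):

```
send_receive_into(msg, destination, buffer, source)
    ≡ concurrently {
          destination.send(msg)           // outgoing half
          source.receive_into(buffer)     // incoming half
      }
// Safe even when source == destination == the calling process:
// the two halves cannot block each other, so no deadlock.
```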
from rsmpi.
Ah! I thought that might be the case, thank you. And thank you for the clarification - I've got a fair bit to learn yet 😄
This isn't the most appropriate avenue to ask questions, but: do you have an example of gathering a nested vector at all?
So far I have the root process sending out the required data via a loop over 0..world.size(), where each non-root process runs a mirrored loop to receive the nested vector and push it onto another vector - using send_receive_into on the root process and receive_into on the non-roots.
It seems I need to do the inverse, with each non-root iterating through its Vec<Vec<_>> and using a view to send the data, while the root process... receive_intos? in a for loop, checking the rank of the message?
I can't quite see how use of gather_into_root works here - this would concatenate each sent vector into one contiguous vector, right?
Hmm, another step of my confusion here was this snippet from my Game of Life code:

for c in start..end {
    unsafe {
        // scoped to drop the mut ref so that the column can be moved
        let msg_to = View::with_count_and_datatype(&self.current[c][..], 1, &data_type);
        // `process` is the process to send to.
        // This is the inverse of how you would expect it to operate,
        // e.g. root.send(&msg_to, &process)
        process.send(&msg_to);
    }
}
Where process.send is documented as "Send the contents of a Buffer to the Destination &self". I didn't realise that meant literally calling send() on the process you wanted to receive the data - essentially sending the data to itself. Followed by root.receive_into(&mut into); for example, which I was reading as "root receives this into its buffer" - as opposed to "in the calling process we will receive from root and put it here".
If I can get some time in the next while (I have assignments and exams coming up) I'll see if I can draft some sort of introduction or tutorial for you to use - it might be the ideal thing for me to write, since I'm an outsider to the codebase and have likely hit many of the same roadblocks other newcomers will.
I can't quite see how use of gather_into_root works here - this would concatenate each sent vector into one contiguous vector, right?
Yes. MPI knows nothing about Rust datatypes, so its operations are defined on C arrays. In this situation you can either go with your solution of emulating the gather operation with a series of send and receive_into calls on the individual sub-vectors, or gather into a flat buffer and then partition it afterwards. The second solution might give better efficiency, at least for the inter-process communication of the data.
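The flatten-then-partition half of the second approach can be sketched in plain Rust, independent of the MPI calls themselves (the row width cols and element type u8 are placeholders, and the actual gather_into_root call is omitted):

```rust
// Sketch of the flatten/partition approach. Assumes every row has the
// same length `cols`, which gather_into_root requires anyway (equal
// contribution sizes from every process).

fn flatten(rows: &[Vec<u8>]) -> Vec<u8> {
    // Concatenate the rows into one contiguous buffer suitable for MPI.
    rows.iter().flat_map(|r| r.iter().copied()).collect()
}

fn partition(flat: &[u8], cols: usize) -> Vec<Vec<u8>> {
    // Rebuild the nested structure on the root after the gather.
    flat.chunks(cols).map(|c| c.to_vec()).collect()
}

fn main() {
    let grid = vec![vec![1, 2, 3], vec![4, 5, 6]];
    let flat = flatten(&grid);
    assert_eq!(flat, vec![1, 2, 3, 4, 5, 6]);
    assert_eq!(partition(&flat, 3), grid);
    println!("round trip ok");
}
```

Each process would flatten its own rows before contributing them, and only the root partitions the gathered flat buffer back into a Vec<Vec<_>>.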
I didn't realise that meant literally calling send() on the process you wanted to receive the data. Essentially sending the data to itself. Followed by root.receive_into(&mut into); for example, where I was reading this as "root receives this into its buffer" - as opposed to "in the calling process we will receive from root and put it here".
I can get some time in the next while (I have assignments and exams coming up) I'll see if I can draft some sort of introduction or tutorial for you to use - it might be the ideal thing for me to write since I'm an outsider to the codebase and have likely hit many of the same roadblocks other newcomers will.
Yeah, this might be a good idea, eventually. The documentation at the moment is geared towards people who have already used MPI through its C or Fortran bindings and mostly serves to build the connection to those. I can imagine that someone who is completely new to the API might struggle at first. However, since rsmpi is not quite finished yet, I don't know whether it makes sense to write long-form documentation just now. It would be another place that has to be modified in case of API changes, and mostly, I think, without the help of a compiler that warns you when things get out of sync.
As for your observations on the &self argument of the send and receive functions: I think this is also something that would be less surprising to someone who knows the original C bindings. The "executor" of MPI point-to-point operations is not explicitly specified; it is always implicitly the calling process. In the C bindings, receive has the signature receive(buffer, source) (with buffer and source actually comprising multiple arguments). I translated these into a receive method on a value that implements Source. I could have translated it as a free function as well. I am not sure why I made that choice initially. Maybe I thought it was a better fit for a Rust library. Maybe I started with send and thought method call syntax would be appropriate due to the "calling a method <--> sending a message" analogy from classical OOP.
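Schematically, the correspondence reads like this (MPI_Recv is the real C signature; buf and source_process are placeholder names, and the fragment is not a runnable program):

```
/* C: the executing process is implicit; the peer is an argument */
MPI_Recv(buf, count, datatype, source, tag, comm, &status);

// rsmpi: the peer became the method receiver, but the call still
// executes on - and fills the buffer of - the calling process
source_process.receive_into(&mut buf);
```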