modular-ml / wrapyfi

Python Wrapper for Message-Oriented and Robotics Middleware
Home Page: https://wrapyfi.readthedocs.io
License: Other
Deadlocks occur when a listener encapsulates functions with Wrapify decorators, e.g.:

```python
class TheClass(MiddlewareCommunicator):
    ...
    @MiddlewareCommunicator.register(...)
    def encapsulated_func(...):
        ...
        return encapsulated,

    @MiddlewareCommunicator.register(...)
    def encapsulating_func(...):
        ...
        encapsulated, = self.encapsulated_func(...)
        ...
        return result,
```
Here, `encapsulating_func` would block forever when it is set to listen while `encapsulated_func` is set to publish. This can be avoided by tracing the encapsulated calls during initialization and triggering the functions in their order of definition.
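The ordering fix could be prototyped with a registry that records each decorated method at class-definition time; triggering in that recorded order guarantees the encapsulated publisher is activated before the encapsulating listener blocks on it. A minimal sketch with hypothetical names, not Wrapyfi's actual internals:

```python
class OrderedRegistrar:
    """Hypothetical sketch: record decorated methods in definition order."""
    registry = []  # method names, in the order their decorators ran

    @classmethod
    def register(cls, *args, **kwargs):
        def decorator(func):
            cls.registry.append(func.__name__)  # decorators run top-to-bottom in the class body
            return func
        return decorator

class TheClass(OrderedRegistrar):
    @OrderedRegistrar.register("NativeObject")
    def encapsulated_func(self):
        return "encapsulated",

    @OrderedRegistrar.register("NativeObject")
    def encapsulating_func(self):
        encapsulated, = self.encapsulated_func()
        return encapsulated,

# Initialization can now trigger functions in definition order:
print(OrderedRegistrar.registry)  # ['encapsulated_func', 'encapsulating_func']
```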
The following is planned for Wrapyfi v0.5:

- OS Support
- Plugins
- Features
- Data Types
- Middleware
Wrapify restricts the decorator to methods defined within classes. There is no technical limitation preventing support for free functions; however, Wrapify maintains the properties of each class independently, to avoid clashes in naming conventions and triggering multiple instances within the module. This is especially problematic since the publisher ports would clash with other triggering calls. We could resolve this issue by spawning a MiddlewareCommunicator for the module itself.
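A module-level communicator could be sketched as follows, with one state table per owner (a class, or the module itself) so that port registrations never clash; every name here is hypothetical:

```python
class ModuleCommunicator:
    """Hypothetical sketch: independent state per owner (class or module)."""
    _states = {}

    @classmethod
    def state_for(cls, owner_name):
        # spawn state lazily for each class, and one for the module itself
        return cls._states.setdefault(owner_name, {"ports": {}})

    @classmethod
    def register(cls, port):
        def decorator(func):
            qualname = func.__qualname__
            # methods carry 'ClassName.method'; free functions have no dot
            owner = qualname.split(".")[0] if "." in qualname else "__module__"
            cls.state_for(owner)["ports"][func.__name__] = port
            return func
        return decorator

@ModuleCommunicator.register("/module/free_func")
def free_func():
    return "ok",
```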
To support ROS, we can separate publishers and listeners into categories depending on the middleware they were written for, e.g., yarp_publishers, ros_listeners, etc. What's important is that we maintain the same functionality across the different communication platforms.
When a function is set to publish, there must be a single instance of its class. This limitation is due to naming conventions: the publisher port is given a name that would then conflict with another instance of the publisher.
The proposed solutions do not require any changes in Wrapify itself and can be implemented by the user:

```python
@MiddlewareCommunicator.register('Image', "ClassName", "$port_name")
def my_image(self, port_name="/some/port/name"):
    ...
    return result,
```
This would require the user to manage the different function port names per class instance; alternatively, we could add a helper function to assign them automatically.
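Such a helper might, for example, append a per-instance counter to the base port name. A sketch under assumed naming, not Wrapyfi's API:

```python
class PortNamer:
    """Hypothetical helper: derive a unique port name per class instance."""
    _counters = {}

    @classmethod
    def unique_port(cls, class_name, base="/some/port/name"):
        count = cls._counters.get(class_name, 0)
        cls._counters[class_name] = count + 1
        # the first instance keeps the base name; later ones get a numeric suffix
        return base if count == 0 else f"{base}/{count}"

print(PortNamer.unique_port("ClassName"))  # /some/port/name
print(PortNamer.unique_port("ClassName"))  # /some/port/name/1
```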
Add an argument to tensor data structures with direct GPU/TPU mapping, to support re-mapping on a mirrored node, e.g.:

```python
@PluginRegistrar.register
class MXNetTensor(Plugin):
    def __init__(self, load_mxnet_device=None, map_mxnet_devices=None, **kwargs):
        ...
```

Here, `map_mxnet_devices` should resolve to `{'all': mxnet.gpu(0)}` when `load_mxnet_device=mxnet.gpu(0)` and `map_mxnet_devices=None`. For instance, when `load_mxnet_device=mxnet.gpu(0)` or `load_mxnet_device="cuda:0"`, `map_mxnet_devices` can be set manually as a dictionary, with each source device as a key and its target device as the value, for non-default device maps.
Suppose we have the following wrapified function:

```python
@MiddlewareCommunicator.register("NativeObject", args.mware, "Notify", "/notify/test_native_exchange",
                                 carrier="tcp", should_wait=True, load_mxnet_device=mxnet.cpu(0),
                                 map_mxnet_devices={"cuda:0": "cuda:1", mxnet.gpu(1): "cuda:0",
                                                    "cuda:3": "cpu:0", mxnet.gpu(2): mxnet.gpu(0)})
def exchange_object(self):
    msg = input("Type your message: ")
    ret = {"message": msg,
           "mx_ones": mxnet.nd.ones((2, 4)),
           "mxnet_zeros_cuda1": mxnet.nd.zeros((2, 3), ctx=mxnet.gpu(1)),
           "mxnet_zeros_cuda0": mxnet.nd.zeros((2, 3), ctx=mxnet.gpu(0)),
           "mxnet_zeros_cuda2": mxnet.nd.zeros((2, 3), ctx=mxnet.gpu(2)),
           "mxnet_zeros_cuda3": mxnet.nd.zeros((2, 3), ctx=mxnet.gpu(3))}
    return ret,
```
Then the source and target GPUs 1 and 0 would be flipped, GPU 3 would be placed on CPU 0, and GPU 2 would be placed on GPU 0. Defining `mxnet.gpu(1): mxnet.gpu(0)` and `"cuda:1": "cuda:2"` in the same mapping should raise an error, since the same source device would be mapped to two different targets.
When using two decorators for two separate returns as in the torch_tensor.py example, the topic is read twice:

```python
@MiddlewareCommunicator.register("NativeObject", args.mware, "Notify", "/notify/test_native_exchange",
                                 carrier="", should_wait=True, load_torch_device='cpu')
@MiddlewareCommunicator.register("NativeObject", args.mware, "Notify", "/notify/test_native_exchange2",
                                 carrier="", should_wait=True, load_torch_device='cpu')
def exchange_object(self, msg):
    ...
```
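The mechanics can be reproduced with a stripped-down stand-in: stacking two registering decorators wraps the call in two transport layers, so a single invocation passes through a read for each layer. This is only a plausible illustration of the stacking behavior, not Wrapyfi's actual dispatch:

```python
READS = []

def register(topic):
    """Stand-in for a registering decorator: one simulated read per layer."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            READS.append(topic)  # simulated listen/read on this topic
            return func(*args, **kwargs)
        return wrapper
    return decorator

@register("/notify/test_native_exchange")
@register("/notify/test_native_exchange2")
def exchange_object(msg):
    return msg,

exchange_object("hello")
print(READS)  # one read per stacked decorator for a single call
```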
Allow classes/structs to be passed across middleware. The serializer would need to be modified to do something similar to jsonpickle; however, jsonpickle is unsafe since it executes code during deserialization. Find a safer approach.
Issue occurs in wrapify/connect/wrapper.py when multiple instances of a function are called. The pickling of multiple instances in `activate_communication` is not possible for YARP. One solution includes storing the name of the instance only:

```python
def register(...):
    def encapsulate(...):
        return_func_listen = return_func_type
        return_func_publish = return_func_type
        ...
...
lsn.Publishers.registry[communicator["return_func_publish"]](...)
```
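Made concrete, the name-based lookup might look like this: only the registry key is stored (and pickled), while the unpicklable middleware handle is rebuilt on demand. The registry shape is hypothetical:

```python
class Publishers:
    """Hypothetical registry: store names, rebuild publisher objects lazily."""
    registry = {}  # name -> factory callable

    @classmethod
    def register(cls, name):
        def decorator(factory):
            cls.registry[name] = factory
            return factory
        return decorator

@Publishers.register("yarp_native")
def make_yarp_publisher(port):
    # the real object would hold unpicklable YARP handles; built only when needed
    return {"middleware": "yarp", "port": port}

# activate_communication stores the registry key, not the publisher object:
communicator = {"return_func_publish": "yarp_native"}
publisher = Publishers.registry[communicator["return_func_publish"]]("/notify/test")
print(publisher["port"])  # /notify/test
```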
Currently, ROS messages (std_msgs, geometry_msgs, ..., plus custom message types following the ROS topic/package mapping convention) can be published and received using ROS by setting the data structure to ROSMessage. There are multiple ways this could potentially be achieved.
Add directories to Wrapyfi named "servers" and "clients", following the same convention as "publishers" and "listeners", containing all middleware. The request/reply paradigm should always wait: `should_wait=True` is always the case.

The equivalent of a server setting is activated by:

```
activate_communication(<Method name>, mode="reply")
```

The equivalent of a client setting is activated by:

```
activate_communication(<Method name>, mode="request")
```

The arguments are captured from the client (requester) and passed through to the server (replier), which returns the reply both to the server (itself) and to the client.

Changes need to be made in the wrapyfi/connect/wrapper.py script to reflect the new addition. The server and client scanners must be initialized in the init, and the publish/listen modes must be carefully separated from reply/request.
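The capture-and-forward flow described above could be prototyped in-process before touching any middleware; here queues stand in for the transport, and all names are illustrative:

```python
import queue

class ReplyChannel:
    """In-process stand-in for a REQ/REP transport (waiting is implicit)."""
    def __init__(self):
        self.requests = queue.Queue()
        self.replies = queue.Queue()

    def request(self, *args, **kwargs):
        # client side (mode="request"): send captured arguments, wait for reply
        self.requests.put((args, kwargs))
        return self.replies.get()

    def serve_once(self, func):
        # server side (mode="reply"): run the wrapped function on the request
        args, kwargs = self.requests.get()
        reply = func(*args, **kwargs)   # reply returned to the server itself...
        self.replies.put(reply)         # ...and forwarded back to the client
        return reply

channel = ReplyChannel()
channel.requests.put((("ping",), {}))   # simulate a pending client request
reply = channel.serve_once(lambda msg: (msg.upper(),))
print(reply)  # ('PING',)
```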
MQTT is a reliable PUB/SUB protocol commonly used in industrial IoT. A server/client (REQ/REP) communication pattern will not be supported. The initial implementation will rely on paho-mqtt in Python. The broker can be spawned independently as a standalone process or automatically on first publisher creation. This will follow the current ZeroMQ structure: flags can be passed directly as dedicated MQTT properties.
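Before wiring up paho-mqtt, the topic-routing core of such a PUB/SUB layer can be sketched in isolation. MQTT topic filters use `+` to match one level and `#` to match the remainder; the broker and property handling are omitted, and this in-process stand-in is illustrative only:

```python
def topic_matches(pattern, topic):
    """Minimal MQTT-style topic filter matching ('+' one level, '#' the rest)."""
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True          # '#' matches everything from here on
        if i >= len(t_parts):
            return False         # pattern is longer than the topic
        if p != "+" and p != t_parts[i]:
            return False         # literal level mismatch
    return len(p_parts) == len(t_parts)

class MiniBroker:
    """In-process stand-in for an MQTT broker, routing by topic filter."""
    def __init__(self):
        self.subscriptions = []  # (pattern, callback)

    def subscribe(self, pattern, callback):
        self.subscriptions.append((pattern, callback))

    def publish(self, topic, payload):
        for pattern, callback in self.subscriptions:
            if topic_matches(pattern, topic):
                callback(topic, payload)

broker = MiniBroker()
received = []
broker.subscribe("sensors/+/temp", lambda t, p: received.append((t, p)))
broker.publish("sensors/room1/temp", 21.5)
broker.publish("sensors/room1/humidity", 40)
print(received)  # [('sensors/room1/temp', 21.5)]
```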