Eclipse Hono™ Project
Home Page: https://eclipse.dev/hono
License: Eclipse Public License 2.0
Hono is supposed to make sure that telemetry data flowing downstream, i.e. from devices to the back end, can only be received and processed by consumers that are authorized to do so. For that to work the data needs to be associated with a security context containing, among other things, the subjects representing the device identity and (optionally) the device owner (tenant).
Routing and access decisions can then be made by Hono based on this information. In order to make consistent authorization decisions throughout Hono's components, e.g. Hono Messaging, the Dispatch Router and the protocol adapters, we need a component for centrally managing authorizations. Ideally we would not implement such a component ourselves but instead use an existing one like Keycloak or Eclipse ACS.
Hi,
in the EventDispatcherApplication, the following method implementation is a bit strange to me :
```java
private String determineAmqpUri() {
    if (EventDispatcherApplication.ENVIRONMENT.get("RABBITMQ_PORT_5672_TCP") != null) {
        return "amqp://" + System.getenv(DOCKER_RABBITMQ_ADDR) + ":" + System.getenv(DOCKER_RABBITMQ_PORT);
    } else {
        return EventDispatcherApplication.ENVIRONMENT.get("AMQP_URI");
    }
}
```
Of course there are two ways to define the AMQP connection to the RabbitMQ broker. Regarding the first one, it seems it's necessary to define the env var "RABBITMQ_PORT_5672_TCP" and then the two vars DOCKER_RABBITMQ_ADDR (defined as "RABBITMQ_PORT_5672_TCP_ADDR") and DOCKER_RABBITMQ_PORT (defined as "RABBITMQ_PORT_5672_TCP_PORT").
Is the env var being tested, "RABBITMQ_PORT_5672_TCP", a typo? Should it have the "RABBITMQ_PORT_5672_TCP_PORT" value (i.e. DOCKER_RABBITMQ_PORT)?
Should the test check both env vars, addr and port?
Or is this the desired behavior, requiring three different env vars to be defined?
Thanks,
Paolo.
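To illustrate the question above, here is a hedged sketch of how the guard could check both Docker-provided variables before falling back to AMQP_URI. The environment is injected as a Map so the logic is testable; the variable names are taken from the snippet above, and this is only a possible refactoring, not the project's actual code.

```java
import java.util.Map;

public class AmqpUriResolver {

    static final String DOCKER_RABBITMQ_ADDR = "RABBITMQ_PORT_5672_TCP_ADDR";
    static final String DOCKER_RABBITMQ_PORT = "RABBITMQ_PORT_5672_TCP_PORT";

    // The env map is passed in (instead of reading System.getenv) so the
    // decision logic can be exercised in tests.
    public static String determineAmqpUri(final Map<String, String> env) {
        // check both variables the URI is actually built from
        if (env.get(DOCKER_RABBITMQ_ADDR) != null && env.get(DOCKER_RABBITMQ_PORT) != null) {
            return "amqp://" + env.get(DOCKER_RABBITMQ_ADDR) + ":" + env.get(DOCKER_RABBITMQ_PORT);
        } else {
            return env.get("AMQP_URI");
        }
    }

    public static void main(String[] args) {
        // prints amqp://10.0.0.5:5672
        System.out.println(determineAmqpUri(Map.of(
                DOCKER_RABBITMQ_ADDR, "10.0.0.5",
                DOCKER_RABBITMQ_PORT, "5672")));
    }
}
```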
Clients connecting to Hono's telemetry endpoint need to be authenticated in order to be able to authorize the upload or retrieval of telemetry data.
We need at least username/password based authentication, but we should also support token based authentication, e.g. JSON Web Tokens issued via OAuth.
As you know, the body of an AMQP message can be a Data section (raw binary), an AMQP Value or an AMQP Sequence (a sequence of AMQP Values).
If our APIs are to be strictly AMQP based, it means (to me) that, beyond the protocol's communication features, we should also use the built-in type system it provides.
It's true that a lot of users prefer to have the body formatted in a well known way, for example using JSON or XML; such data can be transferred as raw binary in an AMQP Data section, but of course also as a String typed AMQP Value (my opinion? I don't like the latter solution :-))
I'm for using the AMQP type system and AMQP Value for structured types that users want to send/receive to/from Hono. It means mapping their data to Map, List and so on (in AMQP). Every JSON and/or XML representation can be mapped in this way.
Of course, what does that mean?
If users prefer a direct AMQP connection, they have to use an AMQP Value as the message body (mapping their data onto Map, List and so on).
If they prefer to use adapters (HTTP, LWM2M and so on), they can use JSON or XML (or another format) and Hono has to map it to a native AMQP Value.
The other solution (as described in the current APIs) is to use a content-type together with an AMQP Data section.
My current opinion is that Hono's internals should leverage AMQP, and its type system, as much as possible.
Btw ... I'm thinking about that ... just writing to have your opinions here :-)
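As a concrete sketch of the proposed mapping, the payload below shows how a JSON document like {"temp": 21.5, "tags": ["indoor", "celsius"]} would become nested Map/List structures, which an AMQP client would then wrap in an AmqpValue body section instead of shipping the raw JSON text as a Data section. The values here are invented for illustration; no AMQP library is used so the sketch stays self-contained.

```java
import java.util.List;
import java.util.Map;

public class AmqpTypeMapping {

    // JSON structure expressed with AMQP-compatible types:
    //   JSON object -> Map, JSON array -> List, JSON number -> double
    public static Map<String, Object> telemetryAsAmqpTypes() {
        return Map.of(
                "temp", 21.5,
                "tags", List.of("indoor", "celsius"));
    }

    public static void main(String[] args) {
        // With proton-j this map would be sent as:
        //   message.setBody(new AmqpValue(telemetryAsAmqpTypes()));
        System.out.println(telemetryAsAmqpTypes());
    }
}
```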
In an AMQP telemetry message it could be useful to have the "message-id" property (defined by the AMQP spec), though not as mandatory (in general the AMQP client library generates a message-id if none is specified).
It could be useful at the application layer on the service side which consumes telemetry messages from Hono.
The example module needs to be updated to use the Hono Server module instead of the (deprecated) Hono Dispatcher module (which will soon be removed).
Hono is supposed to make sure that telemetry data flowing downstream, i.e. from devices to the back end, can only be received and processed by consumers that are authorized to do so. For that to work the data needs to be associated with a security context containing, among other things, the subjects representing the device identity and (optionally) the device owner.
Routing and access decisions can then be made by Hono based on this information. Provisioning of that kind of data will be easier to automate if we provide an API for managing device identity information.
Just to be more explicit on the Telemetry API specification ...
In the "Upload Telemetry Data" section after the table of delivery modes we wrote "All other combinations are not supported by Hono and result in a termination of the link". It's totally right.
I think that we should be clearer when describing the relation between the "tenant-id" in the target address and the one in the application property.
The following :
Client has established an AMQP link in role sender with Hono using target address telemetry/${tenant-id}. If the target address contains a value for tenant-id then each message sent to Hono on this link MUST also contain the tenant-id application-property having the same value. If the target address doesn't contain a tenant-id then Hono will determine the tenant the device belongs to dynamically for each message sent.
could be ...
Client has established an AMQP link in role sender with Hono using target address telemetry/${tenant-id}. If the target address contains a value for tenant-id then each message sent to Hono on this link MUST also contain the tenant-id application-property having the same value; otherwise the link is terminated. If the target address doesn't contain a tenant-id then Hono will determine the tenant the device belongs to dynamically for each message sent.
Finally, in the latter case what happens if tenant-id is specified as an application property (but not in the target address)? I suppose that Hono doesn't consider it because it determines the tenant dynamically for each message sent. Could we be more explicit on that? What do you think?
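The proposed MUST rule can be captured in a small validation sketch: if the link's target address is telemetry/${tenant-id}, every message must carry a matching tenant-id application property; if the address is just "telemetry", the check passes and the tenant is determined dynamically. This is illustrative pseudologic, not Hono's actual implementation.

```java
import java.util.Map;

public class TenantIdValidator {

    private static final String PREFIX = "telemetry/";

    public static boolean isMessageValid(final String targetAddress, final Map<String, Object> appProps) {
        if (targetAddress.startsWith(PREFIX) && targetAddress.length() > PREFIX.length()) {
            // address carries a tenant: the application property MUST match
            final String tenantFromAddress = targetAddress.substring(PREFIX.length());
            return tenantFromAddress.equals(appProps.get("tenant-id"));
        }
        // plain "telemetry" address: tenant is determined dynamically per message
        return true;
    }

    public static void main(String[] args) {
        // prints true
        System.out.println(isMessageValid("telemetry/DEFAULT_TENANT", Map.of("tenant-id", "DEFAULT_TENANT")));
    }
}
```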
We should promote the Hono architecture and project from its early days. I will write an initial article to make some noise around the project.
Action items:
Bootstrap project for end-to-end tests running in a docker environment.
In order to define a relation between a command sent to a device and related result from the device itself, it's useful to specify the "message-id" as AMQP property in the command message.
At the same time, the response from the device can contain the "correlation-id" (AMQP property) with the value of the received message-id.
This holds both for command execution results and for my proposal about feedback from the device (see issue #18).
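A minimal sketch of that correlation, with CommandMessage and ResponseMessage as illustrative stand-ins (not Hono or AMQP library types): the command carries a message-id, and the device's response echoes it back as correlation-id.

```java
import java.util.UUID;

public class CommandCorrelation {

    public record CommandMessage(String messageId, String command) { }

    public record ResponseMessage(String correlationId, String result) { }

    // The response takes the command's message-id as its correlation-id.
    public static ResponseMessage respondTo(final CommandMessage command, final String result) {
        return new ResponseMessage(command.messageId(), result);
    }

    public static boolean correlates(final CommandMessage cmd, final ResponseMessage rsp) {
        return cmd.messageId().equals(rsp.correlationId());
    }

    public static void main(String[] args) {
        CommandMessage cmd = new CommandMessage(UUID.randomUUID().toString(), "reboot");
        ResponseMessage rsp = respondTo(cmd, "ok");
        // prints true
        System.out.println(correlates(cmd, rsp));
    }
}
```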
The API doesn't provide information about command parameters.
Currently, the command is an application property; could the same be done for a variable number of parameters (a variable number of application properties?).
Or should the command and its related parameters be encoded inside the body as a Map in an AMQP Value?
(sorry for duplicating this "issue" here in addition to the dev internal list, but it's useful to have all issues tracked IMHO)
The authorization service must provide information if a connected/authenticated client is allowed to publish and/or retrieve telemetry data (and later to issue command&control messages etc.). The first version will simply keep the data in memory.
When the device sends a telemetry message, Hono doesn't reply with any data but relies on the AMQP delivery state. The delivery state is provided by Hono with the "disposition" frame and can be ACCEPTED or REJECTED as described but ...
... this holds only if the device sends the messages with "AT LEAST ONCE" QoS, which means the message isn't pre-settled on the device and it needs the disposition frame from the receiver (Hono).
This information is used by the device for re-delivery of the message (if Hono doesn't receive it).
In the telemetry scenario, where data is (sometimes) sent at high rates and where, if we lose a message, a new one will arrive soon, the "AT LEAST ONCE" QoS is useless.
"AT MOST ONCE" is feasible and means that the device sends the message in a pre-settled state, so the receiver (Hono) won't send any disposition message because the sender isn't waiting for it.
The current API describes the "AT LEAST ONCE" scenario, speaking about possible delivery states (ACCEPTED and REJECTED), but "AT MOST ONCE" should be supported as well.
Provide an additional endpoint for crucial messages, like alarm messages, which are handled with a higher priority and better QoS than telemetry messages.
We are currently working on the vertx-proton client, which we recommend for new development of AMQP 1.0 client code. Here is an example of vertx-proton API usage [1].
You can find the latest snapshot of the vertx-proton in this [2] repository.
I would say +1 for using vertx-proton for AMQP 1.0 communication. What do you think?
[1] https://github.com/vert-x3/vertx-proton/blob/initial-work/src/test/java/io/vertx/proton/example/HelloWorld.java
[2] https://oss.sonatype.org/content/repositories/snapshots/io/rhiot/vertx-proton/
AMQP credit based flow control needs to be implemented for the telemetry upload so that Hono only accepts data if it also can offload it to the downstream Dispatch Router/Business Application.
Hi,
I really like the idea of using HTTP status codes (REST style) for the status in the response of the device registration API.
Why don't we use them in the same way for the command and control API, instead of just 0 or 1 as the status?
For example :
200 command executed and the response has a payload
204 command executed but no payload
202 command accepted (it could be useful for long running command on the device)
405 method not allowed (method in terms of command sent to the client)
400 bad request
.. and so on ...
what do you think ?
Paolo.
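The proposal above could look like the following sketch; the mapping of codes to meanings is illustrative (taken from the list in this issue), not a fixed API.

```java
import java.util.Map;

public class CommandStatus {

    // Proposed HTTP-style statuses for command & control responses.
    public static final Map<Integer, String> STATUS = Map.of(
            200, "command executed, response has a payload",
            202, "command accepted (useful for long running commands on the device)",
            204, "command executed, no payload",
            400, "bad request",
            405, "method not allowed (command not supported by the device)");

    // HTTP convention: 2xx means success.
    public static boolean isSuccess(final int status) {
        return status >= 200 && status < 300;
    }

    public static void main(String[] args) {
        System.out.println(STATUS.get(202) + " -> success=" + isSuccess(202));
    }
}
```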
It would be great to have a modern and fresh looking web site for the project instead of just the metadata template based one provided by default.
sadly but truly, I have no talent for nor knowledge of how to do this nowadays (my times authoring plain <html> are long gone ;-))
So maybe you have an interest in Hono but (currently) lack the skills to contribute code but enjoy designing web sites and have a feeling for colors and proportions? Then please consider helping us out :-)
Running the Docker based integration test should be possible on travis-ci (https://docs.travis-ci.com/user/docker/). Maybe we can setup a nightly job (https://docs.travis-ci.com/user/cron-jobs/) or (https://nightli.es/) that runs the integration tests.
Use gordons/qpid-dispatch:0.6.0-rc1 docker image in example/tests.
We need to define the API protocol adapters use to send telemetry data upstream as well as the API consumers like solutions use to receive telemetry data.
The API needs to be based on AMQP 1.0 and must make sure that clients can only send/receive data of devices belonging to tenants the clients are authorized for.
The link [Hono Example](hono-example/readme.md) in the https://github.com/eclipse/hono#run-the-example chapter returns a 404.
The Hono Client needs to be able to connect to a Hono Server which is located behind a proxy.
We should use the eclipsehono DockerHub username, not hono. The latter has already been taken by somebody not associated with Eclipse.
Also, we should push the example config to DockerHub (ideally automating the process) in order to allow people to run the example as a Docker image out of the box.
See #40.
Now that the vertx-proton project has created an initial release available from Maven Central we should start to use it :-)
We basically need the same as for HTTP (see #3).
It would be great if we could use Eclipse Paho/Mosquitto for it. But that's not a prerequisite.
Hi,
Maybe we could remove all the other topology options from the Topology Options page [1] and leave Hybrid only? Is there anybody in our community interested in using non-hybrid topologies? If not, then I propose to remove them to make our architectural view clearer for people new to Hono.
Just an idea :) .
Hi,
I see that the creation of a sender by the HonoClient doesn't allow specifying the QoS; it uses AT_MOST_ONCE internally. The Hono API supports telemetry with AT_LEAST_ONCE delivery, so it would be good to add this possibility to the HonoClient.
The other point is that the TelemetrySender class provides several overloads of the "send" method, but none that allows passing "application properties". Of course, the user can use the overload with a Message parameter, but then they have to deal with the AMQP message structure. It would be good to have an overload with a Map for application properties. In my experience, users sometimes prefer to carry information in application properties rather than in the payload.
What do you think ? If you agree I can take care about that :-)
Paolo.
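The proposed overload could be shaped like the sketch below. This is a hypothetical interface, not the actual HonoClient API: the point is only that callers pass application properties as a plain Map instead of assembling an AMQP Message themselves.

```java
import java.util.Map;

public interface TelemetrySenderSketch {

    // existing style of overload: payload and content type only
    void send(String deviceId, byte[] payload, String contentType);

    // proposed addition: application properties as a plain Map; a real
    // implementation would copy the map into the message's
    // application-properties section before sending
    default void send(String deviceId, byte[] payload, String contentType,
            Map<String, Object> applicationProperties) {
        send(deviceId, payload, contentType);
    }
}
```

Being a functional interface, the sketch can be exercised with a lambda, which keeps the example self-contained.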
Hono does not strive to implement a new message broker but instead tries to leverage existing messaging infrastructure to meet its quality goals. It seems reasonable to assume that the amount of telemetry data flowing upstream from devices to back end applications will be orders of magnitude larger than the data being sent to devices from back end applications in order to invoke operations or configure properties on the devices.
It seems therefore feasible to leverage Apache Kafka (at least) for transporting telemetry data downstream taking advantage of Kafka's qualities regarding horizontal scalability, low latency and fault tolerance.
We will need to come up with ideas of how to map downstream data to Kafka's topics and how to allow for parallel message production/consumption in the context of Hono.
Hi,
What do you think about using GitBooks to maintain our documentation? It is based on GitHub markdown, but allows you to align the documentation nicely in a form of a book. For example [1].
What do you think?
[1] https://rhiot.gitbooks.io/rhiotdocumentation/content/gateway/index.html
Implement a failover strategy for client connections between the Hono Server and the Qpid Dispatch Router. To keep it simple we could start by detecting the connection loss and reconnecting to another node of the router network. Maybe we can use the circuit breaker pattern for this purpose? Suggestions and ideas welcome!
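A toy sketch of the simple variant of that idea: on connection loss, try the next node of the router network until one accepts. The probe is a stand-in for the real (e.g. vertx-proton) connect call; a circuit breaker would additionally delay retries once all nodes have failed.

```java
import java.util.List;
import java.util.function.Predicate;

public class Failover {

    // Tries each node in turn; returns the first node the probe accepts,
    // or null if all nodes are down (where a circuit breaker would open).
    public static String connectToAny(final List<String> nodes, final Predicate<String> probe) {
        for (final String node : nodes) {
            if (probe.test(node)) {
                return node;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String connected = connectToAny(
                List.of("router-1:5672", "router-2:5672"),
                node -> node.startsWith("router-2")); // pretend only router-2 is up
        // prints router-2:5672
        System.out.println(connected);
    }
}
```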
In the command and control scenario, it's useful to specify the TTL (Time To Live) for a command.
Sometimes if the command isn't executed in a short time by the device (i.e. in 5 secs, 10 secs, 1 min, ...) because it's offline, it could be useless (or dangerous !!!) to execute it when the device will come back online after a long time (i.e. 30 min, 1 hour, ...).
For this reason specifying a TTL inside the command could be very useful.
It could be provided by the "ttl" field inside the Header section as defined in the AMQP specification.
Of course, this holds if a "store and forward" approach is used, where the command is delivered to the device via a queue for example. When a "direct" connection is set up between sender (the service) and receiver (the device), through a "dispatch router" for example, the TTL can't be used because the message isn't delivered to its destination (the device is offline) and the dispatch router doesn't store it.
In that case the sender immediately knows that the receiver isn't available, thanks to the dispatch router's behavior (a disposition message with the MODIFIED delivery state).
In conclusion, it's useful to add the TTL inside the message format table.
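The expiry check itself is simple; the sketch below assumes the command's creation time and the ttl from the AMQP header section are available, and is only meant to show the drop decision.

```java
import java.time.Duration;
import java.time.Instant;

public class CommandTtl {

    // A command is expired (and should not be delivered/executed) once
    // creationTime + ttl lies in the past.
    public static boolean isExpired(final Instant creationTime, final Duration ttl, final Instant now) {
        return now.isAfter(creationTime.plus(ttl));
    }

    public static void main(String[] args) {
        Instant created = Instant.parse("2016-06-01T10:00:00Z");
        // device comes back online 30s later, but the command's TTL was 10s
        // prints true
        System.out.println(isExpired(created, Duration.ofSeconds(10), created.plusSeconds(30)));
    }
}
```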
We should lean towards having a single Maven configuration for Docker image releases. In particular such configuration should include:
- io.fabric8 as the groupId (not the Jolokia groupId)
- hono as the artifactId, until a Maven property overriding the artifactId is set

After having this configuration we will be able to introduce a single Maven profile to release all the images with a single mvn install -PpushDocker command invocation.
Why are we including information such as latitude and longitude in the API reference implementation?
Aren't they too closely tied to the application domain and the related business?
When a service (the sender) sends a command to the device, how does it know that the device received the message (but not executed it yet) ?
Of course, we can rely on the AMQP disposition frame sent by the receiver (even if it is only sent when the sender doesn't pre-settle the command message).
However, the receiver of the command isn't always the "final" receiver ... the device.
Using a "store and forward" mechanism with a queue in the middle, the sender knows (thanks to the AMQP disposition frame) that the message was delivered to the queue (i.e. to the broker) but not to the device. In this scenario, it could be useful to have a "feedback" channel from which the sender can read ack messages from devices that actually received the commands.
The related address could be something like this :
control/deviceId/tenantId/feedback
The correlationId property in the AMQP specification can be used to define the relationship between the command (identified by its messageId) and the feedback itself.
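A small sketch of the proposed feedback address layout, following the control/deviceId/tenantId/feedback scheme stated above (the segment order is taken verbatim from this proposal):

```java
public class FeedbackAddress {

    // Builds the address the sender would read device acks from.
    public static String feedbackAddress(final String deviceId, final String tenantId) {
        return "control/" + deviceId + "/" + tenantId + "/feedback";
    }

    public static void main(String[] args) {
        // prints control/4711/DEFAULT_TENANT/feedback
        System.out.println(feedbackAddress("4711", "DEFAULT_TENANT"));
    }
}
```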
The following modules are currently not used or must be updated:
- hono-dispatcher (-> remove)
- hono-api (-> remove)
- hono-client (-> update)

For hono-client we should extract the telemetry client from hono-example that works together with the hono-server.
In order to send the register/deregister operation command, the "registration" address for the sender seems to be ok.
In order to receive the related reply, the receiver should be attached to a different address.
We could add the "reply-to" property inside the register/deregister message (i.e. the client which sends the request could define it as something like "registration/", ...). On that address, Hono would send the response for the executed operation.
The same is valid for the operation "Retrieve information about a registered device".
Just a proposal ...
In a scenario with an IoT gateway in the field which collects data from local devices and then sends it to Hono, it could be useful to provide a message "grouping" feature through the Telemetry API (with properties like "group-id", "group-sequence" and so on).
Grouping data could be useful to the receiver service for further analysis.
What do you think ?
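A sketch of what that grouping could look like: the gateway tags each reading of a batch with the same group-id and an incrementing group-sequence, mirroring the AMQP properties of those names. GroupedReading is an illustrative stand-in, not an AMQP library type.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class TelemetryGrouping {

    public record GroupedReading(String groupId, int groupSequence, String payload) { }

    // All readings of one batch share a group-id; the sequence preserves order.
    public static List<GroupedReading> group(final List<String> payloads) {
        final String groupId = UUID.randomUUID().toString();
        final List<GroupedReading> result = new ArrayList<>();
        for (int i = 0; i < payloads.size(); i++) {
            result.add(new GroupedReading(groupId, i, payloads.get(i)));
        }
        return result;
    }

    public static void main(String[] args) {
        group(List.of("21.5", "21.7")).forEach(System.out::println);
    }
}
```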
From time to time [1] the Hono Server tests fail with errors similar to this:

```
Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.756 sec <<< FAILURE! - in org.eclipse.hono.server.HonoServerTest
testTelemetryUpload(org.eclipse.hono.server.HonoServerTest)  Time elapsed: 3.46 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Timed out
	at org.eclipse.hono.server.HonoServerTest.testTelemetryUpload(HonoServerTest.java:214)
```

This happens only in the CI environment. I didn't manage to reproduce this issue on my local machine.
Hi,
Good Maven practice is to keep jar dependencies in a BOM module.
It is also good practice to extract dependency and plugin version numbers into <properties>. This approach keeps the versions of our libraries/plugins nicely aligned and also makes it easier to monitor updates of selected dependencies using commands like:
mvn versions:display-property-updates
If nobody objects I will be happy to introduce BOM and extract properties.
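A minimal pom sketch of what that could look like; the artifact names and version numbers below are purely illustrative, not the project's actual ones.

```xml
<properties>
  <!-- illustrative versions, picked up by versions:display-property-updates -->
  <vertx.version>3.2.1</vertx.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-core</artifactId>
      <version>${vertx.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Child modules would then declare the dependency without a version, inheriting it from the BOM.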
Before I commit anything into a project, I have to have at least one CI server monitoring it :) .
I propose to add a Travis CI configuration to the project. It gives us free CI builds for the project. We use Travis for many of our projects here at Red Hat.
Let's consider the scenario where devices are behind an IoT gateway because they aren't able to connect to Hono directly (i.e. modbus, BLE, ZigBee, ...)
Let's consider the Telemetry scenario ...
In such a case we would have only one telemetry ingestion endpoint on Hono, to which only the gateway connects; of course the gateway looks like a device to Hono, so it should have a "device-id" but ...
I think we need to carry the id of the "originator" (originator-id) of the data inside the messages sent, because we could need to "block" a single device if it's compromised (i.e. hacked). In that case we shouldn't have to block the gateway and the entire device network behind it.
This information could be useful in the command reply path as well, when we need to know the final device which received the command rather than the gateway.
Let's discuss about that ....
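To make the blocking idea concrete, here is a hedged sketch: the gateway authenticates with its own device id, but each message carries the originating device's id, so a compromised device can be filtered without blocking the gateway. Names and the block-list mechanism are assumptions for illustration.

```java
import java.util.Set;

public class OriginatorFilter {

    // Accept a message unless its originator-id is on the block list;
    // the gateway's own connection is unaffected.
    public static boolean accept(final String originatorId, final Set<String> blockedDevices) {
        return !blockedDevices.contains(originatorId);
    }

    public static void main(String[] args) {
        Set<String> blocked = Set.of("sensor-13"); // hypothetical compromised device
        // prints false true
        System.out.println(accept("sensor-13", blocked) + " " + accept("sensor-7", blocked));
    }
}
```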
It would be great to have a protocol adapter for receiving telemetry data from devices using plain HTTP. The adapter could simply take the payload from incoming requests and forward it to Hono's dispatcher using the client component.
This adapter would be particularly useful for testing and demonstration purposes but could be evolved into a fully functional REST adapter including security, scalability, fail-over etc.
@kartben Benjamin, do you think it is OK to push early versions of our images to that DockerHub account? Or maybe Eclipse has some restrictions in this regard.
Both Hono Server as well as Hono Client should support the configuration of TLS params via environment variables.
The correlation-id inside the register/deregister operation message doesn't make sense to me.
In order to correlate request and response we have to use the correlation-id only inside the response. Its value should match the message-id of the request.
It's similar to command and control, so the message-id and correlation-id descriptions could be taken from there.
It means to me:
The same is valid for the operation "Retrieve information about a registered device"
Having a nice Java API library as a deliverable would, IMHO, help newcomers get started easily with the project (and write their own adapters). So maybe we should think about adding it to the list of official artifacts we will deliver and support.