accenture / reactive-interaction-gateway

Create low-latency, interactive user experiences for stateless microservices.

Home Page: https://accenture.github.io/reactive-interaction-gateway

License: Apache License 2.0

Elixir 92.14% JavaScript 2.38% Java 2.15% CSS 0.05% Dockerfile 0.57% Shell 2.17% Mustache 0.53% Euphoria 0.01%
api-gateway server-sent-events unidirectional-data-flow reactive frontend event-subscriptions live-data event-driven-architecture event-driven-microservices microservices

reactive-interaction-gateway's Introduction

Logo

RIG - Reactive Interaction Gateway

Makes frontend<->backend communication reactive and event-driven.

Build Status DockerHub

About

The Reactive Interaction Gateway (RIG) is the glue between your client (frontend) and your backend. It makes communication between them easier by (click the links to learn more)

  • picking up backend events and forwarding them to clients based on subscriptions: this makes your frontend apps reactive and eliminates the need for polling. You can do this
    • asynchronously - using Kafka, NATS or Kinesis.
    • synchronously - if you don't want to manage a (potentially complex) message broker system like Kafka.
  • forwarding client requests to backend services either synchronously, asynchronously or a mix of both:
    • synchronously - if requests are being sent synchronously, RIG acts as a reverse proxy: RIG forwards the request to an HTTP endpoint of a backend service, waits for the response and sends it to the client.
    • asynchronously - fire&forget - RIG transforms an HTTP request into a message for asynchronous processing and forwards it to the backend using either Kafka, NATS or Amazon Kinesis.
    • synchronously with asynchronous response - a pseudo-synchronous request: RIG forwards the client request to the backend synchronously via HTTP and waits for the backend response by listening to Kafka/NATS and forwarding it to the still open HTTP connection to the frontend.
    • asynchronously with asynchronous response - a pseudo-synchronous request: RIG forwards the client request to the backend asynchronously via Kafka or NATS and waits for the backend response by listening to Kafka/NATS and forwarding it to the still open HTTP connection to the frontend.

Built on open standards, RIG is very easy to integrate – and easy to replace – which means low-cost, low-risk adoption. Unlike other solutions, RIG does not leak into your application – no libraries or SDKs required. Along with handling client requests and publishing events from backend to the frontend, RIG provides many out-of-the-box features.

This is just a basic summary of what RIG can do. Comprehensive documentation is available on our website. If you have any unanswered questions, check out the FAQ section to get them answered.

Getting Started

Get Involved

License

The Reactive Interaction Gateway (patent: granted) is licensed under the Apache License 2.0 - see LICENSE for details.

Acknowledgments

The Reactive Interaction Gateway is sponsored and maintained by Accenture.

Kudos to these awesome projects:

  • Elixir
  • Erlang/OTP
  • Phoenix Framework
  • Brod
  • Distillery

reactive-interaction-gateway's People

Contributors

arthurjordao, azer0s, dependabot[bot], gonzochic, igorlacik, ionelaluncanuaccenture, jakobwgnr, johnrkriter, kevinbader, knappek, kramos, lofim, mmacai, pavestru, skariyania, steveoliver, urnamma


reactive-interaction-gateway's Issues

Prep for OSS release

  • polished documentation
    • add code of conduct
    • add contribution guidelines
  • make sure mix docs works as expected
  • prepare for upload to hex.pm
  • rename stuff to rig everywhere, remove references to previous fsa/bra/ares life

Have a local-development-only Docker image for an easy-to-use dev setup

The idea is to ship RIG in a development configuration that's really easy to use. Ideally, all that'd be needed is this:

$ docker run -d -p 4000:4000 -p 4010:4010 accenture/reactive-interaction-gateway:latest-local-development-only

Some ideas around the default settings that come with the image:

  • JWT validation could be disabled (new feature!) and just log a warning instead
  • Logging incoming and outgoing messages could be set to a more detailed level

Since the whole point is to have zero configuration on the users' end, it probably doesn't make sense to apply this to the AWS flavor or enable the Kafka integration. Ideas welcome!

Kubernetes-Native

Aim is to make RIG a nice Kubernetes service that you "just deploy" and that can automatically find the other instances and wire itself up.

AC: Able to be launched as Kubernetes-Service easily on "own" Kubernetes cluster as well as stock Kubernetes-as-a-Service (AWS / Azure / Google) offerings.

reverse proxy: support URL rewriting

Right now, {id} can be used as a placeholder for variable parts of a URL. There are issues with the current approach:

  • The name inside the curly braces does not matter - it can't be referred to in the target URL.
  • The format is non-standard and quite inflexible.

I think that if we want to support URL rewriting we should adopt industry standards, which basically means using regular expressions. Using regular expressions is admittedly a bit more complicated than the current approach might be, but it is very efficient (expressions can be pre-compiled) and flexible.

The implementation should enable the following scenario:

Given the request URL `/books/123/chapters/456`,
and the rule `/books/([^/]+)/chapters(?<chapter_id>/?|/.*)` => `http://chapter-service:8080/v1/chapters/chapter_id`,
then the request is forwarded to `http://chapter-service:8080/v1/chapters/456`.

Proposed :re options: :no_auto_capture, :dupnames, :unicode, :anchored, :caseless.
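The scenario above can be sketched in Elixir, whose Regex module wraps Erlang's :re. This is only an illustration of the proposal, not the eventual implementation: the pattern is anchored to the whole path here so that the named group captures "/456" rather than a bare "/", and the target URL is taken from the example rule.

```elixir
url = "/books/123/chapters/456"

# The rule from the scenario, anchored so the alternation prefers the
# longer "/.*" branch when there is a chapter ID:
rule = ~r{\A/books/([^/]+)/chapters(?<chapter_id>/?|/.*)\z}

rewritten =
  case Regex.named_captures(rule, url) do
    # The named capture includes the leading slash; strip it for the target URL.
    %{"chapter_id" => "/" <> id} -> "http://chapter-service:8080/v1/chapters/" <> id
    _ -> :no_match
  end

# rewritten == "http://chapter-service:8080/v1/chapters/456"
```

Because the expression is compiled once, matching many request URLs against it stays cheap, which is the efficiency argument made above.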

docker-compose based example setup

It'd be nice to have an examples/ folder that contains little projects that showcase how RIG can be used. I'm thinking about a simple setup, with a docker-compose file that starts up RIG, Kafka, a backend service and a frontend website.

SSE doesn't work for channels example

When using SSE, RIG's logs say the client is connected, but the UI doesn't notice: the connect button is still active, the disconnect button is still inactive, and the notification thingy above the message window still says not connected.

Support forwarding inbound HTTP request bodies as events to Kafka & Kinesis

Right now, HTTP requests are forwarded to backend services. Additionally, other "targets" should be available, like publishing the request to a Kafka or Kinesis topic.

Benefits

  • If a service is down at the time the request is sent, the request is not lost but can be processed as soon as the service is back up.
  • Frontends don't see 504 Gateway Timeout errors, since RIG immediately responds. Assuming the processing service notifies the frontends about picking up the request, the UI just has another state to deal with (e.g., submitted - processing - processed).

Performance tests

Some ideas:

  • How fast can messages be routed when scaling out (with multiple connections per user to different RIG instances)?
  • How many concurrent downstream connections (SSE, WebSockets) can be handled?
  • How big is the typical request latency for proxied requests?
  • How large can the blacklist grow before performance (?) degrades?

We need a logo!

Logos are important. There are some ideas, perhaps a pictogram of an oil rig, but we're open to creative ideas here.

RabbitMQ as Message Broker

Hello.

Not an issue, but not sure where else to post a query.

Will RIG work with RabbitMQ as the message broker instead of Kafka? Is this something you've perhaps already done?

Thanks in advance.

Remove references to Phoenix from the client API

It's nice that we use Phoenix, but users really shouldn't have to care what those phx_* messages mean. Starting with SSE, we should get rid of all references that leak that we use Phoenix as the web framework.

Support additional JWT algorithms

Currently

RIG supports hs256 only; the secret is read from JWT_SECRET_KEY.

Idea:

Joken, the library we're using for handling JWTs, supports more than just the HS* algorithms. The idea is to use the hs/rs/... functions found in signer.ex and support specifying the algorithm using an additional environment variable. For example:

JWT_ALG=HS256
JWT_KEY=my-symmetric-key

or

JWT_ALG=RS256
JWT_KEY=my-public-key

Basically, you'd do something along the lines of:

jwt_alg
|> String.downcase
|> case do
  "hs" <> _ = alg -> Signer.hs(alg, jwt_key)
  "rs" <> _ = alg -> Signer.rs(alg, jwt_key)
  ...
end

Additional thoughts

Joken does not support token encryption, which might be a future requirement. Here's a nice overview about JWT algorithms and what they're used for.

Create a static website with more focused documentation

The idea is to have different sections for different users:

  • For the developer that just wants to get an overview and get started quickly.
  • For the ops person that wants to know what can be configured in what way.
  • For the Elixir dev that wants to know how RIG is built.

The plan is to use Docusaurus, hosted on GitHub pages.

Helm chart issues

The helm chart config has 2 issues.

The first issue is related to port configuration.
In the deployment.yaml template there are two entries for ports (api and inbound), with their names specified as api-port and inbound-port. This is fine, but the config is missing the environment variables (API_PORT and INBOUND_PORT) that would tell RIG on what port to run.

The second issue is that the values.yaml file declares DNS name as reactive-interaction-gateway-service-headless.default.svc.cluster.local but the service is created without the -service suffix. So the correct name would be reactive-interaction-gateway-headless.default.svc.cluster.local.

I'll create a PR for this when I get some time.

Add support for token type in Authorization header

Since we support sending an auth (bearer) token via the Authorization header, we should be compliant with what the rest of the industry is doing.

The OAuth RFC describes token types in section 7.1.
The current implementation does not consider types at all and will fail to validate the token if a type such as "Bearer" is included.

So the proper form should look like this:
Authorization: Bearer eyJ0eXAiOiJK

Instead of this:
Authorization: eyJ0eXAiOiJK

AFAIK we only support Bearer token for now.
The question here is what should the gateway do in case it receives a different type.

Either way this should be configurable. We should be able to specify that we expect a type or not.
AUTH_TOKEN_TYPE=Bearer sounds reasonable to me.

I'm open to suggestions.
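For backwards compatibility, the gateway could accept both forms during a transition period. A minimal sketch, assuming the hypothetical AUTH_TOKEN_TYPE=Bearer setting proposed above (module and function names are illustrative, not RIG's actual API):

```elixir
defmodule AuthHeader do
  # Accept both "Bearer <token>" and a bare token, so existing clients
  # keep working while new clients send the standard-compliant form.
  def extract_token("Bearer " <> token), do: {:ok, token}
  def extract_token(token) when is_binary(token), do: {:ok, token}
end
```

With a strict AUTH_TOKEN_TYPE configured, the second clause would instead return an error for untyped tokens.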

Endpoint for disconnect_channel_session should either make use of the user parameter or be modeled to not require it at all

The RIG API router contains a route for deleting user sessions (disconnecting a user based on their JTI) and it looks like this:

delete "/:user/sessions/:jti", ChannelsController, :disconnect_channel_session

However, the :user parameter is not used, and I can happily send whatever string I want.

Request URL: http://localhost:3000/rg/users/undefined/sessions/8bf39303-1a10-4cf8-9765-b00f82ef258b
Request Method: DELETE
Status Code: 204 No Content

Notice the undefined part of the URL.

The handler should either check for the existence of the user and verify that it owns the session we want to delete (as the REST semantics make it appear), or the route should be modeled differently, without the user parameter at all.

Event Subscriptions (rfc)

Event Subscriptions

Previously, RIG's focus was on simple frontend use cases, where it made sense to build terminology and configuration around userId, groups, etc. Being an event gateway, however, it makes a lot more sense to focus on events and to support current use cases through that lens.

A focus on events allows us to make RIG more generic and thus more flexible in terms of the use cases it supports. To that end, frontends should be able to connect to RIG before any user authentication happens. This is needed to support use cases where anonymous users are notified about certain types of events, like a sports game website that broadcasts game events. Additionally, it makes it easier to use RIG even in case you need it only for authenticated users, because with single-page apps it's really simple to connect to RIG after page load and keep the connection open from there on, instead of binding the connection lifetime to the user session's lifetime.

Finally, we are adopting the upcoming Cloud Events Spec in order to streamline interfaces and increase compatibility with other applications down the road.

Let's begin with a deep dive on events and subscriptions.

Events and event types

We're going to use the Cloud Events Spec wherever possible. For example, incoming events are expected to feature an "eventType" field.

An "official" example of such an event type is com.github.pull.create. We can infer the following properties:

  • Event types use reverse-DNS notation, which means the type name contains parent-to-child relations separated by the dot character.
  • Event types are likely going to be unrelated to specific entities or (user) sessions. For example, for a repository "my-org/my-repo", we do not expect to see events like com.github.pull.create.my-org/my-repo; instead, the repository ID is likely to be found in the CloudEvent's data field (as there is no "subject"-like field mentioned in the spec).

Following those observations/assumptions, we expect events that look similar to the following (based on GitHub's get a single pull request API):

{
  "cloudEventsVersion": "0.1",
  "eventType": "com.github.pull.create",
  "source": "/desktop-app",
  "eventID": "A234-1234-1234",
  "eventTime": "2018-04-05T17:31:00Z",
  "data": {
    "assignee": {
      "login": "octocat"
    },
    "head": {
      "repo": {
        "full_name": "octocat/Hello-World"
      }
    },
    "base": {
      "repo": {
        "full_name": "octocat/Hello-World"
      }
    }
  }
}

Because of this, RIG's internal subscriptions cannot rely on the event type only. RIG is built for routing events to users' devices or sessions, so it must also have a notion of those things built into the subscription mechanism.

The idea: introduce "extractors" that can extract information from an event, and allow subscriptions to match against that extracted information. Let's take a look at an example:

  • Assume there is an event type com.github.pull.create;
  • Assume the user is interested in events that refer to the "octocat/Hello-World" repository;
  • Assume the user is only interested in new pull requests assigned to the "octocat" user;
  • We start RIG with an extractor configuration that uses JSON Pointer to find data:
extractors:
  com.github.pull.create:
    assignee:
      # "assignee" is the field name that can be referred to in the subscription request
      # (see subscription request example below).
      # Each field has a field index value that needs to remain the same unless all RIG
      # nodes are stopped and restarted. This can be compared to gRPC field numbers and
      # the same rule of thumb applies: always append fields and never reuse a field
      # index/number.
      stable_field_index: 0
      # JWT values take precedence over values given in a subscription request:
      jwt:
        # Describes where to find the value in the JWT:
        json_pointer: /username
      event:
        # Describes where to find the value in the event:
        json_pointer: /data/assignee/login

    head_repo:
      stable_field_index: 1
      # This is extracted from subscription requests, rather than in the JWT. In the
      # request body the field is referred to by name, so a `json_pointer` is required
      # for the event only:
      event:
        json_pointer: /data/head/repo/full_name

    base_repo:
      stable_field_index: 2
      event:
        json_pointer: /data/base/repo/full_name
  • The frontend sends a subscription that refers to those fields:
{
  "eventType": "com.github.pull.create",
  "oneOf": [
    { "head_repo": "octocat/Hello-World" },
    { "base_repo": "octocat/Hello-World" }
  ]
}
  • The frontend receives the event outlined above because one of the constraints defined under oneOf is fulfilled. Note that within each constraint object, all fields must match, so the constraints are defined in disjunctive normal form (an OR of ANDs).

If a JSON Pointer expression returns more than one value, there is a match if, and only if, the target value is included in the JSON Pointer result list.
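As a small illustration of that rule (the values are made up for the example):

```elixir
# The JSON Pointer expression returned more than one value:
extracted = ["octocat", "hubot"]
constraint = "octocat"

# The constraint matches iff the subscribed value is among the results;
# List.wrap/1 also covers the common single-value case.
matched = constraint in List.wrap(extracted)
# matched == true
```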

Implementation

Matching relies on ETS match specs: subscriptions are kept in one ETS table per event type. Each table has a column for every field defined in the extractor for that event type, holding the values given in the subscription. If a value is not set in a subscription, it is stored as nil. For example, the subscription above would be reflected in two records:

{connection_pid, {:assignee, "octocat"}, {:head_repo, "octocat/Hello-World"}, {:base_repo, nil}}
{connection_pid, {:assignee, "octocat"}, {:head_repo, nil}, {:base_repo, "octocat/Hello-World"}}

This structure allows for very efficient matching. There is also a dedicated table per event type, so ownership is easy and there are no concurrent requests per table. At the time of writing, the default limit on the number of ETS tables is 1400 per node, but this can be changed using ERL_MAX_ETS_TABLES. If that ever becomes impractical, putting all subscriptions in a single table should work just as well.
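The matching itself can be sketched with a plain ETS table and a match specification. The field layout follows the two example records above; the table name and the interpretation of nil as "no constraint on this field" are illustrative, not RIG's actual code:

```elixir
table = :ets.new(:subscriptions_sketch, [:bag, :public])

:ets.insert(table, {self(), {:assignee, "octocat"}, {:head_repo, "octocat/Hello-World"}, {:base_repo, nil}})
:ets.insert(table, {self(), {:assignee, "octocat"}, {:head_repo, nil}, {:base_repo, "octocat/Hello-World"}})

# Values extracted from an incoming event:
assignee = "octocat"
head_repo = "octocat/Hello-World"
base_repo = "octocat/Hello-World"

# A record matches when each field is either unconstrained (nil) or equal
# to the extracted value; the body returns the connection pid.
spec = [
  {{:"$1", {:assignee, :"$2"}, {:head_repo, :"$3"}, {:base_repo, :"$4"}},
   [
     {:orelse, {:==, :"$2", nil}, {:==, :"$2", assignee}},
     {:orelse, {:==, :"$3", nil}, {:==, :"$3", head_repo}},
     {:orelse, {:==, :"$4", nil}, {:==, :"$4", base_repo}}
   ],
   [:"$1"]}
]

pids = :ets.select(table, spec)
# Both records match here, so the event goes out once per matching subscription.
```

Because match specs are evaluated inside the ETS layer, the filter process never has to load and scan the records itself, which is what makes this structure efficient.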

The processes consuming events from Kafka and Kinesis are not the right place for running any filtering or routing logic, as we need them to be as fast as possible. Instead, for each event type there is one process on each node, enabling the consumer processes to quickly hand-off events by looking at only the event type field. Those "filter" processes own their event-type specific ETS table. For any given event, they can use their ETS table to obtain the list of processes to send the events to.

                                                               +
                                                      Node A   |   Node B
                                                               |
                                                               |
                                                               |
                        +                                      |                            +
                        |                                      |                            |
                        | events                               |                            | events
                        |                                      |                            |
                        |                                      |                            |
              +---------v----------+                           |                  +---------v----------+
              |                    |                           |                  |                    |
              |   Kafka Consumer   |                           |                  |   Kafka Consumer   |
              |                    |                           |                  |                    |
              +---+-------------+--+                           |                  +---+-------------+--+
                  |             |                              |                      |             |
                  |             |                              |                      |             |
   foo.bar events |             | foo.baz events               |       foo.bar events |             | foo.baz events
                  |             |                              |                      |             |
                  |             |                              |                      |             |
+-----------------v---+     +---v-----------------+            |    +-----------------v---+     +---v-----------------+
|                     |     |                     |            |    |                     |     |                     |
|  Filter             |     |  Filter             |            |    |  Filter             |     |  Filter             |
|  eventType=foo.bar  |     |  eventType=foo.baz  |            |    |  eventType=foo.bar  |     |  eventType=foo.baz  |
|                     |     |                     |            |    |                     |     |                     |
+---------------------+     +----+-------------+--+            |    +---------------------+     +---+-------------+---+
                                 |             |               |                                    |             |
                                 |             |               |                                    |             |
                                 |             |               |                                    |             |
                            foo.bar events that|       <----------------------------------------------------------+
                            satisfy the connections'           |                                    |
                            subscription constraints   <--------------------------------------------+
                                 |             |               |
                                 |             |               |       A connection subscribes to all filters (periodically),
                      +----------v---+     +---v----------+    |       using the filters' process group. For incoming events,
                      |              |     |              |    |       the filter processes check against all subscription
                      |  WebSocket   |     |     SSE      |    |       constraints and forward the events that match to the
                      |  connection  |     |  connection  |    |       respective connection processes (using the pids stored
                      |              |     |              |    |       in the filter's ETS table).
                      +--------------+     +--------------+    +

Processes, process groups and lifecycles:

  • Consumer processes (Kafka and Kinesis)
    • permanent
  • Filter processes
    • The consumer processes have to start filter processes on demand, on their respective node.
    • Filter processes stop themselves after not receiving messages for some time.
    • Filter processes join process groups, such that for each event type there is one such group.
  • Connection processes
    • are tied to the connection itself
  • Subscription entries in the filters' ETS table..
    • ..are created and refreshed periodically by the connection process, which sends the request to all filter processes in the event-type group. The HTTP call that creates the subscription does not directly call a filter process, but instead informs the connection process itself of the new subscription, which in turn registers with the respective filter processes.
    • ..have a per record time-to-live, used to keep the data current. If a connection process dies, the subscription records will no longer be refreshed and get removed eventually.

Connection, Authentication & Authorization

In RIG 1.x, a valid JWT was required in order for a frontend to be able to establish a connection. Starting with RIG 2.0, this will no longer be the case:

  • Any device may establish a WebSocket or SSE connection.
  • If the connection request carries a valid JWT in the Authorization header, the JWT-based subscriptions are set up automatically.[*] Otherwise, no subscriptions are set up.
  • Using the subscriptions endpoint, a frontend can add subscriptions to an existing connection. If the request carries a valid JWT in the Authorization header, the JWT-based subscriptions are set up automatically.[*] Previous JWT-based subscriptions are replaced or removed.

[*] Iterating over the extractors configuration, RIG builds a list of field names by looking for jwt field mappings. In the example above that would be only the "assignee" field with the JSON Pointer /username.

In order to permit or prevent users from creating subscriptions, an administrator can choose one of three options:

  • Anyone can subscribe to any event.
  • Require a valid JWT to subscribe to events.
  • Invoke an external service for subscriptions that are not JWT-based. The service sees the subscription request as sent by the frontend and indicates whether to allow or deny the subscription.

A subscription request, which may contain multiple subscriptions:

{
  "subscriptions": [
    {
      "eventType": "com.github.pull.create",
      "oneOf": [
        { "head_repo": "octocat/Hello-World" },
        { "base_repo": "octocat/Hello-World" }
      ]
    }
  ]
}

In this example, the subscription's constraints are fulfilled when either of the head_repo and base_repo fields matches. If the subscription should only apply when both fields match, it should look like this instead:

{
  "subscriptions": [
    {
      "eventType": "com.github.pull.create",
      "oneOf": [
        { "head_repo": "octocat/Hello-World", "base_repo": "octocat/Hello-World" }
      ]
    }
  ]
}

Allow most settings to be changed without recompilation

Values in config.exs are compiled into the release, so they cannot be changed without recompiling the app. Since Docker is our most important deployment target right now, two Docker-ish ways are possible:

  • having a toml/yaml/json file for configuration that can be mounted into a container,
  • using environment variables.
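The environment-variable approach can be sketched as reading settings at runtime via System.get_env/2 instead of baking them into the release at compile time. The module, variable name, and default are illustrative:

```elixir
defmodule RuntimeConfig do
  # Resolved on every call, so changing the environment and restarting the
  # container picks up the new value without recompiling the release.
  def inbound_port do
    "INBOUND_PORT"
    |> System.get_env("4000")
    |> String.to_integer()
  end
end
```

In contrast, a value placed in config.exs is evaluated when the release is built, which is exactly the limitation this issue describes.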

More helpful logs and responses on missing JWT fields

Actual

A missing jti in the given JWT causes a no function clause matching in RigWeb.Presence.Socket.check_token_not_blacklisted/1 error in the logs, while the client simply sees "unauthorized".

The log message for a missing "roles" field is better, but again there is no informative response towards the client.

Expected

A missing jti in the given JWT causes a log message similar to Missing fields in user-supplied JWT: ["jti", "roles"]. The client sees the same response.
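Computing that list once and reusing it for both the log message and the client response could look like this. The module name is illustrative, and the required fields are taken from the example above:

```elixir
defmodule JwtFieldCheck do
  @required_fields ["jti", "roles"]

  # Returns the required claims that are absent from the decoded token,
  # suitable for interpolation into both the log line and the error body.
  def missing_fields(claims) when is_map(claims) do
    Enum.reject(@required_fields, &Map.has_key?(claims, &1))
  end
end

# JwtFieldCheck.missing_fields(%{"sub" => "alice"}) == ["jti", "roles"]
```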
