event-management-wg's Issues

Data Compatibility

Run a series of tests for data format compatibility, given the following:

Live service operations:

  • Producer may be using an up-level schema
  • Consumer may be using an up-level schema

Wire format:

  • With AVRO
  • Without AVRO
  • Ignore CloudEvents

Document the producer and consumer experiences when working with data at different schema levels, including any error conditions encountered, workarounds, etc.

Make recommendations on our use of AVRO and how strongly we should recommend it as part of best practices / conformance criteria.
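
The up-level/down-level producer and consumer cases can be pre-checked with Avro's built-in compatibility API before running the live-service tests. A minimal sketch, assuming only the Apache Avro library is on the classpath; the JobEvent schemas are illustrative, not part of any agreed format:

import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaPairCompatibility;

public class SchemaCompatCheck {
    public static void main(String[] args) {
        // v1: the "down-level" schema an older consumer might still use (illustrative).
        Schema v1 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"JobEvent\",\"fields\":["
            + "{\"name\":\"jobId\",\"type\":\"string\"}]}");

        // v2: the "up-level" schema a newer producer might write; the added field has
        // a default so data written without it can still be decoded (illustrative).
        Schema v2 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"JobEvent\",\"fields\":["
            + "{\"name\":\"jobId\",\"type\":\"string\"},"
            + "{\"name\":\"phase\",\"type\":\"string\",\"default\":\"UNKNOWN\"}]}");

        // Can a consumer holding v2 (reader) decode data written with v1 (writer)?
        SchemaPairCompatibility upLevelConsumer =
            SchemaCompatibility.checkReaderWriterCompatibility(v2, v1);
        System.out.println("v2 reader vs v1 writer: " + upLevelConsumer.getType());

        // And the reverse: an old consumer reading data written by an up-level producer.
        System.out.println("v1 reader vs v2 writer: "
            + SchemaCompatibility.checkReaderWriterCompatibility(v1, v2).getType());
    }
}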

Producer - Job Events

Create a producer which can track Job events - Submitted, Completed, Phase Changed, etc.

Consider how the z/OS event capture could be re-used as part of a later library creation (#4).

Question:

  • Should we use exits or lower-level programming tools?
    • How do z/OSMF and other mainframe products capture system information under the hood?
  • Can we capture JES events, e.g. via exits?
  • Can we re-use existing proprietary code and open-source it, or should we develop it ourselves?
  • The JES SSI interface is another option for exposing job information.

Granularly (sketched in code after this list):

  • Collector (system events)
  • Transformer - takes system events and prepares them for production
  • Producer -> Kafka
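
A minimal sketch of that breakdown as Java interfaces; every type name here (Collector, Transformer, EventPublisher, RawSystemEvent, JobEvent) is hypothetical and exists only to illustrate the shape of the pipeline:

import java.util.function.Consumer;

// Placeholder payload types, purely illustrative.
record RawSystemEvent(String source, byte[] payload) {}
record JobEvent(String jobId, String jobName, String phase, long timestamp) {}

// Collector: captures raw system events (e.g. from JES exits or the SSI) and hands them on.
interface Collector {
    void start(Consumer<RawSystemEvent> onEvent);
    void stop();
}

// Transformer: turns a raw system event into the record we intend to publish.
interface Transformer {
    JobEvent transform(RawSystemEvent raw);
}

// Producer side: publishes the transformed event to Kafka; topic selection is left to the implementation.
interface EventPublisher extends AutoCloseable {
    void publish(JobEvent event);
}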

Follow-ups:

  • Look into code donations from ISV or IBM for the collection of system events.

Producer - Simple w/ Mock Data in Java

Create a Kafka producer capable of producing events on a configurable set of topics and at configurable time slices.

This producer may be re-used in future performance tests to populate Kafka with data.
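
A minimal sketch of such a producer using the standard Kafka Java client; the bootstrap server, topic names, interval, and payload are placeholders, and real configuration handling is omitted:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MockDataProducer {
    public static void main(String[] args) throws InterruptedException {
        // Illustrative settings; in practice these would come from configuration.
        List<String> topics = List.of("ZOWE.MOCK.TOPIC1", "ZOWE.MOCK.TOPIC2");
        long intervalMs = 1_000;

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            long seq = 0;
            while (true) {
                for (String topic : topics) {
                    String value = "{\"seq\":" + seq + ",\"mock\":true}";
                    producer.send(new ProducerRecord<>(topic, Long.toString(seq), value));
                }
                seq++;
                Thread.sleep(intervalMs);  // one "time slice" per iteration
            }
        }
    }
}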

Producer Library POC

POC a simple producer library which contains APIs or interfaces for capturing z/OS events and publishing them to a Kafka bus. Refactor the existing Jobs producer to use this library.

Consider extensibility - if this library can't capture every type of event, is there a way we can subscribe to events or allow extenders to write their own z/OS event capture interface? (A sketch follows.)
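
One possible extensibility shape is a plain Java service-provider interface that extenders implement and register via META-INF/services; a hedged sketch, with all type names hypothetical:

import java.util.ServiceLoader;
import java.util.function.Consumer;

// Hypothetical extension point: third parties implement this and register the
// implementation via META-INF/services so the library can discover it at runtime.
interface ZosEventSource {
    String eventType();                         // e.g. "JOB", "SMF", "CONSOLE"
    void subscribe(Consumer<byte[]> listener);  // push captured events to the library
}

class EventSourceRegistry {
    // Discover all event sources on the classpath and wire them to one handler.
    static void subscribeAll(Consumer<byte[]> handler) {
        for (ZosEventSource source : ServiceLoader.load(ZosEventSource.class)) {
            source.subscribe(handler);
        }
    }
}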

Wire format - AVRO, cloud events in Java

Using mock events of different sizes and formats (500 bytes, 1 KB, 5 KB, 10 KB, 100 KB), use the Avro and CloudEvents plugins to write events to Kafka and consume them.

Use topics like the following:

  • WIRE.FMT.PLAIN
  • WIRE.FMT.CE
  • WIRE.FMT.AVRO
  • WIRE.FMT.AVRO_CE

Avro schemas should be placed in:

  • WIRE.FMT.AVRO.SCHEMA
  • WIRE.FMT.AVRO_CE.SCHEMA

If possible, measure the performance difference between the different formats, both in terms of throughput and disk usage. Document the user experience when choosing the different formats.

If possible, integrate the Avro and CloudEvent functionality into the producer and consumer libraries, update those issues, or open a new issue to do so.
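
A minimal sketch of wiring in the two serializers, assuming the CloudEvents Java SDK Kafka module and the Confluent Avro serializer are on the classpath. Note that the Confluent serializer expects a schema registry (the URL below is a placeholder), which is a different approach from the WIRE.FMT.*.SCHEMA topics listed above; reconciling the two is part of this investigation:

import java.net.URI;
import java.util.Properties;
import java.util.UUID;
import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;
import io.cloudevents.kafka.CloudEventSerializer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class WireFormatProducers {
    public static void main(String[] args) {
        // --- CloudEvents wire format (WIRE.FMT.CE) ---
        Properties ceProps = new Properties();
        ceProps.put("bootstrap.servers", "localhost:9092");
        ceProps.put("key.serializer", StringSerializer.class.getName());
        ceProps.put("value.serializer", CloudEventSerializer.class.getName());

        CloudEvent event = CloudEventBuilder.v1()
            .withId(UUID.randomUUID().toString())
            .withSource(URI.create("urn:zowe:event-management-wg:mock"))
            .withType("org.zowe.mock.event")
            .withData("application/json", "{\"size\":\"1KB\"}".getBytes())
            .build();

        try (KafkaProducer<String, CloudEvent> producer = new KafkaProducer<>(ceProps)) {
            producer.send(new ProducerRecord<>("WIRE.FMT.CE", event));
        }

        // --- Avro wire format (WIRE.FMT.AVRO), via the Confluent serializer ---
        // Values sent with this configuration would be Avro GenericRecord/SpecificRecord
        // instances; the registry URL is a placeholder assumption.
        Properties avroProps = new Properties();
        avroProps.put("bootstrap.servers", "localhost:9092");
        avroProps.put("key.serializer", StringSerializer.class.getName());
        avroProps.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        avroProps.put("schema.registry.url", "http://localhost:8081");
    }
}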

SSL_SASL SAF Integration

See #20. In addition to that SASL callback, we could provide a SAF verifier (this would only work with Kafka running on Z); a rough sketch follows.
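
A rough sketch of the SAF check inside such a verifier, assuming IBM's com.ibm.os390.security.PlatformUser class (shipped with the z/OS Java SDK) and its documented behavior of returning null on success; the class availability and signature should be confirmed before building on this:

// Rough sketch only; depends on the IBM z/OS Java SDK SAF wrapper classes.
import com.ibm.os390.security.PlatformReturned;
import com.ibm.os390.security.PlatformUser;

public class SafVerifier {
    /** Returns true if SAF accepts the userid/password pair. */
    public static boolean verify(String userid, String password) {
        // Per IBM's documentation, authenticate returns null on success and a
        // PlatformReturned object describing the failure otherwise.
        PlatformReturned result = PlatformUser.authenticate(userid, password);
        return result == null;
    }
}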

Integration: APIML Frontend

Is it possible to have APIML act as a frontend for connections to the Kafka broker?

If so, what are the impacts on:

  • APIs and networking. Can we use Kafka's proprietary network protocol, Kafka's REST API, or both?
  • Security. What are the authentication and authorization impacts? Does this change any operational considerations? (e.g., ignore Kafka ACLs and "trust" anyone with the ZOWE.EVENT.MGMT resource?)
  • Performance. Some performance degradation is expected - how much?
  • Configuration burden. How much more configuration is required, and what is the estimated impact on installers/maintainers?

Install Kafka on Z (Bare-Metal)

Work through an installation of Kafka on Z, without use of a z/OS package manager.

There will be struggles with the Kafka scripts, which rely on bash-specific features.

  1. How much effort would it take to convert the critical-path scripts (start/stop/etc.) to sh?
  2. What other options are there? Rocket's bash port could help, but we don't want a prerequisite on software that requires licensing.

End-to-end: Job Submission Use Case

  • Requires #1 or #2
  • Requires #5
  • Requires either #7 or #9

We should create an end-to-end test scenario where a user submits a real job via the z/OSMF REST API and attempts to retrieve the job completion information from Kafka. A compressed sketch of both halves follows.
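
A sketch under stated assumptions: the z/OSMF host, credentials, JCL, and topic name are placeholders, error handling and TLS trust setup are omitted, and the submit uses the documented PUT /zosmf/restjobs/jobs form with the JCL as the plain-text body:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Base64;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class JobSubmissionE2E {
    public static void main(String[] args) throws Exception {
        // 1. Submit a job through the z/OSMF REST jobs API.
        String jcl = "//MOCKJOB JOB ...\n//STEP1 EXEC PGM=IEFBR14\n";
        HttpRequest submit = HttpRequest.newBuilder()
            .uri(URI.create("https://zosmf.example.com/zosmf/restjobs/jobs"))
            .header("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("USER1:password".getBytes()))
            .header("X-CSRF-ZOSMF-HEADER", "true")
            .header("Content-Type", "text/plain")
            .PUT(HttpRequest.BodyPublishers.ofString(jcl))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(submit, HttpResponse.BodyHandlers.ofString());
        System.out.println("z/OSMF response: " + response.body());

        // 2. Poll Kafka for the completion event (topic name is illustrative).
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "e2e-test");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("ZOWE.SYS1.JOB.EVENTS"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(30));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Job event: " + record.value());
            }
        }
    }
}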

Client - Simple Consumption in Java

Create a simple Kafka client in Java which can consume a configurable set of topics and log/publish some basic consumption statistics (message count per topic, global totals, etc.).
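
A minimal sketch of such a client; the bootstrap server and topic list are placeholders, and stats are simply printed rather than published:

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StatsConsumer {
    public static void main(String[] args) {
        // Illustrative configuration; topics would normally come from a config file.
        List<String> topics = List.of("ZOWE.MOCK.TOPIC1", "ZOWE.MOCK.TOPIC2");

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "stats-consumer");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        Map<String, Long> perTopic = new HashMap<>();
        long global = 0;

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(topics);
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    perTopic.merge(record.topic(), 1L, Long::sum);
                    global++;
                }
                System.out.println("global=" + global + " perTopic=" + perTopic);
            }
        }
    }
}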

Broker Data Exhaustion

Evaluate the behavior of the system under different data exhaustion scenarios and discuss the results with the WG. (The producer settings most relevant to the broker-down cases are sketched after the lists below.)

Broker up scenarios:

  • Topic runs out of space (local)
  • Broker runs out of space (global)

Broker down scenarios:

  • Producer runs out of local space / cache while broker is down
  • Producer goes down while broker is down and messages were pending (expectation: these messages are lost)
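
For the broker-down cases, the standard Java producer buffers records in memory, and its behavior is governed largely by a few client settings; a hedged sketch with default-ish values (illustrative, not recommendations):

import java.util.Properties;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerBufferingConfig {
    static Properties buffering() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Total memory the producer may use to buffer records while the broker is down.
        props.put("buffer.memory", "33554432");          // 32 MB (the client default)
        // How long send() blocks once the buffer is full before throwing.
        props.put("max.block.ms", "60000");
        // Upper bound on how long a record may sit in the buffer (including retries)
        // before it is failed and reported back as an error.
        props.put("delivery.timeout.ms", "120000");
        props.put("retries", String.valueOf(Integer.MAX_VALUE));
        return props;
    }
}

Because this buffer is in-memory only, the "producer goes down while broker is down" scenario is expected to lose any pending records, matching the expectation noted above.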

Webhook Client

Build a simple webhook client which consumes messages from Kafka topics and allows users to configure webhook endpoints to which the content is forwarded. A simple user interface, or no user interface at all - just a configuration file.
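
A minimal sketch of the forwarding loop; the topic-to-endpoint map stands in for the configuration file, all names and URLs are placeholders, and retries/batching are omitted:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class WebhookForwarder {
    public static void main(String[] args) throws Exception {
        // Topic-to-endpoint mapping; in the real client this would come from the config file.
        Map<String, String> webhooks = Map.of(
            "ZOWE.SYS1.JOB.EVENTS", "https://hooks.example.com/job-events");

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "webhook-forwarder");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        HttpClient http = HttpClient.newHttpClient();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.copyOf(webhooks.keySet()));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Forward the raw message body to the endpoint configured for its topic.
                    HttpRequest post = HttpRequest.newBuilder()
                        .uri(URI.create(webhooks.get(record.topic())))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(record.value()))
                        .build();
                    http.send(post, HttpResponse.BodyHandlers.discarding());
                }
            }
        }
    }
}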

Mainframe Security Integrations

Test mainframe security integrations with Kafka. What configuration options do we have? (A client-configuration sketch follows the lists below.)

This should include:

  • Mainframe UIDs/Password authentication
  • Mainframe authorization for Kafka ACLs
  • Certificates stored in Keyrings

We should assume:

  • No use of LDAP
  • For keyrings, Kafka must run on Z
  • For authentication, Kafka may be on or off Z
  • For authorization, Kafka may be on or off Z
  • If Kafka is off Z for authN/authZ cases, we may use z/OSMF and/or APIML for authentication APIs.
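
A hedged sketch of the client-side configuration shape for the userid/password case over SASL_SSL; hostnames, credentials, and the truststore are placeholders, and whether a RACF keyring (JCERACFKS) can replace the file-based truststore is exactly one of the items this issue needs to test:

import java.util.Properties;

public class SecureClientConfig {
    static Properties saslSslClient() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9093");
        props.put("security.protocol", "SASL_SSL");

        // Mainframe userid/password authentication via SASL PLAIN; whether the broker
        // validates these against SAF, z/OSMF, or APIML is the subject of this issue.
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"USER1\" password=\"secret\";");

        // Truststore for the broker certificate. The file-based form below is the
        // standard, known-working shape; keyring-backed stores are to be evaluated.
        props.put("ssl.truststore.location", "/u/user1/kafka.truststore.p12");
        props.put("ssl.truststore.type", "PKCS12");
        props.put("ssl.truststore.password", "changeit");
        return props;
    }
}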

End-to-end: Job Submission with Restrictions

Follows #15.

The same POC should be expanded to consider authorization limitations. The solution should ensure that the user submitting the job is able to retrieve their job information without seeing other jobs running on the system; that is, the user should not gain unauthorized visibility into the system.

Suggestion: use a dynamic topic to send the user an end-of-job notification. Here's an off-the-cuff flow; the agent that manages the authN/authZ check and copies the message to the new topic is left undefined (step 2 is sketched in code below).

  1. User1 -> ZOWE.SYS1.JOB.NOTIFICATION { jobId: '...', jobNumber: '...' } -> Kafka authN/authZ check -> responds { topic: 'ZOWE.SYS1.USER1.JOB.EVENTS' }.
  2. Kafka -> ZOWE.SYS1.USER1.JOB.EVENTS -> consume from ZOWE.SYS1.JOBS, filter where (jobId=, jobNr=), push to the new topic.
  3. User1 -> retrieves the event from ZOWE.SYS1.USER1.JOB.EVENTS.
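
A sketch of the agent in step 2, assuming JSON string payloads; the jobId, topic names, and naive string-contains filter are illustrative, and the authN/authZ check itself is intentionally left out (it is the undefined piece noted above):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class JobEventFilterAgent {
    public static void main(String[] args) {
        // Values that the (undefined) authN/authZ step would have established.
        String jobId = "JOB12345";
        String userTopic = "ZOWE.SYS1.USER1.JOB.EVENTS";

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "job-filter-agent");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("ZOWE.SYS1.JOBS"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Naive filter: copy only this user's job events to their private topic.
                    if (record.value().contains("\"jobId\":\"" + jobId + "\"")) {
                        producer.send(new ProducerRecord<>(userTopic, record.key(), record.value()));
                    }
                }
            }
        }
    }
}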

SSL_SASL APIML Integration

The SASL authentication option in Kafka has a PLAIN operating mode where a username and password are passed to the server-side component and routed to a user-defined callback, so long as the user's JAR is on the classpath (sasl.server.callback.handler.class).

We could use this feature to deliver an authentication workflow that leverages APIML's /auth/ endpoint.
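
A hedged sketch of such a handler against Kafka's AuthenticateCallbackHandler SPI; the APIML login URL/path is an assumption to confirm against the deployed gateway, and production code would also need TLS trust setup, JSON escaping, and result caching:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;

public class ApimlPlainCallbackHandler implements AuthenticateCallbackHandler {
    // Placeholder; the exact APIML gateway login path is an assumption to verify.
    private static final String LOGIN_URL = "https://apiml.example.com/gateway/api/v1/auth/login";
    private final HttpClient http = HttpClient.newHttpClient();

    @Override
    public void configure(Map<String, ?> configs, String saslMechanism,
                          List<AppConfigurationEntry> jaasConfigEntries) {
        // No configuration needed for this sketch.
    }

    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        String username = null;
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                // Kafka's PLAIN mechanism supplies the client-sent username as the default name.
                username = ((NameCallback) callback).getDefaultName();
            } else if (callback instanceof PlainAuthenticateCallback) {
                PlainAuthenticateCallback plain = (PlainAuthenticateCallback) callback;
                plain.authenticated(loginSucceeds(username, new String(plain.password())));
            } else {
                throw new UnsupportedCallbackException(callback);
            }
        }
    }

    private boolean loginSucceeds(String username, String password) {
        try {
            String body = "{\"username\":\"" + username + "\",\"password\":\"" + password + "\"}";
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(LOGIN_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
            HttpResponse<Void> response = http.send(request, HttpResponse.BodyHandlers.discarding());
            return response.statusCode() >= 200 && response.statusCode() < 300;
        } catch (Exception e) {
            return false;   // treat any failure to reach APIML as "not authenticated"
        }
    }

    @Override
    public void close() {
    }
}

On the broker this would be registered per listener (listener.name.<listener>.plain.sasl.server.callback.handler.class) with the JAR on the broker classpath, as the issue notes.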

Kafka Monitoring Solutions

From WG Notes:

How are we monitoring Kafka?

  • Existing infrastructure / MF products, or something distributed-friendly (Grafana/etc.)
  • Can we provide guidance on how to monitor and operate Kafka, either with MF products or new products (training/education)?

Document what metrics and integrations are available from Kafka (JMX, OpenTelemetry, etc.). How would customers aggregate, visualize, and exhaust these metrics? A small JMX probe is sketched below.
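
As a concrete starting point, broker metrics are reachable over plain JMX when the broker is started with remote JMX enabled (e.g. JMX_PORT set); a minimal sketch reading one standard broker metric, with host and port as placeholders:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KafkaJmxProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://kafka.example.com:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            // A standard broker metric: incoming message rate across all topics.
            ObjectName messagesIn = new ObjectName(
                "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            Object rate = mbeans.getAttribute(messagesIn, "OneMinuteRate");
            System.out.println("MessagesInPerSec (1-min rate): " + rate);
        }
    }
}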

Install Kafka on z/Linux + K8S

Install a Kafka instance on our z/Linux + K8S infrastructure.

The Kubernetes infrastructure may be out of date and require an upgrade; the networking challenges are unknown.
