conduktor / kafka-security-manager

Manage your Kafka ACL at scale

Home Page: https://hub.docker.com/r/simplesteph/kafka-security-manager

License: MIT License

Shell 0.61% Scala 99.39%
kafka acl ksm zookeeper docker acl-changes broker security

kafka-security-manager's Introduction


An open-source project by Conduktor.io

This project is sponsored by conduktor.io. Conduktor provides a platform that helps central teams define global security and governance controls, while giving developers access to a central Kafka console and a self-serve experience. It also helps visualize your ACLs (attached to Service Accounts) in your Apache Kafka cluster!


Kafka Security Manager

Kafka Security Manager (KSM) allows you to manage your Kafka ACLs at scale by leveraging an external source as the source of truth. Zookeeper just contains a copy of the ACLs instead of being the source.

Kafka Security Manager Diagram

There are several advantages to this:

  • Kafka administration is done outside of Kafka: anyone with access to the external ACL source can manage Kafka Security
  • Prevents intruders: if someone were to add ACLs to Kafka using the CLI, they would be reverted by KSM within 10 seconds.
  • Full auditability: KSM provides the guarantee that ACLs in Kafka are those in the external source. Additionally, if for example your external source is GitHub, then PRs, PR approvals and commit history provide a full audit log of who did what to the ACLs, and when
  • Notifications: KSM can notify external channels (such as Slack) in order to give feedback to admins when ACLs are changed. This is particularly useful to ensure that 1) ACL changes are correctly applied 2) ACLs are not changed in Kafka directly.
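Conceptually, each refresh reduces to a set comparison between the ACLs in the external source and those currently in Kafka. Below is a minimal Python sketch of that reconciliation idea (KSM itself is written in Scala; the tuple shape here simply mirrors the CSV columns described later, and is an illustration, not the actual implementation):

```python
# Each ACL is modeled as a tuple mirroring KSM's CSV columns:
# (principal, resource_type, pattern_type, resource_name, operation, permission, host)

def reconcile(source_acls, kafka_acls):
    """Return the ACLs to add to and remove from Kafka so that
    Kafka ends up matching the external source of truth."""
    source, kafka = set(source_acls), set(kafka_acls)
    to_add = source - kafka      # present in the source but missing from Kafka
    to_remove = kafka - source   # e.g. added manually via the CLI -> reverted
    return to_add, to_remove

source = {("User:alice", "Topic", "LITERAL", "foo", "Read", "Allow", "*")}
kafka = {("User:eve", "Topic", "LITERAL", "foo", "Write", "Allow", "*")}
to_add, to_remove = reconcile(source, kafka)
```

This set-difference view is why manual CLI changes get reverted: anything in Kafka but not in the source lands in the removal set on the next refresh.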

Your role is to ensure that Kafka Security Manager is never down, as it is now a custodian of your ACLs.

Parsers

CSV

The CSV parser is the default parser, and also the fallback in case no other parser matches.

This is a sample CSV ACL file:

KafkaPrincipal,ResourceType,PatternType,ResourceName,Operation,PermissionType,Host
User:alice,Topic,LITERAL,foo,Read,Allow,*
User:bob,Group,PREFIXED,bar,Write,Deny,12.34.56.78
User:peter,Cluster,LITERAL,kafka-cluster,Create,Allow,*
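The format above is plain CSV with a header row, so it can be read with any standard CSV reader. A quick Python sketch of loading it into records (illustrative only; KSM's actual parser is Scala-based):

```python
import csv
import io

SAMPLE = """\
KafkaPrincipal,ResourceType,PatternType,ResourceName,Operation,PermissionType,Host
User:alice,Topic,LITERAL,foo,Read,Allow,*
User:bob,Group,PREFIXED,bar,Write,Deny,12.34.56.78
User:peter,Cluster,LITERAL,kafka-cluster,Create,Allow,*
"""

# DictReader keys each row by the header names, giving one dict per ACL.
acls = list(csv.DictReader(io.StringIO(SAMPLE)))
```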

Important Note: As of KSM 0.4, a new column PatternType has been added to match the changes that happened in Kafka 2.0. This enables KSM to manage LITERAL and PREFIXED ACLs. See #28

YAML

The YAML parser loads ACLs from YAML instead. To activate it, just provide files with a yml or yaml extension.

An example YAML permission file might be:

users:
  alice:
    topics:
      foo:
        - Read
      bar*:
        - Produce
  bob:
    groups:
      bar:
        - Write,Deny,12.34.56.78
      bob*:
        - All
    transactional_ids:
      bar-*:
        - All
  peter:
    clusters:
      kafka-cluster:
        - Create

The YAML parser handles prefix patterns automatically: simply append a star to your resource name.

It also supports some helpers to simplify setup:

  • Consume (Read, Describe)
  • Produce (Write, Describe, Create, Cluster Create)
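A sketch of how such a YAML tree could be flattened into ACL rows, covering the trailing-star prefix rule and the Consume/Produce helpers listed above (illustrative Python only, not KSM's Scala implementation; the extra cluster-level Create implied by Produce is omitted for brevity):

```python
# Helper aliases as documented: Consume = Read + Describe,
# Produce = Write + Describe + Create (cluster-level Create omitted here).
HELPERS = {
    "Consume": ["Read", "Describe"],
    "Produce": ["Write", "Describe", "Create"],
}

def flatten(users):
    """Turn the nested users/resources structure into
    (principal, resource_type, pattern_type, resource_name, operation) rows."""
    rows = []
    for user, resources in users.items():
        for rtype, entries in resources.items():
            for name, ops in entries.items():
                # A trailing '*' in the resource name means a PREFIXED pattern.
                pattern = "PREFIXED" if name.endswith("*") else "LITERAL"
                clean = name.rstrip("*")
                for op in ops:
                    for expanded in HELPERS.get(op, [op]):
                        rows.append((f"User:{user}", rtype, pattern, clean, expanded))
    return rows

users = {"alice": {"topics": {"foo": ["Consume"], "bar*": ["Write"]}}}
rows = flatten(users)
```

Here `foo: [Consume]` expands to two rows (Read and Describe on the literal topic foo), while `bar*: [Write]` becomes a single PREFIXED row.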

Sources

Current sources shipping with KSM include:

  • File
  • GitHub
  • GitLab (using Personal Auth Tokens)
  • BitBucket
  • Amazon S3
  • Build your own (and contribute back!)

Building

sbt clean test
sbt universal:stage

Fat JAR:

sbt clean assembly

This is a Scala app, and therefore should run on the JVM like any other application.

Artifacts

By using the JAR dependency, you can create your own SourceAcl.

RELEASES artifacts are deployed to Maven Central:

build.sbt (see Maven Central for the latest version)

libraryDependencies += "io.conduktor" %% "kafka-security-manager" % "version"

Configuration

Security configuration - Zookeeper client

Make sure the app is using a property file and launch options similar to your broker so that it can

  1. Authenticate to Zookeeper using secure credentials (usually done with JAAS)
  2. Apply Zookeeper ACL if enabled

Kafka Security Manager does not connect to Kafka.

Sample run for a typical SASL Setup:

target/universal/stage/bin/kafka-security-manager -Djava.security.auth.login.config=conf/jaas.conf

Where conf/jaas.conf contains something like:

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/secrets/zkclient1.keytab"
    principal="zkclient/[email protected]";
};

Security configuration - Admin client

When the configured authorizer class is io.conduktor.ksm.compat.AdminClientAuthorizer, kafka-security-manager uses the Kafka admin client instead of a direct Zookeeper connection. A configuration example would be:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};

Configuration file

For the list of configurations, see application.conf. You can customise them using environment variables, or create your own application.conf file and pass it at runtime:

target/universal/stage/bin/kafka-security-manager -Dconfig.file=path/to/config-file.conf

Overall, this project uses the Lightbend Config library for configuration.

Environment variables

The default configurations can be overwritten using the following environment variables:

  • KSM_READONLY=false: enables KSM to synchronize from an External ACL source. The default value is true, which prevents KSM from altering ACLs in Zookeeper

  • KSM_EXTRACT_ENABLE=true: enable extract mode (get all the ACLs from Kafka formatted as a CSV or YAML)

  • KSM_EXTRACT_FORMAT=csv: selects which format to extract the ACLs with (defaults to csv, supports also yaml)

  • KSM_REFRESH_FREQUENCY_MS=10000: how often to check for changes in ACLs in Kafka and in the Source. 10000 ms by default. If it's set to 0 or a negative value, for example -1, then KSM executes ACL synchronization just once and exits

  • KSM_NUM_FAILED_REFRESHES_BEFORE_NOTIFICATION=1: how many times the refresh of a Source needs to fail (e.g. HTTP timeouts) before a notification is sent. Any value less than or equal to 1 will notify on every failure to refresh.

  • AUTHORIZER_CLASS: authorizer class for ACL operations. Default is SimpleAclAuthorizer, configured with

    • AUTHORIZER_ZOOKEEPER_CONNECT: zookeeper connection string
    • AUTHORIZER_ZOOKEEPER_SET_ACL=true (default false): set to true if you want your ACLs in Zookeeper to be secure (you probably do want them to be secure) - when in doubt set as the same as your Kafka brokers.

    No-zookeeper authorizer class on top of Kafka Admin Client is bundled with KSM as io.conduktor.ksm.compat.AdminClientAuthorizer, configured with options for org.apache.kafka.clients.admin.AdminClientConfig:

    • ADMIN_CLIENT_ID - client.id, an id to pass to the server when making requests (for tracing/audit purposes), default kafka-security-manager. The properties below are not provided to the client unless the corresponding environment variable is set:
    • ADMIN_CLIENT_BOOTSTRAP_SERVERS - bootstrap.servers
    • ADMIN_CLIENT_SECURITY_PROTOCOL - security.protocol
    • ADMIN_CLIENT_SASL_JAAS_CONFIG - sasl.jaas.config - alternative to system jaas configuration
    • ADMIN_CLIENT_SASL_MECHANISM - sasl.mechanism
    • ADMIN_CLIENT_SSL_KEY_PASSWORD - ssl.key.password
    • ADMIN_CLIENT_SSL_KEYSTORE_LOCATION - ssl.keystore.location
    • ADMIN_CLIENT_SSL_KEYSTORE_PASSWORD - ssl.keystore.password
    • ADMIN_CLIENT_SSL_TRUSTSTORE_LOCATION - ssl.truststore.location
    • ADMIN_CLIENT_SSL_TRUSTSTORE_PASSWORD - ssl.truststore.password
  • SOURCE_CLASS: Source class. Valid values include

    • io.conduktor.ksm.source.NoSourceAcl (default): No source for the ACLs. Only use with KSM_READONLY=true
    • io.conduktor.ksm.source.FileSourceAcl: get the ACL source from a file on disk. Good for POC
    • io.conduktor.ksm.source.GitHubSourceAcl: get the ACL from GitHub. Great to get started quickly and store the ACL securely under version control.
    • io.conduktor.ksm.source.GitLabSourceAcl: get the ACL from GitLab using personal access tokens. Great to get started quickly and store the ACL securely under version control.
      • SOURCE_GITLAB_REPOID GitLab project id
      • SOURCE_GITLAB_FILEPATH Path to the ACL file in GitLab project
      • SOURCE_GITLAB_BRANCH Git Branch name
      • SOURCE_GITLAB_HOSTNAME GitLab Hostname
      • SOURCE_GITLAB_ACCESSTOKEN GitLab Personal Access Token. See Personal access tokens to authenticate with the GitLab API.
    • io.conduktor.ksm.source.S3SourceAcl: get the ACL from S3. Good for when you have an S3 bucket managed by Terraform or CloudFormation. This requires region, bucketname and objectkey. See Access credentials for credentials management.
      • SOURCE_S3_REGION AWS S3 Region
      • SOURCE_S3_BUCKETNAME AWS S3 Bucket name
      • SOURCE_S3_OBJECTKEY The Object containing the ACL CSV in S3
    • io.conduktor.ksm.source.BitbucketServerSourceAcl: get the ACL from Bitbucket Server using the v1 REST API. Great if you have private repos in Bitbucket.
    • io.conduktor.ksm.source.BitbucketCloudSourceAcl: get the ACL from Bitbucket Cloud using the Bitbucket Cloud REST API v2.
    • io.conduktor.ksm.source.HttpSourceAcl: get the ACL from an HTTP endpoint. You can enable Google OAuth OIDC Token Authentication.
      • SOURCE_HTTP_URL HTTP endpoint to retrieve ACL data.
      • SOURCE_HTTP_METHOD HTTP Method. Default is GET.
      • SOURCE_HTTP_AUTH_TYPE To enable Http Authentication. googleiam for Google IAM. Default is NONE.
      • SOURCE_HTTP_AUTH_GOOGLEIAM_SERVICE_ACCOUNT Google Service Account name.
      • SOURCE_HTTP_AUTH_GOOGLEIAM_SERVICE_ACCOUNT_KEY Google Service Account Key, encoded as a JSON string. If the key isn't configured, it will try to get the token from the environment.
      • SOURCE_HTTP_AUTH_GOOGLEIAM_TARGET_AUDIENCE Google Target Audience for token authentication.
  • NOTIFICATION_CLASS: Class for notification in case of ACL changes in Kafka.

    • io.conduktor.ksm.notification.ConsoleNotification (default): Print changes to the console. Useful for logging
    • io.conduktor.ksm.notification.SlackNotification: Send notifications to a Slack channel (useful for devops / admin team)
  • ACL_PARSER_CSV_DELIMITER: Change the delimiter character for the CSV parser (useful when principal names themselves contain commas, e.g. SSL distinguished names)
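The KSM_NUM_FAILED_REFRESHES_BEFORE_NOTIFICATION behavior described above can be sketched as a simple consecutive-failure counter (an illustrative Python sketch, not the actual Scala implementation; whether the counter resets after notifying is an assumption here):

```python
class RefreshMonitor:
    """Notify only once `threshold` consecutive source-refresh failures
    have occurred. Values <= 1 notify on every failure, matching the
    documented default behavior."""

    def __init__(self, threshold=1):
        self.threshold = max(1, threshold)
        self.failures = 0
        self.notifications = []

    def on_refresh(self, succeeded):
        if succeeded:
            self.failures = 0  # a successful refresh resets the streak
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.notifications.append(f"source refresh failed {self.failures} time(s)")

# Two failures, a success (resetting the streak), then three failures:
monitor = RefreshMonitor(threshold=3)
for ok in [False, False, True, False, False, False]:
    monitor.on_refresh(ok)
```

With a threshold of 3, only the final run of three consecutive failures triggers a notification; the earlier two failures are absorbed by the intervening success.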

Running on Docker

Building the image

./build-docker.sh

Docker Hub

Alternatively, you can get the automatically built Docker images on Docker Hub

Running

(read above for configuration details)

Then pass them to docker run, for example (in EXTRACT mode):

docker run -it -e AUTHORIZER_ZOOKEEPER_CONNECT="zookeeper-url:2181" -e KSM_EXTRACT_ENABLE=true \
            conduktor/kafka-security-manager:latest

Any of the environment variables described above can be passed to the docker run command with the -e option.

Example

docker-compose up -d
docker-compose logs kafka-security-manager
# view the logs, have fun changing example/acls.csv
docker-compose down

For full usage of the docker-compose file see kafka-security-manager

Extracting ACLs

You can initially extract all your existing ACLs from Kafka by running the program with the config ksm.extract.enable=true or export KSM_EXTRACT_ENABLE=true

Output should look like:

[2018-03-06 21:49:44,704] INFO Running ACL Extraction mode (ExtractAcl)
[2018-03-06 21:49:44,704] INFO Getting ACLs from Kafka (ExtractAcl)
[2018-03-06 21:49:44,704] INFO Closing Authorizer (ExtractAcl)

KafkaPrincipal,ResourceType,PatternType,ResourceName,Operation,PermissionType,Host
User:bob,Group,PREFIXED,bar,Write,Deny,12.34.56.78
User:alice,Topic,LITERAL,foo,Read,Allow,*
User:peter,Cluster,LITERAL,kafka-cluster,Create,Allow,*

You can then place this CSV anywhere and use it as your source of truth.
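Since extract mode can emit CSV while your source of truth may be YAML, a small conversion sketch between the two shapes may help illustrate their relationship (a hypothetical helper, not part of KSM; Deny and host-restricted rows, which need the compound `Op,Permission,Host` syntax, are skipped here for simplicity):

```python
import csv
import io

# Map CSV ResourceType values to the YAML section names shown earlier.
TYPE_KEYS = {"Topic": "topics", "Group": "groups", "Cluster": "clusters"}

def csv_to_yaml_tree(csv_text):
    """Group extracted CSV rows into the nested users/resources shape
    used by the YAML parser. Only simple Allow-from-anywhere rows are
    handled in this sketch."""
    tree = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["PermissionType"] != "Allow" or row["Host"] != "*":
            continue  # would need the compound syntax, not covered here
        user = row["KafkaPrincipal"].removeprefix("User:")
        # PREFIXED patterns are rendered back as a trailing star.
        name = row["ResourceName"] + ("*" if row["PatternType"] == "PREFIXED" else "")
        key = TYPE_KEYS[row["ResourceType"]]
        tree.setdefault(user, {}).setdefault(key, {}).setdefault(name, []).append(row["Operation"])
    return tree

EXTRACTED = """\
KafkaPrincipal,ResourceType,PatternType,ResourceName,Operation,PermissionType,Host
User:alice,Topic,LITERAL,foo,Read,Allow,*
User:peter,Cluster,LITERAL,kafka-cluster,Create,Allow,*
"""
tree = csv_to_yaml_tree(EXTRACTED)
```

Dumping `tree` with a YAML library would produce a structure equivalent to the YAML permission file format shown in the Parsers section.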

Compatibility

| KSM Version | Kafka Version | Notes |
|---|---|---|
| 1.1.0-SNAPSHOT | 2.8.x | updated log4j dependency |
| 1.0.1 | 2.8.x | updated log4j dependency |
| 0.11.0 | 2.5.x | renamed packages to io.conduktor; breaking change on extract config name |
| 0.10.0 | 2.5.x | YAML support; added configurable number of failed refreshes before notification |
| 0.9 | 2.5.x | upgrade to Kafka 2.5.x |
| 0.8 | 2.3.1 | added a "run once" mode |
| 0.7 | 2.1.1 | Kafka-based ACL refresher available (no Zookeeper dependency) |
| 0.6 | 2.0.0 | important stability fixes - please update |
| 0.5 | 2.0.0 | |
| 0.4 | 2.0.0 | important change: added column 'PatternType' in CSV |
| 0.3 | 1.1.x | |
| 0.2 | 1.1.x | upgrade to 0.3 recommended |
| 0.1 | 1.0.x | might work for earlier versions |

Contributing

You can break the API / configs as long as we haven't reached 1.0. Each API break would introduce a new version number.

PRs are welcome, especially with the following:

  • Code refactoring / cleanup / renaming
  • External Sources for ACLs (JDBC, Microsoft AD, etc...)
  • Notification Channels (Email, etc...)

Please open an issue before opening a PR.

Release process

  • update version in [build.sbt] (make sure to use format vX.Y.Z)
  • update [README.md] and [CHANGELOG.md]
  • push the tag (eg: v0.10.0)
  • update version in [build.sbt] to the next snapshot version

That's it!

kafka-security-manager's People

Contributors

andriyfedorov, brunodomenici, devshawn, diablo2050, gurinderu, infogulch, isaackuang, ivan-klass, ivanbmstu, j1king, jmcristobal2, kentso, kgupta26, michellewen1516, ntrp, o0oxid, radishthehut, satish-centrifuge, sderosiaux, shayaantx, silverbadge, simplesteph, thealmightygrant, trobert, vultron81, zhangzimou


kafka-security-manager's Issues

Add No-Op Source (Read Only KSM)

A No-Op source would allow a user to decide not to alter Zookeeper ACLs based on an external source. This would allow a read-only version of KSM to run.

Add a UI

PR most welcome.

Features

  • I'm thinking right now a read only API to view the list of ACLs in a searchable table.

Implementation

To make things fun (and always keep on learning):

Tasks

  • gRPC Endpoints
  • rest gateway
  • ui

Unable to run once

Hi there,

Application raises java.lang.IllegalArgumentException when refresh.frequency.ms (or KSM_REFRESH_FREQUENCY_MS) is set to "-1".

Error:

Exception in thread "main" java.lang.IllegalArgumentException
	at java.util.concurrent.ScheduledThreadPoolExecutor.scheduleAtFixedRate(ScheduledThreadPoolExecutor.java:565)
	at com.github.simplesteph.ksm.KafkaSecurityManager$.delayedEndpoint$com$github$simplesteph$ksm$KafkaSecurityManager$1(KafkaSecurityManager.scala:55)
	at com.github.simplesteph.ksm.KafkaSecurityManager$delayedInit$body.apply(KafkaSecurityManager.scala:13)
	at scala.Function0.apply$mcV$sp(Function0.scala:39)
	at scala.Function0.apply$mcV$sp$(Function0.scala:39)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
	at scala.App.$anonfun$main$1$adapted(App.scala:80)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at scala.App.main(App.scala:80)
	at scala.App.main$(App.scala:78)
	at com.github.simplesteph.ksm.KafkaSecurityManager$.main(KafkaSecurityManager.scala:13)
	at com.github.simplesteph.ksm.KafkaSecurityManager.main(KafkaSecurityManager.scala)

Error while reading file from S3 bucket

I am using com.github.simplesteph.ksm.source.S3SourceAcl source class to read CSV file from the bucket but I am getting the following exception

Exception

WARN Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use. (com.amazonaws.services.s3.internal.S3AbortableInputStream)
kafka-security-manager_1  | [2020-03-03 19:57:46,948] ERROR unexpected exception (com.github.simplesteph.ksm.KafkaSecurityManager$)
kafka-security-manager_1  | java.util.concurrent.ExecutionException: java.io.IOException: Attempted read on closed stream.
kafka-security-manager_1  |     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
kafka-security-manager_1  |     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
kafka-security-manager_1  |     at com.github.simplesteph.ksm.KafkaSecurityManager$.delayedEndpoint$com$github$simplesteph$ksm$KafkaSecurityManager$1(KafkaSecurityManager.scala:79)
kafka-security-manager_1  |     at com.github.simplesteph.ksm.KafkaSecurityManager$delayedInit$body.apply(KafkaSecurityManager.scala:18)
kafka-security-manager_1  |     at scala.Function0.apply$mcV$sp(Function0.scala:39)
kafka-security-manager_1  |     at scala.Function0.apply$mcV$sp$(Function0.scala:39)
kafka-security-manager_1  |     at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
kafka-security-manager_1  |     at scala.App.$anonfun$main$1$adapted(App.scala:80)
kafka-security-manager_1  |     at scala.collection.immutable.List.foreach(List.scala:392)
kafka-security-manager_1  |     at scala.App.main(App.scala:80)
kafka-security-manager_1  |     at scala.App.main$(App.scala:78)
kafka-security-manager_1  |     at com.github.simplesteph.ksm.KafkaSecurityManager$.main(KafkaSecurityManager.scala:18)
kafka-security-manager_1  |     at com.github.simplesteph.ksm.KafkaSecurityManager.main(KafkaSecurityManager.scala)
kafka-security-manager_1  | Caused by: java.io.IOException: Attempted read on closed stream.
kafka-security-manager_1  |     at org.apache.http.conn.EofSensorInputStream.isReadAllowed(EofSensorInputStream.java:107)
kafka-security-manager_1  |     at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:133)
kafka-security-manager_1  |     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
kafka-security-manager_1  |     at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
kafka-security-manager_1  |     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
kafka-security-manager_1  |     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
kafka-security-manager_1  |     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
kafka-security-manager_1  |     at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
kafka-security-manager_1  |     at java.security.DigestInputStream.read(DigestInputStream.java:161)
kafka-security-manager_1  |     at com.amazonaws.services.s3.internal.DigestValidationInputStream.read(DigestValidationInputStream.java:59)
kafka-security-manager_1  |     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
kafka-security-manager_1  |     at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
kafka-security-manager_1  |     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
kafka-security-manager_1  |     at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
kafka-security-manager_1  |     at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
kafka-security-manager_1  |     at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
kafka-security-manager_1  |     at java.io.InputStreamReader.read(InputStreamReader.java:184)
kafka-security-manager_1  |     at java.io.BufferedReader.read1(BufferedReader.java:210)
kafka-security-manager_1  |     at java.io.BufferedReader.read(BufferedReader.java:286)
kafka-security-manager_1  |     at java.io.BufferedReader.fill(BufferedReader.java:161)
kafka-security-manager_1  |     at java.io.BufferedReader.read(BufferedReader.java:182)
kafka-security-manager_1  |     at com.github.tototoshi.csv.ReaderLineReader.readLineWithTerminator(ReaderLineReader.java:21)
kafka-security-manager_1  |     at com.github.tototoshi.csv.CSVReader.parseNext$1(CSVReader.scala:33)
kafka-security-manager_1  |     at com.github.tototoshi.csv.CSVReader.readNext(CSVReader.scala:51)
kafka-security-manager_1  |     at com.github.tototoshi.csv.CSVReader.allWithOrderedHeaders(CSVReader.scala:101)
kafka-security-manager_1  |     at com.github.tototoshi.csv.CSVReader.allWithHeaders(CSVReader.scala:97)
kafka-security-manager_1  |     at com.github.simplesteph.ksm.parser.CsvAclParser.aclsFromReader(CsvAclParser.scala:79)
kafka-security-manager_1  |     at com.github.simplesteph.ksm.AclSynchronizer.run(AclSynchronizer.scala:98)
kafka-security-manager_1  |     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
kafka-security-manager_1  |     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
kafka-security-manager_1  |     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
kafka-security-manager_1  |     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
kafka-security-manager_1  |     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
kafka-security-manager_1  |     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
kafka-security-manager_1  |     at java.lang.Thread.run(Thread.java:748)

Configuration

      SOURCE_CLASS: "com.github.simplesteph.ksm.source.S3SourceAcl"
      SOURCE_S3_REGION: "us-east-1"
      SOURCE_S3_BUCKETNAME: "bucket-name"
      SOURCE_S3_OBJECTKEY: "acls.csv"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}

Support HA mode

Would be great to have a KSM master and a standby for HA mode. PR welcome!

[Slack Notifications] Use new API

Incoming Webhooks are a simple way to post messages from apps into Slack. These integrations lack newer features and they will be deprecated and possibly removed in the future.

In the current implementation, the following properties are defined in the configuration file, but username, icon and channel are ignored.

  • webhook
  • username
  • icon
  • channel

The recommended and latest API is the following:
https://api.slack.com/methods/chat.postMessage

Slack Notification Exception Handling

If KSM fails to connect to Slack the KSM app itself terminates seemingly due to a lack of exception handling around sending Slack notifications. Please can some exception handling be added?

[2019-08-27 08:44:14,600] ERROR unexpected exception (com.github.simplesteph.ksm.KafkaSecurityManager$) java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: connect timed out

Customizing log4j.properties

Hi, I'm using docker-compose for my kafka-security-manager.
I wanted to set the logLevel to ERROR, but I couldn't. Is there any way to customize that?

Idea: support for connecting to multiple Zookeeper nodes

Current design requires specifying a single zookeeper node that the Kafka Security Manager service will connect to. It'd be nice if the configuration allowed for a list and rotation logic to handle when a zookeeper node was unavailable due to maintenance/outage.

Alternatively, how about a configuration where multiple Kafka Security Managers are run; one on each zookeeper node? And use Zookeeper to perform leader election. Only the leader instance of Kafka Security Manager would apply ACLs; others would sit idle. Any ZK/KSM node could be taken offline at any time and a new leader would be elected - there'd be no disruption of either ZooKeeper or Kafka Security Manager service.

Stop KSM after one run

Hi Stephane,

is there any way how to tell to KSM to stop after the first processing/execution of the ACL?
We want to use an external trigger to start KSM (e.g. a GitHub merge to master or an S3 file upload) and then stop it in order to save costs.

Would it be possible to add this feature, if it doesn't exist yet?

Thanks and regards,
Petros

Bitbucket Cloud config required despite not being used

Our KSM deployments use com.github.simplesteph.ksm.source.BitbucketServerSourceAcl as we have a private Bitbucket deployment.

However, now our KSM apps running on :latest are failing to startup due to:

Exception in thread "main" com.typesafe.config.ConfigException$UnresolvedSubstitution: application.conf @ jar:file:/opt/docker/lib/com.github.simplesteph.ksm.kafka-security-manager-0.9-SNAPSHOT.jar!/application.conf: 115: Could not resolve substitution to a value: ${SOURCE_BITBUCKET_CLOUD_AUTH_PASSWORD}

Seems this config is weirdly required even though we don't use the cloud source ACL type.

I guess due to https://github.com/simplesteph/kafka-security-manager/blob/master/src/main/resources/application.conf#L114-L115

All other ENV checks are optional other than these.

DelegationToken not support on Kafka 1.x.x

Hi,
I found a problem using kafka-security-manager release latest or v03-release .
As soon as you start the container, even using the default configuration that makes no changes in Zookeeper, kafka-security-manager automatically creates a key in /kafka-acl/DelegationToken. Afterwards, kafka-acls complains about the DelegationToken key in Zookeeper. As far as I know, the DelegationToken feature can only be used in Kafka 2. Have you also run into this problem?
Thank you,
Frank

S3AclSource

Thumbs up if you want this feature, PR welcome

Unable to use basic auth

I get this exception when trying to use basic auth:

[2019-07-02 14:58:26,349] ERROR Refreshing the source failed (AclSynchronizer)
java.io.UnsupportedEncodingException: UTF ​-8
	at java.lang.StringCoding.encode(StringCoding.java:341)
	at java.lang.String.getBytes(String.java:918)
	at com.github.simplesteph.ksm.source.GitHubSourceAcl.$anonfun$refresh$1(GitHubSourceAcl.scala:58)
	at scala.Option.foreach(Option.scala:257)
	at com.github.simplesteph.ksm.source.GitHubSourceAcl.refresh(GitHubSourceAcl.scala:57)
	at com.github.simplesteph.ksm.AclSynchronizer.$anonfun$run$1(AclSynchronizer.scala:76)
	at scala.util.Try$.apply(Try.scala:209)
	at com.github.simplesteph.ksm.AclSynchronizer.run(AclSynchronizer.scala:76)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Note the space between "UTF" and "-8". I believe there are extraneous characters here:

https://github.com/simplesteph/kafka-security-manager/blob/15f802df04f0837275e67c349e39db5ea348256d/src/main/scala/com/github/simplesteph/ksm/source/GitHubSourceAcl.scala#L58

That line looks reasonable in GitHub, but when I open it in vi it looks like this:

val basicB64 = Base64.getEncoder.encodeToString(basic.getBytes("UTF<200c><200b>-8"))

I believe that string should just be "UTF-8".

Secure Rest API

Currently REST API is open, it would be nice to have some basic authentication.

Problem with GRPC Gateway Server

On the latest docker image with v2.0.0 support I get the following error when I try and hit the GRPC gateway REST endpoint:

java.lang.NoClassDefFoundError: javax/activation/MimetypesFileTypeMap
	at grpcgateway.handlers.SwaggerHandler.<init>(SwaggerHandler.scala:86)
	at grpcgateway.server.GrpcGatewayServerBuilder$$anon$1.initChannel(GrpcGatewayServerBuilder.scala:36)
	at grpcgateway.server.GrpcGatewayServerBuilder$$anon$1.initChannel(GrpcGatewayServerBuilder.scala:32)
	at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113)
	at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:105)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:617)
	at io.netty.channel.DefaultChannelPipeline.access$000(DefaultChannelPipeline.java:46)
	at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1467)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1141)
	at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:666)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:510)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:423)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:482)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:844)

This is due to the docker image using "FROM openjdk:latest" as its base which results in it using OpenJDK 10.x. I worked around this issue by putting "--add-modules java.activation" as an option to Java in my docker start script. You might want to consider pinning the version of the base docker image you are using to a known working version. Additionally you might want to consider using the "-slim" version of the OpenJDK base docker image since the current one you are using is almost 500 MB.

YAML support, Directory Support and compound syntax

Hi, as part of a project we implemented support for the YAML format and multi-file support. The YAML support allows writing a more human-readable permission config:

users:
  C=DE,O=org,OU=WEB,CN=t1.example.com,L=Stuttgart,ST=reg:
    topics:
      Topic1:
       - Read
      TopicPref_*:
        - All
    groups:
      team1-app1-*:
        - Read
        - Describe

In the YAML example you can also see the compound syntax where you define All to give all permissions on a topic.

The multi-file support allows organizing the permissions in different files, to make separation of concerns easier.

The implementation was done around version 0.5, so I would need to invest some time to create a valid pull request. Are you interested in merging these kinds of features? Otherwise I will spare the time ^^'

Authenticate to zookeeper with client certificate

Is it possible to authenticate to zookeeper with client certificate?

If possible, it is not immediately obvious from the examples how to do this.

If not possible yet, it would be a very good feature.

Could not login

This is file jaas.conf

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/secrets/zkclient1.keytab"
    principal="zkclient/[email protected]";
};

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};

And I get this error

[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=10.10.12.94:2181,10.10.12.95:2181,10.10.12.96:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@30dae81
[main-SendThread(kafka-3:2181)] WARN org.apache.zookeeper.SaslClientCallbackHandler - Could not login: the Client is being asked for a password, but the ZooKeeper Client code does not currently support obtaining a password from the user. Make sure that the Client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the Client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper Client using the command 'kinit <princ>' (where <princ> is the name of the Client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this Client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock.
[main-SendThread(kafka-3:2181)] WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
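One common cause of "the Client is being asked for a password" (an assumption from similar setups, not confirmed by the log alone) is that the JVM running KSM never loads the JAAS file at all, so the Krb5LoginModule and its keytab are never used. A sketch of passing it explicitly; the jar name is an assumption:

```
# Hand the JAAS file to the JVM that runs KSM; without this system
# property the Client section above is never found.
java -Djava.security.auth.login.config=/etc/kafka/secrets/jaas.conf \
     -jar kafka-security-manager.jar
```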

Publish Maven Artifact

Publishing would make life easier for people who want to get the jars and extend the project with their own proprietary sources, without needing to fork the project.

Question: CSV multiple operations possible?

Hi,

is it possible to write multiple operations on the same ACL line in the CSV format, as in the JSON version?

Example
A normal example with Write, Describe and Read operations:
"User:CN=Test",Topic,ClientTopic,Write,Allow,*
"User:CN=Test",Topic,ClientTopic,Describe,Allow,*
"User:CN=Test",Topic,ClientTopic,Read,Allow,*

Is it possible to write multiple operations on the same line?
"User:CN=Test",Topic,ClientTopic,Write;Describe;Read,Allow,*

JSON example where it is possible:

{
  "version": 1,
  "acls": [
    {
      "principals": ["user:alice", "group:kafka-devs"],
      "permissionType": "ALLOW",
      "operations": ["READ", "WRITE"],
      "hosts": ["host1", "host2"]
    }
  ]
}
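If this feature were added, the expansion step itself is small; a minimal sketch in Python (the ";" separator and the column order are taken from the examples above; the function name and everything else are assumptions, not KSM's actual parser):

```python
import csv
import io

def expand_operations(csv_text):
    """Expand rows whose operation column packs several operations
    separated by ';' into one row per operation."""
    rows = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue
        principal, res_type, res_name, ops, perm, host = row
        for op in ops.split(";"):
            rows.append([principal, res_type, res_name, op, perm, host])
    return rows

compact = '"User:CN=Test",Topic,ClientTopic,Write;Describe;Read,Allow,*'
for r in expand_operations(compact):
    print(",".join(r))  # one expanded ACL row per operation
```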

(Edit by Stephane): Thumbs up or down depending on whether you want this feature.

Unable to use self-made CallbackHandler classes

I am running Kafka in a Kubernetes environment. I have implemented a SASL callback handler for Kafka to log in and verify users via Keycloak. Sadly, I can't add the login handler to the KSM container, since there is no way to set the Kafka property "sasl.login.callback.handler.class". Moreover, I would have to extend the Java classpath to add my jar file containing the callback handlers, which is also not possible at the moment, as far as I know. Enabling these features would be awesome!

Failing to connect to Bitbucket as the source

Hi Stephane,

I always get the error "WARN Too many invalid password attempts. Log in at https://id.atlassian.com/ to restore access." when I use Bitbucket as my source. The username is just my username, and I created an app password as the password. I am using Bitbucket Cloud. Do you have any idea what my problem is? Thanks

Support Kafka 2 ACL format

Kafka 2 now supports prefixed and literal resource patterns (KAFKA-6841). KSM currently works with the Kafka 2 default of literal. It would be great for KSM to support both, using a new resource-pattern column in the input CSV file, so that topic ACLs can be suffixed with wildcards when the resource pattern is prefixed.
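A sketch of what the CSV could look like with the proposed extra column; the column name, position, and header row are assumptions for illustration:

```
KafkaPrincipal,ResourceType,PatternType,ResourceName,Operation,PermissionType,Host
User:alice,Topic,PREFIXED,orders-,Read,Allow,*
User:alice,Topic,LITERAL,payments,Write,Allow,*
```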

Docker Build pushed to Docker Hub

Travis experts, PRs welcome. Requirements:

  • Every build on master is pushed to :latest
  • Every tag / release is pushed to :tag
  • Convenient way of passing creds from Travis CI to Docker Hub (?)
  • Documentation (README.md)

Running without Docker?

Could you please document the invocation without Docker?
Either my java -jar or java -cp attempts are wrong, or I'm having a Scala version problem...
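For what it's worth, a typical invocation for an sbt/Scala project of this kind looks like the following; the assembly jar path is an assumption, not taken from this repo:

```
# Build a fat jar, then run it with plain java; no Scala installation is
# needed, since the Scala library is bundled into the assembly jar.
sbt clean assembly
java -jar target/scala-2.12/kafka-security-manager-assembly.jar
```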

Thanks!
