
vdesabou / kafka-docker-playground

🐳✨ Fully automated Apache Kafka® and Confluent Docker based examples // 👷‍♂️ Easily build examples or reproduction models

Home Page: https://kafka-docker-playground.io

License: MIT License

Dockerfile 0.84% Shell 87.53% Java 7.11% Python 1.53% PLpgSQL 0.08% C# 0.53% Go 0.16% JavaScript 0.46% Ruby 0.04% Makefile 0.01% Jupyter Notebook 1.72%
kafka confluent-kafka kafka-connectors kafka-connect confluent-cloud confluent-platform confluent

kafka-docker-playground's People

Contributors

92twinturboz, anliksim, ashwinpankaj, cdpop, danielpetisme, dependabot[bot], framiere, gushai, imgbotapp, ismailmayat, javabrett, jocelyndrean, kpatelatwork, ksilin, maaarv, mcascallares, michaelhussey, mosheblumbergx, mukkachaitanya, nathannam, ncliang, rajdangwal, rkr714, rrao714, samishaikh, schm1tz1, smithjohntaylor, tanish0019, vdesabou, yashmayya

kafka-docker-playground's Issues

Azure tests failing

Error is:

The directory object quota limit for the Principal has been exceeded

connect-azure-data-lake-storage-gen2-sink is failing

[2020-06-18 17:27:54,080] ERROR WorkerSinkTask{id=azure-datalake-gen2-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: Unable to Create DataLakeGen2StorageOutputStream for accountName 'playgroundtravis4141' key '/datalake_topic/partition=0/datalake_topic+0+0000000000.avro' (org.apache.kafka.connect.runtime.WorkerSinkTask)
org.apache.kafka.connect.errors.ConnectException: Unable to Create DataLakeGen2StorageOutputStream for accountName 'playgroundtravis4141' key '/datalake_topic/partition=0/datalake_topic+0+0000000000.avro'
	at io.confluent.connect.azure.datalake.gen2.storage.DataLakeGen2StorageOutputStream.<init>(DataLakeGen2StorageOutputStream.java:59)
	at io.confluent.connect.azure.datalake.gen2.storage.AzureDataLakeGen2Storage.create(AzureDataLakeGen2Storage.java:155)
	at io.confluent.connect.azure.storage.format.avro.AvroRecordWriterProvider$1.write(AvroRecordWriterProvider.java:59)
	at io.confluent.connect.azure.storage.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:380)
	at io.confluent.connect.azure.storage.TopicPartitionWriter.checkRotationOrAppend(TopicPartitionWriter.java:209)
	at io.confluent.connect.azure.storage.TopicPartitionWriter.executeState(TopicPartitionWriter.java:172)
	at io.confluent.connect.azure.storage.TopicPartitionWriter.write(TopicPartitionWriter.java:136)
	at io.confluent.connect.azure.datalake.gen2.AzureDataLakeGen2SinkTask.put(AzureDataLakeGen2SinkTask.java:146)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: PUT https://playgroundtravis4141.dfs.core.windows.net/topics/datalake_topic/partition%3D0/datalake_topic%2B0%2B0000000000.avro?resource=file&timeout=90
StatusCode=403
StatusDescription=This request is not authorized to perform this operation using this permission.
ErrorCode=AuthorizationPermissionMismatch
ErrorMessage=This request is not authorized to perform this operation using this permission.
RequestId:3846932b-201f-010c-3795-45b672000000
Time:2020-06-18T17:27:53.7707274Z
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:134)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.createPath(AbfsClient.java:243)
	at io.confluent.connect.azure.datalake.gen2.storage.DataLakeGen2StorageOutputStream.<init>(DataLakeGen2StorageOutputStream.java:55)
	... 18 more
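The 403 AuthorizationPermissionMismatch indicates the service principal lacks data-plane rights on the storage account (an RBAC problem, not a bad key). A possible fix sketch, assuming the principal's object id is in AZURE_SP_OBJECT_ID and using placeholder subscription/resource-group names:

```shell
# Grant the connector's service principal data-plane access on the ADLS Gen2
# account from the log. Subscription, resource group, and principal id below
# are placeholders; only runs when the az CLI is present.
ACCOUNT="playgroundtravis4141"
SCOPE="/subscriptions/${AZURE_SUBSCRIPTION_ID:-<subscription-id>}/resourceGroups/${AZURE_RESOURCE_GROUP:-<resource-group>}/providers/Microsoft.Storage/storageAccounts/${ACCOUNT}"
if command -v az >/dev/null 2>&1; then
  az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee "${AZURE_SP_OBJECT_ID:-<sp-object-id>}" \
    --scope "$SCOPE"
fi
```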

Kerberos tests failing with ubi8 image

Current Kerberos tests are broken with the UBI8 image (5.4.0-1-ubi8):

Could not configure server because SASL configuration did not allow the ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: No password provided

SNMP source connector is broken

09:01:25 Creating SNMP Source connector
{
  "error_code": 500,
  "message": "Failed to find any class that implements Connector and which name matches io.confluent.connect.snmp.SnmpSourceConnector, available connectors are: PluginDesc{klass=class io.confluent.connect.snmp.SnmpTrapSourceConnector, name='io.confluent.connect.snmp.SnmpTrapSourceConnector', version='1.1.2', encodedVersion=1.1.2, type=source, typeName='source', location='file:/usr/share/confluent-hub-components/confluentinc-kafka-connect-snmp/lib/'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='5.5.1-ce', encodedVersion=5.5.1-ce, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='5.5.1-ce', encodedVersion=5.5.1-ce, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorCheckpointConnector, name='org.apache.kafka.connect.mirror.MirrorCheckpointConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorHeartbeatConnector, name='org.apache.kafka.connect.mirror.MirrorHeartbeatConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorSourceConnector, name='org.apache.kafka.connect.mirror.MirrorSourceConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockConnector, name='org.apache.kafka.connect.tools.MockConnector', version='5.5.1-ce', encodedVersion=5.5.1-ce, type=connector, typeName='connector', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSinkConnector, name='org.apache.kafka.connect.tools.MockSinkConnector', 
version='5.5.1-ce', encodedVersion=5.5.1-ce, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSourceConnector, name='org.apache.kafka.connect.tools.MockSourceConnector', version='5.5.1-ce', encodedVersion=5.5.1-ce, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.SchemaSourceConnector, name='org.apache.kafka.connect.tools.SchemaSourceConnector', version='5.5.1-ce', encodedVersion=5.5.1-ce, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSinkConnector, name='org.apache.kafka.connect.tools.VerifiableSinkConnector', version='5.5.1-ce', encodedVersion=5.5.1-ce, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSourceConnector, name='org.apache.kafka.connect.tools.VerifiableSourceConnector', version='5.5.1-ce', encodedVersion=5.5.1-ce, type=source, typeName='source', location='classpath'}"
}
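The worker's own error message lists the plugin that is actually available: the class is SnmpTrapSourceConnector, not SnmpSourceConnector. A corrected creation call might look like the sketch below; only connector.class is taken from the log, the other properties are placeholders for the script's real config, and the call is guarded so it only fires when the REST endpoint answers:

```shell
# Point connector.class at the class the worker reports as available
# (SnmpTrapSourceConnector); remaining properties are placeholders.
CONFIG='{
  "connector.class": "io.confluent.connect.snmp.SnmpTrapSourceConnector",
  "tasks.max": "1"
}'
# Only attempt the PUT when a Connect worker is reachable on 8083.
if curl -s -o /dev/null --max-time 2 http://localhost:8083/; then
  curl -s -X PUT -H "Content-Type: application/json" \
       --data "$CONFIG" \
       http://localhost:8083/connectors/snmp-source/config
fi
```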

connect/connect-azure-search-sink is failing

15:01:03 Creating Azure Search service
Command group 'search' is in preview. It may be changed/removed in a future release.
Creating search service 'playgroundtravis5202' would exceed quota of sku 'free' for subscription 'xxx'. RequestId: xxxx

NPE with GCS Source connector

To investigate:

[2020-05-19 12:30:11,574] ERROR WorkerSourceTask{id=GCSSourceConnector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
java.lang.NullPointerException
        at io.confluent.connect.cloud.storage.source.StorageSourceTask.start(StorageSourceTask.java:73)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:213)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2020-05-19 12:30:11,577] ERROR WorkerSourceTask{id=GCSSourceConnector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
[2020-05-19 12:30:11,577] INFO Stopping source connector task with assigned folders null (io.confluent.connect.cloud.storage.source.StorageSourceTask)

Travis: ccloud demo failing with error "Cannot complete request because of a conflicting operation (e.g. worker rebalance)"

17:14:58 Creating MySQL source connector
Unable to find image 'imega/jq:latest' locally
latest: Pulling from imega/jq
7962620bcd88: Pulling fs layer
7962620bcd88: Download complete
7962620bcd88: Pull complete
Digest: sha256:eb2bb4e9299e05761a12a1028f311c7c020c015fe82a535cc6ed4990d6f0e5b2
Status: Downloaded newer image for imega/jq:latest
{
  "error_code": 409,
  "message": "Cannot complete request because of a conflicting operation (e.g. worker rebalance)"
}
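The 409 is transient: it happens when the connector creation races a Connect worker rebalance, and usually clears within seconds. A minimal retry wrapper in the playground's shell style (the function name and retry count are illustrative):

```shell
# Retry a command until it succeeds or max attempts is reached; useful around
# connector-creation curls, which can hit a 409 during a worker rebalance.
# Use curl -f so an HTTP error status becomes a non-zero exit code.
retry() {
  max="$1"; shift
  attempt=1
  until "$@"; do
    [ "$attempt" -ge "$max" ] && return 1
    attempt=$((attempt + 1))
    sleep 1
  done
}
# Example: retry 5 curl -s -f -X PUT ... http://localhost:8083/connectors/mysql-source/config
```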

UBI8: curl: (35) error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small

18:33:29 ####################################################
18:33:29 Executing replicator-2way-ssl.sh in dir connect/connect-replicator
18:33:29 ####################################################
18:33:29 Generate keys and certificates used for SSL
Removing network plaintext_default
Network plaintext_default not found.
Creating network "plaintext_default" with the default driver
Creating zookeeper ... 
Creating client    ... 
Creating kdc       ... 
Creating broker2   ... 
Creating schema-registry ... 
18:34:06 Waiting up to 240 seconds for Kafka Connect connect to start
18:36:20 Connect connect has started!
18:36:20 ########
18:36:20 ##  SSL authentication
18:36:20 ########
18:36:20 Sending messages to topic test-topic-ssl
>[2020-03-12 18:36:30,525] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test-topic-ssl=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
>>>>>>>>>>18:36:31 Creating Confluent Replicator connector with SSL authentication
curl: (35) error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small
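UBI8 ships OpenSSL 1.1.1, which rejects DH keys shorter than 2048 bits at its default security level, hence "dh key too small". A workaround sketch (the proper fix is regenerating the certificates with 2048-bit keys) is to relax the security level for the affected curl calls:

```shell
# Wrap curl so the call accepts small DH keys: on an OpenSSL-built curl,
# SECLEVEL=1 relaxes OpenSSL 1.1.1's minimum key-size checks for this one
# invocation. The wrapper name is illustrative.
ssl_curl() {
  curl --ciphers 'DEFAULT@SECLEVEL=1' "$@"
}
```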

connect-splunk-sink is failing

Using image splunk/splunk:latest, I get:

[2020-06-15 14:28:11,862] ERROR encountered io exception (com.splunk.hecclient.Indexer)
java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:210)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
        at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
        at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
        at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
        at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
        at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
        at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at com.splunk.hecclient.Indexer.executeHttpRequest(Indexer.java:140)
        at com.splunk.hecclient.Indexer.send(Indexer.java:120)
        at com.splunk.hecclient.HecChannel.send(HecChannel.java:61)
        at com.splunk.hecclient.LoadBalancer.send(LoadBalancer.java:56)
        at com.splunk.hecclient.Hec.send(Hec.java:234)
        at com.splunk.kafka.connect.SplunkSinkTask.send(SplunkSinkTask.java:285)
        at com.splunk.kafka.connect.SplunkSinkTask.sendEvents(SplunkSinkTask.java:277)
        at com.splunk.kafka.connect.SplunkSinkTask.handleEvent(SplunkSinkTask.java:253)
        at com.splunk.kafka.connect.SplunkSinkTask.put(SplunkSinkTask.java:103)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2020-06-15 14:28:11,874] INFO add 1 failed batches (com.splunk.kafka.connect.SplunkSinkTask)
[2020-06-15 14:28:11,874] INFO total failed batches 1 (com.splunk.kafka.connect.SplunkSinkTask)
[2020-06-15 14:29:11,877] WARN attempting to resend batch fc6c009f-bdd5-4b39-930b-7c8d64467e28 with 3 events, this is attempt 1 out of -1 for this batch  (com.splunk.kafka.connect.SplunkSinkTask)
[2020-06-15 14:29:11,885] ERROR encountered io exception (com.splunk.hecclient.Indexer)
java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:210)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
        at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
        at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
        at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
        at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
        at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
        at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at com.splunk.hecclient.Indexer.executeHttpRequest(Indexer.java:140)
        at com.splunk.hecclient.Indexer.send(Indexer.java:120)
        at com.splunk.hecclient.HecChannel.send(HecChannel.java:61)
        at com.splunk.hecclient.LoadBalancer.send(LoadBalancer.java:56)
        at com.splunk.hecclient.Hec.send(Hec.java:234)
        at com.splunk.kafka.connect.SplunkSinkTask.send(SplunkSinkTask.java:285)
        at com.splunk.kafka.connect.SplunkSinkTask.handleFailedBatches(SplunkSinkTask.java:141)
        at com.splunk.kafka.connect.SplunkSinkTask.put(SplunkSinkTask.java:74)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

This is caused by this change https://splunk.github.io/docker-splunk/CHANGELOG.html#8021:

All HEC related variables were revised to follow a nested dict format in default.yml, i.e. splunk.hec_enableSSL is now splunk.hec.ssl. See the Provision HEC example in the docs.
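Following that changelog note, a default.yml for newer splunk/splunk images would use the nested layout roughly like this (keys per the changelog; values illustrative):

```yaml
# default.yml sketch for splunk/splunk >= 8.0.2.1: HEC settings nested
# under splunk.hec instead of flat splunk.hec_* variables
splunk:
  hec:
    enable: True
    ssl: False
    token: 00000000-0000-0000-0000-000000000000
```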

Blob storage issue

[2020-03-10 13:03:05,140] ERROR WorkerSinkTask{id=azure-blob-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: org.apache.kafka.connect.errors.ConnectException: java.lang.reflect.InvocationTargetException
        at io.confluent.connect.azure.blob.AzureBlobStorageSinkTask.start(AzureBlobStorageSinkTask.java:125)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:301)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: java.lang.reflect.InvocationTargetException
        at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:55)
        at io.confluent.connect.azure.blob.AzureBlobStorageSinkTask.start(AzureBlobStorageSinkTask.java:104)
        ... 9 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at io.confluent.connect.storage.StorageFactory.createStorage(StorageFactory.java:50)
        ... 10 more
Caused by: java.lang.NoClassDefFoundError: io/netty/handler/codec/http/QueryStringDecoder
        at com.microsoft.azure.storage.blob.SharedKeyCredentials.getCanonicalizedResource(SharedKeyCredentials.java:171)
        at com.microsoft.azure.storage.blob.SharedKeyCredentials.buildStringToSign(SharedKeyCredentials.java:105)
        at com.microsoft.azure.storage.blob.SharedKeyCredentials.access$0(SharedKeyCredentials.java:85)
        at com.microsoft.azure.storage.blob.SharedKeyCredentials$SharedKeyCredentialsPolicy.sendAsync(SharedKeyCredentials.java:264)
        at com.microsoft.azure.storage.blob.RequestRetryFactory$RequestRetryPolicy.attemptAsync(RequestRetryFactory.java:155)
        at com.microsoft.azure.storage.blob.RequestRetryFactory$RequestRetryPolicy.sendAsync(RequestRetryFactory.java:75)
        at com.microsoft.azure.storage.blob.RequestIDFactory$RequestIDPolicy.sendAsync(RequestIDFactory.java:60)
        at com.microsoft.azure.storage.blob.TelemetryFactory$TelemetryPolicy.sendAsync(TelemetryFactory.java:70)
        at com.microsoft.rest.v2.http.HttpPipeline.sendRequestAsync(HttpPipeline.java:67)
        at com.microsoft.rest.v2.RestProxy.sendHttpRequestAsync(RestProxy.java:99)
        at com.microsoft.rest.v2.RestProxy.invoke(RestProxy.java:126)
        at com.microsoft.azure.storage.blob.$Proxy41.create(Unknown Source)
        at com.microsoft.azure.storage.blob.GeneratedContainers.createWithRestResponseAsync(GeneratedContainers.java:215)
        at com.microsoft.azure.storage.blob.ContainerURL.create(ContainerURL.java:208)
        at com.microsoft.azure.storage.blob.ContainerURL.create(ContainerURL.java:178)
        at io.confluent.connect.azure.blob.storage.AzureBlobStorage.createContainerIfNotExists(AzureBlobStorage.java:91)
        at io.confluent.connect.azure.blob.storage.AzureBlobStorage.<init>(AzureBlobStorage.java:78)
        ... 15 more
Caused by: java.lang.ClassNotFoundException: io.netty.handler.codec.http.QueryStringDecoder
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 32 more

See https://travis-ci.com/vdesabou/kafka-docker-playground/jobs/295621528

Update cloud-exporter examples with config file

Example:

config:
  http:
    baseurl: https://api.telemetry.confluent.cloud/
    timeout: 60
  listener: 0.0.0.0:2112
  noTimestamp: false
  delay: 60
  granularity: PT1M
rules:
  - clusters:
      - ${CCLOUD_API_CLUSTER_ID} # changeme
    metrics:
      - io.confluent.kafka.server/active_connection_count
      - io.confluent.kafka.server/request_count
      - io.confluent.kafka.server/partition_count
      - io.confluent.kafka.server/successful_authentication_count
    labels:
      - cluster_id
      - type
  - clusters:
      - ${CCLOUD_API_CLUSTER_ID} # changeme
    metrics:
      - io.confluent.kafka.server/received_bytes
      - io.confluent.kafka.server/sent_bytes
      - io.confluent.kafka.server/received_records
      - io.confluent.kafka.server/sent_records
      - io.confluent.kafka.server/retained_bytes
    labels:
      - cluster_id
      - topic
      - type
      - partition
    topics:
      - sample
      - test-topic

Connect image fails to build due to Kudu2

When running Connect tools, the build fails because kafka-connect-kudu2 no longer exists on Confluent Hub:

Step 61/61 : RUN confluent-hub install --no-prompt confluentinc/kafka-connect-kudu2:latest
 ---> Running in cbae011ab9c3
Running in a "--no-prompt" mode
Unable to find a component

Error: Component not found, specify either valid name from Confluent Hub in format: <owner>/<name>:<version:latest> or path to a local file
ERROR: Service 'connect' failed to build: The command '/bin/sh -c confluent-hub install --no-prompt confluentinc/kafka-connect-kudu2:latest' returned a non-zero code: 1
Waiting up to 240 seconds for Kafka Connect to start

ERROR: The logs in connect container do not show 'Finished starting connectors and tasks' after 240 seconds. Please troubleshoot with 'docker container ps' and 'docker container logs'.

We need to remove this line from Dockerfile-Connector-Hub:
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-kudu2:latest
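The removal can be scripted; a sketch that assumes the Dockerfile sits in the current directory (adjust DOCKERFILE as needed):

```shell
# Strip the obsolete kudu2 install step from the Dockerfile; -i.bak keeps a
# backup and works with both GNU and BSD sed.
DOCKERFILE="${DOCKERFILE:-Dockerfile-Connector-Hub}"
if [ -f "$DOCKERFILE" ]; then
  sed -i.bak '/confluentinc\/kafka-connect-kudu2/d' "$DOCKERFILE"
fi
```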

Replicator with Connect: NoClassDefFoundError: io/confluent/connect/replicator/util/RegexValidator with 5.4.0

Getting:

[2020-01-23 14:29:30,125] WARN /connectors/replicate-europe-to-us/config (org.eclipse.jetty.server.HttpChannel)
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/confluent/connect/replicator/util/RegexValidator
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:408)
	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
	at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
	at org.eclipse.jetty.server.Server.handle(Server.java:494)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/confluent/connect/replicator/util/RegexValidator
	at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:254)
	at org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:236)
	at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:436)
	at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:261)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
	... 28 more
Caused by: java.lang.NoClassDefFoundError: io/confluent/connect/replicator/util/RegexValidator
	at io.confluent.connect.replicator.ReplicatorSourceConnectorConfig.baseConfigDef(ReplicatorSourceConnectorConfig.java:156)
	at io.confluent.connect.replicator.ReplicatorSourceConnectorConfig.<clinit>(ReplicatorSourceConnectorConfig.java:123)
	at io.confluent.connect.replicator.ReplicatorSourceConnector.config(ReplicatorSourceConnector.java:161)
	at org.apache.kafka.connect.connector.Connector.validate(Connector.java:129)
	at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:313)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:745)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:742)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:342)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.lang.ClassNotFoundException: io.confluent.connect.replicator.util.RegexValidator
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 14 more

Missing connector examples

Here is the list of missing connector examples compared to this list:

Fix problem with gstat

When running ccloud, we get:

./ccloud-generate-env-vars.sh: line 35: gstat: command not found

We would need to install GNU coreutils, which provides gstat:

brew install coreutils

Do not use docker to curl for creating connectors

docker exec connect \
     curl -X PUT \
     -H "Content-Type: application/json" \
     --data '{
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
          "tasks.max": "1",
          "topics": "mysql-application",
          "key.ignore": "true",
          "connection.url": "http://elasticsearch:9200",
          "type.name": "kafka-connect",
          "name": "elasticsearch-sink"
          }' \
     http://localhost:8083/connectors/elasticsearch-sink/config | jq .

It's overkill to use a docker command for that; curl should be available everywhere.
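The same request can be issued directly from the host; a minimal sketch, assuming the playground's docker-compose publishes Connect's REST port 8083 on localhost (as its examples generally do):

```shell
# Run curl directly from the host instead of docker exec'ing into the
# connect container. Assumption: the Connect REST port 8083 is mapped
# to localhost by the compose file.
CONFIG='{
  "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
  "tasks.max": "1",
  "topics": "mysql-application",
  "key.ignore": "true",
  "connection.url": "http://elasticsearch:9200",
  "type.name": "kafka-connect",
  "name": "elasticsearch-sink"
}'

curl -s -X PUT \
     -H "Content-Type: application/json" \
     --data "$CONFIG" \
     http://localhost:8083/connectors/elasticsearch-sink/config | jq . || true
# ('|| true' only so this sketch is safe to run when the playground is down)
```

PUT against /connectors/<name>/config is also idempotent, so the same command works for both creating and updating the connector.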

connect-azure-blob-storage-source failing

[2020-05-18 15:43:02,577] ERROR WorkerConnector{id=azure-blob-source} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
java.lang.LinkageError: loader constraint violation: when resolving method "com.fasterxml.jackson.databind.module.SimpleModule.<init>(Lcom/fasterxml/jackson/core/Version;)V" the class loader (instance of org/apache/kafka/connect/runtime/isolation/PluginClassLoader) of the current class, com/fasterxml/jackson/datatype/jsr310/JavaTimeModule, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for the method's defining class, com/fasterxml/jackson/databind/module/SimpleModule, have different Class objects for the type com/fasterxml/jackson/core/Version used in the signature
	at com.fasterxml.jackson.datatype.jsr310.JavaTimeModule.<init>(JavaTimeModule.java:113)
	at com.azure.core.util.serializer.JacksonAdapter.initializeObjectMapper(JacksonAdapter.java:261)
	at com.azure.core.util.serializer.JacksonAdapter.<init>(JacksonAdapter.java:75)
	at com.azure.core.util.serializer.JacksonAdapter.createDefaultSerializerAdapter(JacksonAdapter.java:109)
	at com.azure.core.http.rest.RestProxy.createDefaultSerializer(RestProxy.java:613)
	at com.azure.core.http.rest.RestProxy.create(RestProxy.java:665)
	at com.azure.storage.blob.implementation.ServicesImpl.<init>(ServicesImpl.java:58)
	at com.azure.storage.blob.implementation.AzureBlobStorageImpl.<init>(AzureBlobStorageImpl.java:216)
	at com.azure.storage.blob.implementation.AzureBlobStorageBuilder.build(AzureBlobStorageBuilder.java:93)
	at com.azure.storage.blob.BlobServiceAsyncClient.<init>(BlobServiceAsyncClient.java:110)
	at com.azure.storage.blob.BlobServiceClientBuilder.buildAsyncClient(BlobServiceClientBuilder.java:109)
	at com.azure.storage.blob.BlobServiceClientBuilder.buildClient(BlobServiceClientBuilder.java:82)
	at io.confluent.connect.azure.blob.storage.AzureBlobSourceStorage.<init>(AzureBlobSourceStorage.java:95)
	at io.confluent.connect.azure.blob.storage.AzureBlobStorageSourceConnector.createStorage(AzureBlobStorageSourceConnector.java:51)
	at io.confluent.connect.cloud.storage.source.StorageSourceConnector.start(StorageSourceConnector.java:64)
	at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
	at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
	at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
	at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:264)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:126)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1206)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1202)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

connect-cassandra-sink is failing

[2020-06-12 07:50:53,201] ERROR WorkerConnector{id=cassandra-sink} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureFallback
	at com.datastax.driver.core.GuavaCompatibility.selectImplementation(GuavaCompatibility.java:136)
	at com.datastax.driver.core.GuavaCompatibility.<clinit>(GuavaCompatibility.java:52)
	at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:71)
	at io.confluent.connect.cassandra.CassandraSessionFactoryImpl.newSession(CassandraSessionFactoryImpl.java:44)
	at io.confluent.connect.cassandra.CassandraSinkConnector.doStart(CassandraSinkConnector.java:52)
	at io.confluent.connect.cassandra.CassandraSinkConnector.start(CassandraSinkConnector.java:45)
	at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
	at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
	at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
	at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:264)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:126)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1206)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1202)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.google.common.util.concurrent.FutureFallback
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 18 more

Connect container is not starting up in time

When running azure-data-lake-storage-gen2.sh, the process fails to start Kafka Connect:

Creating kdc             ... done
Creating schema-registry ... done
Creating zookeeper       ... done
Creating client          ... done
Creating broker          ... done
Creating connect         ... done
Creating control-center  ... done
14:03:33 Waiting up to 480 seconds for Kafka Connect connect to start
ERROR: The logs in connect container do not show 'Finished starting connectors and tasks' after 480 seconds. Please troubleshoot with 'docker container ps' and 'docker container logs'.

The `connect` container stops within about 20 seconds, but the logs do not show any errors or exceptions.

Travis intermittent error

To investigate: https://travis-ci.com/vdesabou/kafka-docker-playground/jobs/277226008

16:57:14 ####################################################
16:57:14 schema-registry logs
===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=schema-registry
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=5
CONFLUENT_MINOR_VERSION=3
CONFLUENT_MVN_LABEL=
CONFLUENT_PATCH_VERSION=2
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.3.2
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=schema-registry
KAFKA_VERSION=5.3.2
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SCHEMA_REGISTRY_HOST_NAME=schema-registry
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=broker:9092
SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ...
===> Check if Kafka is healthy ...
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 
	bootstrap.servers = [broker:9092]
	client.dns.lookup = default
	client.id = 
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS

[main] WARN org.apache.kafka.clients.ClientUtils - Couldn't resolve server broker:9092 from bootstrap.servers as DNS resolution failed for broker
[main] ERROR io.confluent.admin.utils.cli.KafkaReadyCommand - Error while running kafka-ready.
org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
	at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:407)
	at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:65)
	at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:138)
	at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:88)
	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
	at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:367)
	... 3 more

connect-aws-kinesis-source failing

[2020-05-18 16:37:56,135] ERROR WorkerConnector{id=kinesis-source} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
java.lang.LinkageError: loader constraint violation: when resolving overridden method "io.confluent.command.record.Command$CommandKey.getDefaultInstanceForType()Lcom/google/protobuf/Message;" the class loader (instance of org/apache/kafka/connect/runtime/isolation/PluginClassLoader) of the current class, io/confluent/command/record/Command$CommandKey, and its superclass loader (instance of sun/misc/Launcher$AppClassLoader), have different Class objects for the type com/google/protobuf/Message used in the signature
	at io.confluent.license.LicenseStore.<clinit>(LicenseStore.java:57)
	at io.confluent.license.LicenseManager.<init>(LicenseManager.java:141)
	at io.confluent.connect.utils.licensing.ConnectLicenseManager$Builder.lambda$build$0(ConnectLicenseManager.java:210)
	at io.confluent.connect.utils.licensing.ConnectLicenseManager.registerOrValidateLicense(ConnectLicenseManager.java:255)
	at io.confluent.connect.kinesis.KinesisSourceConnector.doStart(KinesisSourceConnector.java:55)
	at io.confluent.connect.kinesis.KinesisSourceConnector.start(KinesisSourceConnector.java:46)
	at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
	at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
	at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
	at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:264)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:126)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1206)
	at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1202)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

IBM Source Connector Missing JARs

Current instructions for the IBM MQ Source Connector say to simply run:

./ibm-mq.sh

Doing this alone will net you a java.lang.ClassNotFoundException: com.ibm.mq.jms.MQConnectionFactory when creating the connector.

How I fixed it: download the JAR files from IBM, then copy them into the IBM connector directory:

cp -r /Users/joe.feocco/Downloads/ibm/wmq/JavaSE /Users/joe.feocco/repos/kafka-docker-playground/connect/connect-ibm-mq-source/JavaSE

Edit the docker-compose volumes:

    volumes:
        - ../../connect/connect-ibm-mq-source/JavaSE:/usr/share/confluent-hub-components/confluentinc-kafka-connect-ibmmq/lib/JavaSE

Then run ./ibm-mq.sh again.

5.4.0: java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/Versioned

When using a producer with "io.confluent.kafka.serializers.KafkaJsonSerializer", I get:

[2020-01-27 15:11:29,706] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to construct kafka producer
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:432)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
        at com.github.vdesabou.ProducerExample.main(ProducerExample.java:45)
Caused by: java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/Versioned
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at io.confluent.kafka.serializers.KafkaJsonSerializer.configure(KafkaJsonSerializer.java:48)
        at io.confluent.kafka.serializers.KafkaJsonSerializer.configure(KafkaJsonSerializer.java:43)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:369)
        ... 2 more
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.core.Versioned
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 17 more
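The missing class, com.fasterxml.jackson.core.Versioned, lives in jackson-core, which KafkaJsonSerializer expects to find on the application classpath. A plausible fix, assuming the producer example is a Maven project, is to declare jackson-databind (which pulls in jackson-core transitively); the version below is illustrative and should match the Confluent Platform release in use:

```xml
<!-- Illustrative dependency: provides com.fasterxml.jackson.core.Versioned
     (via jackson-core) needed by io.confluent.kafka.serializers.KafkaJsonSerializer. -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.9.10</version>
</dependency>
```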

[connect-tibco-source] Permissions problems queues.conf, factories.conf

Running current Docker-for-Mac and recent compose, I'm hitting a permissions error when queues.conf and factories.conf are ADDed to the tibems image build with root permissions where tibusr:tibgrp is needed.

Resulting container fail-to-start:

$ docker logs tibco-ems

TIBCO Enterprise Message Service Community Edition.
Copyright 2003-2019 by TIBCO Software Inc.
All rights reserved.

Version 8.5.1 V4 9/12/2019

2020-08-08 04:54:53.343 WARNING: Warning: configuration file 'tibemsd.conf' not found and has been created. All configuration settings have been reset to defaults.
2020-08-08 04:54:53.343 Process started from '/opt/tibco/ems/8.5/bin/tibemsd'.
2020-08-08 04:54:53.343 Process Id: 1
2020-08-08 04:54:53.343 Hostname: tibco-ems
2020-08-08 04:54:53.343 Hostname IP address: 172.23.0.2
2020-08-08 04:54:53.343 Reading configuration from 'tibemsd.conf'.
2020-08-08 04:54:53.346 Server name: 'E4EMS-SERVER'.
2020-08-08 04:54:53.346 Storage Location: '.'.
2020-08-08 04:54:53.346 Routing is disabled.
2020-08-08 04:54:53.346 Authorization is disabled.
2020-08-08 04:54:53.346 The server will attempt to trace warnings about destinations that are growing unbounded above 52428800 bytes or 50000 messages.
2020-08-08 04:54:53.346 Set server properties 'large_destination_memory' and 'large_destination_count' respectively to alter these thresholds.
2020-08-08 04:54:53.346 Created file 'users.conf'
2020-08-08 04:54:53.347 Administrator user not found, created with default password.
2020-08-08 04:54:53.347 Created file 'groups.conf'
2020-08-08 04:54:53.347 Administrator group not found, created with default member.
2020-08-08 04:54:53.347 Created file 'transports.conf'
2020-08-08 04:54:53.347 Created file 'channels.conf'
2020-08-08 04:54:53.347 Created file 'stores.conf'
2020-08-08 04:54:53.348 Created file 'topics.conf'
2020-08-08 04:54:53.348 ERROR: Failed to create file 'queues.conf'
2020-08-08 04:54:53.348 FATAL: Exception in startup, exiting.

Permissions in the tibems image, where EMS will start as tibusr:

tibusr@9261ef02e156:~$ whoami
tibusr
tibusr@9261ef02e156:~$ ls -las
total 36
4 drwxr-xr-x 1 tibusr tibgrp 4096 Aug  8 05:22 .
8 drwxr-xr-x 1 root   root   4096 Aug  8 01:55 ..
4 -rw-r--r-- 1 tibusr tibgrp  220 May 15  2017 .bash_logout
4 -rw-r--r-- 1 tibusr tibgrp 3526 May 15  2017 .bashrc
4 -rw-r--r-- 1 tibusr tibgrp  675 May 15  2017 .profile
0 -rw-r--r-- 1 tibusr tibgrp    0 Aug  8 05:22 channels.conf
8 -rw------- 1 root   root   4152 Aug  7 23:44 factories.conf
0 -rw-r--r-- 1 tibusr tibgrp    0 Aug  8 05:22 groups.conf
4 -rw------- 1 root   root   3211 Aug  7 23:44 queues.conf
0 -rw-r--r-- 1 tibusr tibgrp    0 Aug  8 05:22 stores.conf
0 -rw-r--r-- 1 tibusr tibgrp    0 Aug  8 05:22 tibemsd.conf
0 -rw-r--r-- 1 tibusr tibgrp    0 Aug  8 05:22 topics.conf
0 -rw-r--r-- 1 tibusr tibgrp    0 Aug  8 05:22 transports.conf
0 -rw-r--r-- 1 tibusr tibgrp    0 Aug  8 05:22 users.conf

Docker/Compose versions

$ docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:41:33 2020
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:49:27 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ docker-compose version
docker-compose version 1.26.2, build eefe0d31
docker-py version: 4.2.2
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.1g  21 Apr 2020

Couchbase source connector test broken since upgrading to 3.4.8

[2020-07-08 12:17:38,043] ERROR WorkerSourceTask{id=couchbase-source-1} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Couldn't create filter in CouchbaseSourceTask due to an error
	at com.couchbase.connect.kafka.CouchbaseSourceTask.createFilter(CouchbaseSourceTask.java:150)
	at com.couchbase.connect.kafka.CouchbaseSourceTask.start(CouchbaseSourceTask.java:90)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:215)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: example.KeyFilter
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:352)
	at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:341)
	at com.couchbase.connect.kafka.CouchbaseSourceTask.createFilter(CouchbaseSourceTask.java:148)
	... 9 more

https://travis-ci.com/github/vdesabou/kafka-docker-playground/jobs/358712229
