
hazelcast-docker's Introduction

Hazelcast Docker

This repository contains Dockerfiles for the official Hazelcast Docker images.

Quick Start

Hazelcast

You can launch a Hazelcast Docker container by running the following command. Check Hazelcast Versions for the versions you can substitute for $HAZELCAST_VERSION.

$ docker run hazelcast/hazelcast:$HAZELCAST_VERSION

This command will pull a Hazelcast Docker image and run a new Hazelcast instance.

Hazelcast Versions

You can find the full list of Hazelcast versions at the Official Hazelcast Docker Hub.

Hazelcast Hello World

For the simplest end-to-end scenario, you can create a Hazelcast cluster with two Docker containers and access it from a client application.

$ docker run -e HZ_NETWORK_PUBLICADDRESS=<host_ip>:5701 -p 5701:5701 hazelcast/hazelcast:$HAZELCAST_VERSION
$ docker run -e HZ_NETWORK_PUBLICADDRESS=<host_ip>:5702 -p 5702:5701 hazelcast/hazelcast:$HAZELCAST_VERSION

Note that:

  • each container must publish port 5701 under a different host machine port (5701 and 5702 in this example)
  • supplying a custom HZ_NETWORK_PUBLICADDRESS is critical for autodiscovery. Otherwise, Hazelcast will bind to Docker's internal ports.
  • <host_ip> needs to be the host machine address that will be used for the Hazelcast communication

After setting up the cluster, you can start a client application to check that it works correctly.
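
Before connecting a client, you can quickly confirm that the two members formed a cluster by checking either container's logs (a minimal check; <containerid> is whatever docker ps reports for one of the members):

$ docker logs <containerid> | grep -A 3 "Members {"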

Hazelcast Enterprise

You can launch a Hazelcast Enterprise Docker container by running the following command. Check Hazelcast Enterprise Versions for the versions you can substitute for $HAZELCAST_VERSION.

Please request a trial license here or contact [email protected].

$ docker run -e HZ_LICENSEKEY=<your_license_key> hazelcast/hazelcast-enterprise:$HAZELCAST_VERSION

Hazelcast Enterprise Versions

You can find the full list of Hazelcast Enterprise versions at the Official Hazelcast Docker Hub.

Hazelcast Enterprise Hello World

To run two Hazelcast nodes, use the following commands.

$ docker run -p 5701:5701 -e HZ_LICENSEKEY=<your_license_key> -e HZ_NETWORK_PUBLICADDRESS=<host_ip>:5701 hazelcast/hazelcast-enterprise:$HAZELCAST_VERSION
$ docker run -p 5702:5701 -e HZ_LICENSEKEY=<your_license_key> -e HZ_NETWORK_PUBLICADDRESS=<host_ip>:5702 hazelcast/hazelcast-enterprise:$HAZELCAST_VERSION

Note that:

  • This example assumes unencrypted communication channels for Hazelcast members and clients. Hazelcast allows you to encrypt socket-level communication between Hazelcast members and between Hazelcast clients and members. Refer to this section to learn about enabling TLS/SSL encryption.

Management Center Hello World

Whether you started a Hazelcast or Hazelcast Enterprise cluster, you can use the Management Center application to monitor and manage it.

docker run \
  -e MC_INIT_CMD="./mc-conf.sh cluster add -H=/data -ma <host_ip>:5701 -cn dev" \
  -p 8080:8080 hazelcast/management-center:$MANAGEMENT_CENTER_VERSION

Now, you can access Management Center from your browser at http://localhost:8080. You can read more about the Management Center Docker image here.

Note that the way Management Center is started changed in Hazelcast 4.0. If you use Hazelcast 3.x, please find the instructions here.

Hazelcast Defined Environment Variables

JAVA_OPTS

As shown below, you can use the JAVA_OPTS environment variable if you need to pass multiple JVM arguments to your Hazelcast member.

$ docker run -e JAVA_OPTS="-Xms512M -Xmx1024M" hazelcast/hazelcast

PROMETHEUS_PORT

The port on which the JMX Prometheus agent exposes metrics. For example, if you set PROMETHEUS_PORT=8080, you can access metrics at http://<hostname>:8080/metrics. You can also use PROMETHEUS_CONFIG to set the path to a custom agent configuration.
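
For example (a sketch; the host directory and file name below are illustrative, not defaults of the image), you could expose metrics on port 8080 and mount your own agent configuration:

$ docker run -p 8080:8080 -e PROMETHEUS_PORT=8080 \
  -e PROMETHEUS_CONFIG=/opt/hazelcast/config_ext/prometheus-config.yaml \
  -v /home/ubuntu/prometheus:/opt/hazelcast/config_ext \
  hazelcast/hazelcast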

LOGGING_LEVEL

The logging level can be changed using the LOGGING_LEVEL variable, for example, to see the DEBUG logs.

$ docker run -e LOGGING_LEVEL=DEBUG hazelcast/hazelcast

Available logging levels are (from highest to lowest): OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL. The default logging level is INFO. Invalid levels are treated as OFF.

Note that if you need a more customized logging configuration, you can specify a configuration file.

$ docker run -v <config-file-path>:/opt/hazelcast/config/log4j2.properties hazelcast/hazelcast

LOGGING_CONFIG

since version 5.1

The logging configuration can be changed using the LOGGING_CONFIG variable; for example, you can mount your own Log4j2 configuration file and set its path using this variable. The default value is /opt/hazelcast/config/log4j2.properties. A relative or an absolute path can be provided.
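
For example (a sketch; the host path is illustrative), you can mount your own Log4j2 properties file and point LOGGING_CONFIG at it:

$ docker run -e LOGGING_CONFIG=/opt/hazelcast/config_ext/log4j2.properties \
  -v /home/ubuntu/hazelcast:/opt/hazelcast/config_ext \
  hazelcast/hazelcast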

We also provide a log4j2-json.properties file in the image, which uses the Log4j2 log4j-layout-template-json module. To use it, run the following:

$ docker run -e LOGGING_CONFIG=log4j2-json.properties hazelcast/hazelcast

See the Log4j2 manual for reference.

Customizing Hazelcast

Memory

The Hazelcast Docker image respects container memory limits, so you can specify them with the -m parameter.

$ docker run -m 512M hazelcast/hazelcast:$HAZELCAST_VERSION

Note that by default Hazelcast uses up to 80% of the container memory limit, but you can configure it by adding -XX:MaxRAMPercentage to the JAVA_OPTS variable.
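
For example, to let the JVM use up to 90% of a 512M container limit (the percentage here is illustrative; tune it to your workload):

$ docker run -m 512M -e JAVA_OPTS="-XX:MaxRAMPercentage=90.0" hazelcast/hazelcast:$HAZELCAST_VERSION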

Configuring Hazelcast via Environment Variables

Configuration entries of your cluster can be overridden without changing the declarative configuration files (XML/YAML); see the Overriding Configuration documentation section.

Assume that you want to have the following configuration for your cluster, represented as YAML:

hazelcast:
  cluster-name: dev
  network:
    port:
      auto-increment: true
      port-count: 100
      port: 5701

If you want to use the environment variables, the above would be represented as a set of the following environment variables:

$ docker run -e HZ_CLUSTERNAME=dev \
  -e HZ_NETWORK_PORT_AUTOINCREMENT=true \
  -e HZ_NETWORK_PORT_PORTCOUNT=100 \
  -e HZ_NETWORK_PORT_PORT=5701 \
  hazelcast/hazelcast

Using Custom Hazelcast Configuration File

If you need to configure Hazelcast with your own hazelcast.yaml (or hazelcast.xml), you can mount the host folder that contains the Hazelcast configuration and pass the hazelcast.config JVM property. For example, assuming you placed the Hazelcast configuration at /home/ubuntu/hazelcast/hazelcast.yaml, you can execute the following command.

$ docker run -e JAVA_OPTS="-Dhazelcast.config=/opt/hazelcast/config_ext/hazelcast.yaml" -v /home/ubuntu/hazelcast:/opt/hazelcast/config_ext hazelcast/hazelcast

Alternatively, you can extend the Hazelcast base image, adding your Hazelcast configuration file.

Extending CLASSPATH with new jars or files

Hazelcast has several extension points, e.g. the MapStore API, where you can provide your own implementation to add specific functionality to your Hazelcast cluster. If you have custom JARs or files to put on the classpath of the Docker container, you can use a Docker volume and set the CLASSPATH environment variable in the docker run command. For example, assuming you placed your custom JARs into /home/ubuntu/hazelcast/, you can execute the following command.

$ docker run -e CLASSPATH="/opt/hazelcast/CLASSPATH_EXT/*" -v /home/ubuntu/hazelcast:/opt/hazelcast/CLASSPATH_EXT hazelcast/hazelcast

Alternatively, you can extend the Hazelcast base image, adding your custom JARs.

Using TLS (Hazelcast Enterprise Only)

The HZ_NETWORK_SSL_ENABLED environment variable can be used to enable TLS for communication. The key material folder should be mounted and referenced using the JAVA_OPTS variable.

  1. Generate sample key material (a self-signed certificate):
$ mkdir keystore
$ keytool -validity 365 -genkeypair -alias server -keyalg EC -keystore ./keystore/server.keystore -storepass 123456 -keypass 123456 -dname CN=localhost
$ keytool -export -alias server -keystore ./keystore/server.keystore -storepass 123456 -file ./keystore/server.crt
$ keytool -import -noprompt -alias server -keystore ./keystore/server.truststore -storepass 123456 -file ./keystore/server.crt
  2. Run Hazelcast Enterprise with TLS enabled:
$ docker run -e HZ_LICENSEKEY=<your_license_key> \
    -e HZ_NETWORK_SSL_ENABLED=true \
    -v `pwd`/keystore:/keystore \
    -e "JAVA_OPTS=-Djavax.net.ssl.keyStore=/keystore/server.keystore -Djavax.net.ssl.keyStorePassword=123456
    -Djavax.net.ssl.trustStore=/keystore/server.truststore -Djavax.net.ssl.trustStorePassword=123456" \
    hazelcast/hazelcast-enterprise

Extending Hazelcast Base Image

If you'd like to customize your Hazelcast member, you can extend the Hazelcast base image and provide your configuration file and/or custom JARs. To do that, you need to create a new Dockerfile and build it with the docker build command.

In the Dockerfile example below, we are creating a new image based on the Hazelcast image and adding our configuration file and a custom JAR from our host to the container, which will be used with Hazelcast when the container runs.

FROM hazelcast/hazelcast:$HAZELCAST_VERSION

# Adding custom hazelcast.yaml
ADD hazelcast.yaml ${HZ_HOME}
ENV JAVA_OPTS -Dhazelcast.config=${HZ_HOME}/hazelcast.yaml

# Adding custom JARs to the classpath
ADD custom-library.jar ${HZ_HOME}
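
You can then build and run the customized image as usual (the image tag my-hazelcast:custom is illustrative):

$ docker build -t my-hazelcast:custom .
$ docker run my-hazelcast:custom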

Graceful Shutdown

You can stop the member using the docker command: docker stop <containerid>.

By default, Hazelcast is configured to TERMINATE on receiving the SIGTERM signal from Docker, which means that a container stops quickly, but the cluster's data safety relies on the backups stored by other Hazelcast members.

The other option is to use the GRACEFUL shutdown, which triggers the partition migration before shutting down the Hazelcast member. Note that it may take some time, depending on your data size. To use that approach, configure the following properties (a combined example follows the list):

  • Add hazelcast.shutdownhook.policy=GRACEFUL to your JAVA_OPTS environment variable
  • Add hazelcast.graceful.shutdown.max.wait=<seconds> to your JAVA_OPTS environment variable
    • Default value is 600 seconds
  • Stop the container using docker stop --time <seconds>
    • It defines how much time Docker waits before sending SIGKILL
    • Default value is 10 seconds
    • The value should be greater than or equal to hazelcast.graceful.shutdown.max.wait
    • Alternatively, you can configure the Docker timeout upfront by docker run --stop-timeout <seconds>
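
Putting it together (a sketch; the 660-second Docker timeout is illustrative and simply leaves headroom over the 600-second graceful shutdown wait):

$ docker run --stop-timeout 660 \
    -e JAVA_OPTS="-Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.graceful.shutdown.max.wait=600" \
    hazelcast/hazelcast
$ docker stop <containerid>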

Managing and Monitoring

You can debug and monitor Hazelcast instances running inside Docker containers. You can use JMX or Prometheus for application monitoring.

JMX

You can use the standard JMX protocol to monitor your Hazelcast instance. Start a Hazelcast container with the following parameters.

$ docker run -p 9999:9999 -e JAVA_OPTS='-Dhazelcast.jmx=true -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false' hazelcast/hazelcast

Now you can connect using the address: localhost:9999.
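
For example, with a JDK installed on the host, you can attach JConsole (or any other JMX client) to the exposed port:

$ jconsole localhost:9999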

Prometheus

You can use the JMX Prometheus agent to expose JVM and JMX Hazelcast metrics.

$ docker run -p 8080:8080 -e PROMETHEUS_PORT=8080 hazelcast/hazelcast

Then, the metrics are available at http://localhost:8080/metrics. Note that you can also add -e JAVA_OPTS='-Dhazelcast.jmx=true' to expose JMX via Prometheus (otherwise, only JVM metrics are visible).
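
A quick way to verify the exporter is responding (assuming curl is available on the host):

$ curl http://localhost:8080/metrics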

Debugging

Remote Debugger

To debug your Hazelcast member with standard Java tooling, use the following command to start the Hazelcast container:

$ docker run -p 5005:5005 -e JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005' hazelcast/hazelcast

Now you can connect with your remote debugger using the address: localhost:5005.

Building Your Hazelcast Image

You may want to build your own Hazelcast Docker image with some custom JARs, for example, to test whether a change in the Hazelcast Root repository works in a Kubernetes environment, or simply to include an entry processor JAR. To do it, place your JARs into the current directory, build the image, and push it to a Docker registry.

Taking the first example, imagine you made a change in the Hazelcast Root repository and would like to test it on Kubernetes. You need to build hazelcast-SNAPSHOT.jar and then do the following.

$ cd hazelcast-oss
$ cp <path-to-hazelcast-jar> ./
$ docker build -t <username>/hazelcast:test .
$ docker push <username>/hazelcast:test

Then, use the image <username>/hazelcast:test in your Kubernetes environment to test your change.
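
As a minimal smoke test (a sketch, assuming kubectl access to your cluster; in recent kubectl versions, kubectl run creates a single pod with the given name), you could start a pod from the pushed image and inspect its logs:

$ kubectl run hazelcast-test --image=<username>/hazelcast:test --port=5701
$ kubectl logs hazelcast-test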

Additional documentation can be found here.

Docker Images Usages

Hazelcast Docker Repositories

You can find all Hazelcast Docker images on the Docker Store Hazelcast page: https://store.docker.com/profiles/hazelcast

You can find the Dockerfiles by going to the corresponding hazelcast-docker repo tag. See the full list here: https://github.com/hazelcast/hazelcast-docker/releases

Management Center

Please see the Management Center Repository for Dockerfile definitions and have a look at the available images on its Docker Hub page.

Hazelcast Kubernetes

Hazelcast is prepared to work in the Kubernetes environment. For details, please check the Hazelcast Kubernetes documentation.

Automatic rebuilding (Hazelcast Enterprise only)

Every 24 hours, the maintained Hazelcast Enterprise Docker images are checked for updates to the base image or system libraries. If any are found, the images are rebuilt and republished.


hazelcast-docker's Issues

Add Jcache to classpath

Hi,

I am running hazelcast/hazelcast:3.8.8 and I get the following error when I try to connect to it using Jcache Client:
Service with name 'hz:impl:cacheService' not found

Upgrading running Kubernetes cluster with no downtime

Hi there,

Is it possible to upgrade a running non-enterprise Kubernetes-based Hazelcast cluster with no downtime? We're aware of the Rolling Member Upgrades feature released in 3.8; however, this seems to be an enterprise feature. Is there any similar feature and/or recommended procedure for non-enterprise users?

Thanks!

HAZELCAST_CP_MOUNT parameter

The usage of $HAZELCAST_CP_MOUNT is not clear in start.sh. We already have HZ_DATA which is used for custom configuration.

I think we can have only one external mount point like $HAZELCAST_CP_MOUNT and put all JARs and hazelcast.xml in it. They will automatically be part of the classpath.

The same needs to be checked in hazelcast-enterprise-kubernetes.

java.io.EOFException: Remote socket Closed

Getting this Exception when trying to connect Mancenter via Host or the Docker Mancenter to Docker Hazelcast:3.5.3...

hazelcast    | Members [1] {
hazelcast    | 	Member [172.19.0.3]:5701 this
hazelcast    | }
hazelcast    | 
hazelcast    | Apr 03, 2017 8:53:05 AM com.hazelcast.core.LifecycleService
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Address[172.19.0.3]:5701 is STARTED
hazelcast    | Apr 03, 2017 8:53:05 AM com.hazelcast.internal.management.ManagementCenterService
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Hazelcast will connect to Hazelcast Management Center on address: 
hazelcast    | http://0.0.0.0:7080/mancenter-3.5.2
hazelcast    | Apr 03, 2017 8:53:05 AM com.hazelcast.internal.management.ManagementCenterService
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Failed to connect to:http://0.0.0.0:7080/mancenter-3.5.2/collector.do
hazelcast    | Apr 03, 2017 8:53:05 AM com.hazelcast.internal.management.ManagementCenterService
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Failed to pull tasks from management center
hazelcast    | Apr 03, 2017 8:53:05 AM com.hazelcast.partition.InternalPartitionService
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Initializing cluster partition table first arrangement...
hazelcast    | Apr 03, 2017 8:53:28 AM com.hazelcast.nio.tcp.SocketAcceptor
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Accepting socket connection from /172.19.0.1:42718
hazelcast    | Apr 03, 2017 8:53:28 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Established socket connection between /172.19.0.3:5701
hazelcast    | Apr 03, 2017 8:53:28 AM com.hazelcast.internal.management.ManagementCenterService
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Management Center URL has changed. Hazelcast will connect to Management Center on address:
hazelcast    | http://localhost:7080/mancenter-3.5.2
hazelcast    | Apr 03, 2017 8:53:37 AM com.hazelcast.nio.tcp.TcpIpConnection
hazelcast    | INFO: [172.19.0.3]:5701 [dev] [3.5.3] Connection [/172.19.0.1:42718] lost. Reason: java.io.EOFException[Remote socket closed!]

Attached file contains my hazelcast.xml config file, where I have enabled the Mancenter...

I am providing following JAVA_OPTS: "-Xms250M -Xmx250M -Dhazelcast.config=/opt/hazelcast/configFolder/hazelcast.xml -Dhazelcast.rest.enabled=true"

And have also exposed 5701 port...

NOTE: I dint know about any mail group for hazelcast problems, so have raised this issue... Any help is appreciated. THANKS

hazelcast.xml.zip

hazelcast (3.7.8) not able to connect to mancenter

I am working on hazelcast version 3.7.8
Ran the docker image for mancenter
docker run -p 8080:8080 hazelcast/management-center:3.7.8
Ran the docker image for hazelcast (specifying the mancenter URL)
docker run -e MANCENTER_URL="http://localhost:8080/mancenter" hazelcast/hazelcast:3.7.8

hazelcast server is not able to connect mancenter

(screenshot attached)

ip man center not visible

When I execute the following 2 commands to start management center:

docker pull hazelcast/management-center:latest
docker run -ti -p 8080:8080 hazelcast/management-center:latest

It isn't clear to me at which IP address I can access Management Center. The documentation refers to 'then open from browser MACHINE_IP:8080/mancenter', but what is the MACHINE_IP?

When I use localhost:8080/mancenter, it works fine btw.

So probably it is best to print the ip address when the management center starts?

Can't connect to Hz Cluster in containers from different machine

Hi,
I've got a Hazelcast cluster running in docker containers using the discovery api with zookeeper. This all works fine and the cluster starts up and works as expected. My issue is connecting a client to the cluster from another server.

The cluster is returning 127.0.0.1 and 172.17.0.1 to ZooKeeper as its cluster addresses, which means the client works fine running on the same machine but won't connect from a remote machine, even with 172.17.0.1 mapped in the client's hosts file to the Hz cluster's server IP.

I've tried starting the containers with net=host and -h to get it to return an address I can map in the client's hosts file but nothing seems to work. Am I missing something?

Below is the log and stack trace from the client

/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/bin/java -Didea.launcher.port=7535 "-Didea.launcher.bin.path=/Applications/IntelliJ IDEA.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath "/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/JObjC.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/deploy.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/htmlconverter.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/javaws.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jfxrt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/management-agent.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/plugin.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/ant-javafx.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/dt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/javafx-doclet.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/javafx-mx.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/jconsole.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/sa-jdi.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/tools.jar:/Users/oliverbuckley-salmon/IdeaProjects/sandpit/datagrid/target/classes:/Users/oliverbuckley-salmon/IdeaProjects/sandpit/domain/target/classes:/Users/oliverbuckley-salmon/.m2/repository/com/google/code/gson/gson/2.6.2/gson-2.6.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-client/1.2.3/hbase-client-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-annotations/1.2.3/hbase-annotations-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/com/github/stephenc/findbugs/findbugs-annotations/1.3.9-1/findbugs-annotations-1.3.9-1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-common/1.2.3/hbase-common-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-protocol/1.2.3/hbase-protocol-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-codec/commons-codec/1.9/commons-codec-1.9.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-io/com
mons-io/2.4/commons-io-2.4.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/Users/oliverbuckley-salmon/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar:/Users/oliverbuckley-salmon/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/Users/oliverbuckley-salmon/.m2/repository/io/netty/netty-all/4.0.23.Final/netty-all-4.0.23.Final.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/Users/oliverbuckley-salmon/.m2/repository/org/slf4j/slf4j-api/1.6.1/slf4j-api-1.6.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar:/Users/oliverbuckley-salmon/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/Users/oliverbuckley-salmon/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/Users/oliverbuckley-salmon/.m2/repository/org/jruby/jcodings/jcodings/1.0.8/jcodings-1.0.8.jar:/Users/oliverbuckley-salmon/.m2/repository/org/jruby/joni/joni/2.1.2/joni-2.1.2.jar:/Users/oliverbuckley-salmon/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-auth/2.5.1/hadoop-auth-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/httpcomponents/httpclient/4.2.5/httpclient-4.2.5.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/httpcomponents/httpcore/4.2.4/httpcore-4.2.4.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/server/apacheds-kerberos-codec/2.0.0-M15/apacheds-kerberos-codec-2.0.0-M15.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-common/2.5.1/hadoop-common-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-annotations/2.5.1/hadoop-annotations-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar:/Users/oliverbuckley-salmon/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/Users/oliverbuckley-salmon/.m2/repository/or
g/apache/avro/avro/1.7.4/avro-1.7.4.jar:/Users/oliverbuckley-salmon/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.jar:/Users/oliverbuckley-salmon/.m2/repository/com/jcraft/jsch/0.1.42/jsch-0.1.42.jar:/Users/oliverbuckley-salmon/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.5.1/hadoop-mapreduce-client-core-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.5.1/hadoop-yarn-common-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.5.1/hadoop-yarn-api-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/Users/oliverbuckley-salmon/.m2/repository/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar:/Users/oliverbuckley-salmon/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/Users/oliverbuckley-salmon/.m2/repository/io/netty/netty/3.6.2.Final/netty-3.6.2.Final.jar:/Users/oliverbuckley-salmon/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/Users/oliverbuckley-salmon/.m2/repository/com/hazelcast/hazelcast-all/3.7.2/hazelcast-all-3.7.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-framework/2.10.0/curator-framework-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-client/2.10.0/curator-client-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-x-discovery/2.10.0/curator-x-discovery-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-recipes/2.10.0/curator-recipes-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-test/2.10.0/curator-test-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/commons/commons-math/2.2/commons-math-2.2.jar:/Users/oliverbuckley-salmon/.m2/repository/com/hazelcast/hazelcast-zookeeper/3.6.1/hazelcast-zookeeper-3.6.1.jar:/Users/oliverbuckley-salmon/.m2/repository/joda-time/joda-time/2.9.4/joda-time-2.9.4.jar:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar" com.intellij.rt.execution.application.AppMain com.example.datagrid.cachewarmer.CacheReader
2016-11-28 20:51:35 INFO  TradeMapStore:64 - Trying to connect to HBase
2016-11-28 20:51:35 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-28 20:51:37.473 java[25412:2402206] Unable to load realm info from SCDynamicStore
2016-11-28 20:51:38 INFO  RecoverableZooKeeper:120 - Process identifier=hconnection-0x53443251 connecting to ZooKeeper ensemble=138.68.147.208:2181
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:host.name=172.20.10.2
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:java.version=1.7.0_45
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:java.vendor=Oracle Corporation
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:java.class.path=/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/JObjC.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/deploy.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/htmlconverter.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/javaws.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jfxrt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/management-agent.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/plugin.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/ant-javafx.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/dt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/javafx-doclet.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/javafx-mx.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/jconsole.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/sa-jdi.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/lib/tools.jar:/Users/oliverbuckley-salmon/IdeaProjects/sandpit/datagrid/target/classes:/Users/oliverbuckley-salmon/IdeaProjects/sandpit/domain/target/classes:/Users/oliverbuckley-salmon/.m2/repository/com/google/code/gson/gson/2.6.2/gson-2.6.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-client/1.2.3/hbase-client-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-annotations/1.2.3/hbase-annotations-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/com/github/stephenc/findbugs/findbugs-annotations/1.3.9-1/findbugs-annotations-1.3.9-1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-common/1.2.3/hbase-common-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hbase/hbase-protocol/1.2.3/hbase-protocol-1.2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-codec/commons-codec/1.9/commons-codec-1.9.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/Us
ers/oliverbuckley-salmon/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/Users/oliverbuckley-salmon/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar:/Users/oliverbuckley-salmon/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/Users/oliverbuckley-salmon/.m2/repository/io/netty/netty-all/4.0.23.Final/netty-all-4.0.23.Final.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/Users/oliverbuckley-salmon/.m2/repository/org/slf4j/slf4j-api/1.6.1/slf4j-api-1.6.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar:/Users/oliverbuckley-salmon/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/Users/oliverbuckley-salmon/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/Users/oliverbuckley-salmon/.m2/repository/org/jruby/jcodings/jcodings/1.0.8/jcodings-1.0.8.jar:/Users/oliverbuckley-salmon/.m2/repository/org/jruby/joni/joni/2.1.2/joni-2.1.2.jar:/Users/oliverbuckley-salmon/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-auth/2.5.1/hadoop-auth-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/httpcomponents/httpclient/4.2.5/httpclient-4.2.5.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/httpcomponents/httpcore/4.2.4/httpcore-4.2.4.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/server/apacheds-kerberos-codec/2.0.0-M15/apacheds-kerberos-codec-2.0.0-M15.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-common/2.5.1/hadoop-common-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-annotations/2.5.1/hadoop-annotations-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar:/Users/oliverbuckley-salmon/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/Users/oliverbuckley-salmon/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.jar:/Users/oliverbuckley-salmon/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/para
namer-2.3.jar:/Users/oliverbuckley-salmon/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.jar:/Users/oliverbuckley-salmon/.m2/repository/com/jcraft/jsch/0.1.42/jsch-0.1.42.jar:/Users/oliverbuckley-salmon/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.5.1/hadoop-mapreduce-client-core-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.5.1/hadoop-yarn-common-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.5.1/hadoop-yarn-api-2.5.1.jar:/Users/oliverbuckley-salmon/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/Users/oliverbuckley-salmon/.m2/repository/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar:/Users/oliverbuckley-salmon/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/Users/oliverbuckley-salmon/.m2/repository/io/netty/netty/3.6.2.Final/netty-3.6.2.Final.jar:/Users/oliverbuckley-salmon/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/Users/oliverbuckley-salmon/.m2/repository/com/hazelcast/hazelcast-all/3.7.2/hazelcast-all-3.7.2.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-framework/2.10.0/curator-framework-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-client/2.10.0/curator-client-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-x-discovery/2.10.0/curator-x-discovery-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-recipes/2.10.0/curator-recipes-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/curator/curator-test/2.10.0/curator-test-2.10.0.jar:/Users/oliverbuckley-salmon/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/Users/oliverbuckley-salmon/.m2/repository/org/apache/commons/commons-math/2.2/commons-math-2.2.jar:/Users/oliverbuckley-salmon/.m2/repository/com/hazelcast/hazelcast-zookeeper/3.6.1/hazelcast-zookeeper-3.6.1.jar:/Users/oliverbuckley-salmon/.m2/repository/joda-time/joda-time/2.9.4/joda-time-2.9.4.jar:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:java.library.path=/Users/oliverbuckley-salmon/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:java.io.tmpdir=/var/folders/sx/g9vbcw9d3j54gtj89n57g1fw0000gn/T/
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:java.compiler=<NA>
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:os.name=Mac OS X
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:os.arch=x86_64
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:os.version=10.11.6
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:user.name=oliverbuckley-salmon
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:user.home=/Users/oliverbuckley-salmon
2016-11-28 20:51:38 INFO  ZooKeeper:100 - Client environment:user.dir=/Users/oliverbuckley-salmon/IdeaProjects/sandpit
2016-11-28 20:51:38 INFO  ZooKeeper:438 - Initiating client connection, connectString=138.68.147.208:2181 sessionTimeout=90000 watcher=hconnection-0x534432510x0, quorum=138.68.147.208:2181, baseZNode=/hbase
2016-11-28 20:51:38 INFO  ClientCnxn:975 - Opening socket connection to server 138.68.147.208/138.68.147.208:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-28 20:51:38 INFO  ClientCnxn:852 - Socket connection established to 138.68.147.208/138.68.147.208:2181, initiating session
2016-11-28 20:51:38 INFO  ClientCnxn:1235 - Session establishment complete on server 138.68.147.208/138.68.147.208:2181, sessionid = 0x1583b2a4dbb0183, negotiated timeout = 90000
2016-11-28 20:51:38 INFO  TradeMapStore:68 - Connected to HBase
2016-11-28 20:51:38 INFO  CacheReader:32 - Connecting to Hz cluster
Nov 28, 2016 8:51:38 PM com.hazelcast.config.AbstractXmlConfigHelper
WARNING: Name of the hazelcast schema location incorrect using default
Nov 28, 2016 8:51:39 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [kappa-serving-layer] [3.7.2] HazelcastClient 3.7.2 (20161004 - 540b01c) is STARTING
2016-11-28 20:51:39 INFO  CuratorFrameworkImpl:235 - Starting
2016-11-28 20:51:39 INFO  ZooKeeper:438 - Initiating client connection, connectString=138.68.172.212:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@644fa139
2016-11-28 20:51:39 INFO  ClientCnxn:975 - Opening socket connection to server 138.68.172.212/138.68.172.212:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-28 20:51:39 INFO  ClientCnxn:852 - Socket connection established to 138.68.172.212/138.68.172.212:2181, initiating session
2016-11-28 20:51:39 INFO  ClientCnxn:1235 - Session establishment complete on server 138.68.172.212/138.68.172.212:2181, sessionid = 0x15830869abd0077, negotiated timeout = 40000
2016-11-28 20:51:39 INFO  ConnectionStateManager:228 - State change: CONNECTED
Nov 28, 2016 8:51:40 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [kappa-serving-layer] [3.7.2] HazelcastClient 3.7.2 (20161004 - 540b01c) is STARTED
Nov 28, 2016 8:51:56 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: hz.client_0 [kappa-serving-layer] [3.7.2] Unable to get alive cluster connection, try in 0 ms later, attempt 1 of 2.
2016-11-28 20:52:07 INFO  ClientCnxn:1096 - Client session timed out, have not heard from server in 26671ms for sessionid 0x15830869abd0077, closing socket connection and attempting reconnect
2016-11-28 20:52:07 INFO  ConnectionStateManager:228 - State change: SUSPENDED
2016-11-28 20:52:09 INFO  ClientCnxn:975 - Opening socket connection to server 138.68.172.212/138.68.172.212:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-28 20:52:09 INFO  ClientCnxn:852 - Socket connection established to 138.68.172.212/138.68.172.212:2181, initiating session
2016-11-28 20:52:09 INFO  ClientCnxn:1235 - Session establishment complete on server 138.68.172.212/138.68.172.212:2181, sessionid = 0x15830869abd0077, negotiated timeout = 40000
2016-11-28 20:52:09 INFO  ConnectionStateManager:228 - State change: RECONNECTED
Nov 28, 2016 8:52:25 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: hz.client_0 [kappa-serving-layer] [3.7.2] Unable to get alive cluster connection, try in 0 ms later, attempt 2 of 2.
Nov 28, 2016 8:52:25 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [kappa-serving-layer] [3.7.2] HazelcastClient 3.7.2 (20161004 - 540b01c) is SHUTTING_DOWN
2016-11-28 20:52:25 INFO  CuratorFrameworkImpl:821 - backgroundOperationsLoop exiting
2016-11-28 20:52:25 INFO  ZooKeeper:684 - Session: 0x15830869abd0077 closed
2016-11-28 20:52:25 INFO  ClientCnxn:512 - EventThread shut down
Nov 28, 2016 8:52:25 PM com.hazelcast.core.LifecycleService
INFO: hz.client_0 [kappa-serving-layer] [3.7.2] HazelcastClient 3.7.2 (20161004 - 540b01c) is SHUTDOWN
Exception in thread "main" java.lang.IllegalStateException: Unable to connect to any address in the config! The following addresses were tried:[localhost/127.0.0.1:5703, /172.17.0.1:5701, /172.17.0.1:5702, /172.17.0.1:5703, localhost/127.0.0.1:5702, localhost/127.0.0.1:5701]
	at com.hazelcast.client.spi.impl.ClusterListenerSupport.connectToCluster(ClusterListenerSupport.java:175)
	at com.hazelcast.client.spi.impl.ClientClusterServiceImpl.start(ClientClusterServiceImpl.java:191)
	at com.hazelcast.client.impl.HazelcastClientInstanceImpl.start(HazelcastClientInstanceImpl.java:379)
	at com.hazelcast.client.HazelcastClientManager.newHazelcastClient(HazelcastClientManager.java:78)
	at com.hazelcast.client.HazelcastClient.newHazelcastClient(HazelcastClient.java:72)
	at com.example.datagrid.cachewarmer.CacheReader.main(CacheReader.java:34)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

Process finished with exit code 1

and the following is the command I'm using to launch the container
docker run -p 5701:5701 -ti --add-host=a826d5422c4d:138.68.147.208 --net=host -h muhz1 --name muhz1 -d olibs/kappahz:v0.1
The --add-host is to map the ZooKeeper for HBase, which I'm using as an underlying database for Hz; this works fine.

Thanks in advance for your help, any hints or tips gratefully received.
Oliver

Smaller image size?

The current image size of the OSS version is 259 MB. It's unacceptable to keep the current size just because of a poorly optimized base image; that's mostly because it's using java:7.

Consider convenience for debugging

It would be nice to add some convenience for easy debugging.

My idea is to pass a parameter to the image to start JVM with JPDA port open + also expose this port somehow.

Then it would be easy to run the image locally and attach a debugger running on my host machine.
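
For reference, the approach now documented in the Remote Debugger section of the README covers this, e.g. (a sketch mirroring that section):

$ docker run -p 5005:5005 -e JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005' hazelcast/hazelcast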

Wrong bind request from [172.20.215.37]:5701! This node is not the requested endpoint: [10.68.144.200]:5701

[root@cicd-cmp2-k8s-01 ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hazelcast NodePort 10.68.144.200 5701:29634/TCP 55s
[root@cicd-cmp2-k8s-01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
hazelcast-55fsv 1/1 Running 0 26m
hazelcast-8cjr9 1/1 Running 0 26m
hazelcast-jk9w2 1/1 Running 0 26m
[root@cicd-cmp2-k8s-01 ~]# kubectl logs hazelcast-55fsv
Kubernetes Namespace: default
Kubernetes Service DNS: hazelcast
########################################

RUN_JAVA=

JAVA_OPTS=

CLASSPATH=/:/opt/hazelcast/:/opt/hazelcast/external/*:

########################################
Checking custom configuration
no custom configuration found
Jan 16, 2018 1:21:30 AM com.hazelcast.config.XmlConfigLocator
INFO: Loading 'hazelcast.xml' from working directory.
Jan 16, 2018 1:21:31 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.9.2] Prefer IPv4 stack is true.
Jan 16, 2018 1:21:31 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.9.2] Picked [172.20.28.160]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Jan 16, 2018 1:21:31 AM com.hazelcast.system
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Hazelcast 3.9.2 (20180103 - 17e4ec3) starting at [172.20.28.160]:5701
Jan 16, 2018 1:21:31 AM com.hazelcast.system
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
Jan 16, 2018 1:21:31 AM com.hazelcast.system
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Configured Hazelcast Serialization version: 1
Jan 16, 2018 1:21:31 AM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Backpressure is disabled
Jan 16, 2018 1:21:31 AM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Kubernetes Discovery properties: { service-dns: hazelcast, service-dns-timeout: 5, service-name: null, service-label: null, service-label-value: true, namespace: default, resolve-not-ready-addresses: false, kubernetes-master: https://kubernetes.default.svc}
Jan 16, 2018 1:21:31 AM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Kubernetes Discovery activated resolver: DnsEndpointResolver
Jan 16, 2018 1:21:32 AM com.hazelcast.instance.Node
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Activating Discovery SPI Joiner
Jan 16, 2018 1:21:32 AM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Starting 16 partition threads and 9 generic threads (1 dedicated for priority tasks)
Jan 16, 2018 1:21:32 AM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Jan 16, 2018 1:21:32 AM com.hazelcast.core.LifecycleService
INFO: [172.20.28.160]:5701 [dev] [3.9.2] [172.20.28.160]:5701 is STARTING
Jan 16, 2018 1:21:32 AM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connecting to /10.68.144.200:5701, timeout: 0, bind-any: true
Jan 16, 2018 1:21:32 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:59385 and /10.68.144.200:5701
Jan 16, 2018 1:21:33 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=1, /172.20.28.160:59385->/10.68.144.200:5701, endpoint=[10.68.144.200]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
Jan 16, 2018 1:21:33 AM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connecting to /10.68.144.200:5701, timeout: 0, bind-any: true
Jan 16, 2018 1:21:33 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:48719 and /10.68.144.200:5701
Jan 16, 2018 1:21:33 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=2, /172.20.28.160:48719->/10.68.144.200:5701, endpoint=[10.68.144.200]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
Jan 16, 2018 1:21:34 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Accepting socket connection from /172.20.215.37:55949
Jan 16, 2018 1:21:34 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:5701 and /172.20.215.37:55949
Jan 16, 2018 1:21:34 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
WARNING: [172.20.28.160]:5701 [dev] [3.9.2] Wrong bind request from [172.20.215.37]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:34 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=3, /172.20.28.160:5701->/172.20.215.37:55949, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [172.20.215.37]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:34 AM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connecting to /10.68.144.200:5701, timeout: 0, bind-any: true
Jan 16, 2018 1:21:34 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:57914 and /10.68.144.200:5701
Jan 16, 2018 1:21:34 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=4, /172.20.28.160:57914->/10.68.144.200:5701, endpoint=[10.68.144.200]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
Jan 16, 2018 1:21:35 AM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connecting to /10.68.144.200:5701, timeout: 0, bind-any: true
Jan 16, 2018 1:21:35 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Accepting socket connection from /10.3.32.211:51823
Jan 16, 2018 1:21:35 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:5701 and /10.3.32.211:51823
Jan 16, 2018 1:21:35 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:51823 and /10.68.144.200:5701
Jan 16, 2018 1:21:35 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
WARNING: [172.20.28.160]:5701 [dev] [3.9.2] Wrong bind request from [172.20.28.160]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:35 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=5, /172.20.28.160:5701->/10.3.32.211:51823, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [172.20.28.160]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:35 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=6, /172.20.28.160:51823->/10.68.144.200:5701, endpoint=[10.68.144.200]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
Jan 16, 2018 1:21:36 AM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connecting to /10.68.144.200:5701, timeout: 0, bind-any: true
Jan 16, 2018 1:21:36 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Accepting socket connection from /10.3.32.211:37234
Jan 16, 2018 1:21:36 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:37234 and /10.68.144.200:5701
Jan 16, 2018 1:21:36 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:5701 and /10.3.32.211:37234
Jan 16, 2018 1:21:36 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
WARNING: [172.20.28.160]:5701 [dev] [3.9.2] Wrong bind request from [172.20.28.160]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:36 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=8, /172.20.28.160:5701->/10.3.32.211:37234, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [172.20.28.160]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:36 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=7, /172.20.28.160:37234->/10.68.144.200:5701, endpoint=[10.68.144.200]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
Jan 16, 2018 1:21:37 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Accepting socket connection from /172.20.215.37:52217
Jan 16, 2018 1:21:37 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Established socket connection between /172.20.28.160:5701 and /172.20.215.37:52217
Jan 16, 2018 1:21:37 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
WARNING: [172.20.28.160]:5701 [dev] [3.9.2] Wrong bind request from [172.20.215.37]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:37 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Connection[id=9, /172.20.28.160:5701->/172.20.215.37:52217, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [172.20.215.37]:5701! This node is not the requested endpoint: [10.68.144.200]:5701
Jan 16, 2018 1:21:37 AM com.hazelcast.system
INFO: [172.20.28.160]:5701 [dev] [3.9.2] Cluster version set to 3.9
Jan 16, 2018 1:21:37 AM com.hazelcast.internal.cluster.ClusterService
INFO: [172.20.28.160]:5701 [dev] [3.9.2]

Members {size:1, ver:1} [
Member [172.20.28.160]:5701 - 6f5267d6-250f-4b46-834d-2e54ff3cbada this
]

Jan 16, 2018 1:21:37 AM com.hazelcast.core.LifecycleService
INFO: [172.20.28.160]:5701 [dev] [3.9.2] [172.20.28.160]:5701 is STARTED
^C[root@cicd-cmp2-k8s-01 ~]#

WARN No appenders could be found for logger

[root@silversurfer tmp]# docker run -ti -p 8080:8080 hazelcast/management-center:latest
Hazelcast Management Center starting on port 8080 at path : /mancenter
2015-10-07 08:39:02.275:INFO:oejs.Server:jetty-8.y.z-SNAPSHOT
2015-10-07 08:39:02.473:INFO:oejw.WebInfConfiguration:Extract jar:file:/opt/hazelcast/hazelcast-3.5.2/mancenter/mancenter-3.5.2.war!/ to /tmp/jetty-0.0.0.0-8080-mancenter-3.5.2.war-_mancenter-any-/webapp
2015-10-07 08:39:05.032:INFO:/mancenter:Initializing Spring root WebApplicationContext
log4j:WARN No appenders could be found for logger (org.springframework.web.context.ContextLoader).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Oct 07, 2015 8:39:07 AM com.hazelcast.webmonitor.model.AbstractAlertFilter
INFO: Management Center 3.5.2
2015-10-07 08:39:08.199:INFO:oejs.AbstractConnector:Started [email protected]:8080
Hazelcast Management Center successfully started

Connection closed by the other side

I have followed the steps provided in the readme file and I am able to deploy a hazelcast cluster to my kubernetes cluster.

The members are connecting to the master node but after a few seconds I start getting the following messages on each node:

Jan 06, 2018 5:23:05 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Accepting socket connection from /10.233.66.58:46162
Jan 06, 2018 5:23:05 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Established socket connection between /10.233.66.58:5701 and /10.233.66.58:46162
Jan 06, 2018 5:23:05 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Connection[id=11, /10.233.66.58:5701->/10.233.66.58:46162, endpoint=null, alive=false, type=REST_CLIENT] closed. Reason: Connection closed by the other side
Jan 06, 2018 5:23:15 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Accepting socket connection from /10.233.66.58:46176
Jan 06, 2018 5:23:15 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Established socket connection between /10.233.66.58:5701 and /10.233.66.58:46176
Jan 06, 2018 5:23:15 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Connection[id=12, /10.233.66.58:5701->/10.233.66.58:46176, endpoint=null, alive=false, type=REST_CLIENT] closed. Reason: Connection closed by the other side
Jan 06, 2018 5:23:25 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Accepting socket connection from /10.233.66.58:46190
Jan 06, 2018 5:23:25 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Established socket connection between /10.233.66.58:5701 and /10.233.66.58:46190
Jan 06, 2018 5:23:25 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Connection[id=13, /10.233.66.58:5701->/10.233.66.58:46190, endpoint=null, alive=false, type=REST_CLIENT] closed. Reason: Connection closed by the other side
Jan 06, 2018 5:23:35 AM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Accepting socket connection from /10.233.66.58:46200
Jan 06, 2018 5:23:35 AM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Established socket connection between /10.233.66.58:5701 and /10.233.66.58:46200
Jan 06, 2018 5:23:35 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.233.66.58]:5701 [dev] [3.9.1] Connection[id=14, /10.233.66.58:5701->/10.233.66.58:46200, endpoint=null, alive=false, type=REST_CLIENT] closed. Reason: Connection closed by the other side

I'm not sure if this is an issue or expected behavior.

I would appreciate any guidance on this.
Thanks!

cannot get endpoints in the namespace "default"

SEVERE: [172.20.215.2]:5701 [dev] [3.9.2] Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/hazelcast. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. endpoints "hazelcast" is forbidden: User "system:serviceaccount:default:default" cannot get endpoints in the namespace "default".
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/hazelcast. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. endpoints "hazelcast" is forbidden: User "system:serviceaccount:default:default" cannot get endpoints in the namespace "default".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:470)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:407)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:787)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at com.hazelcast.kubernetes.ServiceEndpointResolver.resolve(ServiceEndpointResolver.java:81)
at com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy.discoverNodes(HazelcastKubernetesDiscoveryStrategy.java:102)
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.discoverNodes(DefaultDiscoveryService.java:74)
at com.hazelcast.internal.cluster.impl.DiscoveryJoiner.getPossibleAddresses(DiscoveryJoiner.java:70)
at com.hazelcast.internal.cluster.impl.DiscoveryJoiner.getPossibleAddressesForInitialJoin(DiscoveryJoiner.java:59)
at com.hazelcast.cluster.impl.TcpIpJoiner.joinViaPossibleMembers(TcpIpJoiner.java:131)
at com.hazelcast.cluster.impl.TcpIpJoiner.doJoin(TcpIpJoiner.java:90)
at com.hazelcast.internal.cluster.impl.AbstractJoiner.join(AbstractJoiner.java:134)
at com.hazelcast.instance.Node.join(Node.java:690)
at com.hazelcast.instance.Node.start(Node.java:390)
at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:133)
at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:195)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:174)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:124)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
at com.hazelcast.core.server.StartServer.main(StartServer.java:46)

Jan 15, 2018 3:36:53 AM com.hazelcast.instance.Node
SEVERE: [172.20.215.2]:5701 [dev] [3.9.2] Could not join cluster in 300000 ms. Shutting down now!
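
A minimal sketch of granting the service account the endpoints read access this error complains about. The binding name is made up and the built-in view cluster role is an assumption; adjust both to your cluster's RBAC policy:

# Hypothetical example: allow the default service account in the "default"
# namespace to read endpoints, so the Kubernetes discovery plugin can list members.
$ kubectl create clusterrolebinding hazelcast-endpoint-reader \
    --clusterrole=view \
    --serviceaccount=default:default

Once such a binding exists, the GET on .../endpoints/hazelcast should no longer be rejected with Forbidden.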

user code deployment to server for non-java clients

For Non-Java clients, it is difficult for a user to deploy server-side code.

For example:

The user wants to use an EntryProcessor. He needs to implement a processor and its factory on the Java server side. We decided on a use case like this:

  • The user will download hazelcast from https://hazelcast.org/download/
  • Extract the zip
  • Add his IdentifiedEntryProcessor.java and IdentifiedEntryProcessorFactory.java to an extra (maybe a better name) folder
  • Update the hazelcast.xml in the bin folder like:
<serialization>
        <data-serializable-factories>
            <data-serializable-factory factory-id="5">
                extra.IdentifiedEntryProcessorFactory
            </data-serializable-factory>
        </data-serializable-factories>
    </serialization>
...
  • Run sh start.sh

So the user can deploy code to the server easily. We should provide this feature for hazelcast-docker too.
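
Until such a feature exists, one possible workaround with the current image is to mount the compiled code and a custom hazelcast.xml into the container. This is only a sketch, under the assumption that /opt/hazelcast stays on the image's classpath (as the CLASSPATH printed in a report further below suggests):

# Hypothetical example: a JAR containing extra.IdentifiedEntryProcessorFactory is
# mounted into /opt/hazelcast (covered by the classpath wildcard), together with a
# hazelcast.xml that registers the data-serializable factory shown above.
$ docker run -v $(pwd)/entry-processors.jar:/opt/hazelcast/entry-processors.jar \
    -v $(pwd)/hazelcast.xml:/opt/hazelcast/hazelcast.xml \
    hazelcast/hazelcast:$HAZELCAST_VERSION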

Simple multicast in docker without --net:host

Hello Hazelcast team,

I am using your product in 3 of my apps so far, and I am very happy with the performance over these years! I am now moving to Docker and trying to migrate.
I could make a demo with your basic image running on two separate servers. But when I remove the --net:host option, I am unable to find the right config to make it work. Can you tell me if this setup is possible?
I saw many posts about auto discovery in Kubernetes, AWS, or simple unicast. Is multicast definitively not possible on physical hosts running simple containers (without any service discovery other than the default HZ multicast)? I am open to options.

My two servers:
Clup98:

  • Pub add: 10.109.0.35
  • Docker add: 172.21.0.2
  • Port: 6801
  • Multicast: 224.2.2.5:44327

Clup99:

  • Pub add: 10.109.0.36
  • Docker add: 172.21.0.2
  • Port: 6801
  • Multicast: 224.2.2.5:44327

docker-compose.yml

version: '2'
services:
  hazel:
    container_name: hazel
    image: hazelcast/hazelcast
    ports:
      - "6801:6801"
      - "44327:44327/udp"
    volumes:
      - ./hazelcast.xml:/opt/hazelcast/hazelcast.xml
    #network_mode: "host"
    command: ["sh","-c","/sbin/ip address && ./server.sh"]

hazelcast.xml (Clup98)

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<!--
    The default Hazelcast configuration. This is used when no hazelcast.xml is present.
    Please see the schema for how to configure Hazelcast at https://hazelcast.com/schema/config/hazelcast-config-3.8.xsd
    or the documentation at https://hazelcast.org/documentation/
-->
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.8.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <management-center enabled="false">http://localhost:8081/mancenter</management-center>
    <properties><property name="hazelcast.local.localAddress">172.21.0.2</property></properties>
    <network>
        <public-address>10.109.0.35</public-address>
        <port auto-increment="true" port-count="100">6801</port>
        <outbound-ports>
            <!--
            Allowed port range when connecting to other nodes.
            0 or * means use system provided port.
            -->
            <ports>0</ports>
        </outbound-ports>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.5</multicast-group>
                <multicast-port>44327</multicast-port>
                <!-- trusted-interfaces>
                   <interface>10.109.0.*</interface>
                </trusted-interfaces -->   
            </multicast>
            <tcp-ip enabled="false">
                <interface>*.*.*.*</interface>
                <member-list>
                    <member>*.*.*.*</member>
                </member-list>
            </tcp-ip>
            <aws enabled="false">
                <access-key>my-access-key</access-key>
                <secret-key>my-secret-key</secret-key>
                <!--optional, default is us-east-1 -->
                <region>us-west-1</region>
                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
                <host-header>ec2.amazonaws.com</host-header>
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
                <security-group-name>hazelcast-sg</security-group-name>
                <tag-key>type</tag-key>
                <tag-value>hz-nodes</tag-value>
            </aws>
            <discovery-strategies>
            </discovery-strategies>
        </join>
        <interfaces enabled="true">
            <interface>172.*.*.*</interface>
        </interfaces>
        <ssl enabled="false"/>
        <socket-interceptor enabled="false"/>
        <symmetric-encryption enabled="false">
            <!--
               encryption algorithm such as
               DES/ECB/PKCS5Padding,
               PBEWithMD5AndDES,
               AES/CBC/PKCS5Padding,
               Blowfish,
               DESede
            -->
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
    </network>
    <partition-group enabled="false"/>
    <executor-service name="default">
        <pool-size>16</pool-size>
        <!--Queue capacity. 0 means Integer.MAX_VALUE.-->
        <queue-capacity>0</queue-capacity>
    </executor-service>
    <queue name="default">
        <!--
            Maximum size of the queue. When a JVM's local queue size reaches the maximum,
            all put/offer operations will get blocked until the queue size
            of the JVM goes down below the maximum.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size>0</max-size>
        <!--
            Number of backups. If 1 is set as the backup-count for example,
            then all entries of the map will be copied to another JVM for
            fail-safety. 0 means no backup.
        -->
        <backup-count>1</backup-count>

        <!--
            Number of async backups. 0 means no backup.
        -->
        <async-backup-count>0</async-backup-count>

        <empty-queue-ttl>-1</empty-queue-ttl>
    </queue>
    <map name="default">
        <!--
           Data type that will be used for storing recordMap.
           Possible values:
           BINARY (default): keys and values will be stored as binary data
           OBJECT : values will be stored in their object forms
           NATIVE : values will be stored in non-heap region of JVM
        -->
        <in-memory-format>BINARY</in-memory-format>

        <!--
            Number of backups. If 1 is set as the backup-count for example,
            then all entries of the map will be copied to another JVM for
            fail-safety. 0 means no backup.
        -->
        <backup-count>1</backup-count>
        <!--
            Number of async backups. 0 means no backup.
        -->
        <async-backup-count>0</async-backup-count>
        <!--
			Maximum number of seconds for each entry to stay in the map. Entries that are
			older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
			will get automatically evicted from the map.
			Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
		-->
        <time-to-live-seconds>0</time-to-live-seconds>
        <!--
			Maximum number of seconds for each entry to stay idle in the map. Entries that are
			idle(not touched) for more than <max-idle-seconds> will get
			automatically evicted from the map. Entry is touched if get, put or containsKey is called.
			Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
		-->
        <max-idle-seconds>0</max-idle-seconds>
        <!--
            Valid values are:
            NONE (no eviction),
            LRU (Least Recently Used),
            LFU (Least Frequently Used).
            NONE is the default.
        -->
        <eviction-policy>NONE</eviction-policy>
        <!--
            Maximum size of the map. When max size is reached,
            map is evicted based on the policy defined.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size policy="PER_NODE">0</max-size>
        <!--
            `eviction-percentage` property is deprecated and will be ignored when it is set.

            As of version 3.7, eviction mechanism changed.
            It uses a probabilistic algorithm based on sampling. Please see documentation for further details
        -->
        <eviction-percentage>25</eviction-percentage>
        <!--
            `min-eviction-check-millis` property is deprecated  and will be ignored when it is set.

            As of version 3.7, eviction mechanism changed.
            It uses a probabilistic algorithm based on sampling. Please see documentation for further details
        -->
        <min-eviction-check-millis>100</min-eviction-check-millis>
        <!--
            While recovering from split-brain (network partitioning),
            map entries in the small cluster will merge into the bigger cluster
            based on the policy set here. When an entry merge into the
            cluster, there might an existing entry with the same key already.
            Values of these entries might be different for that same key.
            Which value should be set for the key? Conflict is resolved by
            the policy set here. Default policy is PutIfAbsentMapMergePolicy

            There are built-in merge policies such as
            com.hazelcast.map.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
            com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
            com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
            com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
        -->
        <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>

        <!--
           Control caching of de-serialized values. Caching makes query evaluation faster, but it cost memory.
           Possible Values:
                        NEVER: Never cache deserialized object
                        INDEX-ONLY: Caches values only when they are inserted into an index.
                        ALWAYS: Always cache deserialized values.
        -->
        <cache-deserialized-values>INDEX-ONLY</cache-deserialized-values>

    </map>

    <multimap name="default">
        <backup-count>1</backup-count>
        <value-collection-type>SET</value-collection-type>
    </multimap>

    <list name="default">
        <backup-count>1</backup-count>
    </list>

    <set name="default">
        <backup-count>1</backup-count>
    </set>

    <jobtracker name="default">
        <max-thread-size>0</max-thread-size>
        <!-- Queue size 0 means number of partitions * 2 -->
        <queue-size>0</queue-size>
        <retry-count>0</retry-count>
        <chunk-size>1000</chunk-size>
        <communicate-stats>true</communicate-stats>
        <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
    </jobtracker>

    <semaphore name="default">
        <initial-permits>0</initial-permits>
        <backup-count>1</backup-count>
        <async-backup-count>0</async-backup-count>
    </semaphore>

    <reliable-topic name="default">
        <read-batch-size>10</read-batch-size>
        <topic-overload-policy>BLOCK</topic-overload-policy>
        <statistics-enabled>true</statistics-enabled>
    </reliable-topic>

    <ringbuffer name="default">
        <capacity>10000</capacity>
        <backup-count>1</backup-count>
        <async-backup-count>0</async-backup-count>
        <time-to-live-seconds>0</time-to-live-seconds>
        <in-memory-format>BINARY</in-memory-format>
    </ringbuffer>

    <serialization>
        <portable-version>0</portable-version>
    </serialization>

    <services enable-defaults="true"/>

    <lite-member enabled="false"/>

</hazelcast>

hazelcast.xml (Clup99)

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<!--
    The default Hazelcast configuration. This is used when no hazelcast.xml is present.
    Please see the schema for how to configure Hazelcast at https://hazelcast.com/schema/config/hazelcast-config-3.8.xsd
    or the documentation at https://hazelcast.org/documentation/
-->
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.8.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <management-center enabled="false">http://localhost:8081/mancenter</management-center>
    <properties><property name="hazelcast.local.localAddress">172.21.0.2</property></properties>
    <network>
        <public-address>10.109.0.36</public-address>
        <port auto-increment="true" port-count="100">6801</port>
        <outbound-ports>
            <!--
            Allowed port range when connecting to other nodes.
            0 or * means use system provided port.
            -->
            <ports>0</ports>
        </outbound-ports>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.5</multicast-group>
                <multicast-port>44327</multicast-port>
                <!-- trusted-interfaces>
                   <interface>10.109.0.*</interface>
                </trusted-interfaces -->   
            </multicast>
            <tcp-ip enabled="false">
                <interface>*.*.*.*</interface>
                <member-list>
                    <member>*.*.*.*</member>
                </member-list>
            </tcp-ip>
            <aws enabled="false">
                <access-key>my-access-key</access-key>
                <secret-key>my-secret-key</secret-key>
                <!--optional, default is us-east-1 -->
                <region>us-west-1</region>
                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
                <host-header>ec2.amazonaws.com</host-header>
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
                <security-group-name>hazelcast-sg</security-group-name>
                <tag-key>type</tag-key>
                <tag-value>hz-nodes</tag-value>
            </aws>
            <discovery-strategies>
            </discovery-strategies>
        </join>
        <interfaces enabled="true">
            <interface>172.*.*.*</interface>
        </interfaces>
        <ssl enabled="false"/>
        <socket-interceptor enabled="false"/>
        <symmetric-encryption enabled="false">
            <!--
               encryption algorithm such as
               DES/ECB/PKCS5Padding,
               PBEWithMD5AndDES,
               AES/CBC/PKCS5Padding,
               Blowfish,
               DESede
            -->
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
    </network>
    <partition-group enabled="false"/>
    <executor-service name="default">
        <pool-size>16</pool-size>
        <!--Queue capacity. 0 means Integer.MAX_VALUE.-->
        <queue-capacity>0</queue-capacity>
    </executor-service>
    <queue name="default">
        <!--
            Maximum size of the queue. When a JVM's local queue size reaches the maximum,
            all put/offer operations will get blocked until the queue size
            of the JVM goes down below the maximum.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size>0</max-size>
        <!--
            Number of backups. If 1 is set as the backup-count for example,
            then all entries of the map will be copied to another JVM for
            fail-safety. 0 means no backup.
        -->
        <backup-count>1</backup-count>

        <!--
            Number of async backups. 0 means no backup.
        -->
        <async-backup-count>0</async-backup-count>

        <empty-queue-ttl>-1</empty-queue-ttl>
    </queue>
    <map name="default">
        <!--
           Data type that will be used for storing recordMap.
           Possible values:
           BINARY (default): keys and values will be stored as binary data
           OBJECT : values will be stored in their object forms
           NATIVE : values will be stored in non-heap region of JVM
        -->
        <in-memory-format>BINARY</in-memory-format>

        <!--
            Number of backups. If 1 is set as the backup-count for example,
            then all entries of the map will be copied to another JVM for
            fail-safety. 0 means no backup.
        -->
        <backup-count>1</backup-count>
        <!--
            Number of async backups. 0 means no backup.
        -->
        <async-backup-count>0</async-backup-count>
        <!--
			Maximum number of seconds for each entry to stay in the map. Entries that are
			older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
			will get automatically evicted from the map.
			Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
		-->
        <time-to-live-seconds>0</time-to-live-seconds>
        <!--
			Maximum number of seconds for each entry to stay idle in the map. Entries that are
			idle(not touched) for more than <max-idle-seconds> will get
			automatically evicted from the map. Entry is touched if get, put or containsKey is called.
			Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
		-->
        <max-idle-seconds>0</max-idle-seconds>
        <!--
            Valid values are:
            NONE (no eviction),
            LRU (Least Recently Used),
            LFU (Least Frequently Used).
            NONE is the default.
        -->
        <eviction-policy>NONE</eviction-policy>
        <!--
            Maximum size of the map. When max size is reached,
            map is evicted based on the policy defined.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size policy="PER_NODE">0</max-size>
        <!--
            `eviction-percentage` property is deprecated and will be ignored when it is set.

            As of version 3.7, eviction mechanism changed.
            It uses a probabilistic algorithm based on sampling. Please see documentation for further details
        -->
        <eviction-percentage>25</eviction-percentage>
        <!--
            `min-eviction-check-millis` property is deprecated  and will be ignored when it is set.

            As of version 3.7, eviction mechanism changed.
            It uses a probabilistic algorithm based on sampling. Please see documentation for further details
        -->
        <min-eviction-check-millis>100</min-eviction-check-millis>
        <!--
            While recovering from split-brain (network partitioning),
            map entries in the small cluster will merge into the bigger cluster
            based on the policy set here. When an entry merge into the
            cluster, there might an existing entry with the same key already.
            Values of these entries might be different for that same key.
            Which value should be set for the key? Conflict is resolved by
            the policy set here. Default policy is PutIfAbsentMapMergePolicy

            There are built-in merge policies such as
            com.hazelcast.map.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
            com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
            com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
            com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
        -->
        <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>

        <!--
           Control caching of de-serialized values. Caching makes query evaluation faster, but it cost memory.
           Possible Values:
                        NEVER: Never cache deserialized object
                        INDEX-ONLY: Caches values only when they are inserted into an index.
                        ALWAYS: Always cache deserialized values.
        -->
        <cache-deserialized-values>INDEX-ONLY</cache-deserialized-values>

    </map>

    <multimap name="default">
        <backup-count>1</backup-count>
        <value-collection-type>SET</value-collection-type>
    </multimap>

    <list name="default">
        <backup-count>1</backup-count>
    </list>

    <set name="default">
        <backup-count>1</backup-count>
    </set>

    <jobtracker name="default">
        <max-thread-size>0</max-thread-size>
        <!-- Queue size 0 means number of partitions * 2 -->
        <queue-size>0</queue-size>
        <retry-count>0</retry-count>
        <chunk-size>1000</chunk-size>
        <communicate-stats>true</communicate-stats>
        <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
    </jobtracker>

    <semaphore name="default">
        <initial-permits>0</initial-permits>
        <backup-count>1</backup-count>
        <async-backup-count>0</async-backup-count>
    </semaphore>

    <reliable-topic name="default">
        <read-batch-size>10</read-batch-size>
        <topic-overload-policy>BLOCK</topic-overload-policy>
        <statistics-enabled>true</statistics-enabled>
    </reliable-topic>

    <ringbuffer name="default">
        <capacity>10000</capacity>
        <backup-count>1</backup-count>
        <async-backup-count>0</async-backup-count>
        <time-to-live-seconds>0</time-to-live-seconds>
        <in-memory-format>BINARY</in-memory-format>
    </ringbuffer>

    <serialization>
        <portable-version>0</portable-version>
    </serialization>

    <services enable-defaults="true"/>

    <lite-member enabled="false"/>

</hazelcast>

Each host can see only itself:

Attaching to hazel
hazel    | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
hazel    |     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
hazel    |     inet 127.0.0.1/8 scope host lo
hazel    |        valid_lft forever preferred_lft forever
hazel    |     inet6 ::1/128 scope host 
hazel    |        valid_lft forever preferred_lft forever
hazel    | 242: eth0@if243: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
hazel    |     link/ether 02:42:ac:15:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
hazel    |     inet 172.21.0.3/16 scope global eth0
hazel    |        valid_lft forever preferred_lft forever
hazel    |     inet6 fe80::42:acff:fe15:3/64 scope link tentative 
hazel    |        valid_lft forever preferred_lft forever
hazel    | ########################################
hazel    | # RUN_JAVA=
hazel    | # JAVA_OPTS=
hazel    | # starting now....
hazel    | ########################################
hazel    | Process id 9 for hazelcast instance is written to location:  /opt/hazelcast/hazelcast_instance.pid
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.config.XmlConfigLocator
hazel    | INFO: Loading 'hazelcast.xml' from working directory.
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.instance.DefaultAddressPicker
hazel    | INFO: [LOCAL] [dev] [3.8.3] Picking address configured by property 'hazelcast.local.localAddress'
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.instance.DefaultAddressPicker
hazel    | INFO: [LOCAL] [dev] [3.8.3] Picked [172.21.0.2]:6801, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=6801], bind any local is true
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.instance.DefaultAddressPicker
hazel    | INFO: [LOCAL] [dev] [3.8.3] Using public address: [10.109.0.36]:6801
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.system
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Hazelcast 3.8.3 (20170704 - 10e1449) starting at [10.109.0.36]:6801
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.system
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.system
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Configured Hazelcast Serialization version : 1
hazel    | Sep 22, 2017 4:05:17 PM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Backpressure is disabled
hazel    | Sep 22, 2017 4:05:18 PM com.hazelcast.instance.Node
hazel    | WARNING: [10.109.0.36]:6801 [dev] [3.8.3] Cannot assign requested address (Error setting socket option)
hazel    | java.net.SocketException: Cannot assign requested address (Error setting socket option)
hazel    | 	at java.net.PlainDatagramSocketImpl.socketSetOption0(Native Method)
hazel    | 	at java.net.PlainDatagramSocketImpl.socketSetOption(PlainDatagramSocketImpl.java:74)
hazel    | 	at java.net.AbstractPlainDatagramSocketImpl.setOption(AbstractPlainDatagramSocketImpl.java:309)
hazel    | 	at java.net.MulticastSocket.setInterface(MulticastSocket.java:471)
hazel    | 	at com.hazelcast.internal.cluster.impl.MulticastService.createMulticastService(MulticastService.java:98)
hazel    | 	at com.hazelcast.instance.Node.<init>(Node.java:212)
hazel    | 	at com.hazelcast.instance.HazelcastInstanceImpl.createNode(HazelcastInstanceImpl.java:159)
hazel    | 	at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:127)
hazel    | 	at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:218)
hazel    | 	at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:176)
hazel    | 	at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:126)
hazel    | 	at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
hazel    | 	at com.hazelcast.core.server.StartServer.main(StartServer.java:46)
hazel    | 
hazel    | Sep 22, 2017 4:05:18 PM com.hazelcast.instance.Node
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Creating MulticastJoiner
hazel    | Sep 22, 2017 4:05:18 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Starting 2 partition threads
hazel    | Sep 22, 2017 4:05:18 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Starting 3 generic threads (1 dedicated for priority tasks)
hazel    | Sep 22, 2017 4:05:18 PM com.hazelcast.core.LifecycleService
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] [10.109.0.36]:6801 is STARTING
hazel    | Sep 22, 2017 4:05:21 PM com.hazelcast.system
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] Cluster version set to 3.8
hazel    | Sep 22, 2017 4:05:21 PM com.hazelcast.internal.cluster.impl.MulticastJoiner
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] 
hazel    | 
hazel    | 
hazel    | Members [1] {
hazel    | 	Member [10.109.0.36]:6801 - f15f8d4f-e217-49e0-9de7-1a1d656d9b9a this
hazel    | }
hazel    | 
hazel    | Sep 22, 2017 4:05:21 PM com.hazelcast.core.LifecycleService
hazel    | INFO: [10.109.0.36]:6801 [dev] [3.8.3] [10.109.0.36]:6801 is STARTED

Not sure how to troubleshoot this issue.

Thank you for any help on this matter & good weekend!
Regards,
Greg.

not able to provide custom hazelcast.xml config file

I am trying to run the Docker image with the following command:
docker run -e JAVA_OPTS="-Dhazelcast.config=/Users/dhruvbansal/hazelcast.xml" hazelcast/hazelcast

I am getting the following exception:

+ exec java -server -Djava.net.preferIPv4Stack=true -Dhazelcast.config=/Users/dhruvbansal/Documents/cafu/installers/hazelcast-3.10.4/bin com.hazelcast.core.server.StartServer
########################################
# JAVA_OPTS=-Djava.net.preferIPv4Stack=true -Dhazelcast.config=/Users/dhruvbansal/Documents/cafu/installers/hazelcast-3.10.4/bin
# CLASSPATH=/opt/hazelcast:/opt/hazelcast/*
# starting now....
########################################
Aug 23, 2018 7:55:18 AM com.hazelcast.config.XmlConfigLocator
INFO: Loading configuration /Users/dhruvbansal/Documents/cafu/installers/hazelcast-3.10.4/bin from System property 'hazelcast.config'
Aug 23, 2018 7:55:18 AM com.hazelcast.config.XmlConfigLocator
INFO: Using configuration file at /Users/dhruvbansal/Documents/cafu/installers/hazelcast-3.10.4/bin
Exception in thread "main" com.hazelcast.core.HazelcastException: com.hazelcast.core.HazelcastException: Config file at '/Users/dhruvbansal/hazelcast.xml' doesn't exist.
	at com.hazelcast.config.XmlConfigLocator.<init>(XmlConfigLocator.java:68)
	at com.hazelcast.config.XmlConfigBuilder.<init>(XmlConfigBuilder.java:179)
	at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:122)
	at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:57)
	at com.hazelcast.core.server.StartServer.main(StartServer.java:46)
Caused by: com.hazelcast.core.HazelcastException: Config file at '/Users/dhruvbansal/Documents/cafu/installers/hazelcast-3.10.4/bin' doesn't exist.
	at com.hazelcast.config.XmlConfigLocator.loadSystemPropertyFileResource(XmlConfigLocator.java:160)
	at com.hazelcast.config.XmlConfigLocator.loadFromSystemProperty(XmlConfigLocator.java:148)
	at com.hazelcast.config.XmlConfigLocator.<init>(XmlConfigLocator.java:54)
	... 4 more

The file at /Users/dhruvbansal/hazelcast.xml is present and valid.
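
Note that the path given to -Dhazelcast.config is resolved inside the container, so a host path such as /Users/dhruvbansal/hazelcast.xml is not visible to the JVM. A minimal sketch of mounting the file and pointing the property at the in-container path (the target location mirrors the volume mounts used in other reports here):

$ docker run -v /Users/dhruvbansal/hazelcast.xml:/opt/hazelcast/hazelcast.xml \
    -e JAVA_OPTS="-Dhazelcast.config=/opt/hazelcast/hazelcast.xml" \
    hazelcast/hazelcast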

logback not working

I am trying to customize the logging to disable the messages coming from the Kubernetes liveness probe, so I added the file below to the classpath:
logback.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
        </encoder>
    </appender>
    <logger name="com.hazelcast" level="INFO">
      <appender-ref ref="STDOUT"/>
    </logger>
    <logger name="com.hazelcast.nio.tcp.SocketAcceptorThread" level="WARN">
      <appender-ref ref="STDOUT"/>
    </logger>
    <logger name="com.hazelcast.nio.tcp.TcpIpConnectionManager" level="WARN">
      <appender-ref ref="STDOUT"/>
    </logger>
    <logger name="com.hazelcast.nio.tcp.TcpIpConnection" level="WARN">
      <appender-ref ref="STDOUT"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

hazelcast.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.9.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">


  <management-center enabled="true">
      http://10.109.72.150:8080/hazelcast-mancenter
  </management-center>




  <properties>
    <property name="hazelcast.discovery.enabled">true</property>
    <property name="hazelcast.logging.type">slf4j</property>
  </properties>

  <network>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="false" />

      <discovery-strategies>
        <discovery-strategy enabled="true" class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
        </discovery-strategy>
      </discovery-strategies>
    </join>
  </network>
  <map name="cdr-import-carrier">
    <backup-count>1</backup-count>
    <time-to-live-seconds>300</time-to-live-seconds>
    <max-idle-seconds>180</max-idle-seconds>
    <eviction-policy>LFU</eviction-policy>
  </map>
</hazelcast>

The logging pattern did change, but it does not seem to be reading from my logback.xml file, because it still shows some DEBUG and INFO output from the classes I set to WARN.

Is there a default logback.xml that I should override?

17:20:46.309 [hz._hzInstance_1_dev.IO.thread-Acceptor] INFO com.hazelcast.nio.tcp.TcpIpAcceptor - [10.36.0.0]:5701 [dev] [3.10.2] Accepting socket connection from /10.36.0.1:54460
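
A minimal sketch of forcing the member to pick up a specific Logback configuration; the mount path is an assumption, and -Dlogback.configurationFile is the standard Logback property for pointing at a configuration file explicitly instead of relying on the classpath lookup:

$ docker run -v $(pwd)/logback.xml:/opt/hazelcast/logback.xml \
    -e JAVA_OPTS="-Dhazelcast.logging.type=slf4j -Dlogback.configurationFile=/opt/hazelcast/logback.xml" \
    hazelcast/hazelcast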

Template hazelcast.xml with Management Center URL

Currently, in order to use Management Center, it's necessary to create a separate Docker image with the custom hazelcast.xml or use a volume with the custom hazelcast.xml. Running Hazelcast with Management Center should be as simple as executing:

$ docker run -e JAVA_OPTS="-Dmancenter.url=<url>" hazelcast/hazelcast
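
For comparison, the current workaround described above, as a sketch (the in-container location of hazelcast.xml follows the volume mounts used in other reports here):

# Mount a hazelcast.xml whose <management-center enabled="true"> element points
# at the Management Center URL.
$ docker run -v $(pwd)/hazelcast.xml:/opt/hazelcast/hazelcast.xml hazelcast/hazelcast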

EC2 discovery error

Hi,

I'm trying to boot 2 Docker containers on 2 CoreOS nodes with AWS auto-discovery, but I get this stack trace:

Aug 11 09:05:18 coreos-worker-01 docker[1213]: Digest: sha256:38838ae69d4ca5cae6d5e156ba1ed525f8a76ecd4e9a1e57b86946e943785125
Aug 11 09:05:19 coreos-worker-01 systemd[1]: Started Hazelcast OSS.
Aug 11 09:05:22 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:22 AM com.hazelcast.config.XmlConfigLocator
Aug 11 09:05:22 coreos-worker-01 docker[1342]: INFO: Loading 'hazelcast.xml' from working directory.
Aug 11 09:05:22 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:22 AM com.hazelcast.instance.DefaultAddressPicker
Aug 11 09:05:23 coreos-worker-01 docker[1342]: INFO: [LOCAL] [dev] [3.5] Prefer IPv4 stack is true.
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:23 AM com.hazelcast.instance.DefaultAddressPicker
Aug 11 09:05:23 coreos-worker-01 docker[1342]: INFO: [LOCAL] [dev] [3.5] Picked Address[172.32.78.2]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:23 AM com.hazelcast.spi.OperationService
Aug 11 09:05:23 coreos-worker-01 docker[1342]: INFO: [172.32.78.2]:5701 [dev] [3.5] Backpressure is disabled
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:23 AM com.hazelcast.spi.impl.operationexecutor.classic.ClassicOperationExecutor
Aug 11 09:05:23 coreos-worker-01 docker[1342]: INFO: [172.32.78.2]:5701 [dev] [3.5] Starting with 2 generic operation threads and 2 partition operation threads.
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:23 AM com.hazelcast.system
Aug 11 09:05:23 coreos-worker-01 docker[1342]: INFO: [172.32.78.2]:5701 [dev] [3.5] Hazelcast 3.5 (20150617 - 4270dc6) starting at Address[172.32.78.2]:5701
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:23 AM com.hazelcast.system
Aug 11 09:05:23 coreos-worker-01 docker[1342]: INFO: [172.32.78.2]:5701 [dev] [3.5] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Aug 11, 2015 9:05:23 AM com.hazelcast.instance.Node
Aug 11 09:05:23 coreos-worker-01 docker[1342]: INFO: [172.32.78.2]:5701 [dev] [3.5] Creating AWSJoiner
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Exception in thread "main" com.hazelcast.core.HazelcastException: java.lang.ClassNotFoundException: com.hazelcast.cluster.impl.TcpIpJoinerOverAWS
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.util.ExceptionUtil.rethrow(ExceptionUtil.java:67)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.Node.createJoiner(Node.java:568)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.DefaultNodeContext.createJoiner(DefaultNodeContext.java:35)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.Node.<init>(Node.java:171)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:120)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:152)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:135)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:111)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.core.server.StartServer.main(StartServer.java:36)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: Caused by: java.lang.ClassNotFoundException: com.hazelcast.cluster.impl.TcpIpJoinerOverAWS
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.security.AccessController.doPrivileged(Native Method)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.lang.Class.forName0(Native Method)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at java.lang.Class.forName(Class.java:191)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: at com.hazelcast.instance.Node.createJoiner(Node.java:564)
Aug 11 09:05:23 coreos-worker-01 docker[1342]: ... 8 more

The AWS part of my hazelcast.xml:

            <aws enabled="true">
                <access-key>AKIAIXXX</access-key>
                <secret-key>XXX</secret-key>
                <!--optional, default is us-east-1 -->
                <region>us-east-1</region>
                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
              <!--  <host-header>ec2.amazonaws.com</host-header> -->
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
              <!--  <security-group-name>default</security-group-name> -->
            </aws>

and the complete xml: https://gist.github.com/arthur-c/e3ed7d9b8f90aeb12a4e

My Dockerfile:

FROM hazelcast/hazelcast:latest
# Add your custom hazelcast.xml
ADD hazelcast.xml $HZ_HOME
# Run hazelcast
CMD java -cp $HZ_HOME/hazelcast-$HZ_VERSION.jar com.hazelcast.core.server.StartServer

I'm not sure whether this is related to this bug: hazelcast/hazelcast#5653
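
One thing worth checking: the Dockerfile above puts only hazelcast-$HZ_VERSION.jar on the classpath, while TcpIpJoinerOverAWS lives in the AWS/cloud module. Assuming the image also ships hazelcast-all-$HZ_VERSION.jar (as the start.sh CLASSPATH quoted in a later report suggests), a sketch of a CMD that uses it instead would be:

# Hypothetical change to the Dockerfile above: run against the hazelcast-all JAR,
# which bundles the AWS joiner, instead of the core hazelcast JAR.
CMD java -cp $HZ_HOME/hazelcast-all-$HZ_VERSION.jar com.hazelcast.core.server.StartServer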

Hazelcast Client on Host fails to work with Hazelcast Cluster in Docker

I have a simple use case where I want to spin up a Hazelcast cluster in Docker on my laptop, but I want to connect to that cluster with a Hazelcast client I am coding in my local IDE on the host OS.

Simple enough to reproduce: start 2 Hazelcast Docker cluster instances...

docker run -it -p 5701:5701 hazelcast/hazelcast
docker run -it -p 5702:5702 hazelcast/hazelcast

Start a client on the host OS and connect to 127.0.0.1:5701; it reports back:

Members [2] {
	Member [172.17.0.2]:5701 - a126f9e6-16fe-443a-b9a0-ee5fe04949b3
	Member [172.17.0.3]:5702 - 353a41ef-6bc0-489a-8944-0e1de7c39f34
}

Any other operation afterwards just times out, eventually reporting:

Caused by: java.io.IOException: No available connection to address [172.17.0.3]:5702

eval JAVA_OPTS in start.sh

In a Kubernetes environment one may want to

  • set an environment variable via Helm, etc.
  • consume that environment variable as part of JAVA_OPTS

An eval should allow JAVA_OPTS to resolve the env var (see the sketch after the OpenLiberty example below):

          env:
          - name: KEYSTORE_PASSWORD
            value: {{ .Values.keystorePassword}}
          - name: JAVA_OPTS
            value: "-DkeystorePassword=${KEYSTORE_PASSWORD}"

As an example, this is a convenience that is currently used in OpenLiberty

https://github.com/OpenLiberty/open-liberty/blob/master/dev/com.ibm.ws.kernel.boot.ws-server/publish/bin/server#L826
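
A sketch of what the suggested change could look like in start.sh (the surrounding lines are assumptions based on the startup banner quoted in other reports):

# Let the shell expand ${...} references inside JAVA_OPTS (e.g. ${KEYSTORE_PASSWORD})
# before they are handed to the JVM.
eval JAVA_OPTS=\"$JAVA_OPTS\"
exec java -server $JAVA_OPTS com.hazelcast.core.server.StartServer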

Print a warning when British spelling of "License" is used

I was banging my head against the keyboard because my Hazelcast Docker image refused to start.
It took me a while to realize I had set the HZ_LICENCE_KEY env property instead of HZ_LICENSE_KEY.

In the UK, "license" is used as a verb, while the noun is spelled "licence".
I think a warning here would be nice.

Right now a misspelled key is silently ignored.
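
A sketch of the kind of check the start script could perform (the variable names are the ones from this report; the exact names used by the image may differ):

# Warn about the common British-English spelling instead of silently ignoring it.
if [ -n "$HZ_LICENCE_KEY" ] && [ -z "$HZ_LICENSE_KEY" ]; then
  echo "WARNING: HZ_LICENCE_KEY is set but not recognized; did you mean HZ_LICENSE_KEY?"
fi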

mancenter docker image is not running

When I run the command
docker run -ti -p 8080:8080 hazelcast/management-center:latest

it throws this error:

########################################
# RUN_JAVA=
# JAVA_OPTS=
# starting now....
########################################
Error: Unable to access jarfile mancenter-.war

This env var in the Dockerfile is not used in the .sh file:
ENV HZ_VERSION 3.8.2

Also, java and java_opts are not printed out.

REST interface does not seem to work

curl -vvv "http://192.168.99.100:5701/hazelcast/rest/cluster"

* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 5701 (#0)
> GET /hazelcast/rest/cluster HTTP/1.1
> Host: 192.168.99.100:5701
> User-Agent: curl/7.43.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 192.168.99.100 left intact
curl: (52) Empty reply from server

Hazelcast logs
hazelcast | INFO: [172.17.0.5]:5701 [dev] [3.6.2] Established socket connection between /172.17.0.5:5701 and /192.168.99.1:55400
hazelcast | May 10, 2016 7:30:25 AM com.hazelcast.nio.tcp.TcpIpConnection
hazelcast | INFO: [172.17.0.5]:5701 [dev] [3.6.2] Connection [/192.168.99.1:55400] lost. Reason: Socket explicitly closed
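
In Hazelcast 3.x the member REST API is disabled by default (hazelcast.rest.enabled=false), which typically produces exactly this kind of empty reply. A minimal sketch of enabling it via a system property, assuming no other REST-related configuration is in play:

$ docker run -p 5701:5701 -e JAVA_OPTS="-Dhazelcast.rest.enabled=true" hazelcast/hazelcast
$ curl "http://192.168.99.100:5701/hazelcast/rest/cluster"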

CLASSPATH usage is misleading

See the line below:
export CLASSPATH=$HAZELCAST_HOME/hazelcast-all-$HZ_VERSION.jar:$CLASSPATH/*

CLASSPATH is treated as a folder, which is not always true; colon-separated values are a valid classpath as well.

export CLASSPATH=$EXT_CP_DIR/*:$EXT_CP_DIR:$HAZELCAST_HOME/hazelcast-all-$HZ_VERSION.jar could be a better option:

$EXT_CP_DIR/* --> for jar files

$EXT_CP_DIR --> for hazelcast.xml (we could also try to detect hazelcast.xml and pass it via hazelcast.config instead of adding one more EXT_CP_DIR entry)

Improve out of the box Docker experience

Currently, users have to set up the tcp-ip configuration for each Hazelcast instance in Docker.

To overcome this obstacle, we should utilize Discovery SPI mechanisms in our Docker images.

Users should be able to run a Hazelcast cluster by just configuring the URL of the discovery mechanism.

Consider providing slim down version of Hazelcast images

Topic for discussion - follow up to #34.

The current hazelcast/hazelcast image size is 321MB, but the Hazelcast JAR itself takes only 9MB of it. The image could be shrunk by using a different base image. Then Hazelcast would provide either only the reduced version, or both the full and the reduced one.

The openjdk Docker repository offers *-jre-slim and/or *-jre-alpine images which could be used to reduce the size. A quick test shows the following sizes of the final images:

  • 285MB - hazelcast on the top of openjdk:8-jre
  • 216MB - hazelcast on the top of openjdk:8-jre-slim
  • 95MB - hazelcast on the top of openjdk:8-jre-alpine (with bash package added to the image)
