autopilotpattern / consul
Implementation of the Autopilot Pattern for HashiCorp's Consul
License: Other
I used cloudapi.example.com and dockerapi.example.com for my installation. Is there a better way to do this check?
# make sure Docker client is pointed to the same place as the Triton client
local docker_user=$(docker info 2>&1 | awk -F": " '/SDCAccount:/{print $2}')
local docker_dc=$(echo $DOCKER_HOST | awk -F"/" '{print $3}' | awk -F'.' '{print $1}')
TRITON_USER=$(triton profile get | awk -F": " '/account:/{print $2}')
TRITON_DC=$(triton profile get | awk -F"/" '/url:/{print $3}' | awk -F'.' '{print $1}')
TRITON_ACCOUNT=$(triton account get | awk -F": " '/id:/{print $2}')
if [ "$docker_user" != "$TRITON_USER" ] || [ "$docker_dc" != "$TRITON_DC" ]; then
    echo
    tput rev  # reverse
    tput bold # bold
    echo 'Error! The Triton CLI configuration does not match the Docker CLI configuration.'
    tput sgr0 # clear
    exit 1
fi
The domain name is hard-coded:
# setup environment file
if [ ! -f "_env" ]; then
    echo '# Consul bootstrap via Triton CNS' >> _env
    echo CONSUL=consul.svc.${TRITON_ACCOUNT}.${TRITON_DC}.cns.joyent.com >> _env
    echo >> _env
fi
@jacobloveless writes...
I think the health check on the agent needs to be a bit more robust. If I read this correctly, this will mark the agent as healthy even if it isn't yet in sync on raft, which might cause issues. Perhaps something like...
if [ "$(consul info | awk '/members/{print $3}')" -eq 1 ]; then
_log "No peers in raft"
consul join ${CONSUL}
fi
The documentation for the existing Consul server health check matched the behavior when I investigated previous versions. Here's the existing check and its documentation for reference:
#
# Check if a member of a raft. If consul info returns an error we'll pipefail
# and exit for a failed health check.
#
# If we have no peers then try to join the raft via the CNS svc record. Once a
# node is connected to at least one other peer it'll get the rest of the raft
# via the Consul LAN gossip.
#
# If we end up joining ourselves we just retry on the next health check until
# we've got the whole cluster together.
#
health() {
if [ "$(consul info | awk '/num_peers/{print $3}')" -eq 0 ]; then
_log "No peers in raft"
consul join ${CONSUL}
fi
}
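Combining the two checks above, a stricter version might gate on leader election as well. This is only a sketch, not the repo's code: it assumes Consul's `key = value` style `consul info` output and the `CONSUL` service name from `_env`.

```shell
# Sketch of a stricter health check (assumptions: `consul info` prints
# "num_peers = N" and "leader = true/false"; CONSUL holds the CNS name).
parse_num_peers() {
    awk -F' = ' '/num_peers/{print $2}'
}
parse_has_leader() {
    awk -F' = ' '$1 ~ /leader$/{print $2}'
}
health() {
    info=$(consul info) || return 1     # consul error => failed health check
    peers=$(printf '%s\n' "$info" | parse_num_peers)
    if [ "${peers:-0}" -eq 0 ]; then
        echo "No peers in raft"
        consul join "${CONSUL}"
        return 1
    fi
    # stay unhealthy until the raft has actually elected a leader
    [ "$(printf '%s\n' "$info" | parse_has_leader)" = "true" ]
}
```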
I tried running ./start.sh, which spawned 3 consul instances, each trying to join the others but failing to do so. Here is a log dump for your reference.
Successfully joined cluster by contacting 1 nodes.
2016/06/23 17:33:59 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 17:33:59 [ERR] agent: failed to sync remote state: No cluster leader
2016/06/23 17:34:08 [INFO] agent.rpc: Accepted client: 127.0.0.1:62022
2016-06-23 17:34:08 containerpilot: No peers in raft
2016-06-23 17:34:08 containerpilot: Bootstrapping raft with self
2016/06/23 17:34:09 [INFO] agent.rpc: Accepted client: 127.0.0.1:61514
2016/06/23 17:34:09 [INFO] agent: (LAN) joining: [192.168.128.18]
2016/06/23 17:34:09 [INFO] agent: (LAN) joined: 1 Err:
Successfully joined cluster by contacting 1 nodes.
2016/06/23 17:34:09 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 17:34:14 [ERR] agent: coordinate update error: No cluster leader
What could be causing the issue?
Once the TTL for a service has expired, is there any expectation that the service will come back with Triton (scaled down: no, service updated: no, hardware failure: maybe, os update: yes)?
Would it make sense for a cleanup process to exist within this container (once a service has been down say 30 mins or whatever)?
Error from /test/node ./raft-test.js
Using Joyent us-east2 as docker host.
{ Error: (HTTP code 500) server error - invalid filters: Error: invalid filter "0" - expected an array or object, got: "{\"label\":[\"com.docker.compose.service=consul\"]}" (XXXXXXX-XXXX-XXXXX-XX)
    at /apconsul/test/node_modules/docker-modem/lib/modem.js:229:17
    at getCause (/apconsuledgemesh-deployment/test/node_modules/docker-modem/lib/modem.js:259:7)
    at Modem.buildPayload (/apconsul/test/node_modules/docker-modem/lib/modem.js:228:5)
    at IncomingMessage.<anonymous> (/apconsul/test/node_modules/docker-modem/lib/modem.js:204:14)
    at emitNone (events.js:91:20)
    at IncomingMessage.emit (events.js:185:7)
    at endReadableNT (_stream_readable.js:974:12)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickCallback (internal/process/next_tick.js:98:9)
  reason: 'server error',
  statusCode: 500,
  json: 'invalid filters: Error: invalid filter "0" - expected an array or object, got: "{\\"label\\":[\\"com.docker.compose.service=consul\\"]}" (XXXXXXX-XXXX-XXXXX-XX)' }
This is likely not an issue with this code, but I sometimes have a situation where the consul raft fails to elect a leader, so writing the nginx template loop never succeeds. This seems to happen most often in the Amsterdam datacenter. Do you have any idea what would cause this? I understand that it can take a while to elect a leader, but I have let it run as long as 5 minutes with no success. This does not always occur, but it does fairly regularly in ams.
We occasionally see this behavior in Triton:
➜ consul git:(master) ✗ ./start.sh
Starting a Triton trusted Consul service
Pulling the most recent images
WARNING: The DOCKER_CLIENT_TIMEOUT environment variable is deprecated. Please use COMPOSE_HTTP_TIMEOUT instead.
Pulling consul (autopilotpattern/consul:latest)...
latest: Pulling from autopilotpattern/consul (req 270197e0-12e7-11e6-92bb-d5e1dd6ff0dc)
830450f9a5ce: Already exists
739eb5ccedb4: Already exists
1cc32231595a: Already exists
d992bfbda305: Already exists
9d710148acd0: Already exists
aa6a17e27f5d: Already exists
54bb751ceeaf: Already exists
7b395b5811af: Already exists
bea864120e95: Already exists
d6f8ca12a3d3: Already exists
0ad10db1c67b: Already exists
4858d6a328c1: Already exists
2f045885aafd: Already exists
fe7fdb07dbcb: Already exists
b51065e66662: Already exists
Digest: sha256:3dd3f730381654bd26b844b44675cf67a7d169c88257f3d7a53de2d149a03855
Status: Image is up to date for autopilotpattern/consul:latest
Starting containers
WARNING: The DOCKER_CLIENT_TIMEOUT environment variable is deprecated. Please use COMPOSE_HTTP_TIMEOUT instead.
Starting consul_consul_1
Waiting for the bootstrap instance.Error response from daemon: problem executing command (32dda270-12e7-11e6-92bb-d5e1dd6ff0dc)
...............................
When you check the request id against the server logs:
HTTP/1.1 500 Internal Server Error
content-type: text/plain
content-length: 64
date: Thu, 05 May 2016 17:31:29 GMT
x-request-id: 32dda270-12e7-11e6-92bb-d5e1dd6ff0dc
x-response-time: 863
server: Triton/1.9.0 (linux)
problem executing command (32dda270-12e7-11e6-92bb-d5e1dd6ff0dc)
--
DockerError: problem executing command; caused by InternalError: posting task to cn-agent: {"error":"docker_exec: VM is not running, cannot exec","details":{"restCode":"VmNotRunning"}}
at DockerError._DockerBaseError (/opt/smartdc/docker/lib/errors.js:173:15)
at new DockerError (/opt/smartdc/docker/lib/errors.js:194:22)
at Object.cnapiErrorWrap (/opt/smartdc/docker/lib/errors.js:547:20)
at _execCb (/opt/smartdc/docker/lib/backends/sdc/containers.js:3989:36)
at /opt/smartdc/docker/node_modules/sdc-clients/lib/restifyclient.js:174:20
at parseResponse (/opt/smartdc/docker/node_modules/sdc-clients/node_modules/restify/lib/clients/json_client.js:84:9)
at IncomingMessage.done (/opt/smartdc/docker/node_modules/sdc-clients/node_modules/restify/lib/clients/string_client.js:151:17)
at IncomingMessage.g (events.js:180:16)
at IncomingMessage.emit (events.js:117:20)
at _stream_readable.js:944:16
Caused by: InternalError: posting task to cn-agent: {"error":"docker_exec: VM is not running, cannot exec","details":{"restCode":"VmNotRunning"}}
at parseResponse (/opt/smartdc/docker/node_modules/sdc-clients/node_modules/restify/lib/clients/json_client.js:67:23)
at IncomingMessage.done (/opt/smartdc/docker/node_modules/sdc-clients/node_modules/restify/lib/clients/string_client.js:151:17)
at IncomingMessage.g (events.js:180:16)
at IncomingMessage.emit (events.js:117:20)
at _stream_readable.js:944:16
at process._tickDomainCallback (node.js:492:13)
--
req.timers: {
"handler-0": 29,
"bunyan": 48,
"handler-2": 161,
"checkReadonlyMode": 13,
"checkServices": 16,
"reqAuth": 950,
"reqClientApiVersion": 56,
"reqParamsId": 11,
"getVm": 41262,
"readBody": 543,
"parseBody": 146,
"containerExec": 820443
}
The command that is failing is: https://github.com/autopilotpattern/consul/blob/master/start.sh#L36
Essentially, what the server error is saying is that the instance hasn't started yet. This may technically be a bug in Triton, but it would still be worthwhile to add a sleep or retry around this command: if the exec command doesn't succeed (even from a network blip), the rest of the commands will fail.
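A small retry wrapper around the exec would cover both the not-yet-running case and transient network blips. This is only a sketch; `retry` and `RETRY_DELAY` are names invented here, not part of `start.sh`.

```shell
# Sketch: run a command up to N times before giving up.
retry() {
    attempts="$1"; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$attempts" ]; then
            return 1                      # exhausted all attempts
        fi
        sleep "${RETRY_DELAY:-3}"
    done
}

# usage, e.g. around the failing line in start.sh:
#   retry 10 docker exec consul_consul_1 consul info
```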
https://github.com/autopilotpattern/consul/blob/master/etc/containerpilot.json5#L33
In the configuration above, `preStop` is used as a source, but it is not defined as a job in the configuration. Could anyone let me know what `preStop` means in the configuration file?
As a result of the current implementation of #48, RPC traffic is exposed publicly and should therefore be encrypted. Since Consul provides mechanisms to do so, we should include a way to inject certs into the containers before Consul can start, in a similar fashion to how autopilotpattern/vault uses `docker exec` to bootstrap.

The proposed design is to check for `CONSUL_TLS_PATH` during preStart and, if present, wait for a file to appear at the specified path. Gossip key configuration can be done by specifying `CONSUL_ENCRYPT_PATH` or `CONSUL_ENCRYPT_BASE64`.
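A sketch of what that preStart wait could look like; `wait_for_certs` and `CERT_WAIT_TIMEOUT` are hypothetical names for illustration, while `CONSUL_TLS_PATH` is the variable proposed above.

```shell
# Sketch: if TLS is requested, block until the injected cert file appears.
wait_for_certs() {
    if [ -z "${CONSUL_TLS_PATH:-}" ]; then
        return 0                  # TLS not requested; nothing to wait for
    fi
    waited=0
    until [ -f "$CONSUL_TLS_PATH" ]; do
        if [ "$waited" -ge "${CERT_WAIT_TIMEOUT:-60}" ]; then
            echo "timed out waiting for ${CONSUL_TLS_PATH}" >&2
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
}
```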
While standing up a raft for working on TritonDataCenter/containerpilot#162 I ran into an issue where the nodes were not being marked healthy because the health check was stalling at the call to `consul info`. The process tree looked like this:
$ docker exec -it consul_consul_3 ps -ef
PID USER TIME COMMAND
51798 root 0:00 {consul-heartbea} /bin/bash /bin/consul-heartbeat
0 root 0:00 zsched
44535 root 0:14 /bin/consul agent -server -config-dir=/config -ui-dir /ui -bootstrap-expect 3
73393 root 0:00 {busybox} ps -ef
1 root 0:00 /opt/containerpilot/containerpilot -config file:///etc/containerpilot.json /bin/consul agent -server -config-dir=/config -ui-dir /ui -bootstrap-expect 3
51828 root 0:00 consul info
It's not consistent -- sometimes the health check will hang and sometimes it won't.
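One defensive option is to bound the `consul info` call so a stalled RPC fails the check instead of hanging it. A sketch, assuming a coreutils-style `timeout` binary is available in the image (older BusyBox builds spell it `timeout -t N` instead):

```shell
# Sketch: fail the health check if consul info doesn't answer in time.
run_with_deadline() {
    d="$1"; shift
    timeout "$d" "$@"
}
health_guarded() {
    info=$(run_with_deadline 5 consul info) || return 1   # hung or errored
    peers=$(printf '%s\n' "$info" | awk '/num_peers/{print $3}')
    [ "${peers:-0}" -gt 0 ]
}
```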
Hi, I just have a dumb question:
What's the logic for when to put something in `/bin`, like `/bin/consul`, versus `/usr/local/bin`, like `/usr/local/bin/containerpilot`, in this case? Is there an opinion behind it? Just curious about the organization, thanks.
Branching off of #23, users might need to be able to point Consul servers in one datacenter to join nodes in another. The configuration file format makes this kludgy to implement simply, but I've got a POC for one way to do it that is relatively straightforward. I'll be opening a PR with my patch and an example "multi-DC" compose file to get the discussion going.
https://www.consul.io/docs/agent/encryption.html describes enabling encryption for gossip communications in the LAN and over the WAN:
gossip between nodes is done over UDP and is secured using a symmetric key
The key is 16 bytes, Base64 encoded.
The included `consul keygen` command can create a suitable key (example: `cg8StVXbQJ0gPvMd9o7yrg==`), but that key must be distributed to every Consul instance. For this to work, the key must be generated outside the instance and provided at the time the containers are scheduled.
If this key is provided via an optional environment var, it could be injected into the `consul.json` config file at preStart using a mechanism similar to how the advertise IP is set:
if [ -n "$CONSUL_GOSSIP_KEY" ]; then
    sed -i "s/{/{\n  \"encrypt\": \"${CONSUL_GOSSIP_KEY}\",/" /etc/consul/consul.json
fi
As a practical matter, alternatives to `consul keygen` will likely be needed to make it easy to inject the key into environment variable configuration for the application (see the example in autopilotpattern/wordpress).
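Since the key is just 16 random bytes, Base64 encoded, any source of entropy can stand in for `consul keygen` when seeding the environment configuration; a sketch:

```shell
# Sketch: generate a consul-keygen-compatible gossip key without consul.
gen_gossip_key() {
    head -c 16 /dev/urandom | base64
}

# e.g.:  CONSUL_GOSSIP_KEY=$(gen_gossip_key)
```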
Move binaries from `/bin` to `/usr/local/bin` instead of just `/bin`, and move the config files (`config/consul.json` and `containerpilot.json`) to `/etc` in this repo.

If you have 3 nodes in local docker development, how do you expose the consul web ui?
The example does not expose a port on the host:
Changing that to `8500:8500` results in two of the three nodes erroring when scaling up:
Bind for 0.0.0.0:8500 failed: port is already allocated
So, is it not possible to expose the consul web ui when clustering?
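One workaround in local development is to publish only the container port, letting Docker assign each replica its own ephemeral host port (a sketch; this is not what the repo's compose file does):

```yaml
# Sketch: ephemeral host ports avoid "port is already allocated" when scaling.
consul:
  ports:
    - "8500"    # host port chosen by Docker, unique per container
```

Then `docker port consul_consul_1 8500` (and `_2`, `_3`) shows where each node's UI landed.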
This early 2015 mail list item addresses the question of how to secure Consul for use over a WAN:
Security on public addresses: Consul can be run securely over the WAN if you enable all the encryption features. This means `-encrypt` for gossip, and the TLS settings with `verify_incoming` and `verify_outgoing`. See this page: https://consul.io/docs/agent/encryption.html
#24 addresses adding support for gossip encryption; this ticket addresses TLS.
If `verify_outgoing` is set, agents verify the authenticity of Consul for outgoing connections. Server nodes must present a certificate signed by the certificate authority present on all agents, set via the agent's `ca_file` option. All server nodes must have an appropriate key pair set using `cert_file` and `key_file`.

If `verify_incoming` is set, the servers verify the authenticity of all incoming connections. All clients must have a valid key pair set using `cert_file` and `key_file`. Servers will also disallow any non-TLS connections. To force clients to use TLS, `verify_outgoing` must also be set.
The configuration options seem simple enough, but further work must be done to define a workflow in which this can be used and operated automatically. It's possible that this can use Let's Encrypt to generate keys automatically (also see autopilotpattern/nginx#25), or that the keys can be completely optional and provided in the environment configuration as described in #24 (edit: we can't coordinate Let's Encrypt without having Consul working, so any keys must be provided to the scheduler for this to work, rather than generated at run-time).
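For reference, a sketch of what the injected settings might look like in `consul.json`; the file paths are assumptions, while the option names come from the Consul documentation quoted above:

```json
{
  "verify_incoming": true,
  "verify_outgoing": true,
  "ca_file": "/etc/consul/tls/ca.pem",
  "cert_file": "/etc/consul/tls/server.pem",
  "key_file": "/etc/consul/tls/server-key.pem"
}
```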
For example, if you are running Consul in a network with multiple private IPs associated with NICs on the Consul instance, when Consul boots up, it will be unable to auto-select the network it should advertise on.
When using Consul on multiple private networks without Docker, this is a simple fix - you just specify the IP of the private network that you want to advertise on. However, when you use Docker, specifying the IP as part of the startup parameters gets you into a chicken and egg problem.
Ideally, ContainerPilot could support templating in the available IPs into the run command executed.
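Until then, a preStart script can make the choice explicit. A sketch, where `advertise_ip` and the `CONSUL_IFACE` variable are hypothetical, not part of this repo:

```shell
# Sketch: pick the advertise IP from a named interface at preStart.
first_ip_of() {
    # reads `ip -4 -o addr show <iface>` output on stdin
    awk '{split($4, a, "/"); print a[1]; exit}'
}
advertise_ip() {
    ip -4 -o addr show "${CONSUL_IFACE:-eth0}" | first_ip_of
}
```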
I'm getting the following error response from the consul container:
fork/exec /usr/local/bin/consul-manage: no such file or directory
I don't see why the execution cannot find the consul-manage file.
This is the Dockerfile I'm working with:
FROM alpine:3.5
RUN apk --no-cache add curl bash ca-certificates
ENV CONSUL_VERSION=0.7.3
RUN export CONSUL_CHECKSUM=901a3796b645c3ce3853d5160080217a10ad8d9bd8356d0b73fcd6bc078b7f82 \
&& export archive=consul_${CONSUL_VERSION}_linux_amd64.zip \
&& curl -Lso /tmp/${archive} https://releases.hashicorp.com/consul/${CONSUL_VERSION}/${archive} \
&& echo "${CONSUL_CHECKSUM} /tmp/${archive}" | sha256sum -c \
&& cd /bin \
&& unzip /tmp/${archive} \
&& chmod +x /bin/consul \
&& rm /tmp/${archive}
RUN export CONSUL_UI_CHECKSUM=52b1bb09b38eec522f6ecc0b9bf686745bbdc9d845be02bd37bf4b835b0a736e \
&& export archive=consul_${CONSUL_VERSION}_web_ui.zip \
&& curl -Lso /tmp/${archive} https://releases.hashicorp.com/consul/${CONSUL_VERSION}/${archive} \
&& echo "${CONSUL_UI_CHECKSUM} /tmp/${archive}" | sha256sum -c \
&& mkdir /ui \
&& cd /ui \
&& unzip /tmp/${archive} \
&& rm /tmp/${archive}
ENV CONTAINERPILOT_VERSION 2.6.1
ENV CONTAINERPILOT file:///etc/containerpilot.json
RUN export CONTAINERPILOT_CHECKSUM=2bb7f4ba5044ac2377540f0fa7cf7daf240e6292 \
&& export archive=containerpilot-${CONTAINERPILOT_VERSION}.tar.gz \
&& curl -Lso /tmp/${archive} \
"https://github.com/joyent/containerpilot/releases/download/${CONTAINERPILOT_VERSION}/${archive}" \
&& echo "${CONTAINERPILOT_CHECKSUM} /tmp/${archive}" | sha1sum -c \
&& tar zxf /tmp/${archive} -C /usr/local/bin \
&& rm /tmp/${archive}
COPY containerpilot.json etc/
COPY consul.json etc/consul/
COPY consul-manage /usr/local/bin/
VOLUME ["/data"]
EXPOSE 8300 8301 8301/udp 8302 8302/udp 8400 8500 53 53/udp
ENV SHELL /bin/bash
I started a new service with three replicas using this command:
docker service create \
--name auto_consul \
--publish 8750:8500 \
--constraint 'node.role == manager' \
--replicas 3 \
auto_consul \
/usr/local/bin/containerpilot \
/bin/consul agent -server \
-bootstrap-expect 3 \
-config-dir=/etc/consul \
-ui-dir /ui
Also tried starting a single container but got the same response.
This is my docker environment information:
Containers: 21
Running: 6
Paused: 0
Stopped: 15
Images: 28
Server Version: 1.13.0
Storage Driver: devicemapper
Pool Name: docker-253:0-101637473-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 2.957 GB
Data Space Total: 107.4 GB
Data Space Available: 9.188 GB
Metadata Space Used: 5.792 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.142 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: xtclacttqznphswtuu7n0pux4
Is Manager: true
ClusterID: yv8hnhdo6hnxb75dhgnlp1y3x
Managers: 3
Nodes: 6
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 10.1.232.240
Manager Addresses:
10.1.232.240:2377
10.1.232.241:2377
10.1.232.242:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-327.36.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.797 GiB
Name: FBWMSCENTOS01
ID: 54PE:IUGI:IQYE:62MY:3ZNM:Y6WA:KQW3:BSZ2:TW4H:NMNW:YKWF:QGNY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
10.1.232.226:18444
127.0.0.0/8
Live Restore Enabled: false
While following the example in the README.md file, after all three consul containers are up and running, the logs fill with the following messages, repeated indefinitely:
consul_3 | 2018-04-18T22:48:57.476823000Z 2018/04/18 22:48:57 /usr/local/bin/consul-manage: line 19: [: ==: unary operator expected
consul_3 | 2018-04-18T22:48:57.483049000Z 2018/04/18 22:48:57 Service registration failed: Put http://127.0.0.1:8500/v1/agent/service/register: dial tcp 127.0.0.1:8500: connect: connection refused
consul_1 | 2018-04-18T22:49:01.817519000Z 2018/04/18 22:49:01 /usr/local/bin/consul-manage: line 19: [: ==: unary operator expected
consul_1 | 2018-04-18T22:49:01.822914000Z 2018/04/18 22:49:01 Service registration failed: Put http://127.0.0.1:8500/v1/agent/service/register: dial tcp 127.0.0.1:8500: getsockopt: connection refused
consul_2 | 2018-04-18T22:49:02.884224000Z 2018/04/18 22:49:02 /usr/local/bin/consul-manage: line 19: [: ==: unary operator expected
consul_2 | 2018-04-18T22:49:02.886316000Z 2018/04/18 22:49:02 Service registration failed: Put http://127.0.0.1:8500/v1/agent/service/register: dial tcp 127.0.0.1:8500: connect: connection refused
Let me know if I can provide any additional information.
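The `[: ==: unary operator expected` line is the classic symptom of an unquoted command substitution expanding to nothing inside `[ ... ]` while the agent's HTTP API is still down. A sketch of the failure mode and the quoting fix; `no_peers` is a hypothetical stand-in for the test on line 19 of `consul-manage`:

```shell
# When its argument is empty, an unquoted `[ $1 == 0 ]` collapses to
# `[ == 0 ]` and `[` complains about a unary operator. Quoting the
# expansion and supplying a default avoids it:
no_peers() {
    [ "${1:-0}" -eq 0 ]
}
```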
https://hub.docker.com/r/autopilotpattern/consul/tags/ shows some tags we can't account for, specifically, tags that don't have a release version in https://github.com/autopilotpattern/consul/releases. Further, the tag pattern doesn't identify what version of Consul is present in the container.
The tags should instead follow `<consul version>-r<implementation version>`, a pattern that's used in other Autopilot Pattern implementations.

See: https://github.com/autopilotpattern/consul/blob/master/docker-compose.yml#L25
It appears that ContainerPilot is being pulled: https://github.com/autopilotpattern/consul/blob/master/Dockerfile#L29
But it doesn't exist on the image itself. Could it be that the image deb34f909abf isn't the latest and the Dockerfile in github hasn't been published?
This implementation depends on orchestration via `start.sh`, which starts one Consul instance, gets its IP, then starts the remaining Consul instances. That external orchestration is incompatible with the Autopilot Pattern, though this is, admittedly, a special case.

I believe this can be fixed with DNS-based discovery of raft peers and a `preStart` script that digs for raft members to join with.
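A sketch of that preStart discovery; `pick_peers` and the five-answer cap are inventions for illustration, while `CONSUL` is the CNS name from `_env`:

```shell
# Sketch: resolve the CNS record and join the first raft peer that answers.
pick_peers() {
    # keep only IPv4 answers (drops trailing CNAMEs), at most five
    grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n 5
}
join_peers() {
    for ip in $(dig +short "${CONSUL}" | pick_peers); do
        consul join "$ip" && return 0
    done
    return 1
}
```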
From #1 (comment)
To eliminate the need to download a glibc from the internets, this project should build a version of Consul based on Musl, in a way that minimizes the harm of `go get`, and in a separate build container to eliminate the unneeded bulk from the Golang build environment.
How do I mount a volume where Consul saves its KV store?
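One option is a named volume in the compose file; a sketch, assuming the container's Consul data directory is `/data` (as declared by `VOLUME ["/data"]` in the Dockerfile earlier in this thread):

```yaml
# Sketch (assumes the container's Consul data dir is /data):
consul:
  volumes:
    - consul-data:/data

volumes:
  consul-data:
```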
Consul has built-in support for operation in multiple data centers, and these data centers can be named.

The data center name can be set in `consul.json` with the addition of a `"datacenter": "<name>"` key, or with the addition of a `-datacenter` command line argument.

The Triton data center name can be found using `mdata-get`:
/native/usr/sbin/mdata-get sdc:datacenter_name
The name can be injected into the config file at preStart using a mechanism similar to how the advertise IP is set:
if [ -f "/native/usr/sbin/mdata-get" ]; then
DATACENTER_NAME=$(/native/usr/sbin/mdata-get sdc:datacenter_name)
fi
sed -i "s/DATACENTER_NAME/${DATACENTER_NAME:-dc1}/" /etc/consul/consul.json
Hi, I've built this container from master but can't seem to get it running locally (boot2docker). Nothing jumps out as Triton-specific, so as far as I can tell it should work; can you confirm?

Here are the docker logs; scaling to 3 nodes doesn't help, they all get the same error. I'm using the `docker-compose.yml` from this repo.
Attaching to tritonconsul_consul_1
consul_1 | ==> WARNING: Expect Mode enabled, expecting 3 servers
consul_1 | ==> Starting raft data migration...
consul_1 | ==> Starting Consul agent...
consul_1 | ==> Starting Consul agent RPC...
consul_1 | ==> Consul agent running!
consul_1 | Node name: 'a7854adf73a2'
consul_1 | Datacenter: 'dc1'
consul_1 | Server: true (bootstrap: false)
consul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
consul_1 | Cluster Addr: 172.17.0.37 (LAN: 8301, WAN: 8302)
consul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
consul_1 | Atlas: <disabled>
consul_1 |
consul_1 | ==> Log data will now stream in as it occurs:
consul_1 |
consul_1 | 2015/12/06 21:19:14 [INFO] raft: Node at 172.17.0.37:8300 [Follower] entering Follower state
consul_1 | 2015/12/06 21:19:14 [INFO] serf: EventMemberJoin: a7854adf73a2 172.17.0.37
consul_1 | 2015/12/06 21:19:14 [INFO] consul: adding server a7854adf73a2 (Addr: 172.17.0.37:8300) (DC: dc1)
consul_1 | 2015/12/06 21:19:14 [INFO] serf: EventMemberJoin: a7854adf73a2.dc1 172.17.0.37
consul_1 | 2015/12/06 21:19:14 [INFO] consul: adding server a7854adf73a2.dc1 (Addr: 172.17.0.37:8300) (DC: dc1)
consul_1 | 2015/12/06 21:19:14 [ERR] agent: failed to sync remote state: No cluster leader
consul_1 | 2015/12/06 21:19:16 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
consul_1 | 2015/12/06 21:19:24 [INFO] agent.rpc: Accepted client: 127.0.0.1:35695
consul_1 | 2015-12-06 21:19:24 containerbuddy: No peers in raft
consul_1 | 2015-12-06 21:19:24 containerbuddy: Bootstrapping raft with self
consul_1 | 2015/12/06 21:19:24 [INFO] agent.rpc: Accepted client: 127.0.0.1:35696
consul_1 | 2015/12/06 21:19:24 [INFO] agent: (LAN) joining: [172.17.0.37]
consul_1 | 2015/12/06 21:19:24 [INFO] agent: (LAN) joined: 1 Err: <nil>
consul_1 | Successfully joined cluster by contacting 1 nodes.
consul_1 | 2015/12/06 21:19:24 [ERR] http: Request /v1/agent/check/pass/consul-a7854adf73a2?note=ok, error: CheckID does not have associated TTL
consul_1 | 2015/12/06 21:19:24 Unexpected response code: 500 (CheckID does not have associated TTL)
consul_1 | Service not registered, registering...
consul_1 | 2015/12/06 21:19:24 [ERR] agent: failed to sync changes: No cluster leader
consul_1 | 2015/12/06 21:19:24 [ERR] agent: failed to sync changes: No cluster leader
consul_1 | 2015/12/06 21:19:34 [INFO] agent.rpc: Accepted client: 127.0.0.1:35703
consul_1 | 2015-12-06 21:19:34 containerbuddy: No peers in raft
consul_1 | 2015-12-06 21:19:34 containerbuddy: Bootstrapping raft with self
consul_1 | 2015/12/06 21:19:34 [INFO] agent.rpc: Accepted client: 127.0.0.1:35704
consul_1 | 2015/12/06 21:19:34 [INFO] agent: (LAN) joining: [172.17.0.37]
consul_1 | 2015/12/06 21:19:34 [INFO] agent: (LAN) joined: 1 Err: <nil>
consul_1 | Successfully joined cluster by contacting 1 nodes.
consul_1 | 2015/12/06 21:19:34 [ERR] agent: failed to sync changes: No cluster leader
consul_1 | 2015/12/06 21:19:42 [ERR] agent: failed to sync remote state: No cluster leader
consul_1 | 2015/12/06 21:19:44 [INFO] agent.rpc: Accepted client: 127.0.0.1:35710
consul_1 | 2015-12-06 21:19:44 containerbuddy: No peers in raft
consul_1 | 2015-12-06 21:19:44 containerbuddy: Bootstrapping raft with self
consul_1 | 2015/12/06 21:19:44 [INFO] agent.rpc: Accepted client: 127.0.0.1:35711
consul_1 | 2015/12/06 21:19:44 [INFO] agent: (LAN) joining: [172.17.0.37]
consul_1 | 2015/12/06 21:19:44 [INFO] agent: (LAN) joined: 1 Err: <nil>
consul_1 | Successfully joined cluster by contacting 1 nodes.
consul_1 | 2015/12/06 21:19:44 [ERR] agent: failed to sync changes: No cluster leader
consul_1 | 2015/12/06 21:19:54 [INFO] agent.rpc: Accepted client: 127.0.0.1:35717
consul_1 | 2015-12-06 21:19:54 containerbuddy: No peers in raft
...