tiangolo / dockerswarm.rocks

Docker Swarm mode rocks! Ideas, tools and recipes. Get a production-ready, distributed, HTTPS-served cluster in minutes, not weeks.

Home Page: https://dockerswarm.rocks/

Shell 100.00%
docker docker-swarm traefik letsencrypt https linux

dockerswarm.rocks's Introduction

Hey! I'm @tiangolo (Sebastián Ramírez) 👋

I'm a software developer from Colombia. 🇨🇴

I currently live in Berlin, Germany. 🇩🇪

I have been building APIs and tools for Machine Learning and data systems, in Latin America, the Middle East, and now Europe, with different teams and organizations. 🌎

I created FastAPI, Typer and a bunch of other open source tools. 🚀

I like to build things with Deep Learning/Machine Learning, distributed systems, SQL and NoSQL databases, Docker, Python, TypeScript (and JavaScript), modern backend APIs, and modern frontend frameworks. 🤖

I'm currently dedicating a high percentage of my time to FastAPI, Typer, and my other open source projects. At the same time, I'm also helping a limited number of teams and organizations as an external consultant. If you would like to have my help with your team and product, feel free to contact me. 🤓

If my open source projects are useful for your product/company you can also sponsor my work on them. ☕

You can find me on:


dockerswarm.rocks's Issues

--advertise-addr should point to a private address

On https://github.com/tiangolo/dockerswarm.rocks/blame/master/docs/index.md#L163

you recommend using the public IP for --advertise-addr.

My understanding is that it must be an IP visible to all other nodes, but if possible (e.g. on a virtual machine with a VPN), a private network address is more secure, as the swarm leader will not have to expose its ports to the public internet.

I would recommend the following wording:

...select the IP 10.19.0.5, and run the command again with --advertise-addr, e.g.:
docker swarm init --advertise-addr 10.19.0.5
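The "private address" check the issue describes can be sketched as a small shell helper (the `private_ips` name is hypothetical, not from the docs) that keeps only RFC 1918 addresses:

```shell
# Hypothetical helper: keep only RFC 1918 (private) IPv4 addresses from stdin
private_ips() {
  grep -E '^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)'
}

# To list this host's addresses and keep the private ones, something like:
#   hostname -I | tr ' ' '\n' | private_ips
printf '203.0.113.7\n10.19.0.5\n' | private_ips   # prints 10.19.0.5
```

The printed address is then the candidate for `--advertise-addr`.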

Traefik dropping nodes?

During swarm node discovery (by default every 15 seconds), Traefik recreates all frontends and backends, which leads to service outages (404s) for a couple of seconds.

Also, during recreation, frontends and backends (besides the Traefik dashboard's own frontend/backend) disappear from the dashboard and reappear a couple of seconds later.

This is reproducible with the project's config. Just in case, I'm attaching mine; maybe I've missed something.

version: '3.7'

services:
  init:
    image: traefik:1.7-alpine
    networks:
      - traefik-public
    command:
      - storeconfig
      - --api
      - --logLevel=DEBUG
      - --accessLog
      - --docker
      - --docker.endPoint=http://dockersocket:2375
      - --docker.swarmMode
      - --docker.domain=traefik
      - --docker.network=proxy
      - --docker.watch
      - --consul
      - --consul.endpoint=consul:8500
      - --consul.prefix=traefik
      - --defaultentrypoints=http,https
      - --entryPoints='Name:https Address::443 TLS'
      - --entryPoints='Name:http Address::80'
      - --acme
      - --acme.email=${EMAIL?Variable EMAIL not set}
      - --acme.httpchallenge
      - --acme.httpchallenge.entrypoint=http
      - --acme.onhostrule=true
      - --acme.entrypoint=https
      - --acme.storage=traefik/acme/account
      - --acme.acmelogging
      - --constraints=tag==traefik-public
    deploy:
      restart_policy:
        condition: on-failure
    depends_on:
      - consul

  traefik:
    image: traefik:1.7-alpine
    networks:
      - traefik-public
      - traefik-docker
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
      - target: 8080
        published: 8080
        protocol: tcp
        mode: ingress # traefik dashboard
    command:
      - --consul
      - --consul.endpoint=consul:8500
      - --consul.prefix=traefik
    deploy:
      mode: global
      labels:
        - traefik.enable=true
        - traefik.port=8080
        - traefik.tags=traefik-public
        - traefik.docker.network=traefik-public
        - traefik.redirect.frontend.entryPoints=http
        - traefik.redirect.frontend.redirect.entryPoint=https
        - traefik.redirect.frontend.entryPoints=https
        - traefik.frontend.rule=Host:traefik.${DOMAIN?Variable DOMAIN not set}
        - traefik.frontend.auth.basic.users=${USERNAME?Variable USERNAME not set}:${HASHED_PASSWORD?Variable HASHED_PASSWORD not set}
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
    depends_on:
      - consul

  consul:
    image: consul
    command: agent -server -bootstrap-expect=1 -ui
    networks:
      - traefik-public
    volumes:
      - consul:/consul/data
    environment:
      - 'CONSUL_LOCAL_CONFIG={"server":true, "leave_on_terminate": true}'
      - CONSUL_BIND_INTERFACE=eth0
      - CONSUL_CLIENT_INTERFACE=eth0
    deploy:
      labels:
        - traefik.frontend.rule=Host:consul.${DOMAIN?Variable DOMAIN not set}
        - traefik.enable=true
        - traefik.port=8500
        - traefik.tags=${TRAEFIK_PUBLIC_TAG:-traefik-public}
        - traefik.docker.network=traefik-public
        - traefik.redirectorservice.frontend.entryPoints=http
        - traefik.redirectorservice.frontend.redirect.entryPoint=https
        - traefik.webservice.frontend.entryPoints=https
        - traefik.frontend.auth.basic.users=${USERNAME?Variable USERNAME not set}:${HASHED_PASSWORD?Variable HASHED_PASSWORD not set}
      restart_policy:
        condition: on-failure

# this custom haproxy allows us to move traefik to worker nodes
# while this container listens on managers and only allows
# traefik to connect, read-only, to limited docker api calls
# https://github.com/Tecnativa/docker-socket-proxy
  dockersocket:
    image: tecnativa/docker-socket-proxy
    networks:
      - traefik-docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # CONTAINERS: 1
      NETWORKS: 1
      SERVICES: 1
      # SWARM: 1
      TASKS: 1
    deploy:
      mode: global
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure

volumes:
  consul:

networks:
  traefik-docker:
  traefik-public:
    external: true

Example log

level=info msg="Skipping same configuration for provider consul"
level=debug msg="Cannot get key traefik/alias Key not found in store"
level=debug msg="Setting traefik/alias to default: traefik"
level=debug msg="Cannot list keys under \"traefik/backends/\": Key not found in store"
level=debug msg="Cannot list keys under \"traefik/frontends/\": Key not found in store"
level=debug msg="Cannot list keys under \"traefik/tls/\": Key not found in store"
level=debug msg="Configuration received from provider consul: {}"
level=info msg="Skipping same configuration for provider consul"
level=debug msg="originLabelsmap[traefik.enable:true traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.port:8080 traefik.redirect.frontend.redirect.entryPoint:https com.docker.stack.image:traefik:1.7-alpine com.docker.stack.namespace:traefik traefik.docker.network:traefik-public traefik.frontend.rule:Host:traefik.eye.middle-earth.io traefik.redirect.frontend.entryPoints:https traefik.tags:traefik-public]"
level=debug msg="allLabelsmap[:map[traefik.enable:true traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.port:8080 traefik.tags:traefik-public traefik.docker.network:traefik-public traefik.frontend.rule:Host:traefik.eye.middle-earth.io] redirect:map[traefik.frontend.redirect.entryPoint:https traefik.frontend.entryPoints:https]]"
level=debug msg="originLabelsmap[com.docker.stack.image:consul com.docker.stack.namespace:traefik traefik.docker.network:traefik-public traefik.enable:true traefik.port:8500 traefik.redirectorservice.frontend.entryPoints:http traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.frontend.rule:Host:consul.eye.middle-earth.io traefik.redirectorservice.frontend.redirect.entryPoint:https traefik.tags:traefik-public traefik.webservice.frontend.entryPoints:https]"
level=debug msg="allLabelsmap[:map[traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.frontend.rule:Host:consul.eye.middle-earth.io traefik.docker.network:traefik-public traefik.enable:true traefik.port:8500 traefik.tags:traefik-public] redirectorservice:map[traefik.frontend.redirect.entryPoint:https traefik.frontend.entryPoints:http] webservice:map[traefik.frontend.entryPoints:https]]"
level=debug msg="originLabelsmap[com.docker.stack.namespace:logstash traefik.metrics.port:9600 traefik.redirectorservice.frontend.redirect.permanent:true traefik.tags:traefik-public traefik.redirectorservice.frontend.redirect.entryPoint:https com.docker.stack.image:docker.io/elfenlaid/logstash swarmpit.service.deployment.autoredeploy:true traefik.docker.network:traefik-public traefik.enable:true traefik.metrics.frontend.entryPoints:https traefik.metrics.frontend.rule:Host:logstash.eye.middle-earth.io traefik.redirectorservice.frontend.entryPoints:http traefik.redirectorservice.frontend.rule:Host:logstash.eye.middle-earth.io]"
level=debug msg="allLabelsmap[:map[traefik.tags:traefik-public traefik.docker.network:traefik-public traefik.enable:true] metrics:map[traefik.port:9600 traefik.frontend.entryPoints:https traefik.frontend.rule:Host:logstash.eye.middle-earth.io] redirectorservice:map[traefik.frontend.redirect.permanent:true traefik.frontend.entryPoints:http traefik.frontend.redirect.entryPoint:https traefik.frontend.rule:Host:logstash.eye.middle-earth.io]]"
level=debug msg="Filtering container without port, logstash_app.1: port label is missing, please use traefik.port as default value or define port label for all segments ('traefik.<segment_name>.port')"
level=debug msg="originLabelsmap[com.docker.stack.image:tecnativa/docker-socket-proxy com.docker.stack.namespace:traefik]"
level=debug msg="allLabelsmap[:map[]]"
level=debug msg="Filtering container without port, traefik_dockersocket.rqv33kakj4swvmqik2y35hqne: port label is missing, please use traefik.port as default value or define port label for all segments ('traefik.<segment_name>.port')"
level=debug msg="originLabelsmap[traefik.enable:true traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.port:8080 traefik.redirect.frontend.redirect.entryPoint:https com.docker.stack.image:traefik:1.7-alpine com.docker.stack.namespace:traefik traefik.docker.network:traefik-public traefik.frontend.rule:Host:traefik.eye.middle-earth.io traefik.redirect.frontend.entryPoints:https traefik.tags:traefik-public]"
level=debug msg="allLabelsmap[redirect:map[traefik.frontend.entryPoints:https traefik.frontend.redirect.entryPoint:https] :map[traefik.docker.network:traefik-public traefik.frontend.rule:Host:traefik.eye.middle-earth.io traefik.tags:traefik-public traefik.enable:true traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.port:8080]]"
level=debug msg="originLabelsmap[com.docker.stack.image:consul com.docker.stack.namespace:traefik traefik.docker.network:traefik-public traefik.enable:true traefik.port:8500 traefik.redirectorservice.frontend.entryPoints:http traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.frontend.rule:Host:consul.eye.middle-earth.io traefik.redirectorservice.frontend.redirect.entryPoint:https traefik.tags:traefik-public traefik.webservice.frontend.entryPoints:https]"
level=debug msg="allLabelsmap[:map[traefik.frontend.auth.basic.users:admin:$apr1$urYfOtau$Gjbmr1.qKmCYudUKbC/Za/ traefik.frontend.rule:Host:consul.eye.middle-earth.io traefik.tags:traefik-public traefik.docker.network:traefik-public traefik.enable:true traefik.port:8500] redirectorservice:map[traefik.frontend.redirect.entryPoint:https traefik.frontend.entryPoints:http] webservice:map[traefik.frontend.entryPoints:https]]"
level=debug msg="Backend backend-traefik-consul-webservice: no load-balancer defined, fallback to 'wrr' method"
level=debug msg="Backend backend-traefik-traefik-redirect: no load-balancer defined, fallback to 'wrr' method"
level=debug msg="Backend backend-traefik-consul-redirectorservice: no load-balancer defined, fallback to 'wrr' method"
level=debug msg="Configuration received from provider docker: {\"backends\":{\"backend-traefik-consul-redirectorservice\":{\"servers\":{\"server-traefik-consul-1-1f054db510941b8693ab4f9267343456\":{\"url\":\"http://10.0.0.49:8500\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-traefik-consul-webservice\":{\"servers\":{\"server-traefik-consul-1-1f054db510941b8693ab4f9267343456\":{\"url\":\"http://10.0.0.49:8500\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}},\"backend-traefik-traefik-redirect\":{\"servers\":{\"server-traefik-traefik-ip6nkn0m4idzvreqd7uutqx38-d10745fe1c66bc5c76b494485d78c07d\":{\"url\":\"http://10.0.0.27:8080\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"frontend-redirect-traefik-traefik-redirect\":{\"entryPoints\":[\"https\"],\"backend\":\"backend-traefik-traefik-redirect\",\"routes\":{\"route-frontend-redirect-traefik-traefik-redirect\":{\"rule\":\"Host:traefik.eye.middle-earth.io\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null,\"redirect\":{\"entryPoint\":\"https\"},\"auth\":{\"basic\":{}}},\"frontend-redirectorservice-traefik-consul-redirectorservice\":{\"entryPoints\":[\"http\"],\"backend\":\"backend-traefik-consul-redirectorservice\",\"routes\":{\"route-frontend-redirectorservice-traefik-consul-redirectorservice\":{\"rule\":\"Host:consul.eye.middle-earth.io\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null,\"redirect\":{\"entryPoint\":\"https\"},\"auth\":{\"basic\":{}}},\"frontend-webservice-traefik-consul-webservice\":{\"entryPoints\":[\"https\"],\"backend\":\"backend-traefik-consul-webservice\",\"routes\":{\"route-frontend-webservice-traefik-consul-webservice\":{\"rule\":\"Host:consul.eye.middle-earth.io\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null,\"auth\":{\"basic\":{}}}}}"
level=debug msg="Wiring frontend frontend-redirect-traefik-traefik-redirect to entryPoint https"
level=debug msg="Creating backend backend-traefik-traefik-redirect"
level=debug msg="Adding TLSClientHeaders middleware for frontend frontend-redirect-traefik-traefik-redirect"
level=debug msg="Creating load-balancer wrr"
level=debug msg="Creating server server-traefik-traefik-ip6nkn0m4idzvreqd7uutqx38-d10745fe1c66bc5c76b494485d78c07d at http://10.0.0.27:8080 with weight 1"
level=debug msg="Creating route route-frontend-redirect-traefik-traefik-redirect Host:traefik.eye.middle-earth.io"
level=debug msg="Wiring frontend frontend-redirectorservice-traefik-consul-redirectorservice to entryPoint http"
level=debug msg="Creating backend backend-traefik-consul-redirectorservice"
level=debug msg="Creating entry point redirect http -> https"
level=debug msg="Frontend frontend-redirectorservice-traefik-consul-redirectorservice redirect created"
level=debug msg="Adding TLSClientHeaders middleware for frontend frontend-redirectorservice-traefik-consul-redirectorservice"
level=debug msg="Creating load-balancer wrr"
level=debug msg="Creating server server-traefik-consul-1-1f054db510941b8693ab4f9267343456 at http://10.0.0.49:8500 with weight 1"
level=debug msg="Creating route route-frontend-redirectorservice-traefik-consul-redirectorservice Host:consul.eye.middle-earth.io"
level=debug msg="Wiring frontend frontend-webservice-traefik-consul-webservice to entryPoint https"
level=debug msg="Creating backend backend-traefik-consul-webservice"
level=debug msg="Adding TLSClientHeaders middleware for frontend frontend-webservice-traefik-consul-webservice"
level=debug msg="Creating load-balancer wrr"
level=debug msg="Creating server server-traefik-consul-1-1f054db510941b8693ab4f9267343456 at http://10.0.0.49:8500 with weight 1"
level=debug msg="Creating route route-frontend-webservice-traefik-consul-webservice Host:consul.eye.middle-earth.io"
level=info msg="Server configuration reloaded on :443"
level=info msg="Server configuration reloaded on :8080"
level=info msg="Server configuration reloaded on :80"
level=debug msg="LoadCertificateForDomains [consul.eye.middle-earth.io]..."
level=debug msg="Looking for provided certificate to validate [consul.eye.middle-earth.io]..."
level=debug msg="No ACME certificate to generate for domains [\"consul.eye.middle-earth.io\"]."
level=debug msg="LoadCertificateForDomains [traefik.eye.middle-earth.io]..."
level=debug msg="Looking for provided certificate to validate [traefik.eye.middle-earth.io]..."
level=debug msg="No ACME certificate to generate for domains [\"traefik.eye.middle-earth.io\"]."
level=debug msg="Cannot get key traefik/alias Key not found in store"
level=debug msg="Setting traefik/alias to default: traefik"
level=debug msg="Cannot list keys under \"traefik/backends/\": Key not found in store"
level=debug msg="Cannot list keys under \"traefik/frontends/\": Key not found in store"
level=debug msg="Cannot list keys under \"traefik/tls/\": Key not found in store"
level=debug msg="Configuration received from provider consul: {}"
level=info msg="Skipping same configuration for provider consul"

Stop instructing people to write plaintext passwords directly to env

This tutorial ROCKS! The one issue I've noticed is that for no good reason you are instructing people to put their plaintext passwords into their production environment variables. This is not a good security practice.

I skipped that step, and just typed in my plaintext password directly while creating the hashed version. Please update accordingly: remove the ADMIN_PASSWORD stuff from all the tutorials, and update the hashed password instructions to something like:

Choose an admin password and type it in without quotes to set and export a hashed version using openssl. It will be used by Traefik's HTTP Basic Auth for most of the services:

export HASHED_PASSWORD=$(openssl passwd -apr1 typeyourpasswordhere)

etc.

FWIW, I did it this way and got Traefik and Consul up and running just fine.

Also, I'd be happy to make these changes myself in a PR, just want to see if there are any objections first.
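One way to keep the plaintext out of the command line entirely (a sketch; `read -s`/`-p` are bash/zsh features, and `openssl passwd -stdin` reads the password from standard input):

```shell
# Prompt silently so the plaintext never lands in shell history (bash/zsh)
read -r -s -p "Admin password: " ADMIN_PW; echo
# -stdin keeps the password out of the process list as well
export HASHED_PASSWORD=$(printf '%s\n' "$ADMIN_PW" | openssl passwd -apr1 -stdin)
unset ADMIN_PW
echo "$HASHED_PASSWORD"
```

The resulting value has the usual `$apr1$<salt>$<hash>` shape expected by Traefik's basic auth.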

Backup

Is there a guide on how to back up everything we just set up?
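There isn't a single backup guide in the docs, but named volumes (e.g. the consul volume used in the stacks above) are commonly archived with a throwaway container, roughly: docker run --rm -v consul:/data -v "$PWD":/backup alpine tar czf /backup/consul.tgz -C /data . The archiving step of that sketch, runnable locally without Docker:

```shell
# Sketch of the archive step a volume-backup container would run:
# pack a directory into a tarball and list the archive to verify it
mkdir -p /tmp/demo_vol
echo 'hello' > /tmp/demo_vol/file.txt
tar czf /tmp/demo_backup.tgz -C /tmp/demo_vol .
tar tzf /tmp/demo_backup.tgz    # listing includes ./file.txt
```

Restoring is the mirror image: mount the volume and the archive into a container and run `tar xzf` into the volume path.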

Traefik couldn't deploy new stacks , 404 error

Hello! I was able to set up everything (Traefik, Consul, Swarmprom, Swarmpit, Portainer) perfectly with this guide. But the problem occurs when I deploy new stacks, e.g. WordPress: I get 404 page not found. All the containers and services for the stack are running, but the Traefik UI doesn't show any frontends/backends.

version: "3"

networks:
  traefik-public:
    external: true
  internal:
    external: false

services:
  wordpress:
    image: wordpress:4.9.8-apache
    environment:
      WORDPRESS_DB_PASSWORD:
    labels:
      - traefik.backend=wordpress
      - traefik.enable=true
      - traefik.frontend.rule=Host:wordpress.${DOMAIN}
      - traefik.docker.network=traefik-public
      - traefik.tags=${TRAEFIK_PUBLIC_TAG:-traefik-public}
      - traefik.port=80
      # Traefik service that listens to HTTP
      - traefik.redirectorservice.frontend.entryPoints=http
      - traefik.redirectorservice.frontend.redirect.entryPoint=https
      # Traefik service that listens to HTTPS
      - traefik.webservice.frontend.entryPoints=https
    networks:
      - internal
      - traefik-public
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD:
    networks:
      - internal
    labels:
      - traefik.enable=false
  adminer:
    image: adminer:4.6.3-standalone
    labels:
      - traefik.backend=adminer
      - traefik.frontend.rule=Host:db-admin.wp.${DOMAIN}
      - traefik.docker.network=traefik-public
      - traefik.port=8080
    networks:
      - internal
      - traefik-public
    depends_on:
      - mysql
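One likely cause (an assumption, not confirmed in the thread): in Swarm mode, Traefik's docker provider reads service labels, so the traefik.* labels need to live under deploy.labels rather than at the container level, as in the guide's own Traefik stack. A minimal sketch of the fix:

```yaml
services:
  wordpress:
    image: wordpress:4.9.8-apache
    deploy:
      labels:
        - traefik.enable=true
        - traefik.frontend.rule=Host:wordpress.${DOMAIN}
        - traefik.docker.network=traefik-public
        - traefik.tags=${TRAEFIK_PUBLIC_TAG:-traefik-public}
        - traefik.port=80
```

The same move would apply to the adminer service's labels.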

Repo for dockerswarm.rocks swarm setup?

The recipes in DockerSwarm.rocks are very useful and educational.
Maybe I'm missing something, but I can't find the repo for the main swarm setup (with consul). For the individual recipes I follow the links for the project generators, but for the main setup I could not find it.

I wanted to look into the compose files etc. to find out how Consul is actually used. It is not clear to me why you would need Consul; Docker Swarm supports distribution of configs and secrets. It would be nice if you could expand on that choice a little bit.

Again, thanks for this awesome resource.

CI/CD Error

I've followed the CI/CD tutorial, but I receive the following error message:

Running before_script and script
00:02
 $ docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
 error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on 169.XXX.XXX.XXX:53: no such host

It seems like it cannot connect to Docker. I've attached the Docker socket both to the GitLab runner container and to the GitLab runner binary within the container.
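The "lookup docker ... no such host" part of that error usually means no docker:dind service is reachable under the hostname docker. A minimal .gitlab-ci.yml sketch of that setup (job name and image tags are assumptions; requires a runner with the docker executor in privileged mode):

```yaml
build:
  image: docker:20.10
  services:
    - docker:20.10-dind          # provides the "docker" hostname the error is missing
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""       # disable dind TLS; only for simple lab setups
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
```

Alternatively, mounting /var/run/docker.sock into the job and leaving DOCKER_HOST unset avoids dind entirely.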

Clarify GitLab CI Runner Doc

I'm reading the docs on GitLab CI runner. The following is a bit confusing:

"If you are using GitLab, you can run a GitLab CI "runner" in your Docker Swarm mode cluster to test, build and deploy automatically your code."

Reading further down, the doc does not explain how to install the main GitLab CE Docker image and integrate it with the runner. It sounds like the doc assumes the user will use GitLab's hosted service. Please clarify the docs if this is the case.

In the following, it's not clear whether I'm looking at the Traefik dashboard for GitLab, the hosted GitLab dashboard, or dockerized GitLab. Please clarify.

In a web browser, go to the GitLab "Admin Area -> Runners" section.

Swarmpit setup fails - http://db:5984 host unreachable

Thanks to the great guide I have been able to set up traefik and portainer on a single node.

However, Swarmpit does not start up successfully for me when following the steps in the guide.

  1. CouchDB is not able to start up due to missing databases related to this issue. Should the setup be updated?
  • The error looks something like this on the couchdb container:

chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards)

  • It can be resolved by running the following in the container and redeploying the stack:
    curl -X PUT http://127.0.0.1:5984/_users
    curl -X PUT http://127.0.0.1:5984/_replicator
    curl -X PUT http://127.0.0.1:5984/_global_changes
  2. The Swarmpit app is unable to connect to the db service at http://db:5984
21-01-19 20:04:35 fd9c090c50a2 ERROR [swarmpit.http:71] - Request execution failed! Scope: DB
|> GET http://db:5984/
|> Headers: null
|> Payload: null
|< Message: No route to host (Host unreachable)
|< Data: { }
Jan 19, 2021 8:04:38 PM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.NoRouteToHostException) caught when processing request to {}->http://db:5984: No route to host (Host unreachable)

I have been unable to figure out how to debug or resolve this.

I also have my full-stack FastAPI app deployed with a nested Traefik service via GitLab. Everything else works perfectly and is set up from a clean Ubuntu 20.04 instance with only Docker 20.10.2 (API 1.41, compose 1.27.4).

Docker was installed as directed by their docs, except that I had to modify the hosts for the Docker daemon. This was while trying to debug Docker connectivity, but I wonder if it could somehow play a role?
echo '{"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"], "dns": ["8.8.8.8", "1.1.1.1"]}' > /etc/docker/daemon.json

Using Docker swarm secrets for the Traefik dashboard

In the section https://dockerswarm.rocks/traefik/#preparation, the current guide suggests storing the HTTP Basic Auth information in environment variables. I found it much easier to handle (and probably safer, too) to use a Docker secret.

You can generate such a file using the Apache utility htpasswd:

mkdir secrets
htpasswd -Bc -C 16 secrets/basic_auth_users.txt admin

Then specify the secret in the docker-compose.yml and let your traefik service use it:

secrets:
  basic_auth_users:
    file: ./secrets/basic_auth_users.txt

services:
  traefik:
    ...
    secrets:
      - basic_auth_users
    ...
    labels:
      ...
      - "traefik.http.routers.traefik.middlewares=dashboard-auth"
      - "traefik.http.middlewares.dashboard-auth.basicauth.usersfile=/run/secrets/basic_auth_users"

That way you can actually add multiple users to the file, too.

I'm happy to make a PR to the docs if desired.

Adding thelounge to swarm results in 404 error

Hello, Sebastián,

we e-mailed a few days ago. Unfortunately I haven't gotten any further starting thelounge (https://hub.docker.com/r/thelounge/thelounge/), and like others here I get the 404 error from Traefik.

My thelounge.yml file looks like this:

version: '3.3'

services:
  thelounge:
    image: thelounge/thelounge:latest
    volumes:
      - thelounge:/data
      #- ~/data/thelounge:/var/opt/thelounge
    ports:
        - "4000:4000"
    labels:
      # - traefik.backend=thelounge
      - traefik.enable=true
      - traefik.frontend.rule=Host:${DOMAIN}
      - traefik.port=4000
      - traefik.docker.network=traefik-public
      - traefik.tags=traefik-public
      # Traefik service that listens to HTTP
      - traefik.redirectorservice.frontend.entryPoints=http
      - traefik.redirectorservice.frontend.redirect.entryPoint=https
      # Traefik service that listens to HTTPS
      - traefik.webservice.frontend.entryPoints=https      
    networks:
      #- web
      - traefik-public

networks:
  traefik-public:
    external: true

volumes:
  thelounge:

What did I do wrong? Can anyone please help me with my problem?

Edit: I also tried:

version: '3.3'

services:
  thelounge:
    image: thelounge/thelounge:latest
    volumes:
      - thelounge:/data
    labels:
      # - traefik.backend=thelounge
      - traefik.enable=true
      - traefik.frontend.rule=Host:${DOMAIN}
      - traefik.port=4000
      - traefik.docker.network=traefik-public
      - traefik.tags=traefik-public
      # Traefik service that listens to HTTP
      - traefik.redirectorservice.frontend.entryPoints=http
      - traefik.redirectorservice.frontend.redirect.entryPoint=https
      # Traefik service that listens to HTTPS
      - traefik.webservice.frontend.entryPoints=https      
    networks:
      #- web
      - traefik-public

networks:
  traefik-public:
    external: true

volumes:
  thelounge:

The 404 error still appears. :-(

Worker node doesn't have TLS

The worker node does not have TLS enabled when I go to its hostname/subdomain sub.example.com, and traefik.example.com does not bring me to the page described in this section: https://dockerswarm.rocks/traefik/#user-interface. Only sys.traefik.example.com works (it does have TLS, however).

error="could not find an available IP while allocating VIP"

I get this error every day on my dev single-server environment, because all my "public" services (about 100 services) use traefik-public.

The docker/ip-util-check script says:

Network traefik-public/n37oijkbobyw has an IP address capacity of 253 and uses 220 addresses spanning over 1 nodes
WARNING: network is using more than the 75% of the total space. Remaining only 32 IPs after upgrade

What can I do in this situation?

Thank you
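The capacity figure in the warning follows from simple subnet arithmetic, and one remedy (an assumption, not from the guide) is recreating traefik-public with a larger subnet, e.g. docker network create -d overlay --attachable --subnet 10.20.0.0/16 traefik-public. The arithmetic:

```shell
# Usable addresses in a /N overlay subnet: total minus network, broadcast, and gateway
capacity() { echo $(( (1 << (32 - $1)) - 3 )); }
capacity 24   # prints 253, matching the warning above
capacity 16   # prints 65533, what a --subnet 10.20.0.0/16 network would allow
```

Note that recreating the network requires briefly removing the stacks attached to it.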

@tiangolo I came here to thank you for your work and your site. I started playing with Docker Swarm a little while ago, and you have the best site out there. I was about to study Portainer to implement it and see its features, and you have it all ready on your site. The same goes for Traefik, which I was about to set up all on my own, and you have it all worked out. I'm just copying and pasting; Swarm is so awesome that you really can just copy and paste a docker-compose file, and you have it set up beautifully.

You're the man, thank you!

Best wishes.

Address persistence more clearly

Many of the steps in the Docker Swarm Rocks manuals assume that some part of the management architecture, or even the ingress (Traefik), is deployed on a single host, usually one of the manager nodes, due to volume persistence.

This effectively makes that node a SPOF, which is exactly what Docker Swarm is designed to avoid.

I do understand that data persistence is a cursed topic, but shouldn't this at least be discussed somewhere in the documentation?

HealthChecks

Hello,

I've found dockerswarm.rocks very helpful for learning about Docker and Docker Swarm. Your stack was the beginning of most of my home automation back in 2019. One thing that seems to be missing, though, is health checks. When the Traefik UI container shut down, for instance, it stayed down until I had to restart it manually. Not knowing much about docker-compose and Docker outside of what you have here: is there a way to do health checks in the compose files?
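Compose files do support health checks, and in Swarm mode a task whose health check fails is replaced according to the service's restart_policy. A minimal sketch (the service name, image, and /health endpoint on port 8080 are assumptions):

```yaml
version: '3.3'

services:
  app:
    image: myorg/app:latest        # hypothetical image
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:8080/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s            # grace period before failures count
    deploy:
      restart_policy:
        condition: on-failure
```

The test command runs inside the container, so the image must include the tool used (wget here; curl is a common alternative).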

On redeploy traefik deletes route

Hello!
When running docker stack deploy ... a second time, Traefik deletes its route to the URL.
For example, I first deployed an nginx server on host example.com; after redeploying, example.com shows a 404 page.

How to fix this issue?

my yaml file:

version: '3.3'

volumes:
    web_data: {}

networks:
  net:
    driver: overlay
    attachable: true
  traefik-public:
    external: true

services:
  web:
    image: *****
    networks:
      - net
      - default
      - traefik-public
    environment:
      - DB_TYPE=mysql
      - DB_HOST=mysql
      - DB_DATABASE=****
      - DB_USERNAME=***
      - DB_PASSWORD=****
    volumes:
      - web_data:/var/www/html/storage/app
    deploy:
      mode: replicated
      replicas: 3
      labels:
        - traefik.frontend.rule=Host:example.com
        - traefik.enable=true
        - traefik.tags=${TRAEFIK_PUBLIC_TAG:-traefik-public}
        - traefik.docker.network=traefik-public
        # Traefik service that listens to HTTP
        - traefik.redirectorservice.frontend.entryPoints=http
        - traefik.redirectorservice.frontend.redirect.entryPoint=https
        # Traefik service that listens to HTTPS
        - traefik.webservice.frontend.entryPoints=https

Using Socat instead of exposing :/var/run/docker.sock in Traefik

Hi,
You did an amazing job here! I think your best practices are on target.

About exposing the Docker socket: I read in the past that it's best practice to use socat to expose it. Before writing a PR, would you consider using socat?

This is how I do it in my stack:
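A typical socat-based proxy service looks roughly like this (a sketch with assumed image and network names, not the poster's actual config):

```yaml
services:
  dockerproxy:
    image: alpine/socat
    # Relay TCP 2375 to the local Docker socket
    command: tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - docker-internal   # never attach this service to a public network
    deploy:
      placement:
        constraints: [node.role == manager]
```

Unlike docker-socket-proxy (used elsewhere on this page), plain socat does not filter API calls, so anything that can reach port 2375 gets full Docker API access.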

Also, if you feel this is useless, I'd like to know your opinions about it.

Cheers!

Not getting SSL

I'm trying to set up a WordPress stack behind Traefik in Docker Swarm, but I can't get SSL working: I get an SSL error and can't access the website.

Here is my compose file:

version: '3'

networks:
  traefik-public:
    external: true
  internalwp:
    external: false

services:
  wordpress:
    image: wordpress:5.4.1-php7.2-apache
    depends_on:
      - mariadb
    volumes:
      - orgwerimawp:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: xxxxxxxxxxxxxxxxxxxxxxxxxx
      WORDPRESS_DB_PASSWORD: xxxxxxxxxxxxxxxxxx
      WORDPRESS_DB_USER: xxxxxxxxxxxxxxxxxxxxx
      WORDPRESS_DB_NAME: xxxxxxxxxxxxxxxxxxxxxxxxx
      WORDPRESS_CONFIG_EXTRA: |
        /* Multisite */
        define('WP_ALLOW_MULTISITE', true);
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', true);
        define('DOMAIN_CURRENT_SITE', 'exempl.com'); // TODO: change to actual domain when deploying
        define('PATH_CURRENT_SITE', '/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);
    deploy:
      placement:
        constraints:
          - node.role == worker
    labels:
      - traefik.frontend.rule=Host:exempl.com
      - traefik.enable=true
      - traefik.port=80
      - traefik.tags=traefik-public
      - traefik.docker.network=traefik-public
      - traefik.frontend.entryPoints=http,https
      - traefik.frontend.redirect.entryPoint=https
    networks:
      - traefik-public
      - internalwp

  mariadb:
    image: mariadb:10.5.3
    volumes:
      - orgdb:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: xxxxxxxxxxxxxxxxxx
      MYSQL_DATABASE: xxxxxxxxxxxxxxxxx
      MYSQL_USER: xxxxxxxxxxxxxxx
      MYSQL_PASSWORD: xxxxxxxxxxxxxxxxxx
    restart: always
    deploy:
      placement:
        constraints:
          - node.role == worker
    networks:
      - internalwp
    labels:
      - traefik.enable=false

volumes:
  orgwerimawp:
  orgdb:

Consul

Hi team, I have a question about your implementation of Traefik with Consul in HA. We see you are using the following command argument for the Traefik proxy:

--consul.endpoint="consul-leader:8500"

As I understand it, it should use the following definition:

--consul.endpoint="consul-replica:8500"

If not, I don't understand the role of the consul-replica... and if the consul-leader goes down, it will take some seconds to restart or start on another node...

Thanks!

Multi domains

Hello, I followed all your guides, but there is something I can't figure out how to do. In the variables I declare a domain like xxx.aaaaaaa.com
and it works fine, but I want Traefik to manage other domains like xxx.aaaaa.net, and also domains that I have working on my local network like xxx.aaa.lan. How should I do this?
Thank you

Not working with Ubuntu 20.04 LTS

It seems that /etc/hostname is immutable in Ubuntu 20.04, so the command echo $USE_HOSTNAME > /etc/hostname gives permission denied (even with sudo). I tried sudo chattr +i /etc/hostname but that also didn't work.
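One hedged aside: the redirection in echo $USE_HOSTNAME > /etc/hostname is performed by the calling (unprivileged) shell before sudo ever runs, which is a common cause of this "permission denied" regardless of file attributes. A sketch of the usual workaround, demonstrated against a scratch path so it is safe to run (the hostname value and path are placeholders):

```shell
# The real fix would be:
#   echo "$USE_HOSTNAME" | sudo tee /etc/hostname
# because tee, not the shell redirection, then performs the privileged write.
# The same mechanism, shown on a scratch file:
USE_HOSTNAME=dog.example.com
echo "$USE_HOSTNAME" | tee /tmp/hostname-demo >/dev/null
cat /tmp/hostname-demo
```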

Traefik service deployed twice on manager nodes on a 6-node cluster (3+3)

Hello, I greatly appreciate your guides and the technical details. I have found them very helpful on my research on swarm.

I have a simple question. As the title describes, when using

export TRAEFIK_REPLICAS=$(docker node ls -q | wc -l)

on a 6-node swarm cluster with 3 managers and 3 workers, because of this constraint

deploy:
      replicas: ${TRAEFIK_REPLICAS:-3}
      placement:
        constraints:
          - node.role == manager
        preferences:
          - spread: node.id

this results in 2 Traefik containers per manager node (6 total, matching the number of replicas). This is expected behavior as far as I understand, but I can't decide whether the correct approach is to have one replica per node (so the "manager" constraint has to be removed), or whether 3 replicas running only on manager nodes are enough.

With the current deployment of 6 replicas (2 per manager node) everything seems to work, and by adding all 6 of my nodes to a round-robin DNS I get correct behavior, but I don't know if this is ideal, given the question about the placement and the number of Traefik replicas.

Maybe I'm missing something but if you can clarify more I will be grateful. Thanks in advance.
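One hypothetical alternative to the command above: size the replica count from manager nodes only, so it matches the node.role == manager constraint. With Docker available this would be export TRAEFIK_REPLICAS=$(docker node ls --filter "role=manager" -q | wc -l); the counting pipeline itself is shown below with stand-in node IDs, since it runs without a swarm:

```shell
# Stand-in for the output of: docker node ls --filter "role=manager" -q
manager_ids="nodeid1
nodeid2
nodeid3"
# One replica per manager node: count the ID lines.
TRAEFIK_REPLICAS=$(printf '%s\n' "$manager_ids" | wc -l)
echo "$TRAEFIK_REPLICAS"
```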

Add a "Brevity plus" section

The text is very well written and uses a proper level of brevity.

Still, in many cases there are good reasons to provide more detail and to explain other considerations. Examples are issues #20 and #10 (I am sure there will be more).

How can the main text be kept easy to follow while still providing the necessary space for extra explanation?

I would recommend adding a new section called "Brevity plus", which would introduce a set of short chapters, each:

  • explaining a simple topic
  • having an easy-to-understand title
  • possibly using some easy-to-remember tag (like [envpswd], [advertise-addr], etc.)

There would be one global section, with an introduction explaining that this is the place to cover some concepts in more detail, without aiming to cover the given topics completely and perfectly.

Sample content could be:

Brevity plus

Brevity is key to making the text usable. But it also has some limits.

This page describes selected topics in a bit more detail, to allow the main text to be succinct and refer to extended information via a simple link to the relevant topic here.

On the other hand, this page does not try to cover the given topics completely; it aims only at providing basic alternative and extended information.

[hashed_pswd] Hashed passwords without an environment variable

If you need to create a hashed password, you can skip the environment variable and type it directly at the prompt, as follows:

        export HASHED_PASSWORD=$(openssl passwd -apr1)
        Password: <enter it here>
        Verifying - Password: <re-enter it here>

[advertise-addr] Proper IP for docker swarm init --advertise-addr <IP>

When initializing docker swarm, the command sometimes asks for clarification about which IP address to use.

The key requirement is that the given IP address must be reachable by the other nodes, so they can join.

In case you are using a VPN (and all potential nodes are connected to it), it is often better to use the IP from this VPN, as it avoids docker swarm publishing sensitive ports to the external Internet.

Question on GitLab deployment

Hi @tiangolo thank you for this guide, I've gotten a lot better at Docker configurations since reading it!

On the page GitLab CI runner for CI/CD, I can't get the deploy stage to work for my stack. The build stage works fine; however, I receive the error:

this node is not a swarm manager. Use "docker swarm init" or "docker swarm join"
to connect this node to swarm and try again

My .gitlab-ci.yml:

image: docker:19.03.12

services:
  - docker:19.03.12-dind

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - export IMAGE_NAME=$CI_REGISTRY_IMAGE:latest

variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE:latest

stages:
  - build
  - deploy

build-prod:
  stage: build
  script:
    - cd ./frontend
    - docker build -t $IMAGE_NAME -f Dockerfile.prod .
    - docker push $IMAGE_NAME
  only:
    - nextjs

deploy-prod:
  stage: deploy
  script:
    - docker stack deploy -c nextjs.yml --with-registry-auth nextjs
  only:
    - nextjs

Would it help to use your image, tiangolo/docker-with-compose? Thank you for your help.
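For context, the docker:dind service is a fresh daemon that is not part of any swarm, which matches the error above. One possible shape for the deploy job, under the assumption that the swarm manager is reachable over SSH (the host and user below are made up):

```yaml
deploy-prod:
  stage: deploy
  variables:
    # Hypothetical: point the docker CLI at the actual swarm manager instead
    # of the dind daemon started for this job.
    DOCKER_HOST: "ssh://deploy@swarm-manager.example.com"
  script:
    - docker stack deploy -c nextjs.yml --with-registry-auth nextjs
  only:
    - nextjs
```

Another common option is a runner registered on the manager node itself, using the shell executor, so docker commands hit the manager's daemon directly.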

Can't get docs/portainer.yml (or any other) to show up in the Traefik console ... 404 error when I try to go to the traefik.frontend.rule host


version: '3.3'

services:
  agent:
    image: portainer/agent
    environment:
      AGENT_CLUSTER_ADDR: tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent-network
    deploy:
      mode: global
      placement:
        constraints:
          - node.platform.os == linux

  portainer:
    image: portainer/portainer
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    volumes:
      - portainer-data:/data
    networks:
      - agent-network
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.role == manager
          - node.labels.portainer.portainer-data == true
      labels:
        - traefik.frontend.rule=Host:portainer.${DOMAIN?Variable DOMAIN not set}
        - traefik.enable=true
        - traefik.port=9000
        - traefik.tags=traefik-public
        - traefik.docker.network=traefik-public
        # Traefik service that listens to HTTP
        - traefik.redirectorservice.frontend.entryPoints=http
        - traefik.redirectorservice.frontend.redirect.entryPoint=https
        # Traefik service that listens to HTTPS
        - traefik.webservice.frontend.entryPoints=https

networks:
  agent-network:
    attachable: true
  traefik-public:
    external: true

volumes:
  portainer-data:

The only thing I changed was the traefik.frontend.rule, prepending portainer. to the $DOMAIN.

ID                  NAME                            MODE                REPLICAS            IMAGE                        PORTS
aazm9ndssbh7        api-ping_api-ping-service       replicated          1/1                 api-ping:latest
f6vzmenif3rc        portainer_agent                 global              5/5                 portainer/agent:latest
klgw08aczjxh        portainer_portainer             replicated          0/1                 portainer/portainer:latest
9k75eq2u7nqn        traefik-consul_consul-leader    replicated          1/1                 consul:latest
8gfpc019us10        traefik-consul_consul-replica   replicated          3/3                 consul:latest
9s3qu9v6in25        traefik-consul_traefik          replicated          3/5                 traefik:v1.7

The portainer service doesn't seem to be running any replicas. The output from docker service inspect is here:


       "UpdatedAt": "2019-08-24T22:15:09.315478723Z",
        "Spec": {
            "Name": "portainer_portainer",
            "Labels": {
                "com.docker.stack.image": "portainer/portainer",
                "com.docker.stack.namespace": "portainer",
                "traefik.docker.network": "traefik-public",
                "traefik.enable": "true",
                "traefik.frontend.rule": "Host:portainer.rhlab.io",
                "traefik.port": "9000",
                "traefik.redirectorservice.frontend.entryPoints": "http",
                "traefik.redirectorservice.frontend.redirect.entryPoint": "https",
                "traefik.tags": "traefik-public",
                "traefik.webservice.frontend.entryPoints": "https"
            },

docker service logs portainer_portainer is empty.

Is a label constraint strictly necessary?

In the docs for the Traefik service it is suggested to set a label constraint, the idea being that in stacks with multiple services, only one of them would have said label.

From the traefik.yml:

      # Add a constraint to only use services with the label "traefik.constraint-label=traefik-public"
      - --providers.docker.constraints=Label(`traefik.constraint-label`, `traefik-public`)
      # Do not expose all Docker services, only the ones explicitly exposed
      - --providers.docker.exposedbydefault=false

I wonder if that is really necessary, since services are not exposed by default, so Traefik needs to be enabled explicitly on each service anyway. Additionally, there is the traefik-public external network, and as shown, e.g., in the Portainer docker-compose.yml, only the exposed service would be on that network.

The node-based constraints for the volumes make a lot of sense, of course.
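For reference, with exposedbydefault=false a service still opts in explicitly; the constraint label then acts as an extra filter, e.g. so multiple Traefik instances can share a cluster without picking up each other's services. A hedged sketch of the opt-in labels (the router name, host, and port are assumptions):

```yaml
deploy:
  labels:
    # Required because exposedbydefault=false.
    - traefik.enable=true
    # Matched by the --providers.docker.constraints expression.
    - traefik.constraint-label=traefik-public
    - traefik.http.routers.myapp-https.rule=Host(`myapp.example.com`)
    - traefik.http.services.myapp.loadbalancer.server.port=80
```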

Consul - Replication - Failover

Hi,

Can you add documentation about how to recover the Consul cluster after the leader is stopped/damaged? Something happened to my Consul cluster and I can't start it now.

Now, if I launch the leader my log is:
2020-03-19T23:18:04.575Z [WARN]  agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90"
2020-03-19T23:18:06.755Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host"
2020-03-19T23:18:06.787Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host"
2020-03-19T23:18:13.924Z [WARN]  agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605"
2020-03-19T23:18:16.995Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host"
2020-03-19T23:18:17.027Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host"
2020-03-19T23:18:43.919Z [ERROR] agent.server: failed to reconcile member: member="{ff5f48876b6b 10.0.7.244 8301 map[acls:0 bootstrap:1 build:1.7.2:9ea1a204 dc:dc1 id:46c3d011-333a-6d9e-be06-d3d01b93cfbe port:8300 raft_vsn:3 role:consul segment: vsn:2 vsn_max:3 vsn_min:2 wan_join_port:8302] alive 1 5 2 2 5 4}" error="error removing server with duplicate ID "46c3d011-333a-6d9e-be06-d3d01b93cfbe": Need at least one voter in configuration: {[{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300} {Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}]}"

(The same WARN/ERROR lines then repeat every few seconds, interleaved with the equivalent messages from a second start attempt on 2020-03-21, where the node came up as a71217dd3a4b on 10.0.7.245 and hit the same "duplicate ID ... Need at least one voter in configuration" error while trying to reach the Nonvoter peers 10.0.7.218:8300 and 10.0.7.219:8300.)
fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:02:57.411Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.245:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:18:57.588Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-21T11:02:57.635Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.245:0->10.0.7.218:8300: connect: no route to host", 2020-03-21T11:02:58.134Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-21T11:02:58.568Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:03:00.483Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.245:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:18:57.761Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:03:00.707Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.245:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:00.163Z [ERROR] agent.server.raft: failed to heartbeat 
to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-21T11:03:07.730Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-21T11:03:07.966Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:03:10.787Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.245:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:00.195Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-21T11:03:11.043Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.245:0->10.0.7.218:8300: connect: no route to host", 2020-03-21T11:03:11.625Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-21T11:03:11.788Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:03:13.859Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.245:0->10.0.7.219:8300: connect: no route to host", 2020-03-21T11:03:14.115Z [ERROR] agent.server.raft: failed to heartbeat to: 
peer=10.0.7.218:8300 error="dial tcp 10.0.7.245:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:07.332Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:03:21.029Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-21T11:03:21.285Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:03:24.099Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.245:0->10.0.7.219:8300: connect: no route to host", 2020-03-21T11:03:24.355Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.245:0->10.0.7.218:8300: connect: no route to host", 2020-03-21T11:03:24.760Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-21T11:03:24.997Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-21T11:03:27.171Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.245:0->10.0.7.219:8300: connect: no route to host", 
2020-03-21T11:03:27.427Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.245:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:07.364Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:10.407Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:10.435Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:11.312Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:11.364Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:19:13.475Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:13.507Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:20.648Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 
2020-03-19T23:19:20.676Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:23.715Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:23.747Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:24.333Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:24.350Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:19:26.787Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:26.819Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:34.015Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:19:34.051Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 
error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:37.091Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:37.123Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:37.770Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:37.991Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:19:40.167Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:40.195Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:43.919Z [ERROR] agent.server: failed to reconcile member: member="{ff5f48876b6b 10.0.7.244 8301 map[acls:0 bootstrap:1 build:1.7.2:9ea1a204 dc:dc1 id:46c3d011-333a-6d9e-be06-d3d01b93cfbe port:8300 raft_vsn:3 role:consul segment: vsn:2 vsn_max:3 vsn_min:2 wan_join_port:8302] alive 1 5 2 2 5 4}" error="error removing server with duplicate ID "46c3d011-333a-6d9e-be06-d3d01b93cfbe": Need at least one voter in configuration: {[{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300} {Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}]}", 2020-03-19T23:19:47.332Z [WARN] 
agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:19:47.364Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:50.403Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:50.435Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:19:51.213Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:19:51.343Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:19:53.479Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:19:53.507Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:20:00.645Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for 
server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:20:00.676Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:20:03.715Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:20:03.747Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:20:04.437Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:20:04.630Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:20:06.787Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:20:06.819Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:20:14.012Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:20:14.078Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: 
id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 2020-03-19T23:20:14.228Z [INFO] agent: Caught: signal=terminated, 2020-03-19T23:20:14.228Z [INFO] agent: Gracefully shutting down agent..., 2020-03-19T23:20:14.228Z [INFO] agent.server: server starting leave, 2020-03-19T23:20:14.229Z [INFO] agent.server.serf.wan: serf: EventMemberLeave: ff5f48876b6b.dc1 10.0.7.244, 2020-03-19T23:20:14.229Z [INFO] agent.server: Handled event for server in area: event=member-leave server=ff5f48876b6b.dc1 area=wan, 2020-03-19T23:20:14.229Z [INFO] agent.server.router.manager: shutting down, 2020-03-19T23:20:17.091Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 4304cc1e-9b82-6dfa-a6a7-6783efb8a605 10.0.7.218:8300}" error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:20:17.155Z [ERROR] agent.server.raft: failed to appendEntries to: peer="{Nonvoter 6a68833f-4b17-d99d-769b-b94bd63bef90 10.0.7.219:8300}" error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:20:17.231Z [INFO] agent.server.serf.lan: serf: EventMemberLeave: ff5f48876b6b 10.0.7.244, 2020-03-19T23:20:17.232Z [INFO] agent.server: Removing LAN server: server="ff5f48876b6b (Addr: tcp/10.0.7.244:8300) (DC: dc1)", 2020-03-19T23:20:17.233Z [WARN] agent.server: deregistering self should be done by follower: name=ff5f48876b6b, 2020-03-19T23:20:17.731Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=4304cc1e-9b82-6dfa-a6a7-6783efb8a605 fallback=10.0.7.218:8300 error="Could not find address for server id 4304cc1e-9b82-6dfa-a6a7-6783efb8a605", 2020-03-19T23:20:17.846Z [WARN] agent.server.raft: unable to get address for sever, using fallback address: id=6a68833f-4b17-d99d-769b-b94bd63bef90 fallback=10.0.7.219:8300 error="Could not find address for server id 6a68833f-4b17-d99d-769b-b94bd63bef90", 
2020-03-19T23:20:18.000Z [ERROR] agent.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found", 2020-03-19T23:20:20.000Z [ERROR] agent.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found", 2020-03-19T23:20:20.163Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.218:8300 error="dial tcp 10.0.7.244:0->10.0.7.218:8300: connect: no route to host", 2020-03-19T23:20:20.227Z [ERROR] agent.server.raft: failed to heartbeat to: peer=10.0.7.219:8300 error="dial tcp 10.0.7.244:0->10.0.7.219:8300: connect: no route to host", 2020-03-19T23:20:20.233Z [INFO] agent.server: Waiting to drain RPC traffic: drain_time=5s, 2020-03-19T23:20:22.000Z [ERROR] agent.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found", 2020-03-19T23:20:24.000Z [ERROR] agent.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found", 2020-03-19T23:20:24.000Z [ERROR] agent.server.autopilot: Error promoting servers: error="error getting server raft protocol versions: No servers found",

I have not launched the replicas, because when I did, the whole cluster broke down and nothing worked. Now the leader's logs are full of errors, but the system looks "half-functional". Right now the certificates on my sites come from Traefik's default certificate, not from Let's Encrypt, so they are not valid.

Thank you!

Guide for removing swarm

Love the website and easy to remember name. I'm currently not very well versed with all the aspects of using swarm and trying to learn more.

One thing I wish to do now is to remove my swarm and then rebuild it. I want to do it again to make sure I understand what I am doing and prepare for a production deployment but I'm sure there are other use cases for removing a swarm.

My problem is that I'm not sure whether docker swarm leave returns everything to the pre-swarm state. For example, after leaving the swarm I still see networks in docker network ls that appear to come from the swarm, but I'm not sure. I see a {MY_PROJECT_NAME}_default network which could be from Compose or something else.
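For reference, the teardown sequence I'm considering (just a sketch; the stack name is a placeholder, list yours with docker stack ls first):

```
# Remove deployed stacks first, then leave the swarm
docker stack rm my_stack           # repeat for each stack in `docker stack ls`
docker swarm leave --force         # --force is required on the last manager

# Overlay networks created by stacks may linger after leaving;
# prune anything no longer referenced by a container
docker network prune
```

As far as I can tell, compose-created networks like {MY_PROJECT_NAME}_default are unrelated to the swarm and would also show up after docker-compose up.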

Thank you.

CI/CD Error

I have disabled the shared runners and set up gitlab-runner inside a Docker container on my cloud server. I can see the runner listed in Settings -> CI/CD -> Runners.
But the job fails with this error: error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on 213.133.100.100:53: no such host
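My guess is that the job's DOCKER_HOST points at a docker:dind service that isn't defined, so the docker hostname never resolves. A minimal .gitlab-ci.yml fragment that would make it resolvable (image versions are examples; the runner also needs privileged = true in its config.toml for dind):

```yaml
build:
  image: docker:19.03
  services:
    - docker:19.03-dind    # provides the `docker` hostname the client tries to reach
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: "" # disable TLS so the plain port 2375 is used
  script:
    - docker info
```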

Docker Compose Template for Traefik Has Issues

Great website, tutorial etc...! Thank you for providing this.

I'd like to document my issues using the guide for setting up Traefik at https://dockerswarm.rocks/traefik/ to try and help others...

Deploying the template exactly as described and having followed all the steps:

- invalid interpolation format for services.traefik.deploy.labels.[]: "required variable HASHED_PASSWORD is missing a value: Variable not set". You may need to escape any $ with another $.

According to this Stack Overflow answer, extended shell-style features such as ${VARIABLE-default} and ${VARIABLE/foo/bar} are not supported, so I changed ${HASHED_PASSWORD?Variable not set} and the other values containing the ?Variable not set addendum to plain ${HASHED_PASSWORD}.
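For anyone hitting the same interpolation error: in my case it can also appear simply because the variable was never exported in the shell doing the deploy. A sketch of generating and checking the value (changeme is a placeholder password):

```shell
# Generate an htpasswd-style (apr1/MD5) hash for Traefik basic auth
export HASHED_PASSWORD=$(openssl passwd -apr1 changeme)

# The hash contains several $ characters; verify it survived intact
echo "$HASHED_PASSWORD"
```

If the hash is ever written literally into the Compose file instead of being read from an environment variable, each $ must be doubled ($$) so Compose doesn't try to interpolate it.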

Next issue: Cannot login--incorrect password.

Those running Ubuntu may find that export USERNAME=admin doesn't appear to take effect: $USERNAME is already set to the login user (ubuntu) by the environment, and can be silently reset, e.g. in a new shell or via sudo. It took me forever to find out why my login never worked! If you hit this, try logging in with username ubuntu and the password you set.
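A quick way to check what value would actually get deployed (the "before" value depends on your distro and shell):

```shell
echo "before: ${USERNAME:-<unset>}"   # on Ubuntu this is typically your login user
export USERNAME=admin
echo "after:  $USERNAME"              # prints "after:  admin", in this shell only
```

A safer alternative might be to avoid the clash entirely, e.g. export a differently named variable such as TRAEFIK_USERNAME and reference that in the Compose file (that name is just a suggestion, not what the guide currently uses).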

Perhaps updating the docs would be helpful? I'm happy to make a PR and update, but I wanted to check here first to make sure I wasn't missing something elementary before suggesting changes. Thank you again for the great tutorial!

Service placement on different node makes it unreachable

I installed the basic traefik stack on 1 node and I was pleased with how it works.
Then I installed a custom application based on this config https://dockerswarm.rocks/thelounge/
All working well, dashboard available at https://traefik.mydomain.com, app available at https://app.mydomain.com

Then I added one more node to the swarm, and the next stack update placed the app container on the worker node. Traefik still runs on master node according to placement constraints, and the dashboard still runs fine and reports no errors.
However, trying to access the app no longer works: the browser gets a Bad Gateway message (probably from Traefik?).

This is the log line Traefik produces for every request to https://app.mydomain.com:

2021-01-25T18:16:07.965245081Z **client-ip** - - [25/Jan/2021:18:16:04 +0000] "GET / HTTP/2.0" 502 11 "-" "-" 419 "app-https@docker" "http://10.0.1.8:8000" 3057ms

Again, I didn't change anything except that I added a worker node and redeployed the stack.
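One thing I'm now checking: apparently a 502 that appears only when a task lands on a different node can be caused by blocked overlay-network traffic between the hosts. The ports Swarm needs open between nodes, sketched with ufw as an example firewall (adapt to whatever firewall the hosts use):

```
ufw allow 2377/tcp   # cluster management traffic
ufw allow 7946/tcp   # node-to-node gossip
ufw allow 7946/udp
ufw allow 4789/udp   # VXLAN overlay data traffic
```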

Expected configuration for stacks behind the traefik-public proxy

I have a stack deployed using the ideas from this site; it uses the FastAPI + Postgres template to deploy the stack behind this public Traefik router. The FastAPI domain does not have SSL certificates, but the traefik.sys.example.com domain does.

What labels or commands should be present on the FastAPI stack to get SSL certificates from Let's Encrypt?
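My current guess, following the pattern from the Traefik guide on this site (the router/service names and the domain are placeholders; the le certresolver name and the traefik-public network/constraint label are taken from that guide):

```yaml
deploy:
  labels:
    - traefik.enable=true
    - traefik.docker.network=traefik-public
    - traefik.constraint-label=traefik-public
    - traefik.http.routers.app-https.rule=Host(`app.example.com`)
    - traefik.http.routers.app-https.entrypoints=https
    - traefik.http.routers.app-https.tls=true
    - traefik.http.routers.app-https.tls.certresolver=le
    - traefik.http.services.app.loadbalancer.server.port=80
```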

Thank you!

Add T.A.D.S. to Project Generators section

Hi!
I love your website: it is clear, the name is fun, it catches the attention, and most of all: it is precious learning material! It's good to see good practices promoted like that.

However, during my past experience with Docker Swarm as a small team leader, I noticed:

  • There is a lack of good examples of well-organized Infrastructure as Code repositories
  • It is hard to get developers to use Docker, especially in a complex microservices environment
  • It is nearly impossible for a developer to reproduce the production environment

That's why I created the T.A.D.S. boilerplate project: https://github.com/Thomvaill/tads-boilerplate
It integrates Terraform, Ansible and Docker Swarm together to do full Infrastructure as Code.
It uses the same processes and configs to deploy locally and in production. It also lets developers test their Swarm cluster in a production-like environment with Vagrant (3 nodes).

In this project I of course promote the Docker Swarm Rocks principles. For the moment only Traefik is implemented in the example, but Swarmprom and Portainer will follow soon (thomvaill/tads-boilerplate#6).

I think this project could totally appear in your "Project Generators" section. What do you think?
Moreover, I would love to hear your feedback on my project!

Thanks!

What would be the purpose of having Portainer when we have Swarmpit

I've followed your amazing guide on setting up a Docker Swarm environment. Now I have a cluster with Traefik and Swarmpit. I noticed that you also have a guide on Portainer. I'm happy with the current stack (except for some instability I occasionally get from Swarmpit). So I'm curious: what would be the benefit of installing Portainer when Swarmpit is already installed? Does it offer more features? Or do you recommend replacing Swarmpit with it?

Use Docker Secrets

Docker Secrets is a feature that works exclusively with Docker Swarm and handles secrets management across a given swarm. The tutorial could be updated to use Docker Secrets where appropriate.
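As a sketch of what that could look like (the secret and service names here are made up for illustration):

```
# Create a secret from stdin (must be run on a swarm manager)
printf 'supersecretpassword' | docker secret create db_password -

# Reference it from a service; it is mounted in the container
# as an in-memory file at /run/secrets/db_password
docker service create --name db --secret db_password postgres:12
```

In a stack file the equivalent is a top-level secrets: section plus a secrets: entry on the service.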

This would subsume #10 and the password-related part of #21.

Happy to furnish a PR.

CC: @tiangolo @vlcinsky

error localhost

I can't access local domains



> version: '3'
> services:
>   inventario:
>     image: nahuelbaglietto/php7.4-apache:latest
>     networks:
>       - datos
>       - traefik-public
>     volumes:
>       - /mnt/www/html/inventario:/var/www/html
>     deploy:
>       labels:
>         - "traefik.enable=true"
>         - "traefik.http.routers.inventario.rule=Host(`inventario.espaciomemoria.lan`)"
>         - "traefik.http.routers.inventario.entrypoints=http"
>         - "traefik.http.services.inventario.loadbalancer.server.port=80"
> networks:
>   traefik-public:
>     external: true
>   datos:
>     external: true

(Note: in my original file the deploy: block had slipped out of the inventario service; it must be indented under it as shown, otherwise the Traefik labels are never applied.)

When entering http://inventario.espaciomemoria.lan the browser shows:

inventario.espaciomemoria.lan refused to connect.
Try:

Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED

Environment variables reset if calling docker via sudo

@tiangolo the main landing page has the following note about how to call docker commands using sudo:

Note: If you are not a root user, you might need to add sudo to these commands. The shell will tell you when you don't have enough permissions.

What it doesn't say is that when you run commands via sudo, the environment is reset (at least on Ubuntu), so the environment variable substitutions in traefik.yml are silently replaced with blanks. Unfortunately, this doesn't cause any obvious error when deploying the stack, so it was hard for me to get to the bottom of what was wrong.

This can be fixed by just adding the -E flag to the sudo command to pass the environment variables:

sudo -E docker stack deploy -c traefik-host.yml traefik-consul

worked for me.
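The effect is easy to reproduce without Docker: sudo's default env_reset behaves much like launching the command with a scrubbed environment (EMAIL here is just one of the variables the Traefik guide exports):

```shell
export EMAIL=admin@example.com

# A normal child shell inherits the exported variable
sh -c 'echo "inherited: ${EMAIL:-<unset>}"'

# A scrubbed environment (similar to sudo with env_reset) does not
env -i sh -c 'echo "scrubbed: ${EMAIL:-<unset>}"'
```

Besides sudo -E, another option is sudo's --preserve-env=VAR form to pass only the specific variables the stack file needs.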

I think it would be a good idea to add this to the warning there (it would have saved me a couple hours of frustration). I'll make a pull request with some suggested phrasing but feel free to ignore or place the warning elsewhere as you see fit.

Adding additional services to swarmprom and Prometheus

I have successfully got the stack up and running, and I've added some additional services to it:

docker stack deploy -c service1.yml service1

This service publishes metrics which can be scraped by Prometheus.

Is the process of adding services outside of those defined in swarmprom.yml documented anywhere?

Has anybody tried?
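What I'm currently trying is adding a scrape job to the Prometheus configuration that swarmprom ships. A sketch (the job name, port, and the tasks.service1 DNS name come from my own service; tasks.<service> is the Swarm DNS entry that resolves to every task's IP):

```yaml
scrape_configs:
  - job_name: 'service1'
    dns_sd_configs:
      - names: ['tasks.service1']
        type: 'A'
        port: 8080
```

This assumes service1 is attached to the same network as Prometheus so the tasks. name resolves.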
