Docker containers running Alpine Linux and s6 for process management. Solid, reliable containers.

License: MIT License


docker-alpine's Introduction

docker-alpine

Highly configurable Docker images running Alpine Linux and s6 for process management.

Goals

This project has the following goals:

  • To produce small Docker images.
  • To provide Docker images that are easily configurable.
  • To provide Docker images that are highly stable.
  • To quickly enable a microservices architecture using Docker.

To meet these goals, we're using:

  • Alpine Linux (a very small but capable Linux distribution).
  • s6 for process management, giving us images that are both easily configurable and highly stable.

Docker and microservices

Using Docker makes your infrastructure and environment consistent, testable, scalable and repeatable.

What are microservices?

Microservices are isolated components (of a whole) that do one thing and do it well. These microservices are pulled together to provide a complete platform. For example, each of the following microservices would run in its own Docker container:

  • A stateless web application running in Node.js.
  • Session data for the Node.js web application provided by Redis.
  • MongoDB as the database for the Node.js web application.
  • Nginx web server proxying requests to Node.js.

How do they talk to each other?

That's called service discovery. We solve that problem using Consul (a distributed service discovery tool with built-in health checks). Each container advertises its service (IP and port) via Consul.

Each container with a service connects to a member of the Consul cluster. Consul then talks to all members to enable service discovery via HTTP and DNS.

A good example is a container with Nginx that will query Consul for the IP address of the Node.js container so that it can proxy an incoming request through to the Node.js application.
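
For instance, a minimal sketch of the DNS lookup such a proxy might perform (the service name nodejs and the agent address are illustrative; Consul's DNS interface conventionally listens on port 8600):

# ask the local Consul agent for healthy instances of a "nodejs" service
dig +short -p 8600 @127.0.0.1 nodejs.service.consul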

What about scalability?

Setting up your infrastructure using microservices makes scalability a breeze.

There is an example showing Nginx load balancing incoming requests to multiple Node.js containers.
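
With docker-compose, scaling out is a one-liner; a hedged sketch, assuming the Node.js service is named nodejs in docker-compose.yml:

# run three Node.js containers behind the Nginx load balancer
docker-compose scale nodejs=3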

Image design

These images are used heavily in production by a number of different companies. They've been designed as great starting points to easily customize and use within your own platforms.

Core components

The following are common to all images within this repository.

Alpine Linux

Alpine Linux provides the foundation for all images in this repository. It's small, fast and perfect for Docker.

s6

s6 is a Unix service supervisor (much like runit and supervisord, only better).

While it's true that Docker containers should have one clear focus, more often than not, you'll need to run multiple processes within the container.

Consul is a great example: you need both the Consul agent (to join a Consul cluster) and the primary service, such as Nginx or Node.js. Log management is another key consideration for which an additional process might need to run.

s6-overlay

s6-overlay is s6, Dockerized! It makes working with Docker and process management via s6 super easy. It also features some really nice extras, such as container initialization and finalization stages (run custom scripts during these phases to set up services or tidy up after crashes).

go-dnsmasq

go-dnsmasq has been added into the mix to support the DNS search keyword, which doesn't come as standard in Alpine Linux.

These images work well with the embedded DNS service that comes with Docker 1.10+.

Configuration

Configuration is a key step within a Docker-based microservices architecture. You must configure your services to interact with each other.

All Consul-based images come with consul-template for easy configuration based on changes within Consul.

All non-Consul-based images come with confd.

Consul and service discovery

All Consul-related containers have been designed for use in both production and development.

These containers have been configured for zero-conf Consul bootstrapping. To achieve this, docker-engine v1.10+ is required, along with a docker-compose.yml file written in the version 2 format (see the example).

To achieve zero-conf Consul bootstrapping, Docker's new embedded DNS server is used within each container to find the IP of a consul container (note: there is a requirement that all Consul server services in docker-compose.yml are called consul for this to work).

Note: this can also work in older versions of Docker (using service links with the name consul).
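
A minimal sketch of such a docker-compose.yml (the version 2 format and the consul service name are the requirements described above; the other service is illustrative):

version: '2'
services:
  # must be named "consul" so other containers can find it via Docker's embedded DNS
  consul:
    image: smebberson/alpine-consul
  web:
    image: smebberson/alpine-consul-nodejs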

Crashes and restarts

The Consul-based images in this repository have been designed to meet the following requirements:

Containers running Consul in server mode:

  • When Consul dies: kill the container immediately.
  • When Consul starts: remove all data associated with Consul so everything starts afresh.

Containers running Consul in agent mode:

  • When Consul dies: immediately restart it using s6.
  • When Consul starts: it will use the Consul data previously created to quickly rejoin a cluster.
  • When the primary service dies: kill the container immediately.

We've found this to be the most consistent way to effectively run Consul in production. It also supports the development workflow in which you press CTRL+C to stop a running container.

Examples

Most images have an example in the examples folder.

There is also a complete example which demonstrates:

  • Zero-configuration Consul bootstrapping.
  • Stateless application.
  • Load balanced Nginx proxies.
  • Scaled web application.

You can read more about the example.

Images

The following describes the images that are available and the inheritance chain.

Images without Consul

└─ alpine-base
   ├─ alpine-apache
   ├─ alpine-confd
   |  └─ alpine-rabbitmq
   ├─ alpine-nginx
   |  └─ alpine-nginx-nodejs
   ├─ alpine-nodejs
   └─ alpine-redis

Images with Consul

└─ alpine-base
   └─ alpine-consul
      ├─ alpine-consul-ui
      └─ alpine-consul-base
         ├─ alpine-consul-apache
         ├─ alpine-consul-nodejs
         ├─ alpine-consul-nginx
         |  └─ alpine-consul-nginx-nodejs
         └─ alpine-consul-redis

alpine-base

This image is the base for all containers. All other Docker images within this repository inherit from this image.

Latest version is 3.3.0, or latest.

alpine-apache

This image includes Apache HTTPD with a very basic configuration.

Latest version is 2.0.1, or latest.

alpine-confd

This image adds confd. It should be seen as a base image suitable for heavy customisation.

Latest version is 3.1.0, or latest.

alpine-consul

This image adds Consul.

If you want to create a Docker image to run as a Consul agent in server mode (i.e. part of a cluster) start with this image.

Latest version is 3.2.0, or latest.

alpine-consul-base

This image inherits from alpine-consul and is designed as a base image for other Docker images which will join to a Consul cluster.

If you want to create a Docker image to advertise a service in Consul start with this image.

Latest version is 4.2.0, or latest.

alpine-consul-apache

This image is designed to run Apache within the context of service discovery (via Consul).

It is suited to running Apache as a proxy to another Docker container.

Latest version is 2.0.0, or latest.

alpine-consul-nodejs

This image is designed to run a Node.js application within the context of service discovery (via Consul).

Latest version is 5.11.0, or latest.

alpine-consul-nginx

This image is designed to run Nginx within the context of service discovery (via Consul).

It's suited to running an Nginx proxy, or load balancing with Nginx to other containers.

Latest version is 3.2.0, or latest.

alpine-consul-nginx-nodejs

This image is designed to run both Nginx and Node.js in the same container. It's a shortcut to running alpine-consul-nodejs behind a proxy provided by alpine-consul-nginx.

Latest version is 2.0.0, or latest.

alpine-consul-redis

This image has been designed to run Redis within the context of service discovery (via Consul).

Latest version is 2.1.0, or latest.

alpine-consul-ui

This image has been designed to connect to a Consul cluster (from alpine-consul) and make the Consul admin UI accessible.

Latest version is 2.2.0, or latest.

alpine-nginx

This image includes Nginx with a very basic setup.

Latest version is 3.0.0, or latest.

alpine-nginx-nodejs

This image includes both Nginx and Node.js. It's suitable if you want to have Node.js perform configuration for Nginx.

Latest version is 4.4.0, or latest.

alpine-nodejs

This image includes Node.js.

Latest version is 8.15.0, or latest.

alpine-rabbitmq

This image inherits from alpine-confd, and includes RabbitMQ with a basic setup.

Latest version is 2.1.1, or latest.

alpine-redis

This image includes Redis.

Latest version is 1.0.0, or latest.

Usage

All containers come with the following features (most of them come from s6-overlay):

  • User hooks for container initialization, ownership and permissions management, and finalization tasks.
  • Easily drop privileges before starting a service.
  • Start and finish scripts for service management.
  • Environment management.

Using CMD

As a general rule, these images eschew the CMD option that Dockerfile provides. These images are oriented towards running multiple services.

However, alpine-base is a minimalist image that can be used with CMD. Your command can be provided in the Dockerfile or at runtime and it will be run under the s6 supervisor. When the command fails or exits, the container will also exit (falling back to Docker restart rules or other platform controls to manage container availability).

For example:

vagrant@ubuntu-14:/vagrant$ docker run -it smebberson/alpine-base with-contenv sh
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 30-resolver: executing...
[cont-init.d] 30-resolver: exited 0.
[cont-init.d] 40-resolver: executing...
[cont-init.d] 40-resolver: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
/ # env
HOSTNAME=63d07768fdc5
SHLVL=1
HOME=/root
GO_DNSMASQ_LOG_FILE=/var/log/go-dnsmasq/go-dnsmasq.log
GODNSMASQ_VERSION=1.0.5
TERM=xterm
S6_OVERLAY_VERSION=v1.19.1.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
with-contenv exited 0
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
vagrant@ubuntu-14:/vagrant$

Using services

s6 provides us a clean and simple way to run services within these images. For example, alpine-consul runs both go-dnsmasq and Consul via services, and alpine-consul-nginx adds Nginx, also via services.

To write a service script, create a directory named after your service in the /etc/services.d/ directory. In that directory, create an executable script named run. In your run script, start your service in the foreground (make sure it doesn't daemonize itself, or s6 will attempt to start it again). For example:

#!/usr/bin/with-contenv sh

# /etc/services.d/nginx/run

nginx -g "daemon off;"

s6 will start your service and monitor it. Should your service exit or fail, s6 will execute your run script again to restart the service.

Finish hooks

s6 also provides a hook to run tasks between a service exiting or failing and being restarted. Simply create an executable script named finish next to your run script.

This script should execute quickly. It has about 3 seconds to do what it needs before it is killed by s6 and the run script is executed again. You can also perform any required setup tasks in your run script before you start your long-running process.
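
For example, a minimal sketch of a finish script (the log path is illustrative); s6 passes the service's exit code as the first argument:

#!/usr/bin/with-contenv sh

# record each restart before s6 relaunches the service
echo "$(date) nginx exited with code ${1}" >> /var/log/nginx-restarts.log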

Bringing down a container

If you'd like to bring down a container if a particular service restarts, use this as the contents of a finish script:

#!/usr/bin/execlineb -S1

# only tell s6 to bring down the entire container, if it isn't already doing so
# http://skarnet.org/software/s6/s6-supervise.html
if { s6-test ${1} -ne 0 }
if { s6-test ${1} -ne 256 }

s6-svscanctl -t /var/run/s6/services

Customization

The docker-alpine images are highly customizable. You can customize them with the following:

  • Setting ENV variables to change settings.
  • Overwriting a configuration file (not all images have these) to alter an application's behaviour.
  • Overwriting run scripts to change an application's start process.

They've also been designed so that it's easy to use these images in a non-standard context such as Docker Cloud and Rancher.

The following sections cover customization common to all images. Review the documentation for each image for image-specific customization options.

/usr/bin/host-ip

This file is used in other scripts to determine the IP of the container in which a script is running.

Overwrite it as required to produce a correct value for your environment. Review the file itself for output requirements.

If your environment uses an overlay network (Docker Cloud, Rancher), you should change this file.
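
For example, a hedged sketch of a host-ip override for a Rancher environment (the metadata URL follows Rancher's convention):

#!/usr/bin/with-contenv sh

# ask Rancher's metadata service for this container's overlay-network IP
curl -s http://rancher-metadata/latest/self/container/primary_ip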

/usr/bin/container-find

This file (Consul-only containers) is used within other scripts to determine the IP address(es) of other containers by name.

By default, it uses dig to DNS query for the IP address of a container by name. container-find will DNS query for consul (the default). container-find static will DNS query for static. container-find static.service.consul will hand the DNS query off to Consul.
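
In other words, the default behaviour is roughly equivalent to this sketch (not the exact script):

#!/usr/bin/with-contenv sh

# resolve a container name via DNS, defaulting to "consul"
dig +short "${1:-consul}"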

Versioning

Each image has its own version and will be updated independently. All versions follow semver. If a software package or the operating system the image uses is updated, the level of update will be reflected in the new version number, for example:

  • if Alpine Linux is upgraded from 3.2 to 3.3, then the image version will receive a minor increment
  • if Alpine Linux is upgraded from 3.3 to 4, then the image version will receive a major increment
  • if s6-overlay is upgraded to the latest patch upgrade, and nginx is upgraded to the latest minor upgrade, then the image version will receive a minor increment

FAQ

These images are a little different from your standard Docker images. The following should explain most of the differences.

DNS search

By default, Alpine Linux <= v3.4 doesn't support DNS search. This has been enabled through the use of go-dnsmasq.

Where is Bash?

These images don't contain Bash by default; they use sh.

You can either write your scripts so that they're POSIX-compliant, or install Bash with apk add --no-cache bash.

Where is my environment?

By default, sh doesn't load the container environment. If you need access to environment variables in scripts or when using docker exec, you should use:

  • #!/usr/bin/with-contenv sh
  • docker exec -it complete_consul_1 with-contenv sh

Setting environment variables

There are two ways to set environment variables: in your Dockerfile, as with any other container; or from within a custom script using set-contenv, for example set-contenv VAR_NAME var_value.
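
For example, a hedged sketch of a custom initialization script (the file name and variable name are illustrative):

#!/usr/bin/with-contenv sh

# /etc/cont-init.d/50-container-ip: record this container's IP in the
# container environment so later services can read it
set-contenv CONTAINER_IP "$(host-ip)"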

Further reading

If you'd like to read more about the operating environment of these images, start with the sections below:

Changelog

Read the CHANGELOG.md for a chronological record of events. Each image also has its own CHANGELOG.

Contributing

We love contributors. Read CONTRIBUTING.md for more information on how to contribute to this project.

Contributors

You can view more information about the contributors here.

docker-alpine's People

Contributors

abh, emmetog, gaff, gitter-badger, matthewvalimaki, ncornag, sandytrinh, smebberson

docker-alpine's Issues

consul-template stopped after consul leads to errors

If consul-template in alpine-consul-base is started, the process will be killed after consul has already gone away, leading to errors such as:

[ERR] (view) "key(ssl/crt)" store key: error fetching: Get http://127.0.0.1:8500/v1/kv/ssl/crt?stale=&wait=60000ms: dial tcp 127.0.0.1:8500: getsockopt: connection refused

Also it seems that s6-svc -h /var/run/s6/services/consul-template does not kill the old process. Actually s6-svc -k has no effect at all. I couldn't kill the process no matter what flag I used.

If I manually kill the consul-template process, the error above goes away. There is consul-template -pid-file, which I tried using, and then having /etc/cont-finish.d/#consul-template (a weird name, to have it execute before 00-consul) run kill -9 `cat /var/run/consul-template.pid`, but apparently it doesn't die quickly enough, as sometimes the error above would still show up.

Any ideas?

Support .consul in go-dnsmasq stubzones

Related to #24.

Could we add stubzones support in alpine-consul?
According to https://github.com/janeczku/go-dnsmasq, the --stubzones option lets you "use different nameservers for specific domains" (format: domain[,domain]/host[:port]), so I'm thinking the following might work for https://github.com/smebberson/docker-alpine/blob/master/alpine-base/root/etc/services.d/resolver/run:

exec go-dnsmasq --default-resolver --append-search-domains --hostsfile=/etc/hosts --stubzones=.consul/127.0.0.1:8600 >> $GO_DNSMASQ_LOG_FILE 2>&1

s6 service start up improvements

Images based on alpine-consul-base do not seem to specify s6 start order, potentially leading to unwanted behavior. If I have understood s6 correctly, there is no actual support for setting an order, but there is a concept of dependencies; see Readiness notification and dependency management and, more specifically, The s6-svwait program.

Example use case:

Scenario: I want all services to be discoverable through Consul server.
Given I use `alpine-consul` for service discovery
And `alpine-consul-nginx` for web server
When I start `alpine-consul-nginx`
Then Service `nginx` should be discoverable and `alive` via Consul

However, as it is today, there's no guarantee that nginx is indeed up before the Consul agent reports the "nginx" service. While the default Consul configuration for nginx does have a health check against /ping, nginx might not even start up. Therefore:

  1. Reporting a service early is, I think, bad practice (why report it if it can never be used).
  2. /ping might not be available.

Regarding 1, I do acknowledge that not reporting a service can lead to a situation where health monitoring breaks if it depends on information provided by Consul. Perhaps, rather than the exit in my example below, we could write a log entry and proceed with exec consul as before.

Change proposals:

  1. Have the Consul s6 run script wait for nginx (node, etc.) to become available. If it does not become available, never launch Consul.

Note: the example below does not contain -t for a timeout; that should probably be set to around 10s for nginx.

# wait for nginx to be up
s6-svwait -u /var/run/s6/services/nginx/
status=$?

if [ $status -ne 0 ]; then
  exit $status
fi

exec consul ...
  2. Replace the nginx /ping health check with the s6-svstat program: s6-svstat /var/run/s6/services/nginx (see its exit codes).

Project stagnated (we're getting back on track)

@smebberson I'm concerned that this project has come to a halt. We have outdated base images, old ca-certificates, PRs etc. I do not want to fork either.

I propose the creation of an organization, or something similar, which would allow me to merge, tag and release (I assume Docker Hub is updated automatically with releases). You would still be the owner.

To further improve common goals and communication I would:

  • Have a road map and change log
  • Do beta releases
  • Implement tests

Incorrect shell set for consul and rabbitmq users

New users are created for alpine-consul and alpine-rabbitmq. While these accounts do work, su - consul resulted in su: can't execute ':60:60:mysql:/var/lib/mysq': No such file or directory - i.e. the wrong shell. So the new users defaulted to mysql for whatever reason. While I do not want to su - consul for any legitimate reason, having a wrong shell is still a problem. This can be easily fixed by adding -s /bin/sh to the adduser command.
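
A sketch of the fix (the other adduser flags used by the images may differ):

# create the consul user with an explicit shell
adduser -D -s /bin/sh consul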

Always upgrade packages

I recommend that apk upgrade --update be executed in every image. For example, libcrypto, libssl and bind are out of date. While security is the responsibility of the user, providing the latest packages (at build time, at least) would be good practice.
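
A hedged sketch of what that would look like in a Dockerfile:

# upgrade all installed packages at build time, then clear the apk cache
RUN apk upgrade --update \
    && rm -rf /var/cache/apk/*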

Also vaguely related: "Clair is an open source project for the static analysis of vulnerabilities in appc and docker containers." quay/clair#12.

alpine-apache doesn't start, could not create PID file

I use the base image alpine-apache and additionally install php-apache2.

  RUN apk add --update \
          php-apache2 \
          php-sqlite3 \
          php-xml \
          php-curl \
          php-dom \
          php-iconv \
          php-pdo_sqlite \
          php-json \
          php-ctype \
        && rm -rf /var/cache/apk/*

When I try to start the container I get the following errors

Jän 08 22:56:15 <hostname> docker[19307]: [Sun Jan 08 21:56:15.136430 2017] [core:error] [pid 201] (2)No such file or directory: AH00099: could not create /run/apache2/http
Jän 08 22:56:15 <hostname> docker[19307]: [Sun Jan 08 21:56:15.136455 2017] [core:error] [pid 201] AH00100: httpd: could not log pid to file /run/apache2/httpd.pid

Any ideas where this could come from? In the beginning my containers were working; this only happened after I rebuilt them recently (I think around two weeks ago).

consul cluster bootstrap not working in alpine-consul:3.1.1

When spinning up the complete example in examples/complete, the Consul cluster fails to bootstrap with the latest build (3.1.1). If I change the version of alpine-consul back to the previous version, alpine-consul:3.1.0, things appear to work.

It looks like the DNS query is not working in the latest build.

consul_1 | * Failed to resolve ;;: lookup ;;: invalid domain name
consul_1 | 2017/01/19 19:40:11 [WARN] agent: Join failed: , retrying in 30s
consul_1 | 2017/01/19 19:40:14 [ERR] agent: coordinate update error: No cluster leader
consul_3 | 2017/01/19 19:40:17 [ERR] agent: failed to sync remote state: No cluster leader
consul_2 | 2017/01/19 19:40:18 [ERR] agent: coordinate update error: No cluster leader
consul_3 | 2017/01/19 19:40:23 [ERR] agent: coordinate update error: No cluster leader
consul_2 | 2017/01/19 19:40:34 [ERR] agent: failed to sync remote state: No cluster leader
consul_1 | 2017/01/19 19:40:41 [INFO] agent: (LAN) joining: [;;]
consul_1 | 2017/01/19 19:40:41 [WARN] memberlist: Failed to resolve ;;: lookup ;;: invalid domain name
consul_1 | 2017/01/19 19:40:41 [INFO] agent: (LAN) joined: 0 Err: 1 error(s) occurred:

Something else looks suspect: in the logs for go-dnsmasq it looks like the service is being continually restarted.

/var/log/go-dnsmasq/go-dnsmasq.log

time="2017-01-19T19:38:42Z" level=info msg="Starting go-dnsmasq server 1.0.7"
time="2017-01-19T19:38:42Z" level=info msg="Nameservers: [127.0.0.11:53]"
time="2017-01-19T19:38:42Z" level=info msg="Setting host nameserver to 127.0.0.1"
time="2017-01-19T19:38:42Z" level=info msg="Ready for queries on tcp://127.0.0.1:53"
time="2017-01-19T19:38:42Z" level=info msg="Ready for queries on udp://127.0.0.1:53"
time="2017-01-19T19:38:42Z" level=fatal msg="listen udp 127.0.0.1:53: bind: permission denied"
time="2017-01-19T19:38:43Z" level=info msg="Starting go-dnsmasq server 1.0.7"
time="2017-01-19T19:38:43Z" level=info msg="Nameservers: [127.0.0.11:53]"
time="2017-01-19T19:38:43Z" level=info msg="Setting host nameserver to 127.0.0.1"
time="2017-01-19T19:38:43Z" level=info msg="Ready for queries on tcp://127.0.0.1:53"
time="2017-01-19T19:38:43Z" level=info msg="Ready for queries on udp://127.0.0.1:53"
time="2017-01-19T19:38:43Z" level=fatal msg="listen udp 127.0.0.1:53: bind: permission denied"

Testing and automated releases

At present releases work as follows:

  • Develop a fix/feature.
  • Test it locally.
  • Update image README, VERSIONS and project README.
  • Version bump that image specifically.
  • Tag the Git repository with the image name and version (i.e. alpine-redis-v1.0.0).
  • Manually build the latest tag on Docker Hub.
  • Manually create and build the 1.0.0 (or whatever release it is) tag on Docker Hub.

Note: I've turned off "automatic builds when a push happens" in Docker Hub. This is because when I push for, say, the alpine-redis image, the alpine-nodejs image also builds. This unnecessarily builds containers I don't intend to build, and it also updates the timestamp on the :latest tag for other builds, which is really frustrating.

Another note: because of the close inheritance of some of the images (which I think is a good thing), you then need to go through and test, bump version numbers, tag and release downstream containers on Docker Hub to take advantage of any updates. If you update alpine-base, it's a big job! I haven't been doing this in one commit either, so that I can keep the code in master as close as possible to the :latest release of any container. This makes things time consuming and makes it hard to pull in great PRs such as #29.

It's not the greatest process at all.

I've thought of creating a build server and using Docker Hub's web hooks to automate the process a little more, but I just haven't had the time to implement anything and work through the issues above.

Also, I haven't seen much in the way of how to test and verify the state of a Docker image. It would be great to have them running through Travis CI. Especially when updating alpine-base, for example, you could see at a glance what you've broken downstream.

I'd like to create a discussion around this to see if we can streamline these processes a little.

ping: permission denied (are you root?)

Hello!
Is this normal behaviour? After creating a container with the command
docker run -d smebberson/docker-alpine
and then getting a shell with
docker exec -it [container_id] sh
and sending ping 172.17.0.1, I get an error message:

ping: permission denied (are you root?)

How can I solve this problem?

P.S. docker version 1.12.1

thank you

Does go-dnsmasq need root?

Does go-dnsmasq need to be run as root? I do get that it needs to be able to modify /etc/resolv.conf (currently -rw-r--r-- 1 root root 239 Mar 14 21:57), but couldn't that be taken care of with a new group?

I can work on PR if root is not required.

docker-alpine redis how to get into bash

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
714ecf80a71b smebberson/alpine-redis "/init" 39 minutes ago Up 39 minutes 6379/tcp redis
$ docker exec -it redis bash
exec: "bash": executable file not found in $PATH

Standardised approach to logging

We need to move toward a standardised approach to logging. Some of the images do it differently at the moment.

The main factor is: should everything be splurted to stdout and stderr? We need a way to turn it off, however.

I think, in all instances, logs should also go to file, so that any file-based aggregation can take place.

Any other thoughts?

Consul: replace custom HTTP script check with built-in http/tcp check

With updated Consul we can now use the built-in http and tcp checks; see https://www.consul.io/docs/agent/checks.html. While using custom "pings" is certainly not wrong, it is customization that could and should be avoided.

Example of HTTP check:

{
  "check": {
    "id": "api",
    "name": "HTTP API on port 5000",
    "http": "http://localhost:5000/health",
    "interval": "10s"
  }
}

Below are images with an HTTP check. Perhaps we could just use http://localhost/, as that typically works out of the box?

su user -c fails

Using smebberson/alpine-consul-base (latest) as a base and then executing su build -c "" leads to the output below.
I've also done apk upgrade --update, which upgraded BusyBox to busybox-1.24.2-r9.

So here's everything

/ # su build -c ""
halt: unrecognized option: c
BusyBox v1.24.2 (2016-06-23 08:49:16 GMT) multi-call binary.

Usage: halt [-d DELAY] [-n] [-f]

Halt the system

        -d SEC  Delay interval
        -n      Do not sync
        -f      Force (don't go through init)

I'm not quite sure if this is a smebberson/docker-alpine problem or a problem with BusyBox. It didn't use to happen, but now, after using the latest base images, I'm getting this and cannot seem to resolve it.

Roadmap and changelog

References #36.

Roadmap
See Wikipedia explanation.

The purpose of a roadmap is to set goals and expectations. A roadmap sets focus and drives excitement. A roadmap does not dictate every single change, but rather lists known immediate upcoming changes and tries to define more distant ones - a wishlist in its simplest form. After a release, the roadmap should map almost word for word onto the changelog.

The current state of the docker-alpine project is that there is no roadmap. Releases are sporadic, which is fine and understandable considering this is a community project, but more problematic is what is needed for a release to happen. While a release could happen after every single change (especially with automation), setting goals and expectations might be preferable. Having a roadmap should also relieve pressure on the release master, as there would be a clearer definition of when to release.

I propose that GitHub's milestones feature be used. All issues are to be triaged and then moved to a milestone, have comments requested, or be closed. This process should be as lightweight and quick as possible.

Changelog
Example that I like: https://github.com/hashicorp/consul/blob/master/CHANGELOG.md.

I propose a simple changelog based on the roadmap (or, more specifically, milestones). If changes are issues, and issues have meaningful context (so far they have), then I do not see a point in writing out again what was changed, why, etc.

Note: the docker-alpine project, due to its multiple release numbers (one per image), does pose a challenge. In order to simplify versioning, moving to one release number for all images, even if no change is made, might be in order. Then in the changelog we could just note:

NO CHANGE LIST:
alpine-nginx-nodejs
alpine-nginx

I do acknowledge that doing a release with no changes is a curious thing, but I believe the simplification outweighs the negatives.

Node user should be supplied

alpine-consul-nginx-nodejs and alpine-nginx-nodejs should provide a nodejs user. user-consul-nginx-nodejs and user-nginx-nodejs would then both use this user in their s6 run scripts. Right now the Node server is running as root.

Consul join IPs

The current consul-ip script searches for Consul IPs to join and returns only the first one. I think it is a better solution to return more than one; sometimes that first Consul container is down and the other containers cannot join the cluster.
This could be achieved by passing a set of well-known IPs, or just everything found under the consul DNS name.
I made the modifications and it works a lot better in situations where the containers go down and up. I will do a pull request for you to review.

Support storage back ends that do not support extended file attributes

When using a storage back end in docker that does not support extended file attributes, the go-dnsmasq resolver is not able to bind to port 53 (or any other port < 1024).

Basically the line https://github.com/smebberson/docker-alpine/blob/master/alpine-base/Dockerfile#L18 has no effect when such a storage back end (aufs, btrfs - see moby/moby#30557) is used, so DNS fails in the container.

I suggest a simple workaround in https://github.com/smebberson/docker-alpine/blob/master/alpine-base/root/etc/services.d/resolver/run like this:

#!/usr/bin/with-contenv sh

RUNAS="go-dnsmasq"

setcap -v CAP_NET_BIND_SERVICE=+eip /bin/go-dnsmasq
status=$?

if [ $status -ne 0 ];
then
    RUNAS="root"
fi

s6-setuidgid ${RUNAS} go-dnsmasq --default-resolver --ndots "1" --fwd-ndots "0" --hostsfile=/etc/hosts >> $GO_DNSMASQ_LOG_FILE 2>&1

This makes go-dnsmasq run as root (instead of the go-dnsmasq user) if the capability is not set on the binary (which is the case when using a back end that does not support extended file attributes).

Nginx Versions

How would I go about choosing a newer version of nginx using smebberson/alpine-nginx-nodejs?

How to determine the containers IP

With the current method to obtain the container IP, getent hosts $HOSTNAME, or even the new one, dig +short $HOSTNAME, there is no way to use an overlay network if one exists.
And that's the case if you are using tools like Rancher to manage your containers.
With Rancher you normally ask the rancher metadata to get that IP: curl -s http://rancher-metadata/latest/self/container/primary_ip.
I think there should be some flexibility to obtain the IP to allow more use cases for the images.

Somehow related, specifying the dns server to obtain the consul ip also breaks the Rancher dns: dig +short consul @127.0.0.11

Add Nomad

Suggesting that we introduce https://www.nomadproject.io/ into the base images to complement Consul more than anything perhaps.

  • Add alpine-consul-nomad using alpine-consul-base (as server)
  • Add user-consul-nomad (as server)
  • Add user-consul-nomad-agent (as agent)

So exactly like alpine-consul.

I left alpine-nomad out on purpose as I do not see a use-case for it but if it should be added then it's pretty much copy & paste minus consul.

My real world use-case is that I need to execute something on servers but I do not have direct access to those servers and I have no idea which server is available.

If Nomad sounds good then I can work on the PR.

Unable to run locally built alpine-base

Steps:

  1. Build alpine base: docker build -t aneeshd16/alpine-base .
  2. Run alpine-base: docker run aneeshd16/alpine-base
    Output:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 30-resolver: executing...
: No such file or directory sh
[cont-init.d] 30-resolver: exited 111.
[cont-init.d] 40-resolver: executing...
: No such file or directory sh
[cont-init.d] 40-resolver: exited 111.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
<continues>

Compare this to docker run smebberson/alpine-base

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 30-resolver: executing...
[cont-init.d] 30-resolver: exited 0.
[cont-init.d] 40-resolver: executing...
[cont-init.d] 40-resolver: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

Thanks in advance!

Update to latest Consul

Consul 0.7.0 is out now and we should update to it asap :) I went through the changes, and while it does include breaking changes and new features, I did not see anything that affects this project.

My biggest pain point, which should be resolved now, was the inability to elect a cluster leader after failures.

I also recommend we resolve #50 by merging those changes in at the same time.

Add support for consul-template

I would recommend adding consul-template to alpine-consul-base, as it is an integral part of Consul itself.

  • Templates should be placed in /etc/consul-template/templates/.
  • A default configuration file should be placed in /etc/consul-template/conf.d/ defaulting to consul on 127.0.0.1:8500.
  • Actual template configurations should be placed in /etc/consul-template/conf.d/.
  • An s6 service should be provided, but with an if-clause to check whether /etc/consul-template/conf.d/ contains template configurations (see the sketch below); there's no need to fire up a service that does nothing.

This setup should provide a 100% automatic way to enable consul-template when proper configuration exists.
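
A hedged sketch of that if-clause as an s6 run script (paths follow the proposal above; the consul-template invocation is illustrative):

#!/usr/bin/with-contenv sh

# if there are no template configurations, take this service down
# so s6 doesn't restart it in a loop
if [ -z "$(ls -A /etc/consul-template/conf.d/ 2>/dev/null)" ]; then
    exec s6-svc -d /var/run/s6/services/consul-template
fi

exec consul-template -config=/etc/consul-template/conf.d/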

user-consul-nginx-nodejs has nginx default.conf which should be updated to utilize consul-template with s6 service definition.

If this proposition is supported I can work on the pull request.

consul dns error

Load average: 0.00 0.01 0.00 1/210 1697
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
260 242 go-dnsma S 10576 1% 0 0% go-dnsmasq --default-resolver --ndots 1 --fwd-ndots 0 --stubzones=.consul/172.17.0.2:8600 --hostsfile=/etc/hosts
590 0 root S 6208 0% 3 0% bash
242 230 root S 1516 0% 2 0% sh ./run
1697 590 root R 1516 0% 0 0% top
259 243 root S 1512 0% 0 0% sh /usr/bin/consul-available
243 231 root S 1512 0% 3 0% sh ./run
1696 259 root S 1508 0% 2 0% sleep 1
1 0 root S 192 0% 3 0% s6-svscan -t0 /var/run/s6/services
231 1 root S 192 0% 2 0% s6-supervise consul
31 1 root S 192 0% 3 0% s6-supervise s6-fdholderd
230 1 root S 192 0% 0 0% s6-supervise resolver
bash-4.3# ls *find
container-find  find
bash-4.3# sh -x container-find
+ dig +short consul
bash-4.3# dig +short consul

dig returns no record.

Replace CRLF with LF

Many, if not all, files contain CRLF line endings instead of LF. While not a huge deal, shell scripts are picky about this and currently throw : No such file or directory sh, which can be seen with user-consul-nginx-nodejs. This does not seem to affect operations, but it is a concern, and can be fixed by a simple conversion.

I recommend all files be updated from CRLF to LF.
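
A hedged sketch of such a conversion, run from the repository root (file selection is illustrative):

# strip carriage returns from every file in the working tree
find . -type f -not -path './.git/*' -exec sed -i 's/\r$//' {} +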

alpine-consul-nginx does not build

 ---> Running in 00612f21dd21
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
  nginx-1.10.1-r1:
    breaks: world[nginx=1.8.1-r0]

The problem seems to be the mixture of 3.4 and 3.2. The old repository should be dropped and main should be used instead, I think.

A side note and a question: Nginx has a stream module that needs to be built in for it to be supported. The reason this matters is if you use Nginx for routing (i.e. a database TCP connection from dc1 to dc2). I have a custom image where I use Nginx from aports to build it in. Would there be interest in changing alpine-nginx and alpine-consul-nginx to this custom build to support Nginx stream?

Incorrect s6 service restart instructions

At least the README.md for alpine-nginx instructs the following to restart nginx:

s6-svc -h /etc/services.d/nginx

However this results in:

s6-svc: fatal: unable to control /etc/services.d/nginx: No such file or directory

The correct command to restart nginx is:

s6-svc -h /var/run/s6/services/nginx/

This results in the nginx worker process being restarted.

execlineb complaint when terminating container

When running one of the examples, if I hit CTRL+C to terminate, I see:

==> Caught signal: terminated
==> Gracefully shutting down agent...
    2016/01/06 20:45:30 [INFO] consul: client starting leave
execlineb: usage: execlineb [ -p | -P | -S nmin ] [ -q | -w | -W ] [ -c commandline ] script args
[cont-finish.d] executing container finish scripts...

Perhaps related, but at the very end I see an s6-svscanctl error:

[s6-finish] sending all processes the TERM signal.
    2016/01/06 20:45:30 [ERR] dns: error starting tcp server: accept tcp 127.0.0.1:8600: use of closed network connection
    2016/01/06 20:45:30 [INFO] agent: requesting shutdown
    2016/01/06 20:45:30 [INFO] consul: shutting down client
    2016/01/06 20:45:30 [WARN] serf: Shutdown without a Leave
    2016/01/06 20:45:30 [INFO] agent: shutdown complete
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
[s6-finish] sending all processes the KILL signal and exiting.

alpine-base + openjdk

I'd like to see an openjdk-based image using alpine-base.

My team works with the Play Framework, which is perfectly capable of running in the standard "Docker way" - except that the openjdk/alpine image doesn't have the s6 overlay, so I believe it will suffer from the resolv.conf issue and could benefit from the go-dnsmasq functionality you have here.

If there's interest, I will fork and submit a PR.

alpine-nodejs: Dynamic loading not supported

I'm compiling a native library for OpenZWave and using the node package node-openzwave-shared. The source seems to compile fine, but when the node package runs process.dlopen() it throws an error: Dynamic loading not supported. This is apparently due to using a static binary for Node.js (see this comment).

The suggestion in the linked comment is to remove the --fully-static flag. I'm currently using alpine-nodejs but would like to move to the Consul variants at a later stage. Any chance of removing the flag, or of having a variant that allows this?

The issue seems to be described on the nodejs wiki too.

Consul service not running in smebberson/alpine-consul-nodejs

Sorry to open an issue on this but I have been racking my brain and googling and can't figure out what I need to change to get the consul service working.

The documentation states: "This container has been setup to automatically connect to a Consul cluster, created with a service name of consul."

I'm trying to bring this up in a standalone vagrant vm:

docker run -d --name consul-test smebberson/alpine-consul-nodejs
...
Status: Downloaded newer image for smebberson/alpine-consul-nodejs:latest
b19f29cc949e2f65cefbb91509602d124bd5abcc3dcd0e03fb6a595cb461c287

docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b19f29cc949e smebberson/alpine-consul-nodejs "/init" 49 seconds ago Up 48 seconds 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp consul-test

docker exec -t -i b19f29cc949e sh

/ # dig @localhost google.com
google.com. 299 IN A 216.58.218.206

/ # dig @localhost consul.service.consul
;consul.service.consul. IN A

The only log file with anything in it is /var/log/go-dnsmasq/go-dnsmasq.log
time="2017-01-16T22:08:14Z" level=info msg="Starting go-dnsmasq server 1.0.7"
time="2017-01-16T22:08:14Z" level=info msg="Nameservers: [10.0.2.15:53]"
time="2017-01-16T22:08:14Z" level=info msg="Setting host nameserver to 127.0.0.1"
time="2017-01-16T22:08:14Z" level=info msg="Ready for queries on tcp://127.0.0.1:53"
time="2017-01-16T22:08:14Z" level=info msg="Ready for queries on udp://127.0.0.1:53"
time="2017-01-16T22:09:45Z" level=error msg="[64612] Error looking up literal qname 'consul.service.consul.' with upstreams: read udp 172.17.0.4:39666->172.17.0.4:8600: read: connection refused"
time="2017-01-16T22:09:45Z" level=error msg="[65000] Error looking up literal qname 'consul.service.consul.' with upstreams: read udp 172.17.0.4:44608->172.17.0.4:8600: read: connection refused"

The only thing that appears to be listening is DNS:
/ # netstat -ant |grep LISTEN
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN

What I'm hoping to accomplish is that the container boots and connects to consul.service.ourdomain.consul (Consul running on different servers), but every time I try to query that domain it tries to connect to the Consul service that should be running locally.

Any advice would be appreciated since I'm not seeing anything in the logs.

Consul advertise and client address

If you have multiple IPs in your container, it's necessary to add -advertise $BIND -client 0.0.0.0 to the run script to be able to connect to it.
This modification should not cause any side effects if the container has only one address, but I think you should test it just in case.

alpine + consul + openjdk image

Hi,

Similar to @rbellamy in issue #62, I'm keen to see an alpine + consul + openjdk image for running JVM-based microservices.

I think the s6 overlay + Consul plumbing in these images is really neat and I hope these images have a long-term future. I would be willing to help out with openjdk image maintenance.

I'll raise a PR that basically takes @rbellamy's openjdk PR and layers it on top of the alpine-consul-base image.

Discrepancies between Dockerfiles

alpine-nginx specifies a specific nginx version while alpine-consul-nginx does not.

Versions should be specified to avoid potential breakage caused by version changes. The latest non-breaking versions available should be used for releases.
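
A hedged sketch of pinning in a Dockerfile (the version shown is taken from the build error above):

# pin nginx to an explicit package version
RUN apk add --update nginx=1.10.1-r1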

It would be nice to note package version changes in the changelog :)
