
timdaman / check_docker


Nagios plugin to check docker containers

License: GNU General Public License v3.0

Python 92.60% Shell 6.32% Dockerfile 1.08%
nagios icinga monitoring nrpe-plugin nrpe nrpe-monitoring docker nagios-plugin

check_docker's Introduction


check_docker

Nagios/NRPE compatible plugins for checking docker based services. Currently there are two Nagios checks:

  • check_docker which checks docker container health
  • check_swarm which checks health of swarm nodes and services

You can use check_docker to check and alert on:

  • memory consumption in absolute units (bytes, kb, mb, gb) and as a percentage (0-100%) of the container limit.
  • CPU usage as a percentage (0-100%) of the container limit.
  • automatic restarts performed by the docker daemon
  • container status, i.e. is it running?
  • health, i.e. are the container's health checks passing?
  • uptime, i.e. is it able to stay running for a long enough time?
  • the presence of a container or containers matching specified names
  • image version, does the running image match that in the remote registry?
  • image age, i.e. when was the image last built?

With check_swarm you can alert:

  • if a node is not joined to a docker swarm
  • if a service is running in a swarm

These checks can communicate with a local docker daemon socket file (default) or with local or remote docker daemons using secure and non-secure TCP connections.

These plugins require Python 3. They are tested on 3.5 and greater but may work on older 3.x versions.

Installation

With pip

pip3 install check_docker
--or--
pip install check_docker

With curl

curl -o /usr/local/bin/check_docker https://raw.githubusercontent.com/timdaman/check_docker/master/check_docker/check_docker.py
curl -o /usr/local/bin/check_swarm https://raw.githubusercontent.com/timdaman/check_docker/master/check_docker/check_swarm.py
chmod a+rx /usr/local/bin/check_docker /usr/local/bin/check_swarm

With wget

wget -O /usr/local/bin/check_docker https://raw.githubusercontent.com/timdaman/check_docker/master/check_docker/check_docker.py
wget -O /usr/local/bin/check_swarm https://raw.githubusercontent.com/timdaman/check_docker/master/check_docker/check_swarm.py
chmod a+rx /usr/local/bin/check_docker /usr/local/bin/check_swarm

check_docker Usage

usage: check_docker.py [-h]
                       [--connection [/<path to>/docker.socket|<ip/host address>:<port>]
                       | --secure-connection [<ip/host address>:<port>]]
                       [--binary_units | --decimal_units] [--timeout TIMEOUT]
                       [--containers CONTAINERS [CONTAINERS ...]] [--present]
                       [--threads THREADS] [--cpu WARN:CRIT]
                       [--memory WARN:CRIT:UNITS] [--status STATUS] [--health]
                       [--uptime WARN:CRIT] [--image-age WARN:CRIT] [--version]
                       [--insecure-registries INSECURE_REGISTRIES [INSECURE_REGISTRIES ...]]
                       [--restarts WARN:CRIT] [--no-ok] [--no-performance] [-V]

Check docker containers.

optional arguments:
  -h, --help            show this help message and exit
  --connection [/<path to>/docker.socket|<ip/host address>:<port>]
                        Where to find docker daemon socket. (default:
                        /var/run/docker.sock)
  --secure-connection [<ip/host address>:<port>]
                        Where to find TLS protected docker daemon socket.
  --binary_units        Use a base of 1024 when doing calculations of KB, MB,
                        GB, & TB (This is default)
  --decimal_units       Use a base of 1000 when doing calculations of KB, MB,
                        GB, & TB
  --timeout TIMEOUT     Connection timeout in seconds. (default: 10.0)
  --containers CONTAINERS [CONTAINERS ...]
                        One or more RegEx that match the names of the
                        container(s) to check. If omitted all containers are
                        checked. (default: ['all'])
  --present             Modifies --containers so that each RegEx must match at
                        least one container.
  --threads THREADS     This + 1 is the maximum number of concurrent
                        threads/network connections. (default: 10)
  --cpu WARN:CRIT       Check cpu usage percentage taking into account any
                        limits. Valid values are 0 - 100.
  --memory WARN:CRIT:UNITS
                        Check memory usage taking into account any limits.
                        Valid values for units are %,B,KB,MB,GB.
  --status STATUS       Desired container status (running, exited, etc).
  --health              Check container's health check status
  --uptime WARN:CRIT    Minimum container uptime in seconds. Use when
                        infrequent crashes are tolerated.
  --image-age WARN:CRIT Maximum image age in days.
  --version             Check if the running images are the same version as
                        those in the registry. Useful for finding stale
                        images. Does not support login.
  --insecure-registries INSECURE_REGISTRIES [INSECURE_REGISTRIES ...]
                        List of registries to connect to with http(no TLS).
                        Useful when using "--version" with images from
                        insecure registries.
  --restarts WARN:CRIT  Container restart thresholds.
  --no-ok               Make output terse suppressing OK messages. If all
                        checks are OK return a single OK.
  --no-performance      Suppress performance data. Reduces output when
                        performance data is not being used.
  -V                    show program's version number and exit
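
The flags above map naturally onto an NRPE command definition. A hypothetical nrpe.cfg entry, for illustration only (the install path, container regex, and thresholds are placeholders, not values from this project):

```
command[check_docker_web]=/usr/local/bin/check_docker --containers web_.* --status running --cpu 80:95 --memory 80:95:% --restarts 1:3
```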

check_swarm Usage

usage: check_swarm.py [-h]
                      [--connection [/<path to>/docker.socket|<ip/host address>:<port>]
                      | --secure-connection [<ip/host address>:<port>]]
                      [--timeout TIMEOUT]
                      (--swarm | --service SERVICE [SERVICE ...] | --ignore_paused)
                      [-V]

Check docker swarm.

optional arguments:
  -h, --help            show this help message and exit
  --connection [/<path to>/docker.socket|<ip/host address>:<port>]
                        Where to find docker daemon socket. (default:
                        /var/run/docker.sock)
  --secure-connection [<ip/host address>:<port>]
                        Where to find TLS protected docker daemon socket.
  --timeout TIMEOUT     Connection timeout in seconds. (default: 10.0)
  --swarm               Check swarm status
  --service SERVICE [SERVICE ...]
                        One or more RegEx that match the names of the
                        services(s) to check.
  --ignore_paused       Don't require global services to be running on paused nodes
  -V                    show program's version number and exit

Gotchas

  • When using check_docker with older versions of docker (I have seen 1.4 and 1.5) --status only supports 'running', 'restarting', and 'paused'.
  • When using check_docker, if no container is specified, all containers are checked. Some containers may return critical status if the selected check(s) require a running container.
  • When using check_docker, --present cannot be used without --containers to indicate what to check the presence of.

check_docker's People

Contributors

kiangj, kristianlyng, martialblog, mattwwarren, tatref, timdaman


check_docker's Issues

Wrong shebang on pypi

The version on pip has a wrong shebang:

# check_docker
-bash: /root/Python-3.6.1/.venv/bin/check_docker: /usr/local/bin/python3.6: bad interpreter: No such file or directory
# head -1 $(which check_docker)
#!/usr/local/bin/python3.6

This is different from the version on GitHub.
Can you publish a new version?

Thanks!

Use globbing in container name

Hello,

I'm sorry in advance if it's me who's doing something wrong, but there is one thing I can't get to work. I'm checking a container to see if it is running. This is the command:

/home/nagios/plugins/check_docker.py --containers container_version_1 --status running

Everything works fine until I do some updates to the container and change the name to container_version_2. Then obviously the alarm will go off because the name is different. Changing names is something I need to do so I tried this command:

/home/nagios/plugins/check_docker.py --containers container_version_* --status running

But it doesn't work. Can this be fixed in some way? Or is it me who is using the wildcard in the wrong way?
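
Since --containers takes regular expressions rather than shell globs, `container_version_*` means "container_version" followed by zero or more underscores; the intended wildcard would be spelled `container_version_.*`. A minimal sketch of anchored matching (the `matches` helper is illustrative, though the anchored `re.match` form is quoted elsewhere in these issues):

```python
import re

def matches(matcher, name):
    # Anchor the pattern so it must cover the whole container name.
    return re.match("^{}$".format(matcher), name) is not None

# Shell-style glob: '_*' means "zero or more underscores", so no match.
print(matches("container_version_*", "container_version_2"))   # False
# The regex equivalent uses '.*':
print(matches("container_version_.*", "container_version_2"))  # True
```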

containers_runnings not detecting running containers

I've been trying to work through this for monitoring just the particular containers I want when I realized that it wasn't detecting any as running. Over half of the containers on the host are running, but the check says 0 are running. Am I doing something wrong with my command?

[nagios@hostname ~]$ /usr/local/nagios/libexec/check_docker.py -H http://127.0.0.1:2375 --check-type 'containers_running' --all -t 0 -l -w '50:' -c '30:'
CRITICAL: 0 running, 11 not running, containers not running: ['/host01', '/host02', '/host03', '/host04', '/host05', '/host06', '/host07', '/host08', '/host09', '/host10', '/host11'] | total_usage=0;50:;30:

check_docker cpu provide inconsistent values to Nagios

Hi,
I use check_docker.py, but I think the CPU information about current CPU usage by a container is not accurate; there is a difference between the value reported by check_docker.py and the actual value given by docker stats.

Non-existent containers omitted

Hi,

It seems that if I watch multiple containers, and if one of them doesn't exist, no alert is raised. I'm not sure if this is wanted, but it is a strange behaviour anyway.

I assume the problem is in the function get_containers(names): when doing the regex matching, an exception could be thrown when re.match("^{}$".format(matcher), found) is False.

Thanks for the plugin !

KeyError when checking cpu or memory

I just tried check_docker the first time. If I use the checks for cpu or mem, I get a KeyError.

  • check_docker: most recent version
  • docker: Docker version 18.06.1-ce, build e68fc7a
  • python: 3.4.9
  • OS: Centos 7.5.1804

Error Logs

--cpu

Traceback (most recent call last):
  File "check_docker.py", line 929, in main
    [x.result() for x in futures.as_completed(threads)]
  File "check_docker.py", line 929, in <listcomp>
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/lib64/python3.4/concurrent/futures/_base.py", line 395, in result
    return self.__get_result()
  File "/usr/lib64/python3.4/concurrent/futures/_base.py", line 354, in __get_result
    raise self._exception
  File "/usr/lib64/python3.4/concurrent/futures/thread.py", line 54, in run
    result = self.fn(*self.args, **self.kwargs)
  File "check_docker.py", line 423, in wrapper
    func(container, *args, **kwargs)
  File "check_docker.py", line 672, in check_cpu
    usage = calculate_cpu_capacity_precentage(info=info, stats=stats)
  File "check_docker.py", line 636, in calculate_cpu_capacity_precentage
    num_cpus = len(stats['cpu_stats']['cpu_usage']['percpu_usage'])
KeyError: 'percpu_usage'

--mem

Traceback (most recent call last):
  File "check_docker.py", line 929, in main
    [x.result() for x in futures.as_completed(threads)]
  File "check_docker.py", line 929, in <listcomp>
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/lib64/python3.4/concurrent/futures/_base.py", line 395, in result
    return self.__get_result()
  File "/usr/lib64/python3.4/concurrent/futures/_base.py", line 354, in __get_result
    raise self._exception
  File "/usr/lib64/python3.4/concurrent/futures/thread.py", line 54, in run
    result = self.fn(*self.args, **self.kwargs)
  File "check_docker.py", line 423, in wrapper
    func(container, *args, **kwargs)
  File "check_docker.py", line 513, in check_memory
    adjusted_usage = inspection['memory_stats']['usage'] - inspection['memory_stats']['stats']['total_cache']
KeyError: 'usage'

Add New Maintainer

Hey,

first of: great project.

However, given there there are no more updates recently, I'd suggest adding a new maintainer. So that there aren't endless forks and we can ensure new features.

Of course, I'd volunteer myself if you don't mind. It's been a while since I contributed ( #58 ), but I know my way around the code.

Cheers,
Markus

Checks fail with Podman

When using Podman the checks fail:

root@host:~# /usr/lib64/nagios/plugins/check_docker.py --cpu 80:90
Traceback (most recent call last):
  File "/usr/lib64/nagios/plugins/check_docker.py", line 986, in main
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/lib64/nagios/plugins/check_docker.py", line 986, in <listcomp>
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib64/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib64/nagios/plugins/check_docker.py", line 413, in wrapper
    func(container, *args, **kwargs)
  File "/usr/lib64/nagios/plugins/check_docker.py", line 689, in check_cpu
    usage = calculate_cpu_capacity_precentage(info=info, stats=stats)
  File "/usr/lib64/nagios/plugins/check_docker.py", line 676, in calculate_cpu_capacity_precentage
    system_delta = stats['cpu_stats']['system_cpu_usage'] - stats['precpu_stats']['system_cpu_usage']
KeyError: 'system_cpu_usage'
UNKNOWN: Exception raised during check': KeyError('system_cpu_usage',)


root@host:~# /usr/lib64/nagios/plugins/check_docker.py --memory 80:90:%
Traceback (most recent call last):
  File "/usr/lib64/nagios/plugins/check_docker.py", line 986, in main
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/lib64/nagios/plugins/check_docker.py", line 986, in <listcomp>
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib64/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib64/nagios/plugins/check_docker.py", line 413, in wrapper
    func(container, *args, **kwargs)
  File "/usr/lib64/nagios/plugins/check_docker.py", line 523, in check_memory
    adjusted_usage = inspection['memory_stats']['usage'] - inspection['memory_stats']['stats']['total_cache']
KeyError: 'stats'
UNKNOWN: Exception raised during check': KeyError('stats',)

I was able to workaround with this:

root@host:~# diff /usr/lib64/nagios/plugins/check_docker_old /usr/lib64/nagios/plugins/check_docker.py
522a523
>     if "stats" not in inspection['memory_stats'].keys(): inspection['memory_stats']['stats'] = {'total_cache': 0};
675a677
>     if "system_cpu_usage" not in stats['precpu_stats'].keys(): stats['precpu_stats']['system_cpu_usage'] = 0;
root@host:~#
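
The same workaround can be expressed with dict.get() defaults instead of mutating the API response. This is a sketch, not the plugin's actual code; the field names follow the Docker stats/inspect JSON seen in the tracebacks above:

```python
def system_cpu_delta(stats):
    """System CPU delta, treating counters Podman omits as zero."""
    current = stats.get("cpu_stats", {}).get("system_cpu_usage", 0)
    previous = stats.get("precpu_stats", {}).get("system_cpu_usage", 0)
    return current - previous

def total_cache(memory_stats):
    """Page-cache bytes, defaulting to 0 when 'stats' is missing."""
    return memory_stats.get("stats", {}).get("total_cache", 0)

# Podman-style payloads missing these keys no longer raise KeyError:
print(system_cpu_delta({"cpu_stats": {}, "precpu_stats": {}}))  # 0
print(total_cache({}))                                          # 0
```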

multiple container names cannot be found

Hi,

it seems that containers with multiple names cannot be found, only the first one; I tried several regexes but no luck.
Would be nice if I could query the name of the container which is displayed in 'docker ps'.

I appreciate your work on this.

CRITICAL: debian's version does not match registry

... but it actually matches.

How to reproduce:

  1. Provision a new VM as shown below
  2. Wait a few days so there's a new version of the Docker image debian:testing-slim
  3. In /dc
    1. docker-compose down
    2. docker pull debian:testing-slim
    3. docker-compose pull debian (just to be sure)
    4. docker-compose up -d --force-recreate
  4. Wait a few seconds
  5. check_docker --version
def mkVm (config, hostname, box, script)
  config.vm.define hostname do |cfg|
    cfg.vm.box = box
    cfg.vm.hostname = hostname

    cfg.vm.synced_folder ".", "/vagrant"
    cfg.vm.network "private_network", type: "dhcp"

    cfg.vm.provider "virtualbox" do |vb|
      vb.memory = "1024"
    end

    cfg.vm.provision "shell", inline: script
  end
end

Vagrant.configure("2") do |config|
  mkVm config, "dc", "debian/contrib-buster64", <<-SHELL
sudo bash -exo pipefail <<BASH

apt-get install -y wget python3-pip
wget -O- https://download.docker.com/linux/debian/gpg |apt-key add -

cat <<REPO >/etc/apt/sources.list.d/docker.list
deb https://download.docker.com/linux/debian buster stable
REPO

apt-get update
apt-get install -y docker-ce docker-compose

pip3 install check_docker

mkdir /dc
cd /dc

cat <<YML >docker-compose.yml
version: '2.4'
networks:
  host-debian:
    internal: true
    driver_opts:
      com.docker.network.bridge.name: docker1
    ipam:
      config:
      - subnet: 192.168.234.0/30
        gateway: 192.168.234.1
services:
  debian:
    container_name: debian
    image: debian:testing-slim
    command:
    - bash
    - -exo
    - pipefail
    - -c
    - |-
      while sleep 86400; do true; done
    restart: always
    networks:
      host-debian:
        ipv4_address: 192.168.234.2
YML

/usr/bin/docker-compose up -d --force-recreate

BASH
SHELL
end

Uptime monitoring throws Exception

root@159:~# python3 check_docker.py --connection /var/run/docker.sock --containers '7f381db3-6dd3-4703-811f-e6ab51235800' --uptime 3600
UNKNOWN: Exception raised during check: list index out of range
root@159:~# python3 check_docker.py --connection /var/run/docker.sock --containers '7f381db3-6dd3-4703-811f-e6ab51235800' --status running
OK: 7f381db3-6dd3-4703-811f-e6ab51235800 status is running
root@159:~# python3 -V
Python 3.2.3

Add capability to alert critical when container is removed

Currently, the script will throw an UNKNOWN with exit rc 3 if the container has been removed. I propose that with the --present flag, the script should throw a CRITICAL rc 2 when the container cannot be found.

Locally, I've patched the script like this:

diff --git a/ansible/roles/docker/files/check_docker b/ansible/roles/docker/files/check_docker
index 9bd8e806a..7a692199a 100644
--- a/ansible/roles/docker/files/check_docker
+++ b/ansible/roles/docker/files/check_docker
@@ -237,6 +237,7 @@ def get_containers(names, require_present):
         # If we don't find a container that matches out regex
         if require_present and not found:
             critical("No containers match {}".format(matcher))
+            raise RuntimeError("Container not found")
 
     return filtered
 
@@ -618,6 +619,7 @@ def process_args(args):
 def no_checks_present(parsed_args):
     # Look for all functions whose name starts with 'check_'
     checks = [key[6:] for key in globals().keys() if key.startswith('check_')]
+    checks.append('present')
     return all(getattr(parsed_args, check) is None for check in checks)
 
 
@@ -691,6 +693,9 @@ def perform_checks(raw_args):
                     if args.restarts:
                         check_restarts(container, *parse_thresholds(args.restarts, include_units=False))
 
+        except RuntimeError as e:
+            print_results()
+            exit(rc)
         except Exception as e:
             unknown("Exception raised during check: {}".format(repr(e)))

Does this seem like a reasonable way forward? I considered making a custom exception class so we can catch something more specific than RuntimeError.
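
For illustration, the custom-exception variant mentioned at the end could look like this (the names are hypothetical, not from the plugin):

```python
class ContainerNotFound(Exception):
    """Raised when --present finds no container matching a regex."""

def require_container(found, matcher):
    # Mirrors the patched get_containers(): record the failure, then
    # bail out of the check loop with a specific, catchable exception.
    if not found:
        raise ContainerNotFound("No containers match {}".format(matcher))

try:
    require_container(found=False, matcher="web_.*")
except ContainerNotFound as e:
    print("CRITICAL:", e)
```

Catching ContainerNotFound instead of the broad RuntimeError avoids accidentally swallowing unrelated runtime errors in perform_checks().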

NRPE: Unable to read output

icinga node :#/usr/lib64/nagios/plugins/check_nrpe -H node -c check_docker
NRPE: Unable to read output
clientnode:#command[check_docker]=sudo docker run --rm -v /var/run/docker.sock:/var/run checkdocker --cpu $ARG1$:$ARG2$
root@node: /etc/nrpe.d# cat /etc/nagios/nrpe.cfg  | grep user
# user and is running in standalone mode.
# This determines the effective user that the NRPE daemon should run as.
# You can either supply a username or a UID.
nrpe_user=nrpe
root@node: /etc/nrpe.d# cat /etc/sudoers.d/nagios
nagios    ALL=(ALL:ALL)  NOPASSWD:ALL
nrpe ALL=(ALL:ALL)  NOPASSWD: ALL
icinga rule
apply Service "check_docker" {
  import "generic-service"
  check_command = "nrpe"
  vars.nrpe_command = "check_docker"
  vars.nrpe_arguments = [ "70", "90" ]
  assign where match("node*", host.name)
}

Filter images by status

It would be a useful feature to filter images by status (ex. check_docker --filter status=running --cpu 80:90)

check_version: exclude option

Hello!

You probably may have many things more important to do, but I'm putting this idea here if you want to give me your opinion.

I'm using your script and among others the --version argument which is working great !
But my thought is: what do you think of adding an argument to exclude some images from the check?

Because when you use official images of mongo, elastic, or whatever, they often have an automatic build process. The impact is that the image ID can change while the tag does not. So I often get critical reports just to pull an image with the same tag and recreate the exact same container.
It would be cool if I could choose to check only the images that I build.

I think it could be a --exclude=mongo:3.6-jessie,redis:3.7 argument, or maybe a label "check_version=false" added to the container, or something like that.

Another idea: change the CRITICAL to a WARNING state when a container is not up to date. Because it's not really a critical thing.

I'm sorry for my English... I hope you understood me anyway.

Thanks a lot for the work already done!

Help with container who has multiple tags/names

Hello,

While using check_docker to check if the running images are the same version as those in the registry (with option --version), I got this error with InfluxDB:

UNKNOWN: "influxdb" has multiple tags/names. Unsure which one to use to check the version. 

Can you explain how I can make this work correctly with InfluxDB?

Here is the relevant part of my docker-compose.yml:

services:
  influxdb:
    image: influxdb:1.8
    container_name: influxdb
    restart: unless-stopped
[...]

Thank you.

BTW : Thanks for this cool plugin ! 👍

Base 1000 or 1024 units?

This is a place to discuss the units to be used when processing and returning performance data.

The basic question is 1KB = 1000B or 1KB = 1024B? The Nagios Ecosystem does not make this clear.

According to Wikipedia, the 1000 base should be used with KB:
https://en.wikipedia.org/wiki/Kilobyte

But looking at the actual code for the 'official' Nagios plug-ins, they appear to consistently use a base of 1024. The best examples are check_disk and check_swap:
https://github.com/nagios-plugins/nagios-plugins/search?utf8=%E2%9C%93&q=1024&type=

To add more fuel to the fire I see code in pnp4nagios that looks to use base 1000.

I am inclined to go for 1024 as that seems to be what the Nagios folks intended the meaning to be but I would love to hear other points of view.
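
For concreteness, the same byte count reads differently under the two bases (`to_kb` is a hypothetical helper, not code from the plugin):

```python
def to_kb(num_bytes, binary_units=True):
    # --binary_units uses base 1024; --decimal_units uses base 1000.
    base = 1024 if binary_units else 1000
    return num_bytes / base

print(to_kb(1048576))         # 1024.0 with --binary_units
print(to_kb(1048576, False))  # 1048.576 with --decimal_units
```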

Memory usage is wrong (usage vs cache)?

Hi,

It seems that the memory usage reported by the API memory_stats/usage is not correct if the container is doing a lot of IO and thus has lots of disk cache.

Example of a mysql database:

# docker stats
MEM USAGE / LIMIT     MEM %
329.6MiB / 2.868GiB   11.22%

The API shows:

check_docker.sh --containers prd-piwik-mysql --memory 500:1000:m
WARNING: prd-piwik-mysql memory is 784.765625m|prd-piwik-mysql_mem=784.765625m;500;1000;0;2936.6015625

This seems related to the issue moby/moby#10824
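
A minimal sketch of the cache adjustment this issue suggests, using the cgroup v1 field names from the Docker stats API (the numbers here are made up for illustration):

```python
stats = {
    "memory_stats": {
        "usage": 822_800_000,                    # includes disk cache
        "stats": {"total_cache": 477_000_000},   # page cache to subtract
        "limit": 3_079_000_000,
    }
}

mem = stats["memory_stats"]
adjusted = mem["usage"] - mem["stats"].get("total_cache", 0)
percent = 100 * adjusted / mem["limit"]
print(adjusted, round(percent, 2))
```

With the cache subtracted, the percentage lands much closer to what `docker stats` displays.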

Secure Connection

Hi,

I can't use check_docker with my secure docker daemon.
I need to use client certificates, but where do I have to store them?
Are there any command line arguments or environment variables?

./check_docker --secure-connection host:port --health
Traceback (most recent call last):
  File "/usr/lib64/python3.4/urllib/request.py", line 1183, in do_open
    h.request(req.get_method(), req.selector, req.data, headers)
  File "/usr/lib64/python3.4/http/client.py", line 1137, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python3.4/http/client.py", line 1182, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python3.4/http/client.py", line 1133, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python3.4/http/client.py", line 963, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.4/http/client.py", line 898, in send
    self.connect()
  File "/usr/lib64/python3.4/http/client.py", line 1287, in connect
    server_hostname=server_hostname)
  File "/usr/lib64/python3.4/ssl.py", line 362, in wrap_socket
    _context=self)
  File "/usr/lib64/python3.4/ssl.py", line 580, in __init__
    self.do_handshake()
  File "/usr/lib64/python3.4/ssl.py", line 807, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:600)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./check_docker", line 762, in perform_checks
    containers = get_containers(args.containers, args.present)
  File "./check_docker", line 256, in get_containers
    containers_list, _ = get_url(daemon + '/containers/json?all=1')
  File "/usr/lib64/python3.4/functools.py", line 472, in wrapper
    result = user_function(*args, **kwds)
  File "./check_docker", line 204, in get_url
    response = better_urllib_get.open(url, timeout=timeout)
  File "/usr/lib64/python3.4/urllib/request.py", line 464, in open
    response = self._open(req, data)
  File "/usr/lib64/python3.4/urllib/request.py", line 482, in _open
    '_open', req)
  File "/usr/lib64/python3.4/urllib/request.py", line 442, in _call_chain
    result = func(*args)
  File "/usr/lib64/python3.4/urllib/request.py", line 1226, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/usr/lib64/python3.4/urllib/request.py", line 1185, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:600)>
UNKNOWN: Exception raised during check': URLError(SSLError(1, '[SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:600)'),)

JSON Decode Error

Hi,

I successfully installed check_docker via curl, however I get the following error after using this command check_docker --connection vm_ip:4996 --containers gateway-develop --status running.

I can SSH to this machine and docker ps to confirm the name and port of the container. Any help would be appreciated!

Traceback (most recent call last):
  File "/home/mark/.local/lib/python3.8/site-packages/check_docker/check_docker.py", line 982, in main
    perform_checks(argv[1:])
  File "/home/mark/.local/lib/python3.8/site-packages/check_docker/check_docker.py", line 937, in perform_checks
    containers = get_containers(args.containers, args.present)
  File "/home/mark/.local/lib/python3.8/site-packages/check_docker/check_docker.py", line 319, in get_containers
    containers_list, _ = get_url(daemon + '/containers/json?all=1')
  File "/home/mark/.local/lib/python3.8/site-packages/check_docker/check_docker.py", line 280, in get_url
    return process_urllib_response(response), response.status
  File "/home/mark/.local/lib/python3.8/site-packages/check_docker/check_docker.py", line 287, in process_urllib_response
    return json.loads(body)
  File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
UNKNOWN: Exception raised during check': JSONDecodeError('Expecting value: line 1 column 1 (char 0)')

Docker unreachable returns UNKNOWN

Hi @timdaman,

the default behavior of the check when docker is unreachable is to return UNKNOWN.

/usr/local/bin/check_docker/check_docker.py --connection /run/docker.sock --health
UNKNOWN: Cannot access docker socket file. User ID=0, socket file=/run/docker.sock

Given the numerous backend changes in Docker's updates and the criticality of that component, wouldn't it be a good idea to change the default to CRITICAL?

Or was there a different decision behind it?

Thanks and greetings,
Sebastian

Monitor age

It's frequently nice to get an alert if a service has (presumably unexpectedly) restarted.

Something along the lines of «./check_docker --containers foobar --status running --age 600:60» could work, indicating a warning if the age is less than 600 seconds and a critical if it's less than 60 seconds. Critical needs to be optional, though (don't want to wake people up if it's back up).
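
A sketch of the proposed threshold logic, with CRIT optional as suggested (purely illustrative, not the plugin's code):

```python
def age_status(uptime_seconds, warn, crit=None):
    # CRIT is optional so a brief restart doesn't page anyone at night.
    if crit is not None and uptime_seconds < crit:
        return "CRITICAL"
    if uptime_seconds < warn:
        return "WARNING"
    return "OK"

print(age_status(30, 600, 60))    # CRITICAL
print(age_status(120, 600, 60))   # WARNING
print(age_status(7200, 600, 60))  # OK
```

Note that the usage section above already lists an `--uptime WARN:CRIT` flag along these lines.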

ghcr.io/OWNER/IMAGE_NAME

Hi,
I have some images that are located at ghcr.io/OWNER/IMAGE_NAME.
Seems your script can't handle that format... correct?
Any idea on how to fix this or add this functionality?
Thx, T

less verbose output

First of all, I like this service very much and want (initially) to use it to check whether all docker containers are running. In case of an exited container, I want to get an alert from Nagios.
In general this works fine, but the service gives the status of all containers. With 15 containers (and the number will grow further in the near future) this is a lot of text.
In case of an exited service it's quite hard to find which service is not running among all the OK messages.
I would like an option to hide the information for containers in the expected state (the OK messages).

In line with other services, a verbose option would be the preferred way to implement this.
The standard output would be that only warning and critical information is shown. When using the --verbose or -v option, the output would be as currently is.

I'll make an attempt to implement this behavior, but as I don't have experience with python I'm not sure whether my code will be nice and sustainable...

UNKNOWN: Exception raised during check: TypeError('string indices must be integers',)

Afternoon,

I have installed your very lovely tool via pip, but I seem to get an error UNKNOWN: Exception raised during check: TypeError('string indices must be integers',).

This is run on raspbian 9.11 using Python 3.5.3
the full output

matt@pi-docker01:~$ check_swarm  --service all
OK: Replicated service portainer_portainer OK; OK: Replicated service phpipam_phpipam OK; OK: Replicated service phpmyadmin_phpmyadmin OK; OK: Replicated service proxy_proxyv2 OK; OK: Replicated service authelia_openldap OK; OK: Replicated service authelia_smtp OK; OK: Replicated service gogs_gogs OK; OK: Replicated service jukebox_jukebox OK; OK: Replicated service pihole_pihole OK; OK: Replicated service dashmachine_dashmachine OK; UNKNOWN: Exception raised during check: TypeError('string indices must be integers',)

Full service list

ID                  NAME                      MODE                REPLICAS            IMAGE                               PORTS
ag3gcvplpwcp        authelia_authelia         replicated          1/1                 authelia/authelia:4.14.0            *:9091->9091/tcp
mr82j6w87o38        authelia_openldap         replicated          1/1                 osixia/openldap:1.3.0               *:389->389/tcp, *:636->636/tcp
1vygoxc8pv6b        authelia_phpldapadmin     replicated          1/1                 osixia/phpldapadmin:0.9.0           
wkl8csavuj0a        authelia_redis            replicated          1/1                 redis:latest                        
6oh5h4u1rj2q        authelia_smtp             replicated          1/1                 mhzawadi/postfix:v0.0.1             *:25->25/tcp
7tc5mrv9835w        dashmachine_dashmachine   replicated          1/1                 supermamon/rpi-dashmachine:latest   
h9zcnc9qbcmu        gogs_gogs                 replicated          1/1                 gogs/gogs-rpi:latest                *:2250->22/tcp
wfjs3mgcsbgr        jukebox_jukebox           replicated          1/1                 mhzawadi/subsonic_jukebox:v0.0.12   
wjlnmi9sxhe7        phpipam_phpipam           replicated          1/1                 mhzawadi/phpipam:v1.4.0.3           
rgciy0pvvhmo        phpmyadmin_phpmyadmin     replicated          1/1                 mhzawadi/phpmyadmin:v5.0.1.3        
2acbxhzku4q0        pihole_pihole             replicated          1/1                 pihole/pihole:latest                *:53->53/tcp, *:53->53/udp, *:67->67/udp
l3j9bx3cxe3u        portainer_agent           global              2/2                 portainer/agent:latest              
sz4ssl200ufh        portainer_portainer       replicated          1/1                 portainer/portainer:latest          *:9000->9000/tcp
x78edrwv5opb        proxy_proxyv2             replicated          1/1                 traefik:v2.0                        *:80->80/tcp, *:443->443/tcp, *:8181->8080/tcp

Docker version

Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:37:36 2020
 OS/Arch:           linux/arm
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:31:37 2020
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

ImportError

check_docker.py

  File "/usr/local/ncpa/plugins/check_docker.py", line 12, in <module>
    from collections import deque, namedtuple, UserDict, defaultdict
ImportError: cannot import name UserDict

This happens even after installing the collections package; the error persists.

Thorough check of services

Hi @timdaman

I was wondering if it would be possible to expand the service checks in check_swarm. The script only checks whether the service is present in the swarm, but it doesn't take into account whether any containers are actually scheduled for the service. I hastily glanced over the API reference, and I think the tasks resource could be used for this. Maybe add a 'tasks' argument where you can filter on the service or the container name?

Thanks for the great scripts!
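The tasks resource mentioned above does look workable: GET /tasks?filters={"service":["<name>"]} returns task objects whose ServiceID, DesiredState, and Status.State fields show whether anything is actually scheduled. A minimal sketch of the counting side, assuming those Docker Engine API field names (the fetch itself would go over the daemon socket like the existing checks):

```python
def running_task_count(tasks, service_id):
    """Count tasks of one service that are both desired and observed running."""
    return sum(
        1 for t in tasks
        if t.get('ServiceID') == service_id
        and t.get('DesiredState') == 'running'
        and t.get('Status', {}).get('State') == 'running'
    )
```

A check built on this could alert whenever the count falls below the service's desired replica count.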

Error when running status

Hi guys,

I get an error when running the check:
./check_docker.py --containers mycontainer --status running
UNKNOWN: Exception raised during check: KeyError('Status',)

I have Python 3.4.0.

Have you seen this before?

Thanks

DockerEngine 20.10.10: KeyError: 'total_cache'

After updating from Debian buster(10) to bullseye(11) I am getting KeyError: 'total_cache'.

installed Docker : 5:20.10.103-0debian-buster
installed check_docker: 2.2.2

Exception:
  File "check_docker", line 541, in check_memory
    adjusted_usage = inspection['memory_stats']['usage'] - inspection['memory_stats']['stats']['total_cache']
KeyError: 'total_cache'
UNKNOWN: Exception raised during check': KeyError('total_cache')

inspection.json.gz

My current "fix" is to skip the adjusted_usage calculation and use only inspection['memory_stats']['usage'].
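For context on the missing key: Debian 11 defaults to cgroup v2, where the stats block no longer contains total_cache; `docker stats` subtracts inactive_file there instead. A hedged sketch of a fallback (adjusted_memory_usage is illustrative, not the actual check_docker code):

```python
def adjusted_memory_usage(memory_stats):
    # cgroup v1 reports the page cache as 'total_cache'; cgroup v2 hosts
    # drop that key, and `docker stats` subtracts 'inactive_file' instead.
    stats = memory_stats.get('stats', {})
    cache = stats.get('total_cache', stats.get('inactive_file', 0))
    return memory_stats['usage'] - cache
```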

Limit to number of containers graphed in Nagios XI

I have 10 containers running on a server. When I use your plugin (either specifying 0 containers or listing them all) only 4 appear on the graph in Nagios XI. Do you know how I can make it graph all 10?

How to do version checks through a proxy server

Hi @timdaman

Everything is working fine so far and I am currently trying to add a few more checks to monitor the Docker containers properly.

I noticed the "--version" check and thought that it would check the current image version. If so, I would certainly like to have this feature in my monitoring, so I can keep track of the current version and get notified as soon as there is a new one.

I seem, however, to have problems getting this to work because our servers sit behind a proxy server. The main message is "connection refused", which gave me the hint that the check_docker version check cannot reach the registry server because the proxy blocks direct connections.

Would it be possible to add a proxy option to the command line, so that people behind a proxy server can add its IP/FQDN:Port to the query?

Example: --proxy IP/FQDN:Port

I had a look at your code, but it has been quite a while since I last wrote Python, so I only managed to add a new function and would not know where to change the actual query.

If you could implement this feature I would be more than happy to test this for you.
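For what it's worth, urllib already has the plumbing such a flag would need. A sketch of how a hypothetical --proxy IP/FQDN:Port option could be wired in (build_opener and the flag name are illustrative, not part of check_docker):

```python
import urllib.request

def build_opener(proxy=None):
    # proxy is e.g. 'proxy.example.com:3128' (hypothetical --proxy value)
    handlers = []
    if proxy:
        handlers.append(urllib.request.ProxyHandler({
            'http': 'http://' + proxy,
            'https': 'http://' + proxy,
        }))
    return urllib.request.build_opener(*handlers)
```

The opener returned here would stand in for the one check_docker builds internally whenever the flag is set.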

Health Check not working

If I do:

/usr/lib64/nagios/plugins/check_docker --container='container' --status health, it says

CRITICAL: container state is not health

While if I do:

docker inspect -f '{{json .State.Health.Status}}' container

It says "healthy"

Docker version 18.09.1, build 4c52b90

"check swarm status" only works on manager node

check_swarm status checking is implemented by requesting the http:/swarm URL, but this seems to work only on manager nodes:
$> curl --unix-socket /var/run/docker.sock http:/swarm
{"message":"This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager."}

Parsing the http:/info result seems a more portable option.

I checked it with docker versions 19.03.14 and 20.10.7.
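A sketch of the /info-based approach: every node, manager or worker, reports a Swarm.LocalNodeState field there (values include 'active', 'inactive', 'pending', 'locked', and 'error'). The helper below only does the dictionary lookup; fetching /info over the daemon socket is assumed:

```python
def node_swarm_state(info):
    # /info is served by every node, so this works on workers too,
    # unlike the manager-only /swarm endpoint.
    return info.get('Swarm', {}).get('LocalNodeState', 'inactive')
```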

Check Docker status with Icinga2

Hi,

I have two servers in the same network, one for Icinga2 and one where I run Docker containers. I now want to check the docker status from the Icinga2 server.

I added the check_docker plugin as an NRPE plugin on the server where I run my Docker containers. When I run ./check_docker on the local machine it works fine, but when I run the command from the Icinga2 server, I get the message:

UNKNOWN: Cannot access docker socket file. User ID=996, socket file=/var/run/docker.sock

So I extended the command by adding --connect ip_to_docker_server:port, but now I get an error message:

UNKNOWN: Exception raised during check': URLError(ConnectionRefusedError(111, 'Connection refused'),​)

I also opened the Docker TCP port in the firewall to make sure the servers can communicate, but without success. I'm not sure what I'm doing wrong.

Memory enhancement - cache versus actual usage

First of all, nice script! It helps me a lot. I've been monitoring several of my Docker containers with it.

For one container I noticed very high memory usage from the script (>5 GB), while 'docker stats' reported only 250 MB. After some troubleshooting I noticed the use of memory_stats > usage (line 283) in your script. This corresponds to the file /sys/fs/cgroup/memory/docker/container_id/memory.usage_in_bytes, and this usage statistic includes the total cache as well.

When I looked up the value of the cache and subtracted it from the above memory usage, I got the 250 MB I was seeing in 'docker stats'.

In summary, I changed line 283 to:
usage = (inspection['memory_stats']['usage'] - inspection['memory_stats']['stats']['total_cache']) / UNIT_ADJUSTMENTS[units]

I realize that in some cases you may also want to monitor the cache usage, or at least have it included in the memory stats. Perhaps an enhancement could be an extra parameter that tells the memory check to include or exclude the cache?
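A sketch of what such a parameter could look like (include_cache is hypothetical; total_cache is the cgroup v1 key discussed above):

```python
def memory_usage_mb(memory_stats, include_cache=False):
    # include_cache is the proposed switch; the default subtracts the
    # page cache so the figure matches `docker stats`.
    usage = memory_stats['usage']
    if not include_cache:
        usage -= memory_stats['stats'].get('total_cache', 0)
    return usage / 1024 ** 2
```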

support for python2

It looks like this plugin is only compatible with Python 3, but many systems are still on Python 2. Could it be converted to work with Python 2?

Got ValueError Exception on memory check

check_docker Version:

$ /usr/local/monitoring-plugins/libexec/check_docker -V
check_docker 2.2.2

OS:

$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Python Version:

$ python3 -V
Python 3.9.2

The Command:

$ /usr/local/monitoring-plugins/libexec/check_docker --memory 80:90%
Traceback (most recent call last):
  File "/usr/local/monitoring-plugins/libexec/check_docker", line 983, in main
    perform_checks(argv[1:])
  File "/usr/local/monitoring-plugins/libexec/check_docker", line 966, in perform_checks
    check_memory(container, parse_thresholds(args.memory, units_required=False))
  File "/usr/local/monitoring-plugins/libexec/check_docker", line 201, in parse_thresholds
    crit = int(parts.popleft())
ValueError: invalid literal for int() with base 10: '90%'
UNKNOWN: Exception raised during check': ValueError("invalid literal for int() with base 10: '90%'")

Thank you very much for keeping check_docker alive!
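For context, a simplified sketch of the warn:crit[:units] parsing (the real parse_thresholds differs in details): the int() call on the second field is exactly where '80:90%' fails, while '80:90:%' parses because the units get their own field:

```python
from collections import deque

def parse_thresholds(spec, units_required=True):
    # Simplified sketch: fields are colon-separated, so '80:90%' puts
    # the percent sign inside the critical field and int('90%') raises.
    parts = deque(spec.split(':'))
    warn = int(parts.popleft())
    crit = int(parts.popleft())
    units = parts.popleft() if parts else None
    if units_required and units is None:
        raise ValueError('Units required')
    return warn, crit, units
```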

401 response with check_version

Hi,

It seems that the version check fails when you have non-official images.

For example, I use those images:

docker.elastic.co/elasticsearch/elasticsearch:6.0.0
redis:4

It fails with Elasticsearch and works with Redis.
It also fails with images that I made and published on Docker Hub.

The URL to get a token is:
https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/docker.elastic.co/elasticsearch/elasticsearch:pull
This request is working great, but not this one:
https://index.docker.io/v2/library/docker.elastic.co/elasticsearch/elasticsearch/manifests/6.0.0
The query fails with a 401 error status, i.e. unauthorized.

Any idea ?
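The 401 is consistent with the URL construction: library/docker.elastic.co/elasticsearch/elasticsearch is not a repository on Docker Hub, so the token's scope never matches the manifest request. References that name another registry need to be split off first; a sketch of the heuristic Docker itself uses (illustrative, not check_docker's current code):

```python
def split_image(image):
    # The first path component names a registry only if it looks like a
    # hostname ('.', ':', or 'localhost'); otherwise the image lives on
    # Docker Hub, where bare names get the 'library/' prefix.
    parts = image.split('/', 1)
    if len(parts) == 2 and ('.' in parts[0] or ':' in parts[0]
                            or parts[0] == 'localhost'):
        return parts[0], parts[1]
    repo = image if '/' in image else 'library/' + image
    return 'index.docker.io', repo
```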

ImportError: No module named machinery

Hello,

I have problems installing this plugin. I followed the instructions:

pip install check_docker

python version: Python 2.7.12

the output is:

Downloading check_docker-2.0.1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "", line 1, in
  File "/tmp/pip-build-9FnbpP/check-docker/setup.py", line 3, in
    from importlib.machinery import SourceFileLoader
ImportError: No module named machinery

I tried to install all the versions, but I got either the previous error or "check_docker requires Python 3.3 or higher".

Can I install this plugin with Python 2.7.12?

Cheers!

Swarm support improvements

Hi @timdaman,

I've made a few changes which were useful for setting up Docker Swarm checks. They are available at master...operasoftware:master, and I thought it might be useful to upstream some of them.

Since I'm not sure what your vision for the future of check_docker is, I thought I'd open an issue before making a PR.

The changes are:

  1. Slight performance improvement when fetching containers statuses if the expected status is running. (61041d8)
  2. Rework of check_docker to allow wildcard checks to pass if there's only one container matching the wildcard. This is useful in case of Swarm services, as mentioned in #43 (comment) of issue #43 when using '[stack]_[service].*' wildcards. (2b1fc15)
  3. Check for nodes statuses in check_swarm.py. (92bbe13)
  4. Small improvements to Swarm checks wordings. (211969f)
  5. .json config file support for easier automated deployment of checks. (6b78496)

Please let me know if you think anything from the above would be useful for you, in which case I could prepare some PR.

Check_swarm Error

Hi,

When running the check_swarm command I get the following; what am I missing? Thanks!
./check_swarm --secure-connection localhost:2377 --service capture

UNKNOWN: Exception raised during check: URLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:720)'),)

Icinga graphite writer perfdata error

Using this check on icinga2 with the GraphiteWriter plugin results in a "Ignoring invalid perfdata value" warning from GraphiteWriter. This is because the performance data output does not comply with the official standard, which dictates that no unit of measurement should be specified. The check_docker output does specify a unit for the memory performance data.

Error trying to use the check_docker.py remotely

Hello,

So here is the thing: I was using the check_docker.py script with Python 3 locally on the computer and it was working fine. I could see the output for the different containers I was running; it was perfect. I have NRPE 2.15 installed on the computer, and I want a remote machine to run the checks and send the output to a monitoring screen (using Nagios).

The thing is, when the remote computer runs the check it doesn't work. Here is the command I type, using Python 3, on the remote computer:
python3 check_docker.py --connection 1.1.1.1:5666 --cpu 80:90

This command doesn't work: it shows an error saying that the peer ended the connection. Of course I can ping both computers, so it's not a network-related issue. I did a tcpdump on the computer that hosts the Docker app; it receives the check on port 5666, so this is not a firewall issue.

The command python3 check_docker.py --cpu 80:90 works fine locally, but when it comes to using it over the network, it doesn't.

Any help is welcome.

EDIT: Here is the exact error returned by the Python 3 script.

Traceback (most recent call last):
  File "check_docker.py", line 1000, in main
    perform_checks(argv[1:])
  File "check_docker.py", line 955, in perform_checks
    containers = get_containers(args.containers, args.present)
  File "check_docker.py", line 332, in get_containers
    containers_list, _ = get_url(daemon + '/containers/json?all=1')
  File "check_docker.py", line 283, in get_url
    response = better_urllib_get.open(url, timeout=timeout)
  File "/usr/lib64/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/usr/lib64/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/usr/lib64/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/usr/lib64/python3.6/urllib/request.py", line 1346, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/usr/lib64/python3.6/urllib/request.py", line 1321, in do_open
    r = h.getresponse()
  File "/usr/lib64/python3.6/http/client.py", line 1346, in getresponse
    response.begin()
  File "/usr/lib64/python3.6/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python3.6/http/client.py", line 268, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib64/python3.6/socket.py", line 586, in readinto
    return self._sock.recv_into(b)
ConnectionResetError: [Errno 104] Connection reset by peer
UNKNOWN: Exception raised during check': ConnectionResetError(104, 'Connection reset by peer')

UNKNOWN: Exception raised during check': KeyError('token')

Command line: ["/usr/lib/nagios/plugins/check_docker","--containers","elasticsearch","--present","--restarts","23:42","--status","running","--uptime","120:60","--version"]

Container's image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1

Output:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 985, in main
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 985, in <listcomp>
    [x.result() for x in futures.as_completed(threads)]
  File "/usr/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 623, in check_version
    registry_hash = get_digest_from_registry(url)
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 374, in get_digest_from_registry
    registry_info, status_code = get_url(url=url)
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 278, in get_url
    response = better_urllib_get.open(url, timeout=timeout)
  File "/usr/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 133, in http_response
    return self.process_oauth2(request, response, www_authenticate_header)
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 162, in process_oauth2
    auth_token = self._get_outh2_token(www_authenticate_header)
  File "/usr/local/lib/python3.7/dist-packages/check_docker/check_docker.py", line 151, in _get_outh2_token
    return process_urllib_response(token_response)['token']
KeyError: 'token'
OK: elasticsearch status is running; OK: elasticsearch restarts is 0; OK: elasticsearch uptime is 4d 23h; UNKNOWN: Exception raised during check': KeyError('token')

Output usage (or _anything_) if no/wrong arguments are used

It seems --containers is required, as is at least one of --status, --memory, and --restarts.

This should be reflected in the output when no arguments, or wrong ones, are supplied.

Also, since --containers is required, the help output shouldn't put it in square brackets, as that indicates it is optional.

Greater than 100% CPU utilization is not documented

Steps to Reproduce:

  • Create a container that needs high CPU utilization at times, but limit the "cpus" for it.
  • Run the container.
  • Run this command in while loop in shell script and print output to some file.
    check_docker --connection /var/run/docker.sock --containers all --cpu 80:95

My Setup:

  • My server has 2 CPUs.
  • My container needs high CPU utilization at start because of many supervisor services getting auto started at once, but I have limited the "cpus" to "0.75".

Output at times, initially:

CRITICAL: {containername} cpu is 103%
|{containername}_cpu=103;80;95;0;100

Reason behind the output:
It seems that when CPU utilization slightly exceeds the limit, it is brought back under control, i.e. there is some margin.

Additional logs:

docker inspect {containername}|grep -i cpu
            "CpuShares": 0,
            "NanoCpus": 0,
            "CpuPeriod": 100000,
            "CpuQuota": 75000,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "CpuCount": 0,
            "CpuPercent": 0,

Issue:
The documentation (README file) only talks about values from 0-100, not beyond that.
In this case, we should ideally set warning:critical to something like 110:125 instead.
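For illustration of why a sample can legitimately exceed 100% (this is not necessarily check_docker's exact formula): usage is measured against the container's limit, CpuQuota/CpuPeriod = 0.75 CPUs here, and the kernel enforces that quota per 100 ms scheduling period rather than instantaneously, so a sample taken inside a busy period can overshoot:

```python
def cpu_percent(cpu_delta, system_delta, online_cpus, quota=None, period=100000):
    # cpu_delta/system_delta is the container's share of total host CPU
    # time between two samples; scaling by online_cpus gives CPUs used,
    # which is then expressed as a percentage of the quota-derived limit.
    limit_cpus = quota / period if quota else online_cpus
    return (cpu_delta / system_delta) * online_cpus / limit_cpus * 100
```

With the inspect values above (CpuQuota 75000, CpuPeriod 100000) a container using 0.75 of the host's 2 CPUs would read 100%, and brief bursts beyond the quota read higher.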

Forced combination of no_ok and no_performance

There are two flags for compressing the amount of ok messages and disabling performance data. However, currently if you provide no_ok you automatically set no_performance as well:

   global no_performance
   no_performance = args.no_ok

Is this a typo? If so, I suggest correcting it to set no_performance from args.no_performance. If it is not a typo, I would suggest changing it anyway: everyone who wants to combine the effects can provide both flags. My use case, seeing compressed OKs while still getting performance data, is currently not possible.

Comment does not appear in Icinga

Hello,

I'm running the script via the Nagios command:

/opt/nagios-check ' python /opt/checks/check_docker --containers <container_name> --health '

It all seems OK; the output message is written to the console, but it is not written to the Icinga alarm. How do you recommend running the check?

Cheers

Version check for arm images

When checking the image version on a Raspberry Pi, it does not work:

root@RASPI2 ~ $ docker pull pihole/pihole
Using default tag: latest
latest: Pulling from pihole/pihole
Digest: sha256:abdddfb266ddd8e0591f97203ad11fd8dc33f2542187223f20ff18862b76bfbb
Status: Image is up to date for pihole/pihole:latest
docker.io/pihole/pihole:latest
root@RASPI2 ~ $
root@RASPI2 ~ $ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
pihole/pihole       latest              4d43d29c9890        3 days ago          301MB
root@RASPI2 ~ $
root@RASPI2 ~ $ /etc/nagios/check_docker.py --version
CRITICAL: PiHole's version does not match registry
root@RASPI2 ~ $

When I do the same on an amd64 system, it shows a different image ID and works without problems:

root@Ubuntu ~ $ docker pull pihole/pihole
Using default tag: latest
latest: Pulling from pihole/pihole
Digest: sha256:abdddfb266ddd8e0591f97203ad11fd8dc33f2542187223f20ff18862b76bfbb
Status: Image is up to date for pihole/pihole:latest
docker.io/pihole/pihole:latest
root@Ubuntu ~ $
root@Ubuntu ~ $ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
pihole/pihole       latest              4642d275ab73        3 days ago          296MB
root@Ubuntu ~ $
root@Ubuntu ~ $ /etc/nagios/check_docker.py --version
OK: PiHole's version matches registry
root@Ubuntu ~ $
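One plausible cause: without an explicit Accept header the registry serves the single-architecture (amd64) image manifest, whose digest will never match what an arm host pulled, whereas docker pull negotiates the multi-arch manifest list. A sketch of requesting the list (the media types are the standard Docker distribution ones; manifest_request is illustrative):

```python
import urllib.request

MANIFEST_LIST = 'application/vnd.docker.distribution.manifest.list.v2+json'
MANIFEST_V2 = 'application/vnd.docker.distribution.manifest.v2+json'

def manifest_request(url, token=None):
    # Offering the manifest-list media type lets the registry return the
    # multi-arch list, so the digest is architecture-independent.
    headers = {'Accept': ', '.join([MANIFEST_LIST, MANIFEST_V2])}
    if token:
        headers['Authorization'] = 'Bearer ' + token
    return urllib.request.Request(url, headers=headers)
```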
