ribbybibby / ssl_exporter

Exports Prometheus metrics for TLS certificates

License: Apache License 2.0


ssl_exporter's Introduction

SSL Certificate Exporter

Exports metrics for certificates collected from various sources:

  • TCP and HTTPS connections to remote targets
  • Local PEM files (file prober)
  • PEM files served over HTTP (http_file prober)
  • Kubernetes secrets of type kubernetes.io/tls (kubernetes prober)
  • Kubeconfig files (kubeconfig prober)

The metrics are labelled with fields from the certificate, which allows for informational dashboards and flexible alert routing.

Building

make
./ssl_exporter <flags>

Similarly to the blackbox_exporter, visiting http://localhost:9219/probe?target=example.com:443 will return certificate metrics for example.com. The ssl_probe_success metric indicates whether the probe was successful.

Docker

docker run -p 9219:9219 ribbybibby/ssl-exporter:latest <flags>

Release process

  • Create a release in GitHub with a semver tag and GitHub Actions will:
    • Add a changelog
    • Upload binaries
    • Build and push a Docker image

Usage

usage: ssl_exporter [<flags>]

Flags:
  -h, --help                     Show context-sensitive help (also try --help-long and
                                 --help-man).
      --web.listen-address=":9219"
                                 Address to listen on for web interface and telemetry.
      --web.metrics-path="/metrics"
                                 Path under which to expose metrics
      --web.probe-path="/probe"  Path under which to expose the probe endpoint
      --config.file=""           SSL exporter configuration file
      --log.level="info"         Only log messages with the given severity or above. Valid
                                 levels: [debug, info, warn, error, fatal]
      --log.format="logger:stderr"
                                 Set the log target and format. Example:
                                 "logger:syslog?appname=bob&local=7" or
                                 "logger:stdout?json=true"
      --version                  Show application version.

Metrics

Metric | Meaning | Labels | Probers
ssl_cert_not_after | The date after which a peer certificate expires. Expressed as a Unix Epoch Time. | serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https
ssl_cert_not_before | The date before which a peer certificate is not valid. Expressed as a Unix Epoch Time. | serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https
ssl_file_cert_not_after | The date after which a certificate found by the file prober expires. Expressed as a Unix Epoch Time. | file, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | file
ssl_file_cert_not_before | The date before which a certificate found by the file prober is not valid. Expressed as a Unix Epoch Time. | file, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | file
ssl_kubernetes_cert_not_after | The date after which a certificate found by the kubernetes prober expires. Expressed as a Unix Epoch Time. | namespace, secret, key, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubernetes
ssl_kubernetes_cert_not_before | The date before which a certificate found by the kubernetes prober is not valid. Expressed as a Unix Epoch Time. | namespace, secret, key, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubernetes
ssl_kubeconfig_cert_not_after | The date after which a certificate found by the kubeconfig prober expires. Expressed as a Unix Epoch Time. | kubeconfig, name, type, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubeconfig
ssl_kubeconfig_cert_not_before | The date before which a certificate found by the kubeconfig prober is not valid. Expressed as a Unix Epoch Time. | kubeconfig, name, type, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubeconfig
ssl_ocsp_response_next_update | The nextUpdate value in the OCSP response. Expressed as a Unix Epoch Time. | | tcp, https
ssl_ocsp_response_produced_at | The producedAt value in the OCSP response. Expressed as a Unix Epoch Time. | | tcp, https
ssl_ocsp_response_revoked_at | The revocationTime value in the OCSP response. Expressed as a Unix Epoch Time. | | tcp, https
ssl_ocsp_response_status | The status in the OCSP response. 0=Good 1=Revoked 2=Unknown | | tcp, https
ssl_ocsp_response_stapled | Does the connection state contain a stapled OCSP response? Boolean. | | tcp, https
ssl_ocsp_response_this_update | The thisUpdate value in the OCSP response. Expressed as a Unix Epoch Time. | | tcp, https
ssl_probe_success | Was the probe successful? Boolean. | | all
ssl_prober | The prober used by the exporter to connect to the target. Boolean. | prober | all
ssl_tls_version_info | The TLS version used. Always 1. | version | tcp, https
ssl_verified_cert_not_after | The date after which a certificate in the verified chain expires. Expressed as a Unix Epoch Time. | chain_no, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https
ssl_verified_cert_not_before | The date before which a certificate in the verified chain is not valid. Expressed as a Unix Epoch Time. | chain_no, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https

Configuration

TCP

Just like with the blackbox_exporter, you should pass the targets to a single instance of the exporter in a scrape config with a clever bit of relabelling. This allows you to leverage service discovery and keeps configuration centralised in your Prometheus config.

scrape_configs:
  - job_name: "ssl"
    metrics_path: /probe
    static_configs:
      - targets:
          - example.com:443
          - prometheus.io:443
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9219 # SSL exporter.

HTTPS

By default the exporter makes a TCP connection to the target. This is suitable for most cases, but if you want to take advantage of HTTP proxying you can use an HTTPS client by setting the https module parameter:

scrape_configs:
  - job_name: "ssl"
    metrics_path: /probe
    params:
      module: ["https"] # <-----
    static_configs:
      - targets:
          - example.com:443
          - prometheus.io:443
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9219

This will use proxy servers discovered via the HTTP_PROXY, HTTPS_PROXY and ALL_PROXY environment variables. Alternatively, you can set the https.proxy_url option in the module configuration, which takes precedence.
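
For example, a minimal sketch of an exporter configuration file (passed with --config.file) that sets https.proxy_url; the proxy address is an assumption:

modules:
  https:
    prober: https
    https:
      # Assumed proxy address; replace with your own.
      proxy_url: "http://proxy.internal:3128"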

File

The file prober exports ssl_file_cert_not_after and ssl_file_cert_not_before for PEM encoded certificates found in local files.

Files local to the exporter can be scraped by providing them as the target parameter:

curl "localhost:9219/probe?module=file&target=/etc/ssl/cert.pem"

The target parameter supports globbing (as provided by the doublestar package), which allows you to capture multiple files at once:

curl "localhost:9219/probe?module=file&target=/etc/ssl/**/*.pem"

One specific usage of this prober could be to run the exporter as a DaemonSet in Kubernetes and then scrape each instance to check the expiry of certificates on each node:

scrape_configs:
  - job_name: "ssl-kubernetes-file"
    metrics_path: /probe
    params:
      module: ["file"]
      target: ["/etc/kubernetes/**/*.crt"]
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - source_labels: [__address__]
        regex: ^(.*):(.*)$
        target_label: __address__
        replacement: ${1}:9219

HTTP File

The http_file prober exports ssl_cert_not_after and ssl_cert_not_before for PEM encoded certificates found at the specified URL.

curl "localhost:9219/probe?module=http_file&target=https://www.paypalobjects.com/marketing/web/logos/paypal_com.pem"

Here's a sample Prometheus configuration:

scrape_configs:
  - job_name: 'ssl-http-files'
    metrics_path: /probe
    params:
      module: ["http_file"]
    static_configs:
      - targets:
        - 'https://www.paypalobjects.com/marketing/web/logos/paypal_com.pem'
        - 'https://d3frv9g52qce38.cloudfront.net/amazondefault/amazon_web_services_inc_2024.pem'
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9219

For proxying to the target resource, this prober will use proxy servers discovered via the HTTP_PROXY, HTTPS_PROXY and ALL_PROXY environment variables. Alternatively, you can set the http_file.proxy_url option in the module configuration, which takes precedence.

Kubernetes

The kubernetes prober exports ssl_kubernetes_cert_not_after and ssl_kubernetes_cert_not_before for PEM encoded certificates found in secrets of type kubernetes.io/tls.

Provide the namespace and name of the secret in the form <namespace>/<name> as the target:

curl "localhost:9219/probe?module=kubernetes&target=kube-system/secret-name"

Both the namespace and name portions of the target support glob matching (as provided by the doublestar package):

curl "localhost:9219/probe?module=kubernetes&target=kube-system/*"

curl "localhost:9219/probe?module=kubernetes&target=*/*"

The exporter retrieves credentials and context configuration from the following sources in the following order:

  • The kubeconfig path in the module configuration
  • The $KUBECONFIG environment variable
  • The default configuration file ($HOME/.kube/config)
  • The in-cluster environment, if running in a pod

Here's a sample Prometheus configuration:

- job_name: "ssl-kubernetes"
  metrics_path: /probe
  params:
    module: ["kubernetes"]
  static_configs:
   - targets:
      - "test-namespace/nginx-cert"
  relabel_configs:
   - source_labels: [ __address__ ]
     target_label: __param_target
   - source_labels: [ __param_target ]
     target_label: instance
   - target_label: __address__
     replacement: 127.0.0.1:9219

Kubeconfig

The kubeconfig prober exports ssl_kubeconfig_cert_not_after and ssl_kubeconfig_cert_not_before for PEM encoded certificates found in the specified kubeconfig file.

Kubeconfigs local to the exporter can be scraped by providing them as the target parameter:

curl "localhost:9219/probe?module=kubeconfig&target=/etc/kubernetes/admin.conf"

One specific usage of this prober could be to run the exporter as a DaemonSet in Kubernetes and then scrape each instance to check the expiry of certificates on each node:

scrape_configs:
  - job_name: "ssl-kubernetes-kubeconfig"
    metrics_path: /probe
    params:
      module: ["kubeconfig"]
      target: ["/etc/kubernetes/admin.conf"]
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - source_labels: [__address__]
        regex: ^(.*):(.*)$
        target_label: __address__
        replacement: ${1}:9219

Configuration file

You can provide further module configuration by providing the path to a configuration file with --config.file. The file is written in yaml format, defined by the schema below.

# The default module to use. If omitted, then the module must be provided by the
# 'module' query parameter
default_module: <string>

# Module configuration
modules: [<module>]

<module>

# The type of probe (https, tcp, file, kubernetes, kubeconfig, http_file)
prober: <prober_string>

# The probe target. If set, then the 'target' query parameter is ignored.
# If omitted, then the 'target' query parameter is required.
target: <string>

# How long the probe will wait before giving up.
[ timeout: <duration> ]

# Configuration for TLS
[ tls_config: <tls_config> ]

# The specific probe configuration
[ https: <https_probe> ]
[ tcp: <tcp_probe> ]
[ kubernetes: <kubernetes_probe> ]
[ http_file: <http_file_probe> ]

<tls_config>

# Disable target certificate validation.
[ insecure_skip_verify: <boolean> | default = false ]

# Configure TLS renegotiation support.
# Valid options: never, once, freely
[ renegotiation: <string> | default = never ]

# The CA cert to use for the targets.
[ ca_file: <filename> ]

# The client cert file for the targets.
[ cert_file: <filename> ]

# The client key file for the targets.
[ key_file: <filename> ]

# Used to verify the hostname for the targets.
[ server_name: <string> ]

<https_probe>

# HTTP proxy server to use to connect to the targets.
[ proxy_url: <string> ]

<tcp_probe>

# Use the STARTTLS command before starting TLS for those protocols that support it (smtp, ftp, imap, pop3, postgres)
[ starttls: <string> ]

<kubernetes_probe>

# The path of a kubeconfig file to configure the probe
[ kubeconfig: <string> ]

<http_file_probe>

# HTTP proxy server to use to connect to the targets.
[ proxy_url: <string> ]
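
To tie the schema together, here's a sketch of a complete configuration file; the module names, timeout value and kubeconfig path are illustrative assumptions:

default_module: tcp

modules:
  tcp:
    prober: tcp
  https_insecure:
    prober: https
    tls_config:
      insecure_skip_verify: true
  smtp_starttls:
    prober: tcp
    timeout: 10s
    tcp:
      starttls: smtp
  kube:
    prober: kubernetes
    kubernetes:
      kubeconfig: /root/.kube/config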

Example Queries

Certificates that expire within 7 days:

ssl_cert_not_after - time() < 86400 * 7

Wildcard certificates that are expiring:

ssl_cert_not_after{cn=~"\\*.*"} - time() < 86400 * 7

Certificates that expire within 7 days in the verified chain that expires latest:

ssl_verified_cert_not_after{chain_no="0"} - time() < 86400 * 7

Number of certificates presented by the server:

count(ssl_cert_not_after) by (instance)

Identify failed probes:

ssl_probe_success == 0
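
These queries translate naturally into alerting rules. A sketch of a Prometheus rules file built from the queries above; the alert names, durations and severities are assumptions:

groups:
  - name: ssl-expiry
    rules:
      # Fires when any certificate served by a target expires within 7 days.
      - alert: SSLCertExpiringSoon
        expr: ssl_cert_not_after - time() < 86400 * 7
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Certificate for {{ $labels.instance }} expires in less than 7 days"
      # Fires when the exporter cannot probe a target at all.
      - alert: SSLProbeFailed
        expr: ssl_probe_success == 0
        for: 15m
        labels:
          severity: critical
        annotations:
          summary: "SSL probe against {{ $labels.instance }} failed"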

Peer Certificates vs Verified Chain Certificates

Metrics are exported for the NotAfter and NotBefore fields for peer certificates as well as for the verified chain that is constructed by the client.

The former only includes the certificates that are served explicitly by the target, while the latter can contain multiple chains of trust that are constructed from root certificates held by the client to the target's server certificate.

This has important implications when monitoring certificate expiry.

For instance, it may be the case that ssl_cert_not_after reports that the root certificate served by the target is expiring soon even though clients can form another, much longer lived, chain of trust using another valid root certificate held locally. In this case, you may want to use ssl_verified_cert_not_after to alert on expiry instead, as this will contain the chain that the client actually constructs:

ssl_verified_cert_not_after{chain_no="0"} - time() < 86400 * 7

Each chain is numbered by the exporter in reverse order of expiry, so that chain_no="0" is the chain that will expire the latest. Therefore the query above will only alert when the chain of trust between the exporter and the target is truly nearing expiry.

It's very important to note that a query of this kind only represents the chain of trust between the exporter and the target. Genuine clients may hold different root certs than the exporter and therefore have different verified chains of trust.

Grafana

You can find a simple dashboard here that tracks certificate expiration dates and target connection errors.


ssl_exporter's Issues

Allow passing `tls_config.server_name` as a query parameter

Hello

I’m using ssl_exporter to monitor certificates for a bunch of websites that are hosted behind multiple load balancers. My goal is to monitor that all the LBs use the same certificate and that it’s not expired.

At the moment, to be able to specify which SNI to use when connecting to the LB, I need to add a module to the configuration for every website:

config.yaml: |
  modules:
    https:
      prober: https
    site1.example.com:
      prober: https
      tls_config:
        server_name: site1.example.com
    www.site1.example.com:
      prober: https
      tls_config:
        server_name: www.site1.example.com
    site2.example.com:
      prober: https
      tls_config:
        server_name: site2.example.com
    www.site2.example.com:
      prober: https
      tls_config:
        server_name: www.site2.example.com

   # And so on and so forth

This way I can scrape with target set to the load balancer’s IP (e.g. curl -s 'http://127.0.0.1:9219/probe?module=site1.example.com&target=https://192.0.2.1' will give me the TLS metrics for site1.example.com on 192.0.2.1).

This works fine, but it’s quite cumbersome, as I need to do this for every website I want to monitor that way. It would be great to be able to pass server_name as a query parameter instead (e.g. curl -s 'http://127.0.0.1:9219/probe?module=https&target=https://192.0.2.1&server_name=site1.example.com').

I’m willing to write a PR for that feature, but wanted to know your thoughts on that.
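
For what it's worth, if such a query parameter existed (purely hypothetical here), the scrape config could populate it via relabelling from a per-target label, in the same style as the configs above:

scrape_configs:
  - job_name: "ssl-lb"
    metrics_path: /probe
    params:
      module: ["https"]
    static_configs:
      - targets:
          - https://192.0.2.1
        labels:
          servername: site1.example.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      # __param_server_name assumes the hypothetical server_name query parameter.
      - source_labels: [servername]
        target_label: __param_server_name
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9219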

Exporting OCSP stapling information

Hello,
I am in need of an exporter to give information about OCSP stapling for TLS handshakes. Would you be interested in adding such a feature in ssl_exporter? I need the this_update/produced_at/next_update times/ages to monitor the freshness of OCSP stapling in my TLS servers.

Grafana dashboard only show 100 instances

Hi,

I have over 1000 URLs that I need to monitor, but with the Grafana dashboard, if I choose all instances no results are returned; I'm only able to select 100 to get any results. Do you know if it's possible to see more than 100 on your dashboard?

Thanks

Build Windows binaries

Can this project also release Windows binaries? I confirm that it works on Windows.

I've created the PR #43, please take a look.

If you apply it, can you please make a new release? say v2.0.1?

Improve error logging by including the target address in the log

The error messages should include the target address.

Currently, the error messages are not very useful for troubleshooting problems:

dial tcp 1.2.3.4:443: i/o timeout
remote error: tls: bad certificate

All error messages should be prefixed with the target address. For example, when requesting http://127.0.0.1:9219/probe?target=example.com%3A443, the error messages should have a common prefix:

target example.com:443 failed: dial tcp 1.2.3.4:443: i/o timeout
target example.com:443 failed: remote error: tls: bad certificate

Add proxy parameter

In addition to an HTTP client which can use the HTTP_PROXY environment variable, we could also add a per-probe parameter that configures a proxy in the same way that openssl s_client has a -proxy host:port option.

Output of ssl_exporter shown in scientific format

Do you know why I would be getting epoch time converted into scientific notation?

ssl_cert_not_after{issuer_cn="DigiCert Global Root CA",serial_no="xxx"} 1.6782768e+09
ssl_cert_not_after{issuer_cn="DigiCert SHA2 Secure Server CA",serial_no="xxx"} 1.598616e+09

It doesn't compile with go 1.13

Get a fresh build env and try running make when using Go 1.13 (building on Mac):

  1. staticcheck must be upgraded to 2019.2.3 to deal with v1.13
  2. new version is tar.gz, so some logic in Makefile needs bending (like Promu)
  3. fails with make: *** [common-unused] Error 1

How to add self signed certificates

We have a few certificates which are self-signed and I see the error below when trying to monitor them:
x509: certificate signed by unknown authority

Is there a way I can add self-signed certificates to avoid these errors?

Thank you very much for your help and appreciate your support.
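
A sketch based on the tls_config options documented earlier in this README: either point ca_file at the signing CA (or the self-signed cert itself) or disable verification; the paths and module names are assumptions:

modules:
  https_internal:
    prober: https
    tls_config:
      # CA bundle (or the self-signed cert) to trust; path is an assumption.
      ca_file: /etc/ssl/internal-ca.pem
  https_insecure:
    prober: https
    tls_config:
      # Or skip verification entirely.
      insecure_skip_verify: true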

Using ssl_exporter with k8s

Hi,

Thanks for creating this project! @ribbybibby
I want to use the ssl_exporter on K8S and I've the following questions:

  1. I use Prometheus. Do I need to install anything else in the cluster to use the ssl_exporter? I saw that you have provided a Docker image, but what is the best way to use it on K8S? Should I create a k8s Service?

  2. How should I define the target? I want to check the current cluster that Prometheus is deployed in. What should I put inside the target config?

Thanks!
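
Not an authoritative answer, but one common pattern, sketched from the scrape configs earlier in this README: run the exporter as a Deployment with a Service and relabel __address__ to that Service (the Service name and namespace below are assumptions):

scrape_configs:
  - job_name: "ssl"
    metrics_path: /probe
    params:
      module: ["https"]
    static_configs:
      - targets:
          - kubernetes.default.svc:443
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        # Assumed exporter Service address inside the cluster.
        replacement: ssl-exporter.monitoring.svc:9219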

Improve tests

The tests could be improved.

  • Create local listeners and test against them rather than URLs out on the internet
  • Separate individual checks into separate methods
  • More test cases

CI/CD

I should add some form of CI to the project. Also a release process of some sort.

Add support for sql server certificate chain validation

SQL Server does not use a raw TLS connection, instead it uses something similar to STARTTLS/Opportunistic_TLS, where you first need to do a clear text handshake to tell it to switch to TLS.

It would be pretty nice to have support for this in ssl_exporter. I already have rgl/dump-sql-server-certificate-chain that dumps the certificates; with some modification I believe it could be integrated here (e.g. by handling tds://-schemed URLs).

What do you think?

Test certs expired in March

Builds are failing because the hard-coded client and server test certs have expired. The test certs need to be regenerated (or possibly we should find a way to create them at test runtime).

Steps to reproduce:

  1. Checkout master
  2. Run make or docker build

Actual Behavior:

GO111MODULE=on go test -race -mod=vendor ./...
2020/05/18 16:15:03 http: TLS handshake error from 127.0.0.1:61858: remote error: tls: bad certificate
time="2020-05-18T16:15:03-04:00" level=error msg="Get \"https://127.0.0.1:61857\": x509: certificate has expired or is not yet valid: current time 2020-05-18T16:15:03-04:00 is after 2020-03-28T07:52:27Z" source="ssl_exporter.go:108"

Expected Behavior:

Certs that are expected to be valid are not expired, and tests pass successfully.

Support cases where the hostname is different to the SNI

Hi @ribbybibby ,

We noticed this repo, which is very nice and could maybe help us...

We are using openssl, and to query our SSL certs we use:

echo | openssl s_client -servername NAME -connect HOST:PORT 2>/dev/null | openssl x509 -noout -dates

and we got the following response:

notBefore=Oct 30 00:00:00 2019 GMT
notAfter=Oct 30 12:00:00 2020 GMT

I saw that this repo uses an HTTP request to query the SSL certificate data. Is there a way to use ssl_exporter to query this, or maybe some trick to make it work with ssl_exporter?

If there is an option, it would be great if you could provide an example...

Thanks!

Release assets don't include version info

A minor issue, but the pre-built binaries don't include release info:

ssl_exporter_1.0.0_darwin_amd64> ./ssl_exporter --version
ssl_exporter, version  (branch: , revision: )
  build user:
  build date:
  go version:       go1.13.8

Building locally does the right thing:

github.com/ribbybibby/ssl_exporter(master|✔)> ./ssl_exporter --version
ssl_exporter, version 1.0.0 (branch: master, revision: b7cdf6249339e74aa88564c1dbc7d18e4bdbcb5e)
  build user:       [email protected]
  build date:       20200513-13:35:56
  go version:       go1.14.2

So I'm not sure what's being missed in the release process.

Thanks for this useful exporter!

No cert metrics when target has expired or invalid certificate

Hi,

When probing a target with an invalid or expired certificate, the metrics (ssl_cert_not_after, ssl_cert_not_before) are not being populated.

$ curl -s http://localhost:9219/probe?target=example.com:443
# HELP ssl_prober The prober used by the exporter to connect to the target
# TYPE ssl_prober gauge
ssl_prober{prober="tcp"} 1
# HELP ssl_tls_connect_success If the TLS connection was a success
# TYPE ssl_tls_connect_success gauge
ssl_tls_connect_success 0
ssl             | time="2020-07-03T01:46:28Z" level=error msg="x509: certificate has expired or is not yet valid: current time 2020-07-03T01:46:28Z is after 2020-06-22T17:01:09Z" source="ssl_exporter.go:78"

Without ssl_cert_not_after or ssl_cert_not_before, I cannot see the expiration date and other attributes of the example.com certificate.

Is there a way to force sampling the metrics even though there is an error?

Thanks,

Probe SSL not only for :443

First of all, thank you for the great job, this exporter is really very useful.
But I want to know: is it possible to probe targets with a non-default HTTPS port? I use a different port and it works pretty well with testssl.sh, for example.
But when I try to probe it with the exporter, I'm faced with the following error in the logs:
" level=error msg="Get https://my_host:my_port: EOF" source="ssl_exporter.go:98"
Is this an unusual use case or am I doing something wrong?
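
For reference, the default tcp prober simply connects to whatever host:port is given as the target, so a non-443 port can go straight into the scrape config; a sketch with a placeholder host and port:

scrape_configs:
  - job_name: "ssl-custom-port"
    metrics_path: /probe
    static_configs:
      - targets:
          - my_host:8443   # any host:port, not only :443
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9219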

client cert verify or local target

I want to check my Docker and Consul certs with ssl_exporter, but I've got client verification enabled on the server side. Is it possible to configure a client cert or set a target on the local filesystem? Currently I get this error in the log:

 remote error: tls: bad certificate" source="ssl_exporter.go:92"

I can reproduce this with curl:

$ curl -k https://my.node.qa.project:2376
curl: (35) error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate

# correct connection
$ curl -I --cacert ca.pem --cert server.pem --key server.key https://my.node.qa.project:2376
HTTP/1.1 404 Not Found
Content-Type: application/json
Date: Fri, 08 Mar 2019 13:28:02 GMT
Content-Length: 29
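
The tls_config options documented earlier support exactly this (client cert, key and CA); a sketch of a module for the Docker endpoint, with assumed file paths:

modules:
  docker_mtls:
    prober: tcp
    tls_config:
      ca_file: /etc/docker/certs/ca.pem
      cert_file: /etc/docker/certs/client.pem
      key_file: /etc/docker/certs/client.key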

Review Makefile

When I initially created this exporter I stuck pretty closely to the project structure and conventions of other, official exporters. This included the Makefile and Dockerfile that form part of the Prometheus build process.

Since then the Dockerfile in this project has diverged and I'm not sure it ever really made sense to follow the other exporters when this one isn't managed by the same processes.

I should review the Makefile and trim out all the bits I don't need and keep only what makes sense.

Use HTTP Proxy

Is it possible to enable the use of an HTTP proxy with the probe? I tried setting HTTP_PROXY and HTTPS_PROXY with no luck and I see no other way to specify one.

Would this be easy to add per target or via an argument to ssl_exporter?
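
The https.proxy_url option described in the configuration section above covers the per-module case; a minimal sketch with an assumed proxy address:

modules:
  https_proxied:
    prober: https
    https:
      proxy_url: "http://proxy.example.internal:3128"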

RE: No cert metrics when target has expired or invalid certificate #37

Hi, I tried to add:

modules:
  tcp_insecure:
    prober: tcp
    tls_config:
      insecure_skip_verify: true

to my ssl_exporter.yml, but it doesn't work. My ssl_exporter.yml file:

modules:
  https_insecure:
    prober: https
    tls_config:
      insecure_skip_verify: true
  tcp:
    prober: tcp
  tcp_insecure:
    prober: tcp
    tls_config:
      insecure_skip_verify: true

Do I need to add something more to the YAML for it to work? ssl_cert_not_after is working fine, like the other metrics, but I don't see certs that have expired in ssl_cert_not_after.

Thanks for your help!
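
One thing worth checking, judging by the scrape configs earlier in this README: the insecure module also has to be selected with the module parameter on the Prometheus side; a sketch:

scrape_configs:
  - job_name: "ssl-insecure"
    metrics_path: /probe
    params:
      module: ["tcp_insecure"]   # must match a module name in ssl_exporter.yml
    static_configs:
      - targets:
          - example.com:443
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9219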

Update docker hub image

The latest docker hub image is still 0.5.0 while the latest release is 0.6.0

Would it be possible to push the update to Docker Hub?

self signed certificates with https

Hi,

I am using https module to monitor websites certs.

- job_name: 'ssl-checker'
  metrics_path: /probe
  params:
    module: ["https"]
  static_configs:
    - targets:
        - 'xxx.com:443'
        - 'yyy.com:443'

Some targets are down because the cert is self-signed.

How can I handle this?

Regards
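
A sketch of one way to handle this with the tls_config options documented above: define a separate module that skips verification (or trusts your own CA via ca_file) and select it with params.module for the self-signed targets:

modules:
  https_insecure:
    prober: https
    tls_config:
      insecure_skip_verify: true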

The `log.level` flag has no effect since version 2.2.1 (#71)

Since version 2.2.1 was released (with PR #71, of which I am the author), the log.level flag has no effect and all logs are printed regardless of their level.

I can’t figure out where the issue is and I can’t reproduce it with this simple test code:

package main

import (
	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
	"github.com/prometheus/common/promlog"
	promlogflag "github.com/prometheus/common/promlog/flag"
	"gopkg.in/alecthomas/kingpin.v2"
)

func main() {
	promlogConfig := promlog.Config{}

	promlogflag.AddFlags(kingpin.CommandLine, &promlogConfig)
	kingpin.HelpFlag.Short('h')
	kingpin.Parse()

	logger := promlog.New(&promlogConfig)

	foo(logger)
}

func foo(logger log.Logger) {
	level.Error(logger).Log("msg", "error")
	level.Info(logger).Log("msg", "info")
	level.Debug(logger).Log("msg", "debug")
}

promlogConfig.Level is correctly set to the log level passed on the command line (or info by default).

Adding level.NewFilter(logger, level.AllowInfo()) after line 129 in ssl_exporter.go seems to work and correctly filters debug messages, but setting the promlog.Config manually also has no effect:

	allowedLevel := promlog.AllowedLevel{}
	allowedLevel.Set("info")

	promlogConfig = promlog.Config{
		Level: &allowedLevel,
	}

	logger := promlog.New(&promlogConfig)

Any idea why this happens in ssl_exporter?

allow module and target be configured via flags

Currently the probe endpoint needs to be passed module and target via query params.
However, for some systems where the scraper is hard to change from /metrics, it becomes impossible to use this.

As a result, add two flags, web.default-module and web.target, which can be used by the probe handler as defaults (in case the query param is empty).
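
Note that the configuration file schema above already gets part of the way there: default_module removes the need for the module query parameter, and a per-module target removes the need for the target parameter (though the scrape path still has to be /probe rather than /metrics). A sketch:

default_module: local_certs

modules:
  local_certs:
    prober: file
    # With target set here, the 'target' query parameter is ignored.
    target: /etc/ssl/**/*.pem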

Integration tests

Our unit tests spin up mock servers for verifying functionality. We should also be testing against real targets spun up by something like docker-compose.

Export certificate metadata as labels rather than separate metrics

tl;dr I should remove all of the 'informational' certificate metrics and attach that data to ssl_cert_not_after and ssl_cert_not_before as labels. Detailed explanation follows.

This will be a breaking change, so will form part of a 1.0.0 release.


When I first created this exporter over 2 years ago I was fairly new to Prometheus and I didn't really understand, or hadn't thought much about, what made a good metric. I had seen other exporters which used separate metrics for metadata and blindly followed that approach.

However, I don't think the reasons those other exporters put metadata fields into their own metrics apply to certificates.

Typically you would put a piece of metadata, like a consul tag, into its own metric because a consul tag can have any value and any number of values and those values are likely to change over time. If you were putting all the tags for a consul service into a label on each consul service metric then the number of series you were storing for any given metric would double every time a tag was added or removed. With a separate metric, you would get one extra series per new tag.

However, certificates are different from consul services because the information attached to a certificate (like common name) never changes. No matter how many labels you attach to ssl_cert_not_after, or what those labels represent, you will get the same number of series. Therefore, there's no benefit to putting these values in different metrics.

In fact, as it stands, the exporter exports way more metrics than it would if I had chosen to use labels. At the moment it exports 7 metrics for each unique instance+certificate combination. So, if you have 10 certificates, that means you have 70 series overall. But, if all of the metadata metrics were labels that would be 2 metrics, and therefore only 20 series overall.

Furthermore, having a separate metric for common name, and sans, and ou's just makes querying the metrics harder than it needs to be. Compare this query:

((ssl_cert_not_after - time() < 86400 * 30) * on (instance,issuer_cn,serial_no) group_left (dnsnames) ssl_cert_subject_alternative_dnsnames{dnsnames=~".*,.*example.org,.*"}) * on (instance,issuer_cn,serial_no) group_left (subject_cn) ssl_cert_subject_common_name{subject_cn=~"^.*example.org"}

To this one:

ssl_cert_not_after{dnsnames=~".*,.*example.org,.*",subject_cn=~"^.*example.org"}  - time() < 86400 * 30

The latter is clearly more understandable and a lot less work to put together.

Context deadline exceeded

I monitor about 350 TLS certificates and not all are available for a remote check, which is fine because I can still have Prometheus record that the probe failed. However, there are about 15 or so that don't even fail to connect cleanly, so no Prometheus metrics are available at all. In the logs they all look like this. Is there a workaround?

time="2020-09-30T22:21:54Z" level=error msg="error=Get \"https://somesite.com\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) target=somesite.com prober=https timeout=30s" source="ssl_exporter.go:91"

ARM64 support?

With an increasing number of available arm64 servers (usually cheaper than amd64) these days, it would be nice to have arm64 versions of the releases and Docker images, to avoid needing to build my own for my arm64-based servers.

Add support for checking TLSA records (DANE)

I couldn't find any Prometheus exporter which allows this functionality, so this would be a unique feature.
TLSA records work on DNSSEC-signed zones and allow authenticating a host even with a certificate issued by a CA that is not publicly trusted, or pinning an existing trusted certificate at the DNS layer for extra security.
At the current state, enabling TLSA for SSL connections gives the most benefit for mail servers. For example, Postfix starts to use mandatory encryption when it detects a TLSA record for an MX record and does not allow downgrading to plain text.
Monitoring TLSA is important, as an incorrect value in a TLSA record means an untrusted SSL connection for clients that support DANE validation.
Here is a short description:
https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities
And descriptions of some other open-source tools, which may help with ideas:
https://www.internetsociety.org/blog/2017/12/monitoring-your-dane-deployment/
https://github.com/siccegge/dane-monitoring-plugins

context problem

I've tried to configure ssl-exporter, but Prometheus gives me a context deadline error.
Please look at my config and tell me what I missed.
ssl-exporter:

modules:
  https:
    prober: https
  https_insecure:
    prober: https
    tls_config:
      insecure_skip_verify: true

prometheus:

  - job_name: 'ssl'
    metrics_path: /probe
    params:
      module: ["https"]
    static_configs:
      - targets:
        - "www.google.com:443"
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 88.198.189.243:9219

Prometheus not scraping a file job

I'm using a file probe.

After adding the job and reloading the Prometheus config, Prometheus didn't scrape the job. No errors are thrown by Prometheus.

Is my config correct?

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['prometheus:9090']

  - job_name: 'certificates exporter'
    metrics_path: /probe
    params:
      module: [ "file" ]
      target: [ "/root/certificates/*.pem" ]
    relabel_configs:
      - target_label: __address__
        replacement: 127.0.0.1:9219  # ssl-exporter

up{} only shows the Prometheus job, not the certificates exporter job.

Also, there are no errors if I curl the ssl-exporter:

$ curl "localhost:9219/probe?module=file&target=/root/certificates/*.pem"

# HELP ssl_file_cert_not_after NotAfter expressed as a Unix Epoch Time for a certificate found in a file
# TYPE ssl_file_cert_not_after gauge
ssl_file_cert_not_after{} 2.06835186e+09
ssl_file_cert_not_after{} 1.92588785e+09
# HELP ssl_file_cert_not_before NotBefore expressed as a Unix Epoch Time for a certificate found in a file
# TYPE ssl_file_cert_not_before gauge
ssl_file_cert_not_before{} 1.59531186e+09
ssl_file_cert_not_before{} 1.61052785e+09
# HELP ssl_probe_success If the probe was a success
# TYPE ssl_probe_success gauge
ssl_probe_success 1
# HELP ssl_prober The prober used by the exporter to connect to the target
# TYPE ssl_prober gauge
ssl_prober{prober="file"} 1
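
One likely cause, judging from the config above: the job has no static_configs, so Prometheus has no targets to scrape and the relabel rule never runs. A sketch that simply scrapes the exporter directly:

  - job_name: 'certificates exporter'
    metrics_path: /probe
    params:
      module: [ "file" ]
      target: [ "/root/certificates/*.pem" ]
    static_configs:
      - targets: ['127.0.0.1:9219']  # the ssl-exporter itself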

Kubernetes deployment

I should provide a kustomize base/helm chart (or both?) as an upstream for others to use and contribute to.

Check OCSP status for every certificates in the chain

Hi

GlobalSign is currently revoking some of their intermediate CA certificates and I found out that ssl_exporter still considers a certificate issued by one of these intermediate CAs to be valid.

To be fair, OpenSSL and GnuTLS both consider such a cert valid too:

$ certtool --verify --infile example.com/fullchain.pem
Loaded system trust (129 CAs available)
        Subject: CN=GlobalSign RSA DV SSL CA 2018,O=GlobalSign nv-sa,C=BE
        Issuer: CN=GlobalSign,O=GlobalSign,OU=GlobalSign Root CA - R3
        Checked against: CN=GlobalSign,O=GlobalSign,OU=GlobalSign Root CA - R3
        Signature algorithm: RSA-SHA256
        Output: Verified. The certificate is trusted.

        Subject: CN=example.com
        Issuer: CN=GlobalSign RSA DV SSL CA 2018,O=GlobalSign nv-sa,C=BE
        Checked against: CN=GlobalSign RSA DV SSL CA 2018,O=GlobalSign nv-sa,C=BE
        Signature algorithm: RSA-SHA256
        Output: Verified. The certificate is trusted.

Chain verification output: Verified. The certificate is trusted.

$ openssl verify -CAfile <(cat example.com/intermediate.pem example.com/root.pem) example.com/fullchain.pem
example.com/fullchain.pem: OK

However, some clients fail to validate this certificate, and an OCSP request for the intermediate CA certificate shows that it is actually revoked:

$ openssl x509 -in example.com/intermediate.pem -noout -ocsp_uri
http://ocsp2.globalsign.com/rootr3
$ openssl ocsp -issuer example.com/root.pem -cert example.com/intermediate.pem -url http://ocsp2.globalsign.com/rootr3
Response verify OK
example.com/intermediate.pem: revoked
        This Update: Jan 25 20:38:32 2021 GMT
        Next Update: Jan 29 20:38:32 2021 GMT
        Reason: cessationOfOperation
        Revocation Time: Jan 20 00:00:00 2021 GMT

Do you think it would be possible to implement OCSP verification on every certificate in the chain returned by the TLS server?

insecure_skip_verify: true - doesn't work

Hello, I have set insecure_skip_verify: true, but I still don't see all the metrics, and in the log I get the error below:

ERRO[0002] error=Get "https://<IP_ADDRESS>:443": x509: cannot validate certificate for <IP_ADDRESS> because it doesn't contain any IP SANs target=<IP_ADDRESS>:443 prober=https timeout=10s source="ssl_exporter.go:93"

Not able configure FTP endpoint

I'm trying to set up monitoring for SSL certificates over FTP and did the configuration below:

  1. Setting config:

     modules:
       tcp_ftp_starttls:
         prober: tcp
         tcp:
           starttls: ftp

  2. Setting scrape config in prometheus:

     - job_name: "xyz"
       scrape_interval: 60s
       metrics_path: /probe
       static_configs:
         - targets:
             - <IP_Address>:22
       relabel_configs:
         - source_labels: [__address__]
           target_label: __param_target
         - source_labels: [__param_target]
           target_label: instance
         - target_label: __address__
           replacement: 127.0.0.1:9319

But this does not seem to be working. Am I missing something?
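
Two things stand out when compared with the scrape configs earlier in this README: the job never selects the tcp_ftp_starttls module via params, and port 22 is the SSH port rather than the FTP control port. A sketch of a corrected job (the port is an assumption):

scrape_configs:
  - job_name: "xyz"
    scrape_interval: 60s
    metrics_path: /probe
    params:
      module: ["tcp_ftp_starttls"]
    static_configs:
      - targets:
          - <IP_Address>:21   # FTP control port (assumption)
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9319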
