ricoberger / script_exporter

Prometheus exporter to execute scripts and collect metrics from the output or the exit status.

License: MIT License

Makefile 7.79% Go 84.43% Dockerfile 0.69% Shell 2.62% Smarty 4.47%
go prometheus prometheus-exporter script scripts docker kubernetes

script_exporter's Introduction

script_exporter

The script_exporter is a Prometheus exporter that executes scripts and collects metrics from their output or exit status. The scripts to be executed are defined in a configuration file, which can specify several scripts. The script to run is selected by a parameter in the Prometheus scrape configuration. The output of the script is captured and served to Prometheus. Even if a script produces no output, its exit status and execution duration are exposed.

Building and running

To run the script_exporter you can use one of the binaries from the release page or the Docker image. You can also build the script_exporter yourself by running the following commands:

git clone https://github.com/ricoberger/script_exporter.git
cd script_exporter
make build

An example configuration can be found in the examples folder. To use this configuration run the following command:

./bin/script_exporter -config.file ./examples/config.yaml

To run the examples via Docker the following commands can be used:

docker build -f ./Dockerfile -t ricoberger/script_exporter:dev .
docker run --rm -it --name script_exporter -p 9469:9469 -v $(pwd)/examples:/examples ricoberger/script_exporter:dev -config.file /examples/config.yaml

Then visit http://localhost:9469 in the browser of your choice. There you have access to the following examples:

  • test: Invalid values returned by the script are omitted.
  • ping: Pings the address given in the target parameter and reports whether it was successful.
  • helloworld: Returns the specified argument in args as label.
  • showtimeout: Reports whether or not the script is being run with a timeout from Prometheus, and what it is.
  • docker: Example using docker exec to return the number of files in a Docker container.
  • args: Pass arguments to the script via the configuration file.
  • metrics: Shows internal metrics from the script exporter.
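All of the examples follow the same pattern: a script prints metrics in the Prometheus text exposition format on stdout. A minimal sketch of such a script (the metric name is made up for this illustration and is not one of the shipped examples):

```shell
#!/bin/sh
# Minimal script for script_exporter: print metrics in the Prometheus
# text exposition format on stdout. Metric name is hypothetical.
count=$(ls /tmp | wc -l | tr -d ' ')
echo "# HELP tmp_files_total Number of entries in /tmp."
echo "# TYPE tmp_files_total gauge"
echo "tmp_files_total $count"
```

script_exporter captures this output and serves it on the probe endpoint, alongside the script_success, script_duration_seconds and script_exit_code metrics it adds itself.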

You can also deploy the script_exporter to Kubernetes via Helm:

helm repo add ricoberger https://ricoberger.github.io/helm-charts
helm install script-exporter ricoberger/script-exporter

Usage and configuration

The script_exporter is configured via a configuration file and command-line flags.

Usage of ./bin/script_exporter:
  -config.file file
        Configuration file in YAML format. (default "config.yaml")
  -create-token
        Create bearer token for authentication.
  -timeout-offset seconds
        Offset to subtract from Prometheus-supplied timeout in seconds. (default 0.5)
  -version
        Show version information.
  -web.listen-address string
        Address to listen on for web interface and telemetry. (default ":9469")

The configuration file is written in YAML format, following the schema described below.

tls:
  enabled: <boolean>
  crt: <string>
  key: <string>

basicAuth:
  enabled: <boolean>
  username: <string>
  password: <string>

bearerAuth:
  enabled: <boolean>
  signingKey: <string>

discovery:
  host: <string>
  port: <string>
  scheme: <string>

scripts:
  - name: <string>
    command: <string>
    args:
      - <string>
    # optional
    env:
      <key>: <value>
    # by default the output will also be parsed when the script fails,
    # this can be changed by setting this option to true
    ignoreOutputOnFail: <boolean>
    timeout:
      # in seconds, 0 or negative means none
      max_timeout: <float>
      enforced: <boolean>
    cacheDuration: <duration>
    discovery:
      params:
        <string>: <string>
      prefix: <string>
      scrape_interval: <duration>
      scrape_timeout: <duration>

scripts_configs:
  - <string>
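A minimal concrete configuration following this schema might look like this (the script name and path are hypothetical):

```yaml
scripts:
  - name: disk_usage
    command: /usr/local/bin/disk_usage.sh
    args:
      - "--verbose"
    timeout:
      max_timeout: 30
      enforced: true
    cacheDuration: 60s
```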

The name of the script must be a valid Prometheus label value. The command string is the program that is executed with all arguments specified in args. To add dynamic arguments you can pass the params query parameter with a list of query parameters whose values should be added as arguments. The program is executed directly, without a shell being invoked, so it is recommended to specify it by path instead of relying on $PATH.

The optional env key allows running the script with custom environment variables.

Example: set proxy env vars for test_env script

scripts:
  - name: test_env
    command: /tmp/my_script.sh
    env:
      http_proxy: http://proxy.example.com:3128
      https_proxy: http://proxy.example.com:3128

Note: because the program is executed directly, shell constructs can't be used. For example:

# Error: output stream redirection (>) is a shell construct
/bin/foo >/dev/null
# Success: use appropriate command line arguments if supported by the command
/bin/foo --output /dev/null

# Error: logical operator (||) is a shell construct
/bin/foo || true
# Success: use a shell interpreter with arguments
/bin/bash -c '/bin/foo || true'
# Success: or create an executable script file
/usr/local/bin/bar.sh
# Success: or run it via interpreter
/bin/bash /usr/local/bin/bar.sh

Prometheus normally provides an indication of its scrape timeout to the script exporter (through a special HTTP header). This information is made available to scripts through the environment variables $SCRIPT_TIMEOUT and $SCRIPT_DEADLINE. The first is the timeout in seconds (including a fractional part) and the second is the Unix timestamp when the deadline expires (also including a fractional part). A simple script can honor this timeout by starting with timeout "$SCRIPT_TIMEOUT" cmd .... A more sophisticated program might use the deadline to compute internal timeouts for various operations. If enforced is true, script_exporter attempts to enforce the timeout by killing the script's main process after the timeout expires; the default is to not enforce timeouts. If max_timeout is set for a script, it limits the maximum timeout value that requests can specify; a request that specifies a larger timeout has the timeout adjusted down to the max_timeout value.
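A sketch of a script that honors the supplied timeout (it assumes coreutils timeout is available; the metric name and the wrapped command are made up for this example):

```shell
#!/bin/sh
# SCRIPT_TIMEOUT is exported by script_exporter when Prometheus supplies
# a scrape timeout; fall back to 10 seconds when the script is run by hand.
TIMEOUT="${SCRIPT_TIMEOUT:-10}"
# Wrap the real work (here just `true` as a stand-in) in `timeout`.
if timeout "$TIMEOUT" true; then
  check_ok=1
else
  check_ok=0
fi
echo "# HELP my_check_ok Whether the check finished within the timeout."
echo "# TYPE my_check_ok gauge"
echo "my_check_ok $check_ok"
```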

For testing purposes, the timeout can be specified directly as a URL parameter (timeout). If present, the URL parameter takes priority over the Prometheus HTTP header.

The cacheDuration option can be used to cache the results of a script execution for the provided time. The duration must be parsable by Go's time.ParseDuration function (e.g. 30s, 5m, 1h). If no cache duration is provided, or the provided value cannot be parsed, the output of a script is not cached.

You can fine-tune the script discovery options via the optional per-script discovery section. All these options are passed through the Prometheus configuration, where you can change them via the relabeling mechanism. params defines dynamic script parameters (the keys params, prefix, script and timeout are reserved) whose values are used when invoking the script (similar to args); prefix defines a prefix for all of the script's metrics; scrape_interval defines how often the script scrape should run; and scrape_timeout defines the Prometheus scrape timeout (similar to timeout).

The global discovery section configures the main discovery parameters. If it is not defined, the exporter uses the Host header of the request to decide how to present a target to Prometheus.

Prometheus configuration

The script_exporter needs to be passed the script name as a parameter (script). You can also pass a custom prefix (prefix), which is prepended to metric names, and the names of additional parameters that should be passed to the script (params, followed by the additional URL parameters). If the output parameter is set to ignore, the script_exporter only returns script_success{}, script_duration_seconds{} and script_exit_code{}.

The params parameter is a comma-separated list of additional URL query parameters that will be used to construct the additional list of arguments, in order. The value of each URL query parameter is not parsed or split; it is passed directly to the script as a single argument.
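As a sketch of the resulting argument list (script name, parameters and values are hypothetical): a scrape of /probe?script=backup&params=target,mode&target=db1&mode=full, for a script configured with args ["--verbose"], invokes the program with the configured args first, followed by the values of the URL parameters listed in params, in order:

```shell
# Simulated argv as the hypothetical script would receive it:
# configured args first, then the `params` values in order.
set -- --verbose db1 full
printf 'arg: %s\n' "$@"
```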

Example config:

scrape_configs:
  - job_name: 'script_test'
    metrics_path: /probe
    params:
      script: [test]
      prefix: [script]
    static_configs:
      - targets:
        - 127.0.0.1
    relabel_configs:
      - target_label: script
        replacement: test
  - job_name: 'script_ping'
    scrape_interval: 1m
    scrape_timeout: 30s
    metrics_path: /probe
    params:
      script: [ping]
      prefix: [script_ping]
      params: [target]
      output: [ignore]
    static_configs:
      - targets:
        - example.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: 127.0.0.1:9469
      - source_labels: [__param_target]
        target_label: target
      - source_labels: [__param_target]
        target_label: instance

  - job_name: 'script_exporter'
    metrics_path: /metrics
    static_configs:
      - targets:
        - 127.0.0.1:9469

Optionally, HTTP service discovery can be configured like this:

- job_name: "exported-scripts"
  http_sd_configs:
  - url: http://prometheus-script-exporter:9469/discovery

This makes Prometheus query the /discovery endpoint and collect the targets. The targets are all the scripts configured in the exporter.

Breaking changes

Changes from version 1.3.0:

  • The command line flag -web.telemetry-path has been removed and its value is now always /probe, which is a change from the previous default of /metrics. The path /metrics now responds with Prometheus metrics for script_exporter itself.
  • The command line flag -config.shell has been removed. Programs are now always run directly.

script_exporter's People

Contributors

baprx, billimek, dagavi, dependabot[bot], diversario, earthlingdavey, fgouteroux, fsadykov, gfdsa, llamafilm, lufik, masshash, nick-triller, ricoberger, siebenmann, sincasios, thecosmicfrog, zebradil


script_exporter's Issues

how to set up a script with params

Hi, how should I set the configs if I want to have script args set in prometheus.yml?

I've tried this, but the param doesn't seem to reach my script (the script itself is executed):

params:
      script: [my_script]
      params: [my_param]

Thanks!

Running script exporter on Kubernetes

Hi, I have a problem understanding how your exporter should work on Kubernetes. It may be due to my limited knowledge of the Kubernetes environment, but I hope you can explain it to me. Using it locally isn't a problem, but on Kubernetes it is.

Suppose I have a cluster named my-cluster with a few sample pods that serve a hello world page. My job is to get data about specific files from the containers in which the programs are running. For example, at the path ~/var/app/log there are two files, log_1.log and log_2.log (in every container). I would like to calculate how many days are between the creation/update of log_1.log and log_2.log, export that to Prometheus, and create a diagram in Grafana showing this information for every container.

Should I install the script exporter in every container and expose the information about the file differences, or can I run the script exporter as a separate pod in my cluster and access the filesystem of every container to get the required data? If the second way is possible, could you explain how it should look?

Thank you very much in advance for your time.

Paweł

about executing docker scripts

Hi, I'm using your script_exporter, it's great! However, when I executed a shell script involving Docker via script_exporter it returned an error result, while it returns the correct result when I execute the script via sh script.sh. I can't see a log and I don't know why.

#!/bin/sh
source /etc/profile
result="$(sudo docker exec -it mysql_slave1 mysql -utest -p'test'  -e "show slave status\G" |grep "Slave_IO_Running: Yes"|wc -l)"
echo "# HELP mysql_slave1_io_running"
echo "# TYPE mysql_slave1_io_running gauge"
echo "mysql_slave1_io_running{label=\"mysql_slave1_io_running\"} $result"

tls:
  enabled: false
  crt: server.crt
  key: server.key

basicAuth:
  enabled: false
  username: admin
  password: admin

bearerAuth:
  enabled: false
  signingKey: my_secret_key

scripts:
  - name: mysql_slave1_sql_running
    script: ./test.sh
    timeout:
      max_timeout: 60
  - name: sleep
    script: sleep 120
    timeout:
      enforced: true

Can you help me?

impossible to launch a windows service from script_exporter-windows-amd64.exe

I created a Windows service with sc.exe, and when I try to start it, error 1053 appears: "The service did not respond to the start or control request in a timely fashion."

It seems that script_exporter.go is missing the Service Control Manager integration: the SCM does not receive a "service started" notice from the service within the timeout period.

Is it possible to fix this?

Cannot obtain the script output/result from the script_exporter probe

Looking at the probe page for my custom script execution, and following the readme examples, there is no way to obtain the script's stdout result.
For an example script that only and always returns the value "50", the probe returns only

# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="pint01"} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="pint01"} 0.005414
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="pint01"} 0

without any trace of the output.
The readme only explains how to ignore the output in prometheus.yaml, not how to force it.
What am I missing? :(

Provide an example of docker-compose.yml

I am trying to configure the docker-compose.yml file, but it doesn't seem to work. On the other hand, with a docker run command I have no problem.

What works :

docker run -d -p 9469:9469/tcp -v /data/script_exporter/examples:/opt/examples  -config.file /opt/examples/config.yaml -web.listen-address ":9469" ricoberger/script_exporter:v2.4.0

Script Exporter

Metrics

Probe

  • version: v2.4.0
  • branch: HEAD
  • revision: 5eb48ef
  • go version: go1.17.1
  • build user: root
  • build date: 20210915-14:23:11

What doesn't work :

version: '3'
services:
  script_exporter:
    command:
      - '-config.file=/opt/examples/config.yaml'
      - '-web.listen-address=":9469"'
    container_name: 'script_exporter'
    image: 'ricoberger/script_exporter:v2.4.0'
    ports:
      - '9469:9469'
    volumes:
      - '/data/script_exporter/examples:/opt/examples'

Below is the log when I run the docker-compose :

Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Recreating script_exporter ... done
Attaching to script_exporter
script_exporter    | Starting server (version=v2.4.0, branch=HEAD, revision=5eb48ef1b53c11f98a4a6609667389e5c7140a42)
script_exporter    | Build context (go=go1.17.1, user=root, date=20210915-14:23:11)
script_exporter    | script_exporter listening on ":9469"
script_exporter    | 2021/09/29 20:17:17 listen tcp: address tcp/9469": unknown port
script_exporter exited with code 1

Since your documentation doesn't specify the parameters for docker-compose, could you tell me how to make your image work with docker-compose, please? Thanks

Proposal: expose scrape timeout information to scripts and optionally enforce it

When Prometheus makes a scrape, it exposes information about the scrape timeout in a special HTTP header, which is used by Blackbox to set timeouts for its probes (other exporters may also use it, I'm not sure). I propose exposing this information to scripts in some environment variables and optionally having script_exporter enforce this by using exec.CommandContext with a Context that has a deadline (on the grounds that this saves people from having to wrap various scripts in a boilerplate timeout $SCRIPT_TIMEOUT cmd ...).

If this is of interest, I've put together a branch with an implementation of this: https://github.com/siebenmann/script_exporter/tree/script-timeouts

I've split the implementation into two commits (first adding the environment variables, then adding the optional enforcement) in case you'd like to not have the optional enforcement side. There's also a tiny cleanup of command line argument handling. I can revise any or all of these, or make a pull request, as you'd like.

Docker image

Thanks for the nice project!

I'm looking for a container-based (Docker) image.

Request Service Account support in helm chart

Very useful project, thanks! We have one small issue with EKS service accounts

In order to access some of the resources we need to report metrics on, the script-exporter instance needs to use an EKS workload identity provided via the AWS IAM integration with the EKS k8s service account. In order to leverage that in the pod we need to be able to assign a service account to the pod. This requires a small change to the deployment template to take a service account name and bind it to the pod.

See:

After setting a service account on the pod, the AWS CLI may be run inside the pod with the appropriate permissions to scrape the data we need to report metrics for.

missing port in address

Hello, I'm having trouble running the exporter on a specific port.
When I use ./script_exporter -config.file config.yml -web.listen-address 19469, an error message shows "address 19469: missing port in address",
and when I change the line to ./script_exporter -config.file config.yml -web.listen-address 127.0.0.1:19469, the exporter seems to work, but I can't access host:port/metrics from a browser. Please help!
Thanks!

release

I'd like to use the unreleased improvements in master, can you build a new release?

Is it possible to run multiple scripts on the same prometheus job/endpoint

Hi,
I have this situation where I want to add a prometheus job to scrape an endpoint and on the server script_exporter would execute let's say 3 scripts. Is it possible to scrape them on the same job like this?

  - job_name: "oracle-scripts"
    scrape_interval: "1h"
    scrape_timeout: "1m"
    scheme: "http"
    metrics_path: "/probe"
    params:
      script:
        - "check_script1"
        - "check_script2"
        - "check_script3"
    file_sd_configs:
      - files:
          - "/etc/prometheus/file_sd/targets.yml"

If I run the above I get the metrics only of the first script check_script1 meaning that the other 2 scripts are not executed.
The endpoint scraped from above would be:
http://target_server:9469/probe?script=check_script1&script=check_script2&script=check_script3

Thanks,
Enid

gaps between the metrics

Since we started using script_exporter, we found a weird issue that makes us uncomfortable. We use script_exporter to execute some long-running scripts, such as collecting smartctl metrics. Even though we have a reasonable cache duration set up, we still see gaps between the metrics that correspond (roughly) to the script execution time. The script-exporter.yml looks like

scripts:
...
- name: node_smartmon
  command: "/usr/lib/prometheus/custom_metric_node_smartmon.sh"
  cacheDuration: 1200s
  timeout:
    max_timeout: 600
    enforced: false

The result is shown in an attached screenshot: there are gaps between the metric samples.

Is there any good way to get rid of these gaps?

RFE: Support directly running scripts and switch to it by default

Currently, the script exporter runs specified scripts by invoking the specified shell (/bin/sh by default) with the command line arguments script [arg ...]. The problem with this is that it is very easy to believe that you can specify a binary executable as the script: setting (a belief that is encouraged by two out of the three examples starting with #! /bin/bash). This doesn't work, and in fact it fails explosively; some shells will try to interpret your binary executable as a shell script and all sorts of crazy things proceed to happen.

I propose two changes (and I can submit a pull request to implement one or both). First, support directly running programs, instead of invoking the shell, by setting an empty shell on the command line: script_exporter -config.shell "". Second, make this the default and require people to specifically set the shell (even to /bin/sh) if they want the behavior of running through the shell.

(This requires only minor changes because exec.Command() already pretty much supports this usage; you just run args[0] instead of *shell, with small other changes.)

Script Results not showing on the web since upgrade to Redhat 8.9

Hi

We have been running the prometheus-script-exporter on Red Hat 7, but we have had to upgrade our machines to Red Hat 8.9; this also included an upgrade of Java from 8 to 11, and Tomcat went from v8 to v9.

Whilst many of our scripts are still working, these are all very basic Bash output, but we have a slightly more complex script that connects to ETCD cluster to get some github hashes.

The first line of this script is export PATH="${PATH}:/usr/local/bin" which now includes the v11 Java.

If I run the script manually on the server I can see the results. When I hit the endpoint URL with probe?script=script_name I see some of the starting text, all the way up to

# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="script_name"} 1
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="script_name"} 0.170699
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="script_name"} 0

but I don't see anything below this where the actual results of the script would come out.

This is the only script not working, and we are trying to figure it out. If you have any notion of pre-requisites or how changes in the OS would possibly affect things that would help

Many thanks
MP

Get script exit code as a metric

hi,
Could you please advise if there is a way in the current release to get the script's original exit code returned as a metric?

Thanks in advance

how to get shell command metrics?

Mr Rico Berger, I want to use script_exporter to monitor the pxf-cli status of Greenplum, so I wrote a script like

#!/bin/sh
result="$(pxf-cli cluster status | grep 'running on [0-9] out of' | cut -b 19)"
echo "$result"
echo "PXF is running on $result out of 4 hosts"

I want to obtain the result, which equals 4,
but at localhost/metrics I only got

scripts_requests_total{script="pxf_status_check"} 2

pxf_status_check is the name of my shell script.

Many thanks; I look forward to your response.

Call does not wait for script execution to finish

Hello guys,

I would like some help.

I'm using script_exporter to run a shell script, which runs another powershell script.

Executing the shell script, I have the return successfully, even though the return is not immediate, taking 30s for example.

Even though I configure the scrap time and the timeout time, whenever I execute the curl call from the exporter, the return is immediate and with that it does not return the echo (variable created with the expected value).

Would you have any idea why?

Would it be because of the shell script to run another power shell?

Thank you very much!

Script parameter is missing

Hi
I have tried your script exporter, but after calling http://localhost:9469/ I see

Script Exporter
Metrics

Probe

version: v2.2.0
branch: master
revision: b698e33
go version: go1.13.7
build user:
build date:

After clicking Probe I see "Script parameter is missing". I am only testing your example setup:
./bin/script_exporter -config.file ./examples/config.yaml

Could you please advise me where the problem could be?

Thanks a lot

Vojtech

Proposal: provide internal Prometheus metrics for script_exporter

The Prometheus Blackbox exporter both answers your probes with probe-specific metrics and provides additional internal metrics for its own state. Right now, script_exporter has no equivalent of the latter, and I think having it would be potentially handy, especially if the standard Go client metrics were augmented with metrics about what script_exporter is doing and has done so that you could see things like how many scripts were currently active.

I've put together a preliminary version of this to show what I'm thinking of, at https://github.com/siebenmann/script_exporter/tree/internal-metrics. The metrics are always exposed on /metrics; the code arranges to share this URL path with the script handler if necessary. I can further develop this into something worth a pull request if you're interested.

(Changing the default script handler path away from /metrics to, say, /probe, is probably too much of a breaking change. It turned out to be easy enough to share the two based on whether or not there are URL query parameters.)

Script failed: fork/exec, exec format error

Hi

Can anyone help me understand why this fails

$ /app/dvkdbhk/home/dvkdbhkx/REPOS/lbrown15/kdb-data-services/lbrown15/script/sh/metrics/latency_by_exchange
#HELP latency_by_exchange Average Latency by exchange for the last 30 seconds
#TYPE latency_by_exchange gauge
latency_by_exchange{exchange="CU2", host="unycasd20556"} 0.6871429
latency_by_exchange{exchange="DTB", host="unycasd20556"} 0.7325
latency_by_exchange{exchange="EUX", host="unycasd20556"} 0.324
latency_by_exchange{exchange="ICE", host="unycasd20556"} 0.5640802
latency_by_exchange{exchange="LIF", host="unycasd20556"} 646.997
latency_by_exchange{exchange="LME", host="unycasd20556"} 5332.213
latency_by_exchange{exchange="OSA", host="unycasd20556"} 0.3487727

$ ./script_exporter-v2.0.1-linux-amd64 -config.file ./script_exporter.yml -web.listen-address :4522
Starting server (version=v2.0.1, branch=master, revision=92a7645e2e084df7334b71a17b65eb04bbda0e5c)
Build context (go=go1.13.4, user=ricoberger, date=20191213-09:08:44)
script_exporter listening on :4522
2020/03/18 05:14:20 Script failed: fork/exec /app/dvkdbhk/home/dvkdbhkx/REPOS/lbrown15/kdb-data-services/lbrown15/script/sh/metrics/latency_by_exchange: exec format error

script_exporter not passing custom params to script

Hi, I have a script_exporter configured with the following file:

scripts:
  - name: my_script
    script: /full/path/to/script/my_script.py value3 value4

The script my_script.py is designed to output the received args to a file.
When I use this command:
curl http://127.0.0.1:9469/probe?script=my_script&params=param1&param1=value1
I would have expected to see value1 in the file output, however that is not the case, I see only the params value3 and value4. I cannot figure out what is wrong.

Add more script discovery options

I would like to use the great script_exporter discovery, but I miss a few options there, like dynamic parameters and scrape_interval with scrape_timeout.

I wrote the initial PR #89 to show the idea I'm talking about.

windows example

Is it possible to have an example with a PowerShell file on a Windows host?

Update Dockerfile to fix CRITICAL vulnerabilities linked to the used version of the base image

Name and Version
ricoberger/script_exporter:v2.1.1

What is the problem this feature will solve?
This feature will solve critical security vulnerabilities; a trivy scan (screenshot attached) flags the images used by ricoberger/script_exporter:v2.1.1:

  • base image : alpine:3.10
  • build image : golang:1.13-alpine3.10

What is the feature you are proposing to solve the problem?
Update Dockerfile using:

  • base image : alpine:3.11.12
  • build image : golang:1.13-alpine3.11

The trivy scan for the suggested build and base images is in an attached screenshot.

Using scrape config with discovery does not trigger the scripts

Using the following scrape config in prometheus

      - job_name: 'script-exporter-discovery'
        http_sd_configs:
          - url: http://script-exporter-svc:9469/discovery

I can see script_success and the rest of the "basic" metrics, but the script itself does not get triggered.

Script Failure does not provide indication in script_success metric as to what script failed

Hi,
When a script fails, I am not able to correlate back to the script that actually failed. It would be great if the script name was included as a dimension in the script_success metric. Alternatively or additionally, it would be great if the output parameter was respected, since my script outputs an up/down metric that I build my alerting on.

Currently

# curl "localhost:9469/probe?output=true&script=db_tablespace"
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{} 1.788413

Imagining

script_success{script="db_tablespace"} 0

And/Or optionally

script_success{script="db_tablespace"} 0
# HELP Script Output param output != ignore
db_tablespace_probe_up{dbhost="dbserver",database="CUST",port="1521"} 0

Please let me know your thoughts on the above.
Regards,
Chris Whelan

add a helm chart

Hi! It would be great if there was a helm chart for this project

alpine lacks bash?

The first line of the Dockerfile sources FROM golang:1.17-alpine3.14 as build, which lacks bash. As such:

# docker run --rm -it --entrypoint /bin/bash ricoberger/script_exporter:v2.5.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.

Yet all the examples provided use bash (e.g. https://github.com/ricoberger/script_exporter/blob/master/examples/ping.sh).

Am I missing something here?

expected label value, got "INVALID"

hi,

I am trying the script exporter; I am able to make it work, and it outputs the expected values using a curl request.
curl "http://server.stag.use1b.com:9469/probe?script=check_service_script"

but when I try to scrape it with Prometheus, I get an expected label value, got "INVALID" error in Prometheus.
Prometheus config:

  - job_name: 'script_check'
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /probe
    params:
      script: ['check_service_script']
    static_configs:
    - targets: ['server.stag.use1b.com:9469']

can you tell me what could be the issue here?

script exporter version
[root@server examples]# script_exporter-linux-amd64 --version
script_exporter, version v2.2.0 (branch: HEAD, revision: b698e33)
build user: runner
build date: 20200508-06:03:16
go version: go1.13.10
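One way to track this down (a hypothetical helper, not part of the report): pipe the probe output through a filter that prints every line which is neither a comment, blank, nor a `name{labels} value` sample — such lines are what make Prometheus reject the scrape.

```shell
#!/bin/sh
# Hypothetical check: print any output line that is not a comment,
# blank, or a "name{labels} value" sample in Prometheus exposition
# format -- those lines are what trigger parse errors in Prometheus.
validate_metrics() {
  grep -Ev '^(#.*|[a-zA-Z_:][a-zA-Z0-9_:]*(\{[^}]*\})? [^ ]+|)$' || true
}

printf 'good_metric{a="b"} 1\nINVALID\n' | validate_metrics
```

Run against the exporter with something like `curl -s "http://server.stag.use1b.com:9469/probe?script=check_service_script" | validate_metrics`; any line it prints is a candidate for the "INVALID" label value.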

Powershell output comma converted to dot

Hello,

I'm trying to send powershell output:
windows_scheduled_task_job_execution { taskname="Generate Monthly Report", LastTaskResult="0", LastTaskStatus="Success"} 0

via the script_exporter to Prometheus, but in the output the commas seem to be reformatted to dots:

# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="ExportScheduledTaskMetric"} 0
windows_scheduled_task_job_execution { taskname="Generate Monthly Report". LastTaskResult="0". LastTaskStatus="Success"} 0

Any help would be greatly appreciated.

Thank you in advance

script-exporter does not handle process reaping, leaving zombie processes behind

I'm running it in Kubernetes, and use kubectl, jq etc to get the data to produce a metric. I noticed that the script execution fails intermittently with the script getting killed, even though it did not meet the timeout or logged any errors.

Then I noticed this in the container:

script-exporter-5dd8cbfc8-xjv7r:$ ps faux
PID   USER     TIME  COMMAND
    1 nobody    0:07 /bin/script_exporter -config.file /etc/script-exporter/script-exporter.yml
 3300 nobody    0:00 [kubectl]
 3700 nobody    0:00 [kubectl]
 3798 nobody    0:00 [kubectl]
 3996 nobody    0:00 [kubectl]
 4900 nobody    0:00 [kubectl]
 6894 nobody    0:00 [kubectl]
 7895 nobody    0:00 [kubectl]
15302 nobody    0:00 [jq]
...

script-exporter-5dd8cbfc8-xjv7r:$ cat /proc/3300/status
Name:   kubectl
State:  Z (zombie)
Tgid:   3300
...

There are dozens of zombie processes sitting around. Perhaps script-exporter should run under tini or something else that reaps defunct processes? This is under containerd, by the way, which seemingly does not handle this the way Docker does.
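One possible fix along those lines (a sketch, assuming an Alpine-based image built as root): install tini and make it PID 1 so it reaps orphaned children:

```dockerfile
# Hypothetical derived image running the exporter under tini.
FROM ricoberger/script_exporter:latest
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--", "/bin/script_exporter"]
```

In Kubernetes, setting `shareProcessNamespace: true` on the pod is another option, since the pause container then becomes PID 1 and reaps zombies for the whole pod.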

env: expects list of strings, not a map.

Howdy, Rico!

I have a sensitive pwd that I'd like to provide via env vars.

I tried to follow this snippet as an example:

scripts:
  - name: test_env
    command: /tmp/my_script.sh
    env:
      http_proxy: http://proxy.example.com:3128
      https_proxy: http://proxy.example.com:3128

But it throws this message:

script_exporter[2622]: ts=2023-07-03T22:48:51.713Z caller=exporter.go:57 level=error err="yaml: unmarshal errors:\n line 5: cannot unmarshal !!map into []string"

Version: v2.12.0

Long shot:
When I provide the env vars as a list of strings, it works:

scripts:
  - name: test_env
    command: /tmp/my_script.sh
    env:
    - http_proxy
    - https_proxy

Caching metrics

Hi Rico.

Great work - I love your little exporter here.

Is it possible to add a caching feature to the exporter ?

I have some scripts that I would prefer to run on an hourly basis (for performance reasons), but to avoid Prometheus staleness I cannot set the scrape_interval higher than 5m.

Can the exporter perhaps cache the script output and return the last values for a given set of parameters ?

Or is there a better way to accomplish this ?

Best regards
Torben
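Until caching exists in the exporter itself, one workaround (a hypothetical wrapper, not a feature of the exporter) is to cache at the script level: the configured script runs the expensive command at most once per interval and otherwise serves the cached output, so Prometheus can keep scraping every few minutes.

```shell
#!/bin/sh
# Hypothetical caching wrapper: cached_run CMD CACHE MAX_AGE runs CMD
# at most once per MAX_AGE seconds, otherwise it serves the cached
# output from the previous run.
cached_run() {
  cmd=$1; cache=$2; max_age=$3
  now=$(date +%s)
  # Modification time of the cache file, or 0 if it does not exist yet.
  mtime=$(stat -c %Y "$cache" 2>/dev/null || echo 0)
  if [ $((now - mtime)) -ge "$max_age" ]; then
    # Refresh atomically so a concurrent scrape never sees a partial file.
    sh -c "$cmd" > "$cache.tmp" && mv "$cache.tmp" "$cache"
  fi
  cat "$cache"
}

# Demo: the second call within the hour returns the cached value.
cached_run 'date +%s' /tmp/demo.cache 3600
```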

How to run scripts for windows?

Hi, what scripts can I use on Windows? I tried to use a Python script with a `#! python` shebang, but got the error Script parameter is missing.
scripts:
  - name: my_script
    script: C:\my_script.py
What am I doing wrong?
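Two things worth checking (assumptions, not confirmed by the report): the "Script parameter is missing" error usually means the probe URL lacked `?script=<name>`, and Windows does not honor shebang lines, so the interpreter generally has to be invoked explicitly. Since the script string is split on spaces into program name and arguments, something like this might work:

```yaml
scripts:
  - name: my_script
    # Invoke the interpreter explicitly; Windows ignores "#!" lines.
    script: python C:\my_script.py
```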

how to disable param config in url (security concern)

Params can be edited in the URL. That's a good feature, but it could also be a security problem. Can we disable this feature?

I tried

  - name: "echo"
    script: "echo"
  - name: "echo2"
    command: "echo"

but both of them let the caller customize the command via parameters.

http://localhost:9469/probe?script=echo&params=s,t&s=foo&t=bar
http://localhost:9469/probe?script=echo2&params=s,t&s=foo&t=bar

Both pages show foo bar, meaning the parameters in the URL are passed to the command being executed, which could lead to a security problem.

Include `jq` in the Docker image

Hey!

I was creating scripts that parse a REST API with jq. I was wondering if it would be possible to include jq in the default Docker image, since it's a fairly standard tool for parsing JSON in shell scripts.

Obviously, I can build my own image, but perhaps more people could profit / save time if some common standard scripting tools are shipped with a docker image.

Thank you in advance for creating and maintaining this project :)
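Until then, a minimal custom image is easy to build (a sketch, assuming the upstream image is Alpine-based and the build runs as root):

```dockerfile
# Hypothetical derived image layering jq on top of the upstream one.
FROM ricoberger/script_exporter:latest
RUN apk add --no-cache jq
```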

params not being honored despite flag noargs not present

I am testing v2.14.0 in Docker as shown in the README.

/examples # ps -ef
PID USER TIME COMMAND
1 root 0:00 /bin/script_exporter -config.file /examples/config.yaml

I slightly modified the helloworld.sh script as below:
/examples # cat helloworld.sh
#!/bin/sh
echo "hello_world{params=\"$1\",\"$2\"} 1"

I then query as below:
http://localhost:9469/probe?script=helloworld&prefix=mypref&params=argv

The output shows (among other lines):
mypref_hello_world{params="test",""} 1

I also tried the ping script (ping.sh) and no argument is passed.

Suggestions?
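For reference, the probe URL shape that the echo example elsewhere in this tracker suggests (an assumption, not a confirmed diagnosis): every name listed in `params` must also appear as its own query parameter carrying the value, otherwise the positional arguments arrive empty.

```shell
#!/bin/sh
# Hypothetical probe URL for a script expecting two positional args:
# "params" lists the parameter names, and each name is also passed as
# its own query parameter with the actual value.
script=helloworld
url="http://localhost:9469/probe?script=${script}&params=arg1,arg2&arg1=foo&arg2=bar"
echo "$url"
```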

Script with space character in name or path

Hi,
Could you please advise how to specify, in the config file, the path to a script that contains spaces? Example of what I mean:

scripts:
  - name: test_spaces
    script: /opt/test dir/test_spaces.sh
    timeout:
      max_timeout: 55
      enforced: true

For this config I am getting following error:

script_exporter: 2022/05/18 14:43:24 Script failed: fork/exec /opt/test: no such file or directory

Whatever escaping or quoting I try (spaces, commas, etc.), it is still not able to execute the script.

Thanks in advance

script_exporter's script_exit_code differs from the actual script's exit code

When executed from CLI:

# /opt/script_exporter/check_type_current.sh
# echo $?
0

===
at the same time, script_exporter reports an exit code of -1 (and script_success as 0):

#  curl localhost:9469/probe?script=check_type_current
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="check_type_current"} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="check_type_current"} 0.001040
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="check_type_current"} -1

I made sure to check the config.yaml file so it points to the proper script:

# grep check_type_current config.yaml
 - name: 'check_type_current'
   script: /opt/script_exporter/check_type_current.sh
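An exit code of -1 usually means the process could not be started at all rather than that it ran and failed. A hypothetical sanity check (an assumption about the cause, not a confirmed diagnosis) covering the two most common reasons: a missing execute bit, or a shebang pointing at a missing interpreter.

```shell
#!/bin/sh
# Hypothetical helper: report why a script might fail to start.
# Prints "not executable", "missing interpreter: <path>", or "ok".
check_runnable() {
  s=$1
  [ -x "$s" ] || { echo "not executable"; return 1; }
  # Extract the interpreter path from a "#!" line, if any.
  interp=$(sed -n '1s/^#![[:space:]]*//p' "$s" | cut -d ' ' -f 1)
  if [ -n "$interp" ] && [ ! -x "$interp" ]; then
    echo "missing interpreter: $interp"; return 1
  fi
  echo ok
}

check_runnable /bin/sh
```

Running `check_runnable /opt/script_exporter/check_type_current.sh` as the user the exporter runs as would also catch permission differences between your shell and the exporter.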

Logger dumps sensitive env vars into logs

Hello @ricoberger !

Thank you for maintaining this great tool!

I recently discovered that the script executor dumps all environment variables into logs on errors:

"env", strings.Join(cmd.Env, " "),

which can be very helpful for debugging, but it can also dump sensitive values such as passwords/access tokens/etc, for example if a script is something like

curl -s -u $GITHUB_USERNAME:$GITHUB_TOKEN $somegithublink

and thanks to log shipping that immediately ends up exposed to anyone with access to logs.

It would be great to (conditionally?) avoid dumping env vars to logs.

Define script parameters in config as array

Hi there!

Currently, the script string is split on spaces to generate the program name and any fixed arguments. This approach means that the program name and fixed arguments can't contain spaces.

I propose that fixed arguments in the exporter config should be defined as an array instead of being defined in the script property, e.g. something like this:

Example 1:

scripts:
  - name: "example1"
    command: "./examples/connectivity-check.sh"
    args:
    - "google.com"
    - "bing.com"

Example 2:

scripts:
  - name: "example2"
    command: "netcat"
    args:
    - "-vzw"
    - "2"
    - "example.com"

The change can be implemented in a backwards compatible way:

  1. Keep script property with current behaviour.
  2. Require either script or command to be defined.
  3. Error if script is combined with args or command. args is always optional.

This is also how arguments are passed to a process in a Kubernetes pod manifest which means it will feel natural to many users of script_exporter:

apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: command-demo-container
    image: debian
    command:
    - "printenv"
    args:
    - "HOSTNAME"
    - "KUBERNETES_PORT"

@ricoberger let me know what you think, I will be happy to implement this change.

Script Exporter max_timeout not taking effect if query or header timeouts not specified

Configuring the maximum script timeout in Script Exporter's config.yaml file does not take effect unless a query or header timeout is specified alongside it. Scripts execute successfully even when their duration exceeds the max_timeout that was set.

I created a test_script that sleeps for 5 seconds before executing an echo command.

ping 192.0.2.0 -n 1 -w 5000 >nul
ECHO Script executed successfully 

Then, in Script Exporter's config.yaml file, the script's max_timeout was set to 1 second.

scripts:
  - name: test_script
    command: .\test_script.bat
    timeout:
      max_timeout: 1.0
      enforced: true 

The script was expected to be killed. Instead, it executed successfully.


However, the max_timeout worked as expected when passed as a header in the GET request on Script Exporter's probe endpoint. As a result, the script was killed and did not return any value.

ts=2023-03-17T11:11:19.895Z caller=metrics.go:77 level=error msg="Run script failed" err="exit status 1" 

The max_timeout also killed the test_script when the timeout was passed as a query in Script Exporter's probe endpoint.

ts=2023-03-23T10:23:41.217Z caller=metrics.go:77 level=error msg="Run script failed" err="exit status 1"

I think I found out how this is happening: https://github.com/ricoberger/script_exporter/blob/v2.11.0/pkg/exporter/scripts.go#L121

The code clearly shows that this is the expected behavior.

But is it the intended behavior, or is it a bug?

When the script exits with non-0 code, output is not provided to prometheus

Hello,

Unless I am mistaken about how to use the exporter, when a script exits with a non-zero return code its output is not provided to Prometheus.
I believe it comes from here : https://github.com/ricoberger/script_exporter/blob/main/pkg/exporter/metrics.go#L67

Example OK :

[~]# cat script_exiting_0
#!/bin/bash
echo "EXIT 0"
exit 0
[~]# 
[~]# curl -s "http://0:9469/probe?script=script_exiting_0"
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="script_exiting_0"} 1
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="script_exiting_0"} 0.011737
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="script_exiting_0"} 0
EXIT 0

Example NOK :

[~]# cat script_exiting_1
#!/bin/bash
echo "EXIT 1"
exit 1
[~]# 
[~]# curl -s "http://0:9469/probe?script=script_exiting_1"
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="script_exiting_1"} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="script_exiting_1"} 0.002808
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="script_exiting_1"} 1
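One workaround until this is configurable (a hypothetical wrapper, not a feature of the exporter): have the configured script always exit 0 so the exporter forwards the output, and expose the real exit code as a metric of its own.

```shell
#!/bin/sh
# Hypothetical workaround: run the real command from a wrapper that
# always exits 0, forwarding the command's output and publishing its
# real exit code as a separate metric.
wrap() {
  out=$(sh -c "$1")
  rc=$?
  printf '%s\n' "$out"
  echo "wrapped_script_exit_code $rc"
  return 0
}

wrap 'echo "EXIT 1"; exit 1'
```

The cost is that `script_exit_code` and `script_success` no longer reflect the real script, so alerting has to use `wrapped_script_exit_code` instead.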
