
logstash_exporter's Introduction

Logstash exporter

Prometheus exporter for the metrics available in Logstash since version 5.0.

Usage

go get -u github.com/BonnierNews/logstash_exporter
cd $GOPATH/src/github.com/BonnierNews/logstash_exporter
make
./logstash_exporter -exporter.bind_address :1234 -logstash.endpoint http://localhost:1235

Flags

Flag                      Description                              Default
-exporter.bind_address    Exporter bind address                    :9198
-logstash.endpoint        Metrics endpoint address of logstash     http://localhost:9600

Implemented metrics

  • Node metrics

logstash_exporter's People

Contributors

christoe, davidkarlsen

logstash_exporter's Issues

Command line documentation is wrong

The command line does not accept the arguments documented in README.md.

This:

-exporter.bind_address
-logstash.endpoint

Should be:

--web.listen-address
--logstash.endpoint

Sample usage:

--web.listen-address=:9198
--logstash.endpoint=http://localhost:9600

Events metrics appear to be inverted

I am using the following logstash config with filebeat as the only input and loggly as the only output:

The input block is provided by the incubator Helm chart, which uses this exporter's Docker image.

The output block I use is the following, overridden via my own values.yaml file.

output {
      loggly {
        proto => "https"
        host => "logs-01.loggly.com"
        key => REDACTED
      }
}

When using the community logstash grafana dashboard, I get the following results:

[screenshot: Grafana graphs of plugin event rates, events in (left) and events out (right)]

The underlying queries are:

Left graph:

sum(rate(logstash_node_plugin_events_in_total[$interval])) by (plugin)

Right graph:

sum(rate(logstash_node_plugin_events_out_total[$interval])) by (plugin)

It appears that they are inverted, meaning that I am seeing large input numbers for loggly and large output numbers for beats. It is very possible I am misunderstanding what "Events In" and "Events Out" mean in relation to plugins, but if that is the case, the metric names are very confusing.
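
For reference, here is a minimal sketch (not the exporter's actual code) of how the per-plugin event counters from /_node/stats could map onto these two metric families, using hypothetical Go types; the label names follow the series shown elsewhere on this page. If, as in the Logstash stats convention, "in" counts events handed to a plugin and "out" counts events it has finished processing, then large "in" numbers on an output such as loggly are expected rather than inverted:

package sketch

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical types mirroring the per-plugin section of /_node/stats.
type PluginEvents struct {
    In  int64 `json:"in"`
    Out int64 `json:"out"`
}

type Plugin struct {
    ID     string       `json:"id"`
    Name   string       `json:"name"`
    Events PluginEvents `json:"events"`
}

var (
    eventsInDesc = prometheus.NewDesc("logstash_node_plugin_events_in_total",
        "Events received by a plugin.",
        []string{"pipeline", "plugin", "plugin_id", "plugin_type"}, nil)
    eventsOutDesc = prometheus.NewDesc("logstash_node_plugin_events_out_total",
        "Events emitted by a plugin.",
        []string{"pipeline", "plugin", "plugin_id", "plugin_type"}, nil)
)

// collectPlugin emits both counters for one plugin. Swapping the two value
// arguments here is the kind of bug that would produce the inversion
// described in this issue.
func collectPlugin(ch chan<- prometheus.Metric, pipeline, pluginType string, p Plugin) {
    ch <- prometheus.MustNewConstMetric(eventsInDesc, prometheus.CounterValue,
        float64(p.Events.In), pipeline, p.Name, p.ID, pluginType)
    ch <- prometheus.MustNewConstMetric(eventsOutDesc, prometheus.CounterValue,
        float64(p.Events.Out), pipeline, p.Name, p.ID, pluginType)
}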

Example prometheus config

Hi.

Thanks a lot for creating this exporter; it works great.
Do you by any chance have an example Prometheus dashboard lying around?

logstash_exporter_scrape report success when failing

As per issue #13, I was trying the supplied query to detect errors, but it was not working. Checking the exporter directly, I get this:

 curl  http://127.0.0.1:9198/metrics -s | grep scrape
# HELP logstash_exporter_scrape_duration_seconds logstash_exporter: Duration of a scrape job.
# TYPE logstash_exporter_scrape_duration_seconds summary
logstash_exporter_scrape_duration_seconds{collector="info",result="success",quantile="0.5"} 0.000544097
logstash_exporter_scrape_duration_seconds{collector="info",result="success",quantile="0.9"} 0.001007189
logstash_exporter_scrape_duration_seconds{collector="info",result="success",quantile="0.99"} 0.001895548
logstash_exporter_scrape_duration_seconds_sum{collector="info",result="success"} 0.025157706999999998
logstash_exporter_scrape_duration_seconds_count{collector="info",result="success"} 38
logstash_exporter_scrape_duration_seconds{collector="node",result="success",quantile="0.5"} 0.000491171
logstash_exporter_scrape_duration_seconds{collector="node",result="success",quantile="0.9"} 0.000994146
logstash_exporter_scrape_duration_seconds{collector="node",result="success",quantile="0.99"} 0.00149815
logstash_exporter_scrape_duration_seconds_sum{collector="node",result="success"} 0.0230836
logstash_exporter_scrape_duration_seconds_count{collector="node",result="success"} 38

But logstash is not even running:

 curl -XGET 'localhost:9600/_node/stats/pipeline?pretty' -sv 
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* connect to 127.0.0.1 port 9600 failed: Connection refused
* Failed to connect to localhost port 9600: Connection refused
* Closing connection 0

and the docker logs even report that:

time="2018-02-19T18:06:30Z" level=error msg="Cannot retrieve metrics: Get http://localhost:9600/_node/stats: dial tcp 127.0.0.1:9600: getsockopt: connection refused" source="api_base.go:32"
time="2018-02-19T18:06:30Z" level=error msg="Cannot retrieve metrics: Get http://localhost:9600/_node: dial tcp 127.0.0.1:9600: getsockopt: connection refused" source="api_base.go:32"

On another machine, where I have a frozen Logstash, the exporter reports this:

time="2018-02-19T18:07:54Z" level=error msg="Cannot retrieve metrics: Get http://localhost:9600/_node/stats: read tcp 127.0.0.1:16562->127.0.0.1:9600: read: connection reset by peer" source="api_base.go:32"
time="2018-02-19T18:07:54Z" level=error msg="Cannot retrieve metrics: Get http://localhost:9600/_node: read tcp 127.0.0.1:16560->127.0.0.1:9600: read: connection reset by peer" source="api_base.go:32"

Yet in this case, the exporter not only still reports "successful" scrapes, but also takes a long time to reply, about 100 s, probably because it waits for the Logstash reply until it hits some timeout at around 100 s. So a locked Logstash will block logstash_exporter for 100 s.
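
For context, here is a minimal sketch of a scrape wrapper that labels the duration summary with result="error" when the underlying collector fails, instead of always reporting "success". The Collector interface and variable names are assumptions for illustration, not the exporter's actual code:

package sketch

import (
    "time"

    "github.com/prometheus/client_golang/prometheus"
)

// Hypothetical collector interface whose Collect returns an error.
type Collector interface {
    Collect(ch chan<- prometheus.Metric) error
}

var scrapeDurations = prometheus.NewSummaryVec(
    prometheus.SummaryOpts{
        Namespace: "logstash_exporter",
        Name:      "scrape_duration_seconds",
        Help:      "logstash_exporter: Duration of a scrape job.",
    },
    []string{"collector", "result"},
)

// execute runs one collector and records its outcome. Only by checking the
// returned error does a dead or frozen Logstash show up as result="error"
// instead of a silent "success".
func execute(name string, c Collector, ch chan<- prometheus.Metric) {
    begin := time.Now()
    err := c.Collect(ch)
    result := "success"
    if err != nil {
        result = "error"
    }
    scrapeDurations.WithLabelValues(name, result).Observe(time.Since(begin).Seconds())
}

Separately, giving the HTTP client that talks to Logstash an explicit timeout would bound the roughly 100-second hang described above.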

Crashing containers calling libpthread

I just pulled a fresh round of images from DockerHub today and they all crash out immediately with the following error:

/logstash_exporter: relocation error: /lib/libpthread.so.0: symbol h_errno, version GLIBC_PRIVATE not defined in file libc.so.6 with link time reference

Build tries to alter system files

When I run make I get the following error:

go build net: open /usr/lib/golang/pkg/linux_amd64/net.a: permission denied
!! command failed: build -o /home/myuser/go/src/github.com/BonnierNews/logstash_exporter/logstash_exporter -ldflags -s -X github.com/BonnierNews/logstash_exporter/vendor/github.com/prometheus/common/version.Version=0.1.2 -X github.com/BonnierNews/logstash_exporter/vendor/github.com/prometheus/common/version.Revision=d0391964df9c3ba49a605c0c7d98399b0e54e586 -X github.com/BonnierNews/logstash_exporter/vendor/github.com/prometheus/common/version.Branch=master -X github.com/BonnierNews/logstash_exporter/vendor/github.com/prometheus/common/version.BuildUser=myuser@myhost -X github.com/BonnierNews/logstash_exporter/vendor/github.com/prometheus/common/version.BuildDate=20180704-06:42:50  -extldflags '-static' -i -tags 'netgo static_build' github.com/BonnierNews/logstash_exporter: exit status 1

The build should not try to write anywhere outside the build environment.

Add queue_size_in_bytes metrics

In the queue capacity stats there is also queue_size_in_bytes, which is currently not collected.

This metric is particularly useful for calculating queue usage as a ratio: queue_size_in_bytes / max_queue_size_in_bytes.
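
A minimal sketch of what collecting this could look like, assuming hypothetical Go types for the queue capacity section of /_node/stats; the metric name is illustrative and not part of the exporter today:

package sketch

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical slice of the queue capacity section of /_node/stats.
type QueueCapacity struct {
    QueueSizeInBytes    int64 `json:"queue_size_in_bytes"`
    MaxQueueSizeInBytes int64 `json:"max_queue_size_in_bytes"`
}

var queueSizeDesc = prometheus.NewDesc(
    "logstash_node_queue_size_in_bytes",
    "Current size of the persistent queue in bytes.",
    nil, nil)

func collectQueue(ch chan<- prometheus.Metric, c QueueCapacity) {
    ch <- prometheus.MustNewConstMetric(queueSizeDesc, prometheus.GaugeValue,
        float64(c.QueueSizeInBytes))
}

With both series exposed, queue usage is simply queue_size_in_bytes / max_queue_size_in_bytes, as noted above.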

Missing output pipeline failures

Hi @christoe,

I just realized that I am missing some Logstash stats (Logstash v6.3.1). See:

curl -s -XGET 'localhost:9600/_node/stats/pipelines?pretty'

I see that these failures are not reported by logstash_exporter, for example:

    "main" : {
      "events" : {
        "duration_in_millis" : 2770546365,
        "in" : 1657921575,
        "out" : 1657921575,
        "filtered" : 1657921575,
        "queue_push_duration_in_millis" : 472872
      },
      "plugins" : {
        "inputs" : [ {
[...]
       "outputs" : [ {
          "id" : "48c5afaa6f71527237076eedb20c71a5f894b9b51e2b36c63917c03611361605",
          "documents" : {
            "successes" : 12682774,
            "non_retryable_failures" : 7497226
          },
          "events" : {
            "duration_in_millis" : 42568189,
            "in" : 20180000,
            "out" : 20180000
          },
          "bulk_requests" : {
            "successes" : 160313,
            "with_errors" : 97933,
            "responses" : {
              "200" : 258246
            }
          },
          "name" : "elasticsearch"
        }, {
          "id" : "b35a3cceabb4ffd1feab8e14361adafe56019bb53d853794003844d408da02a0",
          "documents" : {
            "successes" : 1591552581,
            "retryable_failures" : 15155,
            "non_retryable_failures" : 46176108
          },
          "events" : {
            "duration_in_millis" : 2650241358,
            "in" : 1637728689,
            "out" : 1637728689
          },
          "bulk_requests" : {
            "successes" : 18249270,
            "with_errors" : 408927,
            "responses" : {
              "200" : 18658197
            },
            "failures" : 30
          },
          "name" : "elasticsearch"
        }, {
          "id" : "72dde10cd418ae02cd8e9579dccd6596d73f5c71e0170ee794f7fc30343fca43",
          "documents" : {
            "successes" : 1
          },
          "events" : {
            "duration_in_millis" : 164169,
            "in" : 12886,
            "out" : 12886
          },
          "bulk_requests" : {
            "successes" : 12808,
            "responses" : {
              "200" : 12808
            },
            "failures" : 1
          },
          "name" : "elasticsearch"
        }, {
          "id" : "4f78952610769660f73e0bc5e9142c79059acffcc20aa249eb7ea3c3a2a26e9d",
          "events" : {
            "duration_in_millis" : 0,
            "in" : 0,
            "out" : 0
          },
          "name" : "elasticsearch"
        }, {
          "id" : "215b2737000764e6ca61fdbedd32e671cc9b4e8bff6b2bf2d79df367cd2ac9e5",
          "documents" : {
            "successes" : 12886
          },
          "events" : {
            "duration_in_millis" : 169051,
            "in" : 12886,
            "out" : 12886
          },
          "bulk_requests" : {
            "successes" : 1,
            "responses" : {
              "200" : 12808
            },
            "failures" : 1
          },
          "name" : "elasticsearch"
        } ]
      },
      "reloads" : {
        "last_error" : null,
        "successes" : 0,
        "last_success_timestamp" : null,
        "last_failure_timestamp" : null,
        "failures" : 0
      },
      "queue" : {
        "type" : "memory"
      }
    }
  }

I see some like this,

logstash_node_pipeline_events_out_total{pipeline=".monitoring-logstash"} 0
logstash_node_pipeline_events_out_total{pipeline="main"} 1.658111575e+09
logstash_node_plugin_events_in_total{pipeline="main",plugin="elasticsearch",plugin_id="215b2737000764e6ca61fdbedd32e671cc9b4e8bff6b2bf2d79df367cd2ac9e5",plugin_type="output"} 12886
logstash_node_plugin_events_in_total{pipeline="main",plugin="elasticsearch",plugin_id="48c5afaa6f71527237076eedb20c71a5f894b9b51e2b36c63917c03611361605",plugin_type="output"} 2.018e+07
logstash_node_plugin_events_in_total{pipeline="main",plugin="elasticsearch",plugin_id="4f78952610769660f73e0bc5e9142c79059acffcc20aa249eb7ea3c3a2a26e9d",plugin_type="output"} 0
logstash_node_plugin_events_in_total{pipeline="main",plugin="elasticsearch",plugin_id="72dde10cd418ae02cd8e9579dccd6596d73f5c71e0170ee794f7fc30343fca43",plugin_type="output"} 12886
logstash_node_plugin_events_in_total{pipeline="main",plugin="elasticsearch",plugin_id="b35a3cceabb4ffd1feab8e14361adafe56019bb53d853794003844d408da02a0",plugin_type="output"} 1.637918689e+09

But there is nothing related to failures; see the failure stats:

# curl -s -XGET 'localhost:9600/_node/stats/pipelines?pretty' | grep fail
        "last_failure_timestamp" : null,
        "failures" : 0
            "non_retryable_failures" : 7497226
            "retryable_failures" : 15155,
            "non_retryable_failures" : 46176108
            "failures" : 30
            "failures" : 1
            "failures" : 1
        "last_failure_timestamp" : null,
        "failures" : 0

Could you please add this? Thanks!
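
For illustration, a minimal sketch of how the documents failure counters from the JSON above could be exposed; the struct, metric name, and label set are assumptions, not the exporter's actual code:

package sketch

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical slice of an output plugin's stats from /_node/stats/pipelines.
type OutputDocuments struct {
    Successes            int64 `json:"successes"`
    RetryableFailures    int64 `json:"retryable_failures"`
    NonRetryableFailures int64 `json:"non_retryable_failures"`
}

var nonRetryableDesc = prometheus.NewDesc(
    "logstash_node_plugin_documents_non_retryable_failures_total",
    "Documents that failed in an output plugin and will not be retried.",
    []string{"pipeline", "plugin", "plugin_id"}, nil)

// collectOutputDocuments surfaces the failure counter shown in the JSON above.
func collectOutputDocuments(ch chan<- prometheus.Metric, pipeline, plugin, pluginID string, d OutputDocuments) {
    ch <- prometheus.MustNewConstMetric(nonRetryableDesc, prometheus.CounterValue,
        float64(d.NonRetryableFailures), pipeline, plugin, pluginID)
}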

Missing "reloads" metrics

Hi!
It looks like counter metrics regarding reloads (both for pipelines and for Logstash as a whole) are missing, something in this form:

logstash_pipeline_reloads_failures 0
logstash_pipeline_reloads_successes 0
logstash_reloads_failures 0
logstash_reloads_successes 0

Error on nodestats_collector.go:658

Hello, our logstash_exporter works alright but logs the following lines every 5 seconds:

time="2018-07-01T03:37:45Z" level=error msg=0 source="nodestats_collector.go:658"
time="2018-07-01T03:37:46Z" level=error msg=0 source="nodestats_collector.go:658"
time="2018-07-01T03:37:50Z" level=error msg=0 source="nodestats_collector.go:658"
time="2018-07-01T03:37:51Z" level=error msg=0 source="nodestats_collector.go:658"
time="2018-07-01T03:37:55Z" level=error msg=0 source="nodestats_collector.go:658"
time="2018-07-01T03:37:56Z" level=error msg=0 source="nodestats_collector.go:658"

The logstash_exporter version is 0.1.1 and we run it like this:
/opt/prometheus/logstash_exporter/logstash_exporter --logstash.endpoint http://localhost:9600

There are no other logs than this, and this one, by itself, is not explanatory enough. What is the reason for this?

Sample Grafana Dashboard available on Grafana

Hi,

Maybe the README should include a reference to a sample dashboard on Grafana, #2525.

Also, the metrics implemented are most likely not the "node" metrics stated in the README.

Compiled and works, thanks a lot!
Andreas

Exporter not working in Logstash 6.3

Hello,

I'm running this collector against Logstash 6.3.2 and it's not exposing any metrics. It worked with Logstash 5.6.

According to the tests it should work, but for some reason it doesn't, and I'm not enough of a developer to look into it.

Output of exporter:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.0557e-05
go_gc_duration_seconds{quantile="0.25"} 6.958e-05
go_gc_duration_seconds{quantile="0.5"} 0.000106215
go_gc_duration_seconds{quantile="0.75"} 0.000336537
go_gc_duration_seconds{quantile="1"} 0.005423857
go_gc_duration_seconds_sum 7.33736376
go_gc_duration_seconds_count 17538
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 17
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.955104e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 4.8217981696e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.552267e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.02241432e+08
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 507904
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.955104e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 4.038656e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 2.646016e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 7488
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 1.10592e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.684672e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.534244197725826e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 370671
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.0224892e+08
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 4800
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 30704
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 49152
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.194304e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.124717e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 655360
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 655360
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.0590456e+07
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 9675.681
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 29112.124
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 76503.452
http_request_duration_microseconds_sum{handler="prometheus"} 5.661427601970019e+08
http_request_duration_microseconds_count{handler="prometheus"} 44665
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} 175
http_request_size_bytes{handler="prometheus",quantile="0.9"} 175
http_request_size_bytes{handler="prometheus",quantile="0.99"} 471
http_request_size_bytes_sum{handler="prometheus"} 7.817436e+06
http_request_size_bytes_count{handler="prometheus"} 44665
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="prometheus",method="get"} 44665
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} 2329
http_response_size_bytes{handler="prometheus",quantile="0.9"} 2333
http_response_size_bytes{handler="prometheus",quantile="0.99"} 2339
http_response_size_bytes_sum{handler="prometheus"} 1.03254351e+08
http_response_size_bytes_count{handler="prometheus"} 44665
# HELP logstash_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which logstash_exporter was built.
# TYPE logstash_exporter_build_info gauge
logstash_exporter_build_info{branch="",goversion="go1.8.3",revision="",version=""} 1
# HELP logstash_exporter_scrape_duration_seconds logstash_exporter: Duration of a scrape job.
# TYPE logstash_exporter_scrape_duration_seconds summary
logstash_exporter_scrape_duration_seconds{collector="node",result="success",quantile="0.5"} 0.008065311
logstash_exporter_scrape_duration_seconds{collector="node",result="success",quantile="0.9"} 0.02810888
logstash_exporter_scrape_duration_seconds{collector="node",result="success",quantile="0.99"} 0.075507031
logstash_exporter_scrape_duration_seconds_sum{collector="node",result="success"} 497.8846417519984
logstash_exporter_scrape_duration_seconds_count{collector="node",result="success"} 44666
# HELP logstash_node_gc_collection_duration_seconds_total gc_collection_duration_seconds_total
# TYPE logstash_node_gc_collection_duration_seconds_total counter
logstash_node_gc_collection_duration_seconds_total{collector="old"} 2.1408032e+07
logstash_node_gc_collection_duration_seconds_total{collector="young"} 1.509914e+06
# HELP logstash_node_gc_collection_total gc_collection_total
# TYPE logstash_node_gc_collection_total gauge
logstash_node_gc_collection_total{collector="old"} 166593
logstash_node_gc_collection_total{collector="young"} 229027
# HELP logstash_node_jvm_threads_count jvm_threads_count
# TYPE logstash_node_jvm_threads_count gauge
logstash_node_jvm_threads_count 174
# HELP logstash_node_jvm_threads_peak_count jvm_threads_peak_count
# TYPE logstash_node_jvm_threads_peak_count gauge
logstash_node_jvm_threads_peak_count 176
# HELP logstash_node_mem_heap_committed_bytes mem_heap_committed_bytes
# TYPE logstash_node_mem_heap_committed_bytes gauge
logstash_node_mem_heap_committed_bytes 1.038876672e+09
# HELP logstash_node_mem_heap_max_bytes mem_heap_max_bytes
# TYPE logstash_node_mem_heap_max_bytes gauge
logstash_node_mem_heap_max_bytes 1.038876672e+09
# HELP logstash_node_mem_heap_used_bytes mem_heap_used_bytes
# TYPE logstash_node_mem_heap_used_bytes gauge
logstash_node_mem_heap_used_bytes 6.618756e+08
# HELP logstash_node_mem_nonheap_committed_bytes mem_nonheap_committed_bytes
# TYPE logstash_node_mem_nonheap_committed_bytes gauge
logstash_node_mem_nonheap_committed_bytes 2.90275328e+08
# HELP logstash_node_mem_nonheap_used_bytes mem_nonheap_used_bytes
# TYPE logstash_node_mem_nonheap_used_bytes gauge
logstash_node_mem_nonheap_used_bytes 2.49728912e+08
# HELP logstash_node_mem_pool_committed_bytes mem_pool_committed_bytes
# TYPE logstash_node_mem_pool_committed_bytes gauge
logstash_node_mem_pool_committed_bytes{pool="old"} 7.2482816e+08
logstash_node_mem_pool_committed_bytes{pool="survivor"} 3.4865152e+07
logstash_node_mem_pool_committed_bytes{pool="young"} 2.7918336e+08
# HELP logstash_node_mem_pool_max_bytes mem_pool_max_bytes
# TYPE logstash_node_mem_pool_max_bytes gauge
logstash_node_mem_pool_max_bytes{pool="old"} 7.2482816e+08
logstash_node_mem_pool_max_bytes{pool="survivor"} 3.4865152e+07
logstash_node_mem_pool_max_bytes{pool="young"} 2.7918336e+08
# HELP logstash_node_mem_pool_peak_max_bytes mem_pool_peak_max_bytes
# TYPE logstash_node_mem_pool_peak_max_bytes gauge
logstash_node_mem_pool_peak_max_bytes{pool="old"} 7.2482816e+08
logstash_node_mem_pool_peak_max_bytes{pool="survivor"} 7.2482816e+08
logstash_node_mem_pool_peak_max_bytes{pool="young"} 7.2482816e+08
# HELP logstash_node_mem_pool_peak_used_bytes mem_pool_peak_used_bytes
# TYPE logstash_node_mem_pool_peak_used_bytes gauge
logstash_node_mem_pool_peak_used_bytes{pool="old"} 6.05408008e+08
logstash_node_mem_pool_peak_used_bytes{pool="survivor"} 6.05408008e+08
logstash_node_mem_pool_peak_used_bytes{pool="young"} 6.05408008e+08
# HELP logstash_node_mem_pool_used_bytes mem_pool_used_bytes
# TYPE logstash_node_mem_pool_used_bytes gauge
logstash_node_mem_pool_used_bytes{pool="old"} 5.944496e+08
logstash_node_mem_pool_used_bytes{pool="survivor"} 8.000088e+06
logstash_node_mem_pool_used_bytes{pool="young"} 5.9425912e+07
# HELP logstash_node_pipeline_duration_seconds_total pipeline_duration_seconds_total
# TYPE logstash_node_pipeline_duration_seconds_total counter
logstash_node_pipeline_duration_seconds_total 0
# HELP logstash_node_pipeline_events_filtered_total pipeline_events_filtered_total
# TYPE logstash_node_pipeline_events_filtered_total counter
logstash_node_pipeline_events_filtered_total 0
# HELP logstash_node_pipeline_events_in_total pipeline_events_in_total
# TYPE logstash_node_pipeline_events_in_total counter
logstash_node_pipeline_events_in_total 0
# HELP logstash_node_pipeline_events_out_total pipeline_events_out_total
# TYPE logstash_node_pipeline_events_out_total counter
logstash_node_pipeline_events_out_total 0
# HELP logstash_node_process_cpu_total_seconds_total process_cpu_total_seconds_total
# TYPE logstash_node_process_cpu_total_seconds_total counter
logstash_node_process_cpu_total_seconds_total 673483
# HELP logstash_node_process_max_filedescriptors process_max_filedescriptors
# TYPE logstash_node_process_max_filedescriptors gauge
logstash_node_process_max_filedescriptors 16384
# HELP logstash_node_process_mem_total_virtual_bytes process_mem_total_virtual_bytes
# TYPE logstash_node_process_mem_total_virtual_bytes gauge
logstash_node_process_mem_total_virtual_bytes 5.824548864e+09
# HELP logstash_node_process_open_filedescriptors process_open_filedescriptors
# TYPE logstash_node_process_open_filedescriptors gauge
logstash_node_process_open_filedescriptors 302
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 205.06
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 9
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 8.822784e+06
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.53379762791e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 4.45329408e+08
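
One API difference worth noting: Logstash 6.x reports stats under a pipelines map keyed by pipeline name (see the /_node/stats/pipelines output in the "Missing output pipeline failures" issue above), whereas 5.x exposed a single pipeline object. If the exporter only decodes the older shape, that could explain why the pipeline counters above stay at zero. A minimal sketch of decoding the newer layout, with hypothetical type names:

package sketch

import (
    "encoding/json"
    "net/http"
)

// Hypothetical types for the Logstash 6.x stats layout, where each pipeline
// appears under its own name in a "pipelines" map.
type PipelineEvents struct {
    In       int64 `json:"in"`
    Filtered int64 `json:"filtered"`
    Out      int64 `json:"out"`
}

type Pipeline struct {
    Events PipelineEvents `json:"events"`
}

type NodeStats struct {
    Pipelines map[string]Pipeline `json:"pipelines"`
}

// fetchPipelineStats decodes per-pipeline event counters from a 6.x node.
func fetchPipelineStats(endpoint string) (map[string]Pipeline, error) {
    resp, err := http.Get(endpoint + "/_node/stats")
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var stats NodeStats
    if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
        return nil, err
    }
    return stats.Pipelines, nil
}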

Add 'up' metric

Hello,

I'm testing your exporter with our logstash instances and I'm missing an "up" metric that would tell me whether Logstash is running or not, so I can alert if it crashes.

Alxrem exposes this metric in his exporter. Would you be willing to add it as well? It seems to me like a useful metric to have.

Thanks
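
For reference, a minimal sketch of such a metric, assuming an illustrative logstash_up name and reusing the exporter's --logstash.endpoint address; this is not a confirmed part of the exporter:

package sketch

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
)

var upDesc = prometheus.NewDesc(
    "logstash_up",
    "Whether the Logstash node API responded to the last scrape (1 = up, 0 = down).",
    nil, nil)

// collectUp reports 1 when GET <endpoint>/_node succeeds, 0 otherwise.
func collectUp(ch chan<- prometheus.Metric, endpoint string) {
    value := 0.0
    resp, err := http.Get(endpoint + "/_node")
    if err == nil {
        resp.Body.Close()
        if resp.StatusCode == http.StatusOK {
            value = 1.0
        }
    }
    ch <- prometheus.MustNewConstMetric(upDesc, prometheus.GaugeValue, value)
}

An alert on logstash_up == 0 would then cover the crashed-Logstash case described above.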

Expose Logstash, OS, and JVM Information

Hi @christoe ,

Would it be possible to expose this information as labels on a metric with constant value 1, like Prometheus does with logstash_exporter_build_info? I think this would be nice-to-have information for a text field.

  • Logstash Information (version, pipeline information)
  • OS Information (name, arch, version, available processors)
  • JVM Information (vm_version, vm_vendor, vm_name)

See,

root@hostname ~ # curl -s http://localhost:9600/_node | jq
{
  "host": "hostname",
  "version": "5.6.5",
  "http_address": "127.0.0.1:9600",
  "id": "XXX",
  "name": "hostname",
  "pipeline": {
    "workers": 1234,
    "batch_size": 1234,
    "batch_delay": 10,
    "config_reload_automatic": false,
    "config_reload_interval": 3,
    "dead_letter_queue_enabled": false,
    "id": "main"
  },
  "os": {
    "name": "Linux",
    "arch": "amd64",
    "version": "3.10.0-693.11.1.el7.x86_64",
    "available_processors": 64
  },
  "jvm": {
    "pid": 34059,
    "version": "1.8.0_151",
    "vm_version": "1.8.0_151",
    "vm_vendor": "Oracle Corporation",
    "vm_name": "OpenJDK 64-Bit Server VM",
    "start_time_in_millis": 1513864863107,
    "mem": {
      "heap_init_in_bytes": 1073741824,
      "heap_max_in_bytes": 4151836672,
      "non_heap_init_in_bytes": 2555904,
      "non_heap_max_in_bytes": 0
    },
    "gc_collectors": [
      "ParNew",
      "ConcurrentMarkSweep"
    ]
  }
}

Thanks!
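
A minimal sketch of an info-style metric built from the /_node document above; the metric name, label set, and types are illustrative assumptions, not the exporter's actual code:

package sketch

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical slice of the /_node document shown above.
type NodeInfo struct {
    Version string `json:"version"`
    OS      struct {
        Name    string `json:"name"`
        Arch    string `json:"arch"`
        Version string `json:"version"`
    } `json:"os"`
    JVM struct {
        VMVersion string `json:"vm_version"`
        VMVendor  string `json:"vm_vendor"`
        VMName    string `json:"vm_name"`
    } `json:"jvm"`
}

var infoDesc = prometheus.NewDesc(
    "logstash_node_info",
    "Logstash node information; the value is always 1.",
    []string{"version", "os_name", "os_arch", "os_version", "jvm_version", "jvm_vendor", "jvm_name"}, nil)

// collectInfo exposes the node details purely as labels, mirroring the
// pattern used by logstash_exporter_build_info.
func collectInfo(ch chan<- prometheus.Metric, n NodeInfo) {
    ch <- prometheus.MustNewConstMetric(infoDesc, prometheus.GaugeValue, 1,
        n.Version, n.OS.Name, n.OS.Arch, n.OS.Version,
        n.JVM.VMVersion, n.JVM.VMVendor, n.JVM.VMName)
}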

go get error

go get -u github.com/DagensNyheter/logstash_exporter
# github.com/DagensNyheter/logstash_exporter
logstash_exporter/logstash_exporter.go:73:18: cannot use ch (type chan<- "github.com/DagensNyheter/logstash_exporter/vendor/github.com/prometheus/client_golang/prometheus".Metric) as type chan<- "github.com/BonnierNews/logstash_exporter/vendor/github.com/prometheus/client_golang/prometheus".Metric in argument to c.Collect
go version
go version go1.9 linux/amd64
govendor -version
v1.0.8

Issue while building docker image

I'm facing an issue while building the Docker image:

package github.com/BonnierNews/logstash_exporter: directory "/go/src/github.com/BonnierNews/logstash_exporter" is not using a known version control system

Below is the trace from the image build:
Sending build context to Docker daemon 12.45MB
Step 1/8 : FROM golang:1.9 as golang
---> ef89ef5c42a9
Step 2/8 : ADD . $GOPATH/src/github.com/BonnierNews/logstash_exporter/
---> 97ace632cfe1
Step 3/8 : RUN curl -fsSL -o /usr/local/bin/dep https://github.com/golang/dep/releases/download/v0.3.2/dep-linux-amd64 && chmod +x /usr/local/bin/dep && go get -u github.com/BonnierNews/logstash_exporter && cd $GOPATH/src/github.com/BonnierNews/logstash_exporter && dep ensure && make
---> Running in 9b8597dd1c79
package github.com/BonnierNews/logstash_exporter: directory "/go/src/github.com/BonnierNews/logstash_exporter" is not using a known version control system
The command '/bin/sh -c curl -fsSL -o /usr/local/bin/dep https://github.com/golang/dep/releases/download/v0.3.2/dep-linux-amd64 && chmod +x /usr/local/bin/dep && go get -u github.com/BonnierNews/logstash_exporter && cd $GOPATH/src/github.com/BonnierNews/logstash_exporter && dep ensure && make' returned a non-zero code: 1

Add queue metrics

It would be nice to have the queue metrics for persistent queues too.
