prometheus / haproxy_exporter

Simple server that scrapes HAProxy stats and exports them via HTTP for Prometheus consumption

License: Apache License 2.0

Languages: Makefile 3.23%, Go 95.93%, Dockerfile 0.84%
Topics: go, haproxy, haproxy-exporter, metrics, prometheus, prometheus-exporter

haproxy_exporter's Introduction

Prometheus

Visit prometheus.io for the full documentation, examples and guides.


Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

The features that distinguish Prometheus from other metrics and monitoring systems are:

  • A multi-dimensional data model (time series defined by metric name and set of key/value dimensions)
  • PromQL, a powerful and flexible query language to leverage this dimensionality
  • No dependency on distributed storage; single server nodes are autonomous
  • An HTTP pull model for time series collection
  • Pushing time series is supported via an intermediary gateway for batch jobs
  • Targets are discovered via service discovery or static configuration
  • Multiple modes of graphing and dashboarding support
  • Support for hierarchical and horizontal federation

Architecture overview

(architecture diagram)

Install

There are various ways of installing Prometheus.

Precompiled binaries

Precompiled binaries for released versions are available in the download section on prometheus.io. Using the latest production release binary is the recommended way of installing Prometheus. See the Installing chapter in the documentation for all the details.

Docker images

Docker images are available on Quay.io or Docker Hub.

You can launch a Prometheus container for trying it out with

docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus

Prometheus will now be reachable at http://localhost:9090/.
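To run it with your own configuration, a common pattern (a sketch, not taken from this README) is to bind-mount a prometheus.yml over the image's default config path:

docker run -d --name prometheus -p 127.0.0.1:9090:9090 \
  -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus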

Building from source

To build Prometheus from source code, you need a working Go toolchain, plus Node.js and npm for the web UI assets.

Start by cloning the repository:

git clone https://github.com/prometheus/prometheus.git
cd prometheus

You can use the go tool to build and install the prometheus and promtool binaries into your GOPATH:

GO111MODULE=on go install github.com/prometheus/prometheus/cmd/...
prometheus --config.file=your_config.yml

However, when using go install to build Prometheus, Prometheus will expect to be able to read its web assets from local filesystem directories under web/ui/static and web/ui/templates. In order for these assets to be found, you will have to run Prometheus from the root of the cloned repository. Note also that these directories do not include the React UI unless it has been built explicitly using make assets or make build.

An example of the above configuration file can be found here.
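For orientation, a minimal scrape configuration might look like this (an illustrative sketch, not the repository's example file):

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']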

You can also build using make build, which will compile in the web assets so that Prometheus can be run from anywhere:

make build
./prometheus --config.file=your_config.yml

The Makefile provides several targets:

  • build: build the prometheus and promtool binaries (includes building and compiling in web assets)
  • test: run the tests
  • test-short: run the short tests
  • format: format the source code
  • vet: check the source code for common errors
  • assets: build the React UI

Service discovery plugins

Prometheus is bundled with many service discovery plugins. When building Prometheus from source, you can edit the plugins.yml file to disable some service discovery mechanisms. The file is a YAML-formatted list of Go import paths that will be built into the Prometheus binary.
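For example, entries in such a list are plain Go import paths, one per item (a hypothetical excerpt; the real file enumerates all bundled discovery mechanisms):

- github.com/prometheus/prometheus/discovery/kubernetes
- github.com/prometheus/prometheus/discovery/consul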

After you have changed the file, you need to run make build again.

If you are using another method to compile Prometheus, make plugins will generate the plugins file accordingly.

If you add out-of-tree plugins, which we do not endorse at the moment, additional steps might be needed to adjust the go.mod and go.sum files. As always, be extra careful when loading third party code.

Building the Docker image

The make docker target is designed for use in our CI system. You can build a docker image locally with the following commands:

make promu
promu crossbuild -p linux/amd64
make npm_licenses
make common-docker-amd64

Using Prometheus as a Go Library

Remote Write

We are publishing our Remote Write protobuf independently at buf.build.

You can use that as a library:

go get buf.build/gen/go/prometheus/prometheus/protocolbuffers/go@latest

This is experimental.
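A rough sketch of consuming the generated code (the exact generated import path and the pointer-valued fields are assumptions about the buf-generated Go SDK, not confirmed by this README):

package main

import (
    "fmt"

    prompb "buf.build/gen/go/prometheus/prometheus/protocolbuffers/go/prompb"
)

func main() {
    // Assemble a remote-write request with a single series and one sample.
    req := &prompb.WriteRequest{
        Timeseries: []*prompb.TimeSeries{{
            Labels:  []*prompb.Label{{Name: "__name__", Value: "demo_metric"}},
            Samples: []*prompb.Sample{{Value: 1, Timestamp: 1700000000000}},
        }},
    }
    fmt.Println(req.String())
}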

Prometheus code base

In order to comply with go mod rules, Prometheus release numbers do not exactly match Go module releases. For the Prometheus v2.y.z releases, we publish equivalent v0.y.z tags.

Therefore, a user who wants to use Prometheus v2.35.0 as a library can do:

go get github.com/prometheus/prometheus@v0.35.0

This solution makes it clear that we might break our internal Go APIs between minor user-facing releases, as breaking changes are allowed in major version zero.
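In a go.mod this maps out as (illustrative):

require github.com/prometheus/prometheus v0.35.0 // corresponds to Prometheus v2.35.0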

React UI Development

For more information on building, running, and developing on the React-based UI, see the React app's README.md.

More information

  • Godoc documentation is available via pkg.go.dev. Due to peculiarities of Go Modules, v2.x.y will be displayed as v0.x.y.
  • See the Community page for how to reach the Prometheus developers and users on various communication channels.

Contributing

Refer to CONTRIBUTING.md

License

Apache License 2.0, see LICENSE.

haproxy_exporter's People

Contributors

ashmere, brian-brazil, colakong, dependabot[bot], dimitrovvlado, discordianfish, frittentheke, gkistler, grobie, inosato, juliusv, matthiasr, matttproud, mmoya, philipfoulkes, pjhamala, prombot, roidelapluie, roman-vynar, sdurrheimer, simonpasquier, superq, tboerger, tescherm, thomasgl-orange, vitorarins, vsamidurai, wdauchy, xla, yuvraj9

haproxy_exporter's Issues

Most gauges are actually counters

Looks like the haproxy exporter uses gauges for most metrics although they are in fact counters, e.g.:

# TYPE haproxy_backend_bytes_in_total gauge
...
# TYPE haproxy_backend_http_responses_total gauge
...
...

Support reading from UNIX socket

From this fine article

The stats page is great for a quick, human-readable view of HAProxy. However, there are a couple of downsides:

  • static: the page must be refreshed to be updated
  • ephemeral: old statistics are lost on each refresh

If you plan on scraping HAProxy's metrics for a script or otherwise, communicating over the socket interface is a much more practical method.

UPDATE 1:
Though, as mentioned in the insightful comment by @grobie, reading stats through the UNIX socket still carries the above characteristics (that part of the article is incoherent), there are more stats available over the socket than through the HTTP endpoint, and getting those exported appears attractive to some members of the community.

Metrics Value too high

heya,
So after using the haproxy_exporter I see that:
increase(haproxy_server_connections_total{backend=~"$backend", server=~"$server", alias="$alias"}[$interval])
has a value too high to be considered valid. In haproxy the pool is set to have a maximum of 5000 connections, so how does it appear to have e.g. 171472?

thanks
Ricardo

Compatibility with single-dash command line flags broken in v0.8.0

In v0.8.0, the syntax for command-line flags was changed to use double dashed prefix instead of single dash prefix, without any backward compatibility.

This has the effect of breaking all projects which use the unversioned docker image (prom/haproxy-exporter:latest) and pass custom arguments to the /bin/haproxy_exporter entrypoint. Those now fail with this message in the log:

haproxy_exporter: error: unknown short flag '-a', try --help

I don't mind updating my own projects to use versioned images, but here it broke existing important projects which may not be updated that easily, such as the openshift haproxy router addon (see my issue here: openshift/origin#15982).

Would it be possible to ensure some backward compatibility in the next release?

Haproxy exporter unable to fetch data getting invalid URL and invalid userinfo errors

When I run the command below, I am getting an invalid URL port error.

./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://user:$(cat pwfile)192.168.1.10:10000/haproxy/stats;csv"

OUTPUT:

INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7)  source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06)  source=haproxy_exporter.go:496
INFO[0000] Listening on :9101                            source=haproxy_exporter.go:521


ERRO[0013] Can't scrape HAProxy: Get http://admin:abEDokA("192.168.1.10:10000/haproxy/stats;csv: invalid URL port abEDokA("192.168.1.10:10000"  source=haproxy_exporter.go:315

And when I place an @ sign between the password and the IP address, as in ./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv"
it gives the error below:

INFO[0000] Starting haproxy_exporter (version=0.9.0, branch=master, revision=0cae8ee3e3f3b7c517db2cc68f386672d8b1b6a7)  source=haproxy_exporter.go:495
INFO[0000] Build context (go=go1.10.1, user=root@rlinux57, date=20180724-16:08:06)  source=haproxy_exporter.go:496
FATA[0000] parse http://admin:abEDokA("@192.168.1.10:10000/haproxy/stats;csv: net/url: invalid userinfo  source=haproxy_exporter.go:500

And my prometheus settings are:

  - job_name: 'haproxy'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9101']
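A likely fix, not part of the original report: keep the @ separator and percent-encode the reserved characters in the password (( becomes %28, " becomes %22):

./haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="http://admin:abEDokA%28%22@192.168.1.10:10000/haproxy/stats;csv"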

[Feature Request] haproxy_exporter should be able to parse haproxy syslog logs

haproxy_exporter should be able to parse the haproxy syslog format.
The syslog (at info level) allows 'access_log'-style metrics to be digested, but with the added value of deep metrics on each request, including:

  • session duration per the HTTP request path (this is only sensible if you have a bounded small set of http_request_paths)
  • request rates per backend, server and status code
  • histograms for the server and backend queue lengths experienced by each request

*capabilities list 'stolen' from a comment by a python implementer on google

This feature would allow the haproxy exporter to have complete monitoring capabilities for haproxy, not just those exposed by the stats endpoint.

Possible shortcuts for implementations:
grok exporter allows syslog metrics collection with logstash patterns, which include haproxy.

Thoughts: this would require the haproxy_exporter to host a syslog server internally, which is also being done by the community/personal project stream_exporter.

Config for status checks

Is it possible to have the status checks for server state be definable in a config? Currently it's hard-coded to limit your health checks to the default fall 3 rise 2, however in many cases you want to be able to adjust that along with the interval.


For example, I may want to have fall 6 rise 1 with an interval of 10s. If I do that, during any transition state, the value of haproxy_server_up in prom is 0 when it should be either 1 (for up transitioning down) or 0 (for down transitioning up).

System Proxy not used

I need to connect to the haproxy stats uri using a http proxy.

grafana:~/haproxy_exporter # http_proxy="IP:3128" go run /root/haproxy_exporter/haproxy_exporter.go --web.listen-address=":9102" --haproxy.scrape-uri="http://RESSOURCE/backhand/abc;csv"
INFO[0000] Starting haproxy_exporter (version=, branch=, revision=) source="haproxy_exporter.go:495"
INFO[0000] Build context (go=go1.9.4, user=, date=) source="haproxy_exporter.go:496"
INFO[0000] Listening on :9102 source="haproxy_exporter.go:521"
ERRO[0009] Can't scrape HAProxy: Get http://RESSOURCE/backhand/abc;csv: dial tcp: lookup RESSOURCE on [::1]:53: read udp [::1]:54687->[::1]:53: read: connection refused source="haproxy_exporter.go:315"

Same when using an IP address directly; no requests arrive at the squid proxy.

The proxy is set as the system proxy; I tried with https as well, with the same result.

getting stats from nbproc > 1

I would like to know how the metrics are gathered when haproxy is configured with nbproc > 1.

The problem with hatop is that it can only attach itself to one process; does the same happen with this haproxy_exporter?

context deadline exceeded all the way down

Could someone suggest the right timeout for this?
I've tried 5, 10, and 15 but no luck, I just get the same error. Meanwhile, via the browser all data looks good, and the haproxy host is not even heavily loaded, just a regular ~10 rps.

Provide ARM64 binary

At Arm and Linaro we are working on getting Kolla-Kubernetes to deploy OpenStack on ARM64. But the deployment needs the haproxy_exporter docker image, and building that image needs a haproxy_exporter binary for ARM64.

We may help with getting it built.

Response time from HAproxy stats

Is there a way to get the response time from the HAProxy exporter? The HAProxy stats page shows the average response time for the last 1024 successful connections.

Can't scrape HAProxy using dial unix

Hi,
I have a Docker cluster in swarm mode and I would like to monitor all of my haproxy instances running in global mode.

See my compose:

    image: prom/haproxy-exporter
    networks:
      - dev
    volumes:
      - /run/haproxy:/run/haproxy,readonly
    command: --haproxy.scrape-uri=unix:/run/haproxy/admin.sock
    deploy:
      mode: global
      endpoint_mode: dnsrr
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

I configured haproxy to enable the stats socket:
stats socket /run/haproxy/admin.sock mode 660 level admin

I mapped the volume to docker hosts:

    volumes:
      - /run/haproxy:/run/haproxy

I can't figure out what is going on because I'm getting this error:

monitoring_haproxy-exporter.0.iihvejbsceh5@server1    | time="2018-04-30T05:11:17Z" level=error msg="Can't scrape HAProxy: dial unix /run/haproxy/admin.sock: connect: no such file or directory" source="haproxy_exporter.go:315" 

It seems that the path is correct; any idea?
Thanks in advance
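One thing worth checking (a guess, not from the thread): the compose short volume syntax takes a colon-separated mode rather than a comma, so the read-only mount would be written as:

    volumes:
      - /run/haproxy:/run/haproxy:ro

With ,readonly the mount target may not be what is expected, which could explain the missing socket inside the container.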

Can not read haproxy csv data

Haproxy_exporter conf and run log:

 ./haproxy_exporter -haproxy.scrape-uri=http://localhost:8089:/stats;csv -haproxy.pid-file=/xxx/xxx/haproxy-server/conf/haproxy.pid
INFO[0000] Starting haproxy_exporter (version=0.7.1, branch=master, revision=735866c7ef98062d5f0a4a25680ac3b96d6767f2)  source=haproxy_exporter.go:460
INFO[0000] Build context (go=go1.7.1, user=root@c84140abee66, date=20161012-10:58:02)  source=haproxy_exporter.go:461
INFO[0000] Listening on :9101                            source=haproxy_exporter.go:486

but error is:

ERRO[0008] Can't read CSV: line 1, column 22: bare " in non-quoted-field  source=haproxy_exporter.go:317
ERRO[0008] Can't read CSV: line 2, column 38: bare " in non-quoted-field  source=haproxy_exporter.go:317

why?

Metrics are incomplete if error encountered parsing CSV from HAProxy

This error could possibly even include a timeout, in which case we would prefer to not expose any metrics at all rather than an incomplete set (which can lead to heavy underreporting of traffic and thus cause false alerts). Essentially, in the scrape() method, we loop through the lines, setting metrics, and we will just skip errors there.

This will probably implicitly change/get fixed by moving to const metrics, see #43

Can't scrape stats but curl is working

Hi guys,

I'm not able to scrape data with haproxy_exporter but my curl is working properly. I tried with 0.9.0 and master versions from https://hub.docker.com/r/prom/haproxy-exporter

docker logs output:

# curl localhost:9101/metrics
time="2018-08-20T19:47:17Z" level=info msg="Starting haproxy_exporter (version=0.9.0, branch=master, revision=44e70a0ba99f1443a406073f6e41b372e5e02fd5)" source="haproxy_exporter.go:508" 
time="2018-08-20T19:47:17Z" level=info msg="Build context (go=go1.10.3, user=root@956ee5a5616a, date=20180807-19:25:22)" source="haproxy_exporter.go:509" 
time="2018-08-20T19:47:17Z" level=info msg="Listening on :9101" source="haproxy_exporter.go:534" 
time="2018-08-20T19:48:25Z" level=error msg="Can't scrape HAProxy: Get http://xxx:[email protected]:1984/haproxy?stats;csv: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" source="haproxy_exporter.go:328"

curl output:

# curl "http://xxx:[email protected]:1984/haproxy?stats;csv"
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
stats,FRONTEND,,,1,1,2000,2,184,23802,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,1,0,1,,,,0,1,0,0,0,0,,1,1,2,,,0,0,0,0,,,,,,,,
stats,BACKEND,0,0,0,0,200,0,184,23802,0,0,,0,0,0,0,UP,0,0,0,,0,1229393,0,,1,2,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,0,,,0,0,0,0,
portalredirect,FRONTEND,,,0,2,2000,15,2063,4133,0,0,9,,,,,OPEN,,,,,,,,,1,3,0,,,,0,0,0,2,,,,0,0,5,10,0,0,,0,2,15,,,0,0,0,0,,,,,,,,
portalredirect,controller001,0,0,0,1,,2,618,770,,0,,0,0,0,0,UP,1,1,0,0,0,1229393,0,,1,3,1,,2,,2,0,,1,L7OK,301,3,0,0,2,0,0,0,0,,,,0,0,,,,,1226516,Moved Permanently,,0,0,0,1,
portalredirect,controller002,0,0,0,1,,2,391,710,,0,,0,0,0,0,UP,1,1,0,1,0,1229393,0,,1,3,2,,2,,2,0,,1,L7OK,301,3,0,0,1,1,0,0,0,,,,0,0,,,,,1224142,Moved Permanently,,0,1,0,1,
portalredirect,controller003,0,0,0,1,,2,1054,770,,0,,0,0,0,0,UP,1,1,0,1,0,1229393,0,,1,3,3,,2,,2,0,,1,L7OK,301,3,0,0,2,0,0,0,0,,,,0,0,,,,,1222380,Moved Permanently,,0,0,0,0,
portalredirect,BACKEND,0,0,0,1,200,6,2063,4133,0,0,,0,0,0,0,UP,3,3,0,,0,1229393,0,,1,3,0,,6,,1,0,,1,,,,0,0,5,1,0,0,,,,,0,0,0,0,0,0,1222380,,,0,0,0,1,
portal,FRONTEND,,,0,15,2000,342,1063917,6461543,0,0,0,,,,,OPEN,,,,,,,,,1,4,0,,,,0,0,0,6,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,
portal,controller001,0,0,0,2,,41,34032,313825,,0,,0,0,0,0,UP,1,1,0,1,0,1229393,0,,1,4,1,,41,,2,0,,2,L4OK,,2,,,,,,,0,,,,3,0,,,,,1221784,,,0,0,0,378,
portal,controller002,0,0,0,13,,227,709898,4189785,,0,,0,3,10,0,UP,1,1,0,1,0,1229393,0,,1,4,2,,217,,2,0,,6,L4OK,,2,,,,,,,0,,,,77,3,,,,,1221692,,,8,6,0,24114,
portal,controller003,0,0,0,10,,84,319987,1957933,,0,,0,0,0,0,UP,1,1,0,1,0,1229393,0,,1,4,3,,84,,2,0,,5,L4OK,,2,,,,,,,0,,,,23,0,,,,,1221771,,,0,0,0,13303,
portal,BACKEND,0,0,0,15,200,342,1063917,6461543,0,0,,0,3,10,0,UP,3,3,0,,0,1229393,0,,1,4,0,,342,,1,0,,6,,,,,,,,,,,,,,103,3,0,0,0,0,1221692,,,7,4,0,22542,

Thanks for your help.

Support for multiple haproxy processes

We run haproxy with nbproc set to > 1, which means that haproxy_exporter will only scrape a single instance. It would be useful if there was some support for multiple scrape URIs.

Specifying additional metrics when running HaProxy exporter doesn't seem to have any effect on the metrics reported.

Specifying -haproxy.server-metric-fields doesn't seem to have any effect on the list of metrics being collected by the haproxy exporter.

Based on the help documentation, we got the list of metrics from here

So when we run the exporter with the following commands,

docker run -d -p 9101:9101 prom/haproxy-exporter -haproxy.scrape-uri="http://<User>:<Password>@wordpress.wordpress.mysite.com:8010/haproxy?stats;csv" -haproxy.server-metric-fields="1,2,3,4,7,12"

docker run -d -p 9101:9101 prom/haproxy-exporter -haproxy.scrape-uri="http://<User>:<Password>@wordpress.wordpress.mysite.com:8010/haproxy?stats;csv"

The list of metrics seems to be the same; we still don't get metrics like max connections / request queue size, etc.

Support configuration through environment variables

It would be nice if haproxy_exporter supported configuration via environment variables for --haproxy.scrape-uri. The scrape-uri can contain a username:password combination and might be visible on Linux using a simple ps(1) command. Kingpin supports overrides from environment variables via the OverrideDefaultFromEnvar() function (there is a simple example on the Kingpin homepage).
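A minimal sketch of what that wiring could look like with kingpin (hypothetical code, not the exporter's actual flag definition):

package main

import (
    "fmt"

    "gopkg.in/alecthomas/kingpin.v2"
)

var scrapeURI = kingpin.Flag("haproxy.scrape-uri", "URI on which to scrape HAProxy.").
    Default("http://localhost:5000/;csv").
    OverrideDefaultFromEnvar("HAPROXY_SCRAPE_URI"). // env var overrides the default, so credentials stay out of ps(1)
    String()

func main() {
    kingpin.Parse()
    fmt.Println("scraping", *scrapeURI)
}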

Continue processing after broken CSV lines

While one server was drained, our HAProxies produced lines like

backend-name,server-name,0,0,0,1,,17,21927,1153320,,0,,0,0,0,0,DRAIN (agent)100,1,0,0,0,4632,0,,1,42,4,,17,,2,0,,1,L7OK,200,0,0,9,5,3,0,0,0,,,,0,0,,,,,11352,OK,Connection refused,0,0,1,15,

Note the missing comma after DRAIN (agent). Most metrics went AWOL, even though the lines for other servers are still processable. The exporter should log a warning but carry on.

Support basic auth

We're using the haproxy exporter in an insecure environment and would like to be able to require basic auth for scraping it. Would it be possible to add support for it?

Thank you for a really useful piece of software!

No stats, but 0 scrape errors

I am running this from the docker container. Prometheus is showing 0 haproxy csv scrape errors, but the only haproxy stats I see are up, haproxy_exporter_csv_parse_failures, and haproxy_exporter_total_scrapes. Is there something else I should be doing to get the haproxy metrics to show?

first path segment in URL cannot contain colon

If we use an IP, I get this message (both master and v0.8.0 in the docker image):
time="2017-12-19T18:22:08Z" level=info msg="Starting haproxy_exporter (version=0.8.0, branch=master, revision=2894f78b2ac6b3bb270dbe6920367ac6309aff9e)" source="haproxy_exporter.go:494"
time="2017-12-19T18:22:08Z" level=info msg="Build context (go=go1.9.2, user=root@8824b8ce515a, date=20171216-23:43:45)" source="haproxy_exporter.go:495"
time="2017-12-19T18:22:08Z" level=fatal msg="parse "http://10.10.0.208:9300/haproxy?stats;csv\": first path segment in URL cannot contain colon" source="haproxy_exporter.go:499"

Monitor multiple nginx pods under service

I need to monitor each pod under my service. All pods are running HAProxy. Is there any sample example for this kind of requirement? Prometheus was launched using prometheus-operator. Under targets I can see only the haproxy exporter.

Offer: Ambiguity of quotes usage

In your documentation you write:

Usage

HTTP stats URL

Specify custom URLs for the HAProxy stats port using the --haproxy.scrape-uri flag. For example, if you have set stats uri /baz,

haproxy_exporter --haproxy.scrape-uri="http://localhost:5000/baz?stats;csv"
Or to scrape a remote host:

haproxy_exporter --haproxy.scrape-uri="http://haproxy.example.com/haproxy?stats;csv"
Note that the ;csv is mandatory (and needs to be quoted).

If your stats port is protected by basic auth, add the credentials to the scrape URL:

haproxy_exporter --haproxy.scrape-uri="http://user:[email protected]/haproxy?stats;csv"
You can also scrape HTTPS URLs. Certificate validation is enabled by default, but you can disable it using the --haproxy.ssl-verify=false flag:

haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="https://haproxy.example.com/haproxy?stats;csv"

BUT if I want to make a service in Linux via systemd, for example

ExecStart=/opt/haproxy_exporter/haproxy_exporter --web.listen-address=":8000" "--haproxy.scrape-uri="http://localhost:5000/baz?stats;csv"

you get this error:

time="2018-07-13T19:00:23+03:00" level=fatal msg="parse "http://localhost:5000/baz?stats;csv": first path segment in URL cannot contain colon" source="haproxy_exporter.go:498"

If you change it to this one:

/opt/haproxy_exporter/haproxy_exporter "--web.listen-address=:8000" "--haproxy.scrape-uri=http://localhost:5000/baz?stats;csv"

it will be OK.

I checked on the command line in Linux/Windows - it worked fine!

I suggest that you make changes to the description to avoid other mistakes in the future.

Thank You.
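Summarizing the working form as a unit line (a sketch based on the reporter's finding: quote each whole argument, not just the value):

ExecStart=/opt/haproxy_exporter/haproxy_exporter "--web.listen-address=:8000" "--haproxy.scrape-uri=http://localhost:5000/baz?stats;csv"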

Release 0.8.1 ?

Hi @grobie,
Is it possible for you to release a v0.8.1, so that we can benefit from the changes (both fixes and new features)?
Thanks,

Wrong CSV field count

I have configured the haproxy exporter on my ubuntu server. This haproxy exporter is supposed to connect to the haproxy server and read the data, but after executing the following command I get a Wrong CSV field count error.

command executed: ./haproxy_exporter -haproxy.scrape-uri 'http://"user:pass"@host:port/admin?stats/;CSV'

Output: INFO[0000] Starting Server: :9101 source=haproxy_exporter.go:424

But when I refresh the page localhost:9101, it throws the following error on the command line.
ERRO[0008] Wrong CSV field count: 1 vs. 52 source=haproxy_exporter.go:304
ERRO[0008] Wrong CSV field count: 1 vs. 52 source=haproxy_exporter.go:304
ERRO[0008] Wrong CSV field count: 1 vs. 52 source=haproxy_exporter.go:304

What is `-haproxy.pid-file` used for?

I can't find any information in the README about it, and -help only states:

Path to haproxy's pid file.

Do I need it? When should I need it? I propose improving the -help message somewhat.

Long timeout metrics capture process when process terminates

Description of problem:
After upgrading from Openshift 3.3 to 3.4, the router metrics exporting pod is no longer working, seemingly due to a port conflict (where nothing is listening on the port)

I don't know if the container is setting SO_REUSEPORT on the container (I don't see it in haproxy_exporter.go and I don't think the golang http.ListenAndServe() does it by default).

See https://bugzilla.redhat.com/show_bug.cgi?id=1426446 for context.

MAINT status should not return 0

When a frontend or backend is set to maintenance mode, it is down on purpose and should not signal a fail state to prometheus/grafana by returning 0. Instead I suggest a return code of 2 so grafana can give the status a special color in case a loadbalancer is equipped with such a setting.

We've tried changing the return value in the code but that didn't seem to do the trick.

Use ConstMetrics

The haproxy exporter should be using ConstMetrics, rather than sharing state across scrapes.

Unable to scrape HAProxy instance running on pfSense

When trying to scrape some stats from haproxy running on pfsense I get an error like:

level=error msg="Parser expected at least 33 CSV fields, but got: 1" source="haproxy_exporter.go:361"

Looking at the haproxy stats csv output in a browser I get a single "#" on the first row; I'm guessing that this is the issue?

Example:
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,agent_status,agent_code,agent_duration,check_desc,agent_desc,check_rise,check_fall,check_health,agent_rise,agent_fall,agent_health,addr,cookie,mode,algo,conn_rate,conn_rate_max,conn_tot,intercepted,dcon,dses,
Followed by the frontends and backends of my setup...

I'm running HAProxy version 1.7.9

I started the haproxy_exporter in a docker container on a separate instance like this:
$ docker run --rm -p 9101:9101 --name haproxy-exporter quay.io/prometheus/haproxy-exporter:v0.8.0 --haproxy.scrape-uri="http://user:[email protected]:8080/haproxy/haproxy_stats.php?haproxystats=1;csv"

(stats uri in the config is set by pfsense to):
stats uri /haproxy/haproxy_stats.php?haproxystats=1

connections_total should be named sessions_total

The metric connections_total should be named sessions_total.


Field 7 of the HAProxy stats CSV is named stot

In HAProxy 1.5 it is described as:

cumulative number of connections

In HAProxy 1.6 it is described as:

cumulative number of connections

In HAProxy 1.7 it is described as:

cumulative number of sessions

1.7 also adds:

 77: conn_rate [.F..]: number of connections over the last elapsed second
 78: conn_rate_max [.F..]: highest known conn_rate
 79: conn_tot [.F..]: cumulative number of connections

With a changelog entry of:

  • MINOR: stats: add 3 fields to report the frontend-specific connection stats

Link to the commit for that changelog entry.

It would appear that the HAProxy documentation up to 1.7 had an incorrect description for stot which this exporter reflects.

Further evidence that the description is incorrect:

  • stot makes more sense as an abbreviation for session total than connection total.
  • The three metrics preceding stot are:
  4. scur [LFBS]: current sessions
  5. smax [LFBS]: max sessions
  6. slim [LFBS]: configured session limit
  • The HAProxy changelog makes no mention of stot changing purpose.

I believe the metric in this exporter should be renamed, so as to:

  1. Ensure accuracy.
  2. Make room for the addition of HAProxy 1.7 metrics.

Thoughts? :)

--haproxy.ssl-verify=false does not work

Latest release says in the readme that --haproxy.ssl-verify=false can be used when scraping a https:// url.

I tried the release tag and head; both print the following. I also tried --haproxy.ssl-verify false, which afaik works with Go's stdlib flag package.

haproxy_exporter: error: unexpected false, try --help

I don't know what's up; you are using some kingpin lib to parse the options: https://github.com/prometheus/haproxy_exporter/blob/master/haproxy_exporter.go#L475-L488 It seems this library does not recognize false as a boolean. So the intention is to only use it to default to false, and if the param is defined without a value, it gets set to true.

If you don't want to set ssl verify default behavior to false, you could change the flag type to string, and check if its explicitly set to "false", make that the boolean in your code. Or you could negate the flag meaning --haproxy.ssl-skip-verify=true, looks like its new in 0.9.0, but that would ofc be a breaking change.

P.S. Working on my certs atm, and it seems Go's http client and tools like curl cannot fully verify the cert chain. I would have liked to use this as a temp solution; I won't need it in a few days anymore once I get this cert stuff sorted out. But as it's a documented feature I thought I'd report it anyway :)

Provide prebuilt binaries

It would be tremendously easier to deploy haproxy_exporter if you could provide prebuilt binaries for the common platforms. At least for linux/amd64.

How to connect to an https haproxy source

Hi,

I'm trying to connect to haproxy like this: ~/work/bin/haproxy_exporter -haproxy.scrape-uri="https://loadbalancerserver/haproxy?stats;csv" but I have to use https and get the same error all the time (which makes sense): ERRO[0019] Can't scrape HAProxy: Get https://loadbalancerserver/haproxy?stats;csv: x509: certificate signed by unknown authority source=haproxy_exporter.go:299

If I try to curl the same URI I get a certificate error (which again makes sense), but there I have the option to add --insecure and the problem is solved.
How can I solve this in the exporter?

Note: the socket option is not a real option in my situation, nor is running the exporter from the loadbalancer itself.

Thanks

some connections_total metrics are missing

The haproxy dashboard I use (https://grafana.com/dashboards/2428) uses the following metrics:

  • haproxy_server_connections_total
  • haproxy_frontend_connections_total
  • haproxy_backend_connections_total

but these metrics do not seem to exist. At least they are not shown on the localhost/metrics page.

All other metrics just work fine. Even stuff like haproxy_server_connection_errors_total

My Haproxy Version is 1.5.18

Why are these metrics not shown?
Is some configuration missing?

haproxy_exporter New flag handling > 0.8

haproxy_exporter has implemented new flag handling (double dashes are required).
The haproxy.scrape-uri example is using a single dash.
Insert a version case to differentiate between single and double dashes.

haproxy.ssl-verify doesn't like being false

As per the README.md, I set haproxy.ssl-verify to false:

./haproxy_exporter --haproxy.ssl-verify=false --haproxy.scrape-uri=...

(with a valid URI) and got the following error:

haproxy_exporter: error: unexpected false, try --help

Changing the URI to one that does have a valid SSL cert fixes this, but wasn't what I wanted to do.
