dandydeveloper / charts

Various helm charts migrated from [helm/stable] due to deprecation

Home Page: https://dandydeveloper.github.io/charts

License: Apache License 2.0

Mustache 94.36% Smarty 5.64%
Topics: charts, helm, kubernetes

charts's People

Contributors

34fathombelow, agaudreault, alexandru-postolache, aniekgul, archoversight, benoitsob, cschockaert, dandydeveloper, daniel-cortez-stevenson, deblaci, ebgc, eddycharly, eraac, ggre4, gmartinez-sisti, j771, jdheyburn, jorenz, lord-kyron, martijnvdp, metajiji, mhkarimi1383, michael-michalski, rowanruseler, sean-mcgimpsey, silvpol, smcavallo, sonrai-doyle, tahajahangir, therealdwright


charts's Issues

[chart/redis-ha][REQUEST] Pull S3 credentials from k8s secret

Is your feature request related to a problem? Please describe.
AWS S3 credentials should be able to be pulled from a Kubernetes secret instead of a string in the values.yaml file.

Describe the solution you'd like
Reference a Kubernetes secret containing S3 keys.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.
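In plain Kubernetes terms, the request amounts to the chart rendering the S3 credentials as secretKeyRef environment variables instead of inlining them. A minimal sketch of what the rendered container env could look like; the secret name and key names here are illustrative only, not existing chart options:

env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: redis-backup-s3-credentials   # hypothetical secret name
        key: access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: redis-backup-s3-credentials
        key: secret-access-key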

[chart/redis-ha][BUG]

Describe the bug
I am deploying v4.10.4 of the redis-ha chart in a staging Kubernetes cluster (Azure) with 2 replicas, and one of the replicas is not able to access the master node through redis-ha-haproxy.

PS: I have it running in production with no issues with 3 replicas.

To Reproduce
Settings below apply both for HAProxy and Redis:

  • replicas: 2
  • hardAntiAffinity: true
  • Quorum: 1

Expected behavior
redis-ha-haproxy is accessible from either node.

Additional context
PODS:
redis-ha-haproxy-fbb7b4cfd-tps4m 1/1 Running 0 19m 10.244.42.87 vmss000001
redis-ha-haproxy-fbb7b4cfd-wkrb2 1/1 Running 0 19m 10.244.42.11 vmss000000
redis-ha-server-0 2/2 Running 0 19m 10.244.42.56 vmss000001
redis-ha-server-1 2/2 Running 0 18m 10.244.42.42 vmss000000

SERVICES:
redis-ha ClusterIP None 6379/TCP,26379/TCP 25m
redis-ha-announce-0 ClusterIP 10.0.29.229 6379/TCP,26379/TCP 25m
redis-ha-announce-1 ClusterIP 10.0.87.103 6379/TCP,26379/TCP 25m
redis-ha-haproxy ClusterIP 10.0.240.163 6379/TCP 25m

redis-ha-server-0 (node1) --- Everything okay in this container. Can also access the master node through redis-ha-haproxy:6379
redis-ha-server-1 (node2) -- Problematic

logs on redis-ha-server-1 redis container
1:S 19 Nov 2020 12:53:01.198 * Connecting to MASTER 10.0.29.229:6379
1:S 19 Nov 2020 12:53:01.198 * MASTER <-> REPLICA sync started
1:S 19 Nov 2020 12:54:02.376 # Timeout connecting to the MASTER...

logs on redis-ha-server-1 sentinel container
1:X 19 Nov 2020 12:50:52.030 # Sentinel ID is bc16ddc9a19ce8f3ff5ec8f31a2d6b94436af8a3
1:X 19 Nov 2020 12:50:52.030 # +monitor master mymaster 10.0.29.229 6379 quorum 2
1:X 19 Nov 2020 12:51:02.038 # +sdown master mymaster 10.0.29.229 6379

logs on redis-ha-haproxy for node2
[NOTICE] 323/124217 (1) : New worker #1 (6) forked
[WARNING] 323/124219 (6) : Server check_if_redis_is_master_0/R0 is DOWN, reason: Layer4 timeout, info: " at step 1 of tcp-check (connect)", check duration: 1000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 323/124219 (6) : Server check_if_redis_is_master_0/R1 is DOWN, reason: Layer4 timeout, info: " at step 1 of tcp-check (connect)", check duration: 1001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 323/124219 (6) : backend 'check_if_redis_is_master_0' has no server available!
[WARNING] 323/124219 (6) : Server check_if_redis_is_master_1/R0 is DOWN, reason: Layer4 timeout, info: " at step 1 of tcp-check (connect)", check duration: 1001ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 323/124219 (6) : Server check_if_redis_is_master_1/R1 is DOWN, reason: Layer4 timeout, info: " at step 1 of tcp-check (connect)", check duration: 1000ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 323/124219 (6) : backend 'check_if_redis_is_master_1' has no server available!
[WARNING] 323/124219 (6) : Server bk_redis_master/R0 is DOWN, reason: Layer4 timeout, info: " at step 1 of tcp-check (connect)", check duration: 1000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 323/124219 (6) : Server bk_redis_master/R1 is DOWN, reason: Layer4 timeout, info: " at step 1 of tcp-check (connect)", check duration: 1001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 323/124219 (6) : backend 'bk_redis_master' has no server available!

[chart/redis-ha][BUG] Can not set Values.persistentVolume.reclaimPolicy

Describe the bug

Setting Values.persistentVolume.reclaimPolicy for the redis-ha chart results in an error on helm install

To Reproduce

Steps to reproduce the behavior:

  1. Create a new values.yaml file named values-pv-reclaim-policy-bug.yaml:
persistentVolume:
  size: 2Gi
  reclaimPolicy: "Delete"
  2. From the repo root dir, install the chart with Helm 3
helm install redis-ha-will-fail ./charts/redis-ha -f values-pv-reclaim-policy-bug.yaml
  3. See error:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(StatefulSet.spec.volumeClaimTemplates[0]): unknown field "persistentVolumeReclaimPolicy" in io.k8s.api.core.v1.PersistentVolumeClaim

Expected behavior

Setting Values.persistentVolume.reclaimPolicy should not result in an error on helm install

Additional context

helm version
>>> version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
kubectl version
>>> Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T23:34:48Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"darwin/amd64"}
>>> Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
minikube version
>>> minikube version: v1.9.2
>>> commit: 93af9c1e43cab9618e301bc9fa720c63d5efa393
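For reference, the validation error is expected: persistentVolumeReclaimPolicy is not a field of a PersistentVolumeClaim, so it cannot appear in the StatefulSet's volumeClaimTemplates. Reclaim policy is set on the StorageClass (or on the PV itself). A minimal sketch, assuming the minikube hostpath provisioner from this environment; the chart's existing persistentVolume.storageClass value could then point at this class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redis-ha-delete                   # illustrative name
provisioner: k8s.io/minikube-hostpath     # assumption: minikube's default provisioner
reclaimPolicy: Delete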

[chart/redis-ha][BUG] - Managing configuration differences for old Redis versions

Firstly, thanks for a great chart. It makes our life so much easier for a Production Redis implementation.

Describe the bug
Config issues when using Redis 4

To Reproduce
When using an older Redis version tag (e.g. 4.0.14-alpine), the version 5 configuration is still applied (throwing an error) even if version 4 configuration is added to a values file. The note provided:

[image]

suggests that config options will need to be "renamed", but it isn't clear how to do so.

I tried adding min-slaves-max-lag: 5 under redis.config, but an error was still thrown about the presence of min-replicas-max-lag: 5.

Short of feeding the chart an entire redis.conf file (given how many config option name changes between 4 and 5 there were), how do I prevent the chart from using the version 5 options? How do I "rename" them as advised by the note?
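One possible way to do the "rename", assuming the version 5 names come from the chart's default redis.config map: Helm documents null as a way to delete a default key when merging user values, so the v5 keys could be nulled out and the v4 names added. This is only a sketch, not a confirmed chart feature, and null handling has varied between Helm versions, so verify the result with helm template:

redis:
  config:
    # drop the Redis 5 option names shipped as chart defaults (assumption)
    min-replicas-to-write: null
    min-replicas-max-lag: null
    # add the Redis 4 equivalents
    min-slaves-to-write: 1
    min-slaves-max-lag: 5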

[charts/redis-ha][BUG] EOF errors after upgrading to v4.5.3

Describe the bug

After upgrading the chart from v3.3.1 to v4.5.3, the proxy is unable to connect to redis and all requests fail with an EOF error. As a workaround, we've "restarted" the redis stateful set and the proxy pods; a couple of minutes after the restart the issue resolved.

To Reproduce
Steps to reproduce the behavior:

Unfortunately, no clear steps to reproduce. I've observed it twice after upgrading ~40 redis HA clusters. Several users of our product reported the same.

Expected behavior
It is expected to see intermittent issues during the upgrade, but no manual intervention should be required.

Additional context
Please find proxy and redis/sentinel logs in the attached files.

ha-proxy.log
redis-pod.log

chart/*: create releases / tags as part of CI process for individual charts

Is your feature request related to a problem? Please describe.
The chart repository doesn't facilitate easy navigation between chart releases for release-to-release comparisons.

Describe the solution you'd like
At the time that a particular chart is released, a GitHub release and/or tag should be created. This would be used for comparing releases for audit and changelog purposes.

Describe alternatives you've considered
Unfortunately, this is a problem inherited from helm/charts and its approach of managing multiple charts within a single repository. I don't know of any good examples of other monolithic repos of various components successfully creating tags/releases for subsections, which is why I'm somewhat against this pattern to begin with. As a long shot I looked at alpinelinux/aports releases, which builds packages for the Alpine Linux distribution; they release/tag because that forms the version being released, whereas Helm chart monolithic repos don't work the same way.

Additional context

charts/redis-ha prometheus scraping with service monitoring not working

Describe the bug
I have the Prometheus Operator installed in my Kubernetes cluster. Simply enabling metrics and the serviceMonitor does not make the metrics appear in Prometheus.

To Reproduce
My configuration is
metrics:
  enabled: true
  # prometheus port & scrape path
  port: 9101
  portName: http-exporter-port
  scrapePath: /metrics
  ## Metrics exporter pod Annotation and Labels
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9101"

serviceMonitor:
  # When set true then use a ServiceMonitor to configure scraping
  enabled: true
  # Set the namespace the ServiceMonitor should be deployed
  namespace: monitoring
  # Set how frequently Prometheus should scrape
  interval: 30s
  # Set path to redis-exporter telemtery-path
  telemetryPath: /metrics
  # Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
  labels: {}
  # Set timeout for scrape
  timeout: 10s

Expected behavior
See some redis_ metrics show up in prometheus.

Additional context
Is there supposed to be a metrics Kubernetes Service? I see this in the Bitnami redis Helm chart.

[chart/redis-ha][BUG] Default HAproxy timeout of 30s causes a lot of reconnecting

Describe the bug

In values.yaml, the haproxy.timeout.client and .server default to 30s. Considering Redis connections are relatively long-lived and may be used relatively infrequently, this default seems too low and leads to Redis connections being disconnected due to idle timeout every 30 seconds. Most users of Redis will automatically reconnect, but the disconnect/reconnect cycle is logged, causing lots of noise in the logs.

Redis documentation on TCP keepalive states

Recent versions of Redis (3.2 or greater) have TCP keepalive (SO_KEEPALIVE socket option) enabled by default and set to about 300 seconds. This option is useful in order to detect dead peers (clients that cannot be reached even if they look connected). Moreover, if there is network equipment between clients and servers that need to see some traffic in order to take the connection open, the option will prevent unexpected connection closed events.

As this is longer than the HAProxy default timeout, it won't help in this case.

Suggested fix

Raise the default of haproxy.timeout.client and haproxy.timeout.server to 330s. Being longer than the Redis default TCP keepalive interval, the keepalives will keep the connections alive.

To Reproduce
Steps to reproduce the behavior:

  1. Set up redis-ha with haproxy.enabled: true without touching timeouts
  2. Connect to redis-ha-haproxy with a client that logs reconnects and leave it idle
  3. Observe the client gets disconnected and reconnects at 30 seconds of idle time

Expected behavior

Client connections stay open even if idle.
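Expressed as a values override, the suggested fix is simply:

haproxy:
  timeout:
    # keep idle timeouts above Redis's default ~300s TCP keepalive,
    # so keepalive traffic holds connections open
    client: 330s
    server: 330s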

[chart/redis-ha][BUG] Permission denied when opening files needed for syncing

Description

When running the chart, we started encountering a ton of NOREPLICAS Not enough good replicas to write errors in our application logs. Upon further investigation, it looks like Redis does not have permissions to open files that are needed for syncing since we are seeing Opening the temp file needed for MASTER <-> REPLICA synchronization: Permission denied and Failed opening the RDB file root (in server root dir /etc/crontabs) for saving: Permission denied in our logs.

Our Config

---
haproxy:
  enabled: true
  stickyBalancing: true
  service:
    type: LoadBalancer

rbac:
  create: true

sysctlImage:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -xc
    - |-
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled

persistentVolume:
  enabled: true
  storageClass: singlewriter-standard
  size: 50Gi

exporter:
  enabled: true
  serviceMonitor:
    enabled: true

Logs

I 2020-04-09T13:13:53.931471711Z 1:S 09 Apr 2020 13:13:53.931 # Opening the temp file needed for MASTER <-> REPLICA synchronization: Permission denied
 
I 2020-04-09T13:13:53.931719338Z 1:S 09 Apr 2020 13:13:53.931 * 1 changes in 900 seconds. Saving...
 
I 2020-04-09T13:13:53.931890231Z 1:M 09 Apr 2020 13:13:53.931 # Connection with replica 10.0.13.151:6379 lost.
 
I 2020-04-09T13:13:53.932284469Z 1:S 09 Apr 2020 13:13:53.932 * Background saving started by pid 1046
 
I 2020-04-09T13:13:53.932351310Z 1046:C 09 Apr 2020 13:13:53.932 # Failed opening the RDB file root (in server root dir /etc/crontabs) for saving: Permission denied
 
I 2020-04-09T13:13:53.932536819Z 1:M 09 Apr 2020 13:13:53.932 # Connection with replica 10.0.5.223:6379 lost.
 
I 2020-04-09T13:13:53.939283142Z 1:S 09 Apr 2020 13:13:53.931 # Opening the temp file needed for MASTER <-> REPLICA synchronization: Permission denied
 
I 2020-04-09T13:13:53.939322056Z 1:S 09 Apr 2020 13:13:53.931 * 1 changes in 900 seconds. Saving...
 
I 2020-04-09T13:13:53.939328666Z 1:S 09 Apr 2020 13:13:53.931 * Background saving started by pid 1448
 
I 2020-04-09T13:13:53.939334871Z 1448:C 09 Apr 2020 13:13:53.933 # Failed opening the RDB file root (in server root dir /etc/crontabs) for saving: Permission denied
 
I 2020-04-09T13:13:53.947186305Z 1:M 09 Apr 2020 13:13:53.946 # Background transfer error
 
I 2020-04-09T13:13:54.032355153Z 1:S 09 Apr 2020 13:13:54.032 # Background saving error
 
I 2020-04-09T13:13:54.033385622Z 1:S 09 Apr 2020 13:13:54.033 # Background saving error
 

Summary

We are currently looking for a solution to this problem. Are we doing something wrong in our config? Also, let me know if you have any other questions, happy to help as much as I can!

Best,
Ian

[chart/redis-ha][BUG]

Describe the bug
Hello. I have a very strange bug and cannot locate it. We're using Nextcloud, which is connected to redis-ha.
Several times a week Nextcloud goes down (504 gateway timeout, or the page loads but without CSS). After a lot of research we figured out that the problem is somewhere on Redis's side.

After I saw the 504 error or a page without CSS, I performed a test: I wrote a simple script that connects to the current Redis and creates, puts, and gets a key-value pair. The script runs on the same machine as NC. Nextcloud cannot connect to Redis, but the script can.

In the NC log there is only one message - redis server went away. In the redis log - nothing at all.

There is no logic to when the problem occurs: it can happen at night, when there are no online users, or during the day, when there are 40-50 online users.

For now, the only solution I found is to connect to redis and execute the flushall command. After this operation Nextcloud starts to work.

At the same time we're using the same configuration in a staging environment, but there are no active users there and the problem does not occur, so it is somehow connected with load. I could not reproduce this problem anywhere else, so, unfortunately, I cannot provide steps to reproduce it.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy chart
  2. Connect application to it

Expected behavior
Working without outages

[charts/redis-ha][BUG] Lost connectivity due to announce service

Description:
Lost connectivity first to master and then to slave.

Steps to reproduce the behavior:

  1. Start minikube on debian VM.
  2. Set hardAntiAffinity to false.
  3. Setup redis-ha.
  4. Wait until master<-> slave relationships are settled.
  5. Check logs of sentinel.
  6. Delete master with kubectl delete.
  7. Wait until failover is complete and master<-> slave relationships are settled.
  8. Check redis-cli role on each replica.

Actual behaviour
At step 5 there is a +sdown master mymaster 10.107.27.237 6379 in master sentinel logs.
At step 8:

Pod 0 result of redis-cli role

  1. "slave"
  2. "10.109.188.240"
  3. (integer) 6379
  4. "connected"
  5. (integer) 103726

Pod 1 result of redis-cli role

  1. "master"
  2. (integer) 104010
      1. "10.107.27.237"
      2. "6379"
      3. "103868"

Pod 2 result of redis-cli role

  1. "slave"
  2. "10.107.27.237"
  3. (integer) 6379
  4. "connected"
  5. (integer) 104010

Expected behaviour
There should be no sdown reported; there should be one master that reports two connected slaves, and two slaves that report a connection to the master.

Additional context
Kubernetes version: v1.15.4
Helm version: v2.15.1
Minikube version: v1.6.2
Redis HA version: v4.5.2
Debian version: 9.11 (VM running on Windows 10)

I have tested with Redis HA chart version 3.0.3, where the same problem does not occur, but it appears on 3.0.4, hence the claim in the title that it depends on the announce service.

I have also been able to test in another environment where this problem does not occur, but in minikube it occurs almost every time I run the test.

Furthermore, when running this test I have also seen the result be three slaves that do not do a proper failover, and also split-brain, i.e. two masters and one slave, but I haven't analysed these issues further.

Is this a known issue or a bug?

[chart/redis-ha][REQUEST] - Extend redis.config and sentinel.config

Is your feature request related to a problem? Please describe.
Right now we cannot configure multi-line configuration entries through redis.config. I know redis.customConfig can be used to solve this problem, but it would be nice to have redis.config take care of this as well. Furthermore, sentinel.config adds "sentinel {{key}} {{value}}" for all values; it would be nice to also support config values that don't need to start with the "sentinel" keyword in sentinel.conf.

Describe the solution you'd like
We can have a key called "redis.config.multipleLineConfig" where multipleLineConfig is just a blob of data like:

redis:
  config:
    multipleLineConfig: |
        save 100 60
        save 200 30

In _configs.tpl we can check for this key to just copy this whole blob of data to redis.conf

For sentinel:
We can have "sentinel.config.otherConfig" where otherConfig can have values which doesn't need to be pre appended by "sentinel {{key}} ..."

Describe alternatives you've considered
Alternative is to use redis.customConfig and sentinel.customConfig
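A minimal sketch of what the proposed handling in templates/_configs.tpl could look like; multipleLineConfig is the key proposed in this issue, not an existing chart value, and the indentation would need to match wherever the block lands in the rendered redis.conf:

{{- /* copy the free-form block verbatim into redis.conf if provided */}}
{{- with .Values.redis.config }}
{{- with .multipleLineConfig }}
{{ . | indent 4 }}
{{- end }}
{{- end }}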

[chart/redis-ha][REQUEST] Enable custom annotations on the Redis statefulset

Is your feature request related to a problem? Please describe.
We deploy Redis with TLS enabled, supplying the certificates in the redis-tls secret.
We rotate these certificates every 48 hours, and upon rotation, Redis should reload.
For this, in turn, we deploy Stakater reloader, which requires annotations on the StatefulSet in question to do this.

The helm chart currently does not support setting your own custom annotations on the Redis StatefulSet itself.

Describe the solution you'd like
A configuration option in values.yaml to set custom annotations on the Redis StatefulSet.

Describe alternatives you've considered
Manually patching the StatefulSet after deployment. That seems like an unnecessary step.

Additional context
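A minimal sketch of what this could look like; the statefulSetAnnotations key is hypothetical (not a current chart option) and the Stakater annotation is shown only as an example consumer:

redis:
  statefulSetAnnotations:
    reloader.stakater.com/auto: "true"

with the statefulset template rendering them into the StatefulSet metadata, e.g.:

  annotations:
    {{- with .Values.redis.statefulSetAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}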

[charts/redis-ha][BUG] redis-ha service connects to read-only replicas

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:
1.

    helm upgrade --install redis-ha dandydeveloper/redis-ha \
      --set auth=true,redisPassword=password \
      --namespace monitoring \
      --version "4.6.2"

Add redis-ha.monitoring.svc.cluster.local as your Redis HOST in your app, as the instructions indicate:

Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
redis-ha.monitoring.svc.cluster.local

Try to use Redis.

Mostly get this error as it round-robins between master and slaves:
READONLY You can't write against a read only replica.

Expected behavior
I only connect to the master.

Additional context
What am I doing wrong?

[redis-ha][BUG] misleading hardAntiAffinity values

Describe the bug
The behavior of hardAntiAffinity when it is set to false is really misleading.
The template redis-ha-statefulset.yaml for podAffinity looks like:

 podAntiAffinity:
    {{- if .Values.hardAntiAffinity }}
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: {{ template "redis-ha.name" . }}
                  release: {{ .Release.Name }}
                  {{ template "redis-ha.fullname" . }}: replica
              topologyKey: kubernetes.io/hostname
    {{- else }}
          preferredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: {{ template "redis-ha.name" . }}
                  release: {{ .Release.Name }}
                  {{ template "redis-ha.fullname" . }}: replica
              topologyKey: kubernetes.io/hostname
    {{- end }}
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app:  {{ template "redis-ha.name" . }}
                    release: {{ .Release.Name }}
                    {{ template "redis-ha.fullname" . }}: replica
                topologyKey: failure-domain.beta.kubernetes.io/zone
    {{- end }}

To Reproduce
Steps to reproduce the behavior:

  1. set hardAntiAffinity to false for redis or haproxy
  2. upgrade/install release using helm upgrade/helm install
  3. check podAntiAffinity
  4. See error

Expected behavior
I expect, by setting hardAntiAffinity to false, that requiredDuringSchedulingIgnoredDuringExecution to be changed with preferredDuringSchedulingIgnoredDuringExecution and nothing more.
Additional context
The resulting config will have podAntiAffinity set with two preferredDuringSchedulingIgnoredDuringExecution keys. The first contains topologyKey: kubernetes.io/hostname and the last one contains topologyKey: failure-domain.beta.kubernetes.io/zone.
When the resulting YAML is applied to the k8s cluster, only the last one remains.

This behaviour is misleading because I expect that pods will be spawned on different nodes, if possible, but with topologyKey: failure-domain.beta.kubernetes.io/zone almost all pods may end up on the same node.

redis-ha psp

Short question regarding podSecurityPolicy (PSP): will it be supported by this chart? Currently getting

4m47s       Warning   FailedCreate            statefulset/redis-cache-redis-ha-server                    create Pod redis-cache-redis-ha-server-0 in StatefulSet redis-cache-redis-ha-server failed error: pods "redis-cache-redis-ha-server-0" is forbidden: unable to validate against any pod security policy: []

on a cluster where PSP is enabled. Thanks for the info.

/chart/redis-ha For ha proxy service, there is no port 26379 for sentinel connection

Describe the bug
I'm using StackExchange.Redis and have been using the Bitnami redis Helm chart, but I like redis-ha as it handles pod failure. I don't understand the HAProxy service: shouldn't there be a port 26379 for the sentinel connection along with port 6379? The Bitnami Helm chart follows this model.

To Reproduce

  1. Install helm chart with default redis values.yaml

Expected behavior
The HAProxy service exposes port 26379 for the sentinel connection in addition to port 6379.

Additional context
Maybe this is a feature request?

Does HAProxy support TLS?

Hi,

In my case, I've enabled the built-in HAProxy and NodePort so that my external client can access Redis through HAProxy.
I also need a secure connection between my client (outside Kubernetes) and the Redis HAProxy.

Does it support this feature?

Thanks,
Hujun

[chart/redis-ha] Implement network policies in the chart

Is your feature request related to a problem? Please describe.
We use network policies in our clusters, and Redis doesn't work if a Network Policy is defined inside the namespace where Redis is running.

Describe the solution you'd like
I would like the chart to have proper Network Policies so that Redis works in the presence of a policy like this:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default.deny-all-ingress-traffic
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Describe alternatives you've considered
There is no alternative

Additional context
Ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
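A minimal sketch of an allow rule that would let same-namespace clients and the redis-ha pods reach each other under the deny-all policy above; the app label mirrors the chart's usual pod label but should be checked against the rendered manifests:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-redis-ha-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: redis-ha          # assumption: matches the chart's pod labels
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # any pod in the same namespace (clients, sentinels, haproxy)
      ports:
        - protocol: TCP
          port: 6379
        - protocol: TCP
          port: 26379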

[chart/redis-ha][BUG] AntiAffinity preventing node scheduling due to node capacity

Describe the bug
Setting hardAntiAffinity to true for haproxy makes the pods unschedulable on upgrade when the number of replicas is the same as the number of Kubernetes nodes. The new revision is only scheduled once the pods of the old revision are manually deleted.

To Reproduce
Steps to reproduce the behavior:

  1. Create a Kubernetes cluster with 3 nodes
  2. Set haproxy hardAntiAffinity value to true
  3. Install the chart. At this point no error
  4. Upgrade the chart.

Expected behavior
The surge happens and the old pods get replaced by the pods of the new revision. Instead, the new pods become unschedulable.

[chart/redis-ha][BUG]

Describe the bug
Config-init file: templates/_configs.tpl

If the user sets a custom Values.redis.port, line 93 is missing '-p {{ .Values.redis.port }}', which always causes the ping command to time out:

find_master() {
        echo "Attempting to find master"
        if [ "$(redis-cli -h "$MASTER"{{ if .Values.auth }} -a "$AUTH"{{ end }} ping)" != "PONG" ]; then

[chart/redis_ha][REQUEST] Add PodDisruptionBudget to HAProxy instances

Is your feature request related to a problem? Please describe.
Currently you can manage a PodDisruptionBudget for the redis statefulsets, but not for the HA Proxy deployment. As a result, even though the redis backend will be kept HA during a maintenance task, it is possible to cause the HA Proxy service to become temporarily unavailable.

Describe the solution you'd like
Add an additional PDB for the HA Proxy deployment.

Describe alternatives you've considered
Currently we manually manage the second PDB as part of our CI/CD system, but it would be cleaner to have it all in Helm.
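A minimal sketch of the requested PDB; the selector labels mirror the haproxy deployment's usual labels but should be verified against the rendered manifests (apiVersion policy/v1beta1 reflects clusters current at the time of this report):

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: redis-ha-haproxy
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: redis-ha-haproxy   # assumption: matches the haproxy deployment's pod labels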

[chart/redis-ha] add more control around update strategy

Hello, can you add to your redis-ha-statefulset.yaml the ability to control the update strategy?

(It can be tricky to upgrade our Redis sentinel cluster in production when we want to control WHEN each pod is restarted.)

OnDelete / RollingUpdate

thanks

[chart/redis-ha][REQUEST] Redis 6.0 TLS support

Is your feature request related to a problem? Please describe.
Now that Redis 6.0 supports TLS, would it be possible to have support in the chart for configuring TLS?

Describe the solution you'd like
Generally I would use the redis.config or redis.customConfig directive to configure TLS-related settings, but I also need to be able to pass in additional secrets for the TLS certs.

Describe alternatives you've considered
None

[charts/redis-ha][BUG] pod has unbound immediate PersistentVolumeClaims

Describe the bug
After running:

helm install dandydev/redis-ha
helm install redis-ha dandydev/redis-ha

I get the following error/events:

running "VolumeBinding" filter plugin for pod "redis-ha-server-1": pod has unbound immediate PersistentVolumeClaims
running "VolumeBinding" filter plugin for pod "redis-ha-server-1": pod has unbound immediate PersistentVolumeClaims
0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.

And redis-ha-server-1 fails to start.

To Reproduce
Steps to reproduce the behavior:

  1. helm install dandydev/redis-ha
  2. helm install redis-ha dandydev/redis-ha
  3. See error

Expected behavior
For it to work :-D

Additional context
I'm running on minikube (v1.9.0) with kvm2 on a CentOS 7 box.
kubectl version is v1.18.0, and Helm version is v3.1.2.

[charts/redis-ha][BUG] Exporter not deployed

Describe the bug
When "exporter.enabled" is set to "true" it is not deployed.

To Reproduce
Steps to reproduce the behavior:

  1. Enable the exporter in values.yaml
exporter:
  enabled: true
  image: oliver006/redis_exporter
  tag: v1.3.5-arm64
  2. Install the chart
helm install --namespace database redis-ha dandydev/redis-ha -f database/redis-ha-helm-values.yaml
  3. Check the pods
kubectl get -n database pods
kubectl get -A pods | grep redis
kubectl get -A pods | grep exporter

Expected behavior
The exporter gets deployed.

Additional context
I'm running this on K3S on a 4 node Raspberry Pi 4 cluster on Raspbian Buster running the aarch64 kernel.

[chart/redis-ha][BUG] redis_exporter does not trust the redis server in TLS mode

Describe the bug
When TLS is enabled on redis, the redis_exporter cannot connect to the redis server.
This is because the redis_exporter does not receive the redis server CA, so it cannot validate the server's certificate.

To Reproduce
We rolled out the helm chart with

redis:
  port: 0
  tlsPort: 6379
  tlsReplication: true
sentinel:
  port: 0
  tlsPort: 26379
  tlsReplication: true
exporter:
  enabled: true
  sslEnabled: true
tls:
  secretName: mysecret-tls
  certFile: tls.crt
  keyFile: tls.key
  caCertFile: ca.crt

and found the following redis_exporter debug logs (I manually set it to debug mode with REDIS_EXPORTER_DEBUG: "true")

level=debug msg="Trying DialURL(): rediss://localhost:6379"
level=debug msg="DialURL() failed, err: x509: certificate signed by unknown authority"
level=debug msg="Trying: Dial(): rediss localhost:6379"
level=error msg="Couldn't connect to redis instance"
level=debug msg="connectToRedis( rediss://localhost:6379 ) err: dial rediss: unknown network rediss"
level=debug msg="scrapeRedisHost() done"

Expected behavior
The redis_exporter can connect to a redis server in TLS mode.

Additional context
Adding

- name: REDIS_EXPORTER_TLS_CA_CERT_FILE
  value: /tls-certs/{{ .Values.tls.caCertFile }}

to the redis-ha statefulset's redis_exporter environment fixed the issue for me.
As per the redis_exporter documentation, this allows it to verify the server:

level=debug msg="Trying DialURL(): rediss://localhost:6379"
level=debug msg="connected to: rediss://localhost:6379"
level=debug msg="connecting took 0.123456 seconds"
[...]

[chart/<redisha>][REQUEST]

Is your feature request related to a problem? Please describe.
How do I configure redis-ha to work cross-region? Let's say I have separate Kubernetes clusters in regions A and B.

[chart/redis-ha][BUG] Malformed sentinel id in myid option

Describe the bug

When using the latest version 4.5.4 the first sentinel container logs the following error:

*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 1
>>> 'sentinel myid f58981a95ae04cab08ae2c2096722429ab8e06bc6a2fda61e1846a90aad7722e'
Malformed Sentinel id in myid option.

and the provisioning of the cluster won't go forward as the respective pod is stuck in the status CrashLoopBackOff.

To Reproduce

I could reproduce the error by installing it via Helm 3 in a Kubernetes cluster with version 1.17.4 with the following command:

$ helm install dandydev/redis-ha --version 4.5.4 --generate-name

Additional context

This seems to stem from the fact that PR #15 uses SHA-256 instead of SHA-1 for the ID generation. This makes the ID 64 instead of 40 characters long when hex encoded. The Redis source seems to have 40 characters hardcoded (at least in 5.0). A fix could therefore trim the encoded SHA-256 to 40 characters in length (e.g. trunc 40 .theLongHash).

A workaround for me is to use version 4.5.3 instead.
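For illustration only (the chart's real template expression differs), the proposed fix is just a trunc 40 applied to the hex digest so the ID matches the 40-character length Redis expects:

{{- /* $seedString stands in for whatever the chart actually hashes to derive the ID */}}
sentinel myid {{ sha256sum $seedString | trunc 40 }}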

[chart/redis-ha][REQUEST]

Is your feature request related to a problem? Please describe.
Yes, we are unable to use template strings in the haproxy.customConfig value. The suggested solution is already applied to sentinel.customConfig and redis.customConfig.

Describe the solution you'd like
I would suggest updating templates/_configs.tpl in the redis-ha chart. The line,

{{ .Values.haproxy.customConfig | indent 4}}

could be,

{{ tpl .Values.haproxy.customConfig . | indent 4}}

It may also be useful to run haproxy.extraConfig through tpl as well. i.e. {{ .Values.haproxy.extraConfig | indent 4 }} changed to {{ tpl .Values.haproxy.extraConfig . | indent 4 }}

Thank you

[chart/redis-ha][INFO] - Password protection for Sentinel

Hi there!

I'm trying to add password protection to my sentinels and I found out that these two things must be set to make it work in sentinel.conf:

requirepass <password>
sentinel auth-pass <master-group-name> <password>

From what I have seen, requirepass cannot be set via sentinel.config.

How can I set these correctly with redis-ha chart to make sentinel auth possible?

thanks in advance!

[chart/redis-ha][INFO] - HAProxy causes reconnects

Background:

Currently using this chart to deploy my sentinel redis cluster to be used as a session-store. Connecting to the cluster via node-redis as a client.

I seem to get multiple dropped-connection issues if I enable haproxy as a layer between the client and the redis server, when the connection is left idle for a period.

Debug logs show this block repeating continuously.

2020-06-09T07:38:49.150Z Redis server ready.
2020-06-09T07:38:49.150Z on_ready called app-redis-redis-ha-haproxy:6379 id 0
{"short_message":"Redis (session store) ready","full_message":"","level":6,"level_name":"INFO","timestamp":1591688329.15}
2020-06-09T07:39:19.154Z Redis connection is gone from end event.
2020-06-09T07:39:19.154Z Retry connection in 200 ms
2020-06-09T07:39:19.354Z Retrying connection...
{"short_message":"Re-establisihing connection to redis (session store)","full_message":"","level":6,"level_name":"INFO","timestamp":1591688359.355}
{"short_message":"Connected to redis (session store)","full_message":"","level":6,"level_name":"INFO","timestamp":1591688359.366}

If I have a constant traffic between the client and server, this does not occur.

I have attempted the following:

  • Using a sentinel aware client (io-redis) without haproxy, and this does not occur
  • Using a different client with ha-proxy enabled, and this occurs
  • Have tweaked various keepalive settings on the various clients(increased up to 10 seconds) and the error still presents
  • Using a standard client, connecting directly to a single redis node (bypassing haproxy), and this does not occur

Is there perhaps some network latency that is causing haproxy to close a connection with the client, which would require a config change from the defaults?

[chart/redis-ha] Support for livenessProbe options.

Is your feature request related to a problem? Please describe.
Currently we have a problem where the livenessProbe for HAProxy has a 1s timeout; k8s is killing it and causing lots of "Redis server went away" connection errors when our network is at peak load.

Describe the solution you'd like
Add support for configurable livenessProbe values.

If that's OK, I can create the PR.
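A sketch of what configurable probe values could look like; the key names simply mirror the standard Kubernetes probe fields and are illustrative, not confirmed chart options:

haproxy:
  livenessProbe:
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5        # raised from the hard-coded 1s that triggers the kills
    successThreshold: 1
    failureThreshold: 3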

[chart/redis-ha][REQUEST] allow setting terminationGracePeriodSeconds

Is your feature request related to a problem? Please describe.
In order to help with graceful shutdown of Redis and its components, please allow setting the Pod's terminationGracePeriodSeconds parameter.

Additional context
We can already set the container lifecycle settings but the termination grace period is not changeable from the default 30 seconds. It would be useful to be able to set this on large Redis instances to allow the full RDB contents to be written to disk. See https://learnk8s.io/graceful-shutdown.
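A sketch of the requested knob; the values key name is hypothetical, and the value would simply be rendered into the StatefulSet's pod spec:

redis:
  terminationGracePeriodSeconds: 600   # hypothetical key; allows time for a full RDB save

# rendered result in the StatefulSet pod spec:
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 600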

[chart/redis-ha][BUG] Readiness probe should wait until replica is completed loading

Describe the bug
The readiness probe should not mark redis replicas as ready until dataset loading is complete.

To Reproduce
Steps to reproduce the behavior:

  1. Setup redis ha
  2. hook it up to reasonable load traffic (or a for loop)
  3. spin up new replica (scale from 3->4)

Expected behavior
Connections should always succeed

Additional context
Instead of successful connections, we get:
ERROR: LOADING Redis is loading the dataset in memory

[chart/redis-ha][BUG]

Describe the bug
Defaulting haproxy.enabled to false breaks the service test. The service test contains a hard-coded reference to the haproxy service. While HAProxy is not enabled, the test pod will fail, and this in turn will fail deployments (i.e. CI) that require all test pods to succeed.

To Reproduce
Steps to reproduce the behaviour:

  1. Deploy the chart with HAProxy disabled.
  2. Test pod will fail (https://github.com/DandyDeveloper/charts/blob/master/charts/redis-ha/templates/tests/test-redis-ha-pod.yaml).

Expected behaviour
No test pods will fail when HAProxy is disabled.

Additional context
For my setup I believe I do not necessarily need HAProxy. The 3rd-party software I use relies on configuring the sentinel endpoints (I use the created services for this). I'm not sure why I should enable/require HAProxy; my knowledge of HAProxy is too limited to understand its role in this chart's setup. Could you explain this?
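A minimal sketch of one way to address this, wrapping the test manifest so it only renders when HAProxy is actually deployed (alternatively, the test could target the plain redis-ha service when it isn't):

{{- /* templates/tests/test-redis-ha-pod.yaml -- render only when haproxy is enabled */}}
{{- if .Values.haproxy.enabled }}
# ... existing test pod manifest ...
{{- end }}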

[chart/redis-ha][BUG]

Describe the bug
The redis-server process (pid 1) does not expect to reap the processes spawned for the exec probes. Defunct timeout processes accumulate and when redis-server reaps such a process it logs this warning: wait3() returned a pid (4861) we can't find in our scripts execution queue!

To Reproduce
Check the process list and the logs for redis-server containers.

Expected behavior
There should be no accumulation of defunct processes, and no warnings logged for reaping the processes spawned for the probes.

haproxy - exit-on-failure: killing every processes with SIGTERM

bash-4.2$ kubectl get po -n redis
NAME                                      READY   STATUS    RESTARTS   AGE
redis-redis-ha-haproxy-78758777fb-44b4l   1/1     Running   4          38m
redis-redis-ha-haproxy-78758777fb-g7lt5   1/1     Running   3          17m
redis-redis-ha-haproxy-78758777fb-nv4s8   1/1     Running   2          17m
redis-redis-ha-server-0                   2/2     Running   0          17m
redis-redis-ha-server-1                   2/2     Running   0          17m
redis-redis-ha-server-2                   2/2     Running   0          17m
bash-4.2$ kubectl logs redis-redis-ha-haproxy-78758777fb-44b4l -n redis
[NOTICE] 272/185535 (1) : New worker #1 (6) forked
[WARNING] 272/185536 (6) : Server check_if_redis_is_master_0/R0 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string '172.20.167.69')", check duration: 1000ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 272/185536 (6) : Server check_if_redis_is_master_0/R1 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string '172.20.167.69')", check duration: 1000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 272/185536 (6) : Server check_if_redis_is_master_0/R2 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string '172.20.167.69')", check duration: 1001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 272/185536 (6) : backend 'check_if_redis_is_master_0' has no server available!
[WARNING] 272/185536 (6) : Server check_if_redis_is_master_1/R0 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string '172.20.251.237')", check duration: 1001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 272/185536 (6) : Server check_if_redis_is_master_1/R1 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string '172.20.251.237')", check duration: 1000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 272/185536 (6) : Server check_if_redis_is_master_1/R2 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string '172.20.251.237')", check duration: 1000ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 272/185536 (6) : backend 'check_if_redis_is_master_1' has no server available!
[WARNING] 272/185537 (6) : Server bk_redis_master/R0 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'role:master')", check duration: 1000ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 272/185537 (6) : Server bk_redis_master/R1 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'role:master')", check duration: 1001ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 272/185633 (1) : Current worker #1 (6) exited with code 139 (Segmentation fault)
[ALERT] 272/185633 (1) : exit-on-failure: killing every processes with SIGTERM
[WARNING] 272/185633 (1) : All workers exited. Exiting... (139)

This is an out-of-the-box deployment of the Helm chart with 3 haproxy and 3 redis pods.

Here is the output from the haproxy exporter:

#HELP haproxy_process_nbthread Configured number of threads.
#TYPE haproxy_process_nbthread gauge
haproxy_process_nbthread 16
#HELP haproxy_process_nbproc Configured number of processes.
#TYPE haproxy_process_nbproc gauge
haproxy_process_nbproc 1
#HELP haproxy_process_relative_process_id Relative process id, starting at 1.
#TYPE haproxy_process_relative_process_id gauge
haproxy_process_relative_process_id 1
#HELP haproxy_process_start_time_seconds Start time in seconds.
#TYPE haproxy_process_start_time_seconds gauge
haproxy_process_start_time_seconds 1601405740
#HELP haproxy_process_max_memory_bytes Per-process memory limit (in bytes); 0=unset.
#TYPE haproxy_process_max_memory_bytes gauge
haproxy_process_max_memory_bytes 0
#HELP haproxy_process_pool_allocated_bytes Total amount of memory allocated in pools (in bytes).
#TYPE haproxy_process_pool_allocated_bytes gauge
haproxy_process_pool_allocated_bytes 1025184
#HELP haproxy_process_pool_used_bytes Total amount of memory used in pools (in bytes).
#TYPE haproxy_process_pool_used_bytes gauge
haproxy_process_pool_used_bytes 992416
#HELP haproxy_process_pool_failures_total Total number of failed pool allocations.
#TYPE haproxy_process_pool_failures_total counter
haproxy_process_pool_failures_total 0
#HELP haproxy_process_max_fds Maximum number of open file descriptors; 0=unset.
#TYPE haproxy_process_max_fds gauge
haproxy_process_max_fds 65536
#HELP haproxy_process_max_sockets Maximum numer of open sockets.
#TYPE haproxy_process_max_sockets gauge
haproxy_process_max_sockets 65536
#HELP haproxy_process_max_connections Maximum number of concurrent connections.
#TYPE haproxy_process_max_connections gauge
haproxy_process_max_connections 32726
#HELP haproxy_process_hard_max_connections Initial Maximum number of concurrent connections.
#TYPE haproxy_process_hard_max_connections gauge
haproxy_process_hard_max_connections 32726
#HELP haproxy_process_current_connections Number of active sessions.
#TYPE haproxy_process_current_connections gauge
haproxy_process_current_connections 2
#HELP haproxy_process_connections_total Total number of created sessions.
#TYPE haproxy_process_connections_total counter
haproxy_process_connections_total 199
#HELP haproxy_process_requests_total Total number of requests (TCP or HTTP).
#TYPE haproxy_process_requests_total counter
haproxy_process_requests_total 199
#HELP haproxy_process_max_ssl_connections Configured maximum number of concurrent SSL connections.
#TYPE haproxy_process_max_ssl_connections gauge
haproxy_process_max_ssl_connections 0
#HELP haproxy_process_current_ssl_connections Number of opened SSL connections.
#TYPE haproxy_process_current_ssl_connections gauge
haproxy_process_current_ssl_connections 0
#HELP haproxy_process_ssl_connections_total Total number of opened SSL connections.
#TYPE haproxy_process_ssl_connections_total counter
haproxy_process_ssl_connections_total 0
#HELP haproxy_process_max_pipes Configured maximum number of pipes.
#TYPE haproxy_process_max_pipes gauge
haproxy_process_max_pipes 0
#HELP haproxy_process_pipes_used_total Number of pipes in used.
#TYPE haproxy_process_pipes_used_total counter
haproxy_process_pipes_used_total 0
#HELP haproxy_process_pipes_free_total Number of pipes unused.
#TYPE haproxy_process_pipes_free_total counter
haproxy_process_pipes_free_total 0
#HELP haproxy_process_current_connection_rate Current number of connections per second over last elapsed second.
#TYPE haproxy_process_current_connection_rate gauge
haproxy_process_current_connection_rate 8
#HELP haproxy_process_limit_connection_rate Configured maximum number of connections per second.
#TYPE haproxy_process_limit_connection_rate gauge
haproxy_process_limit_connection_rate 0
#HELP haproxy_process_max_connection_rate Maximum observed number of connections per second.
#TYPE haproxy_process_max_connection_rate gauge
haproxy_process_max_connection_rate 10
#HELP haproxy_process_current_session_rate Current number of sessions per second over last elapsed second.
#TYPE haproxy_process_current_session_rate gauge
haproxy_process_current_session_rate 8
#HELP haproxy_process_limit_session_rate Configured maximum number of sessions per second.
#TYPE haproxy_process_limit_session_rate gauge
haproxy_process_limit_session_rate 0
#HELP haproxy_process_max_session_rate Maximum observed number of sessions per second.
#TYPE haproxy_process_max_session_rate gauge
haproxy_process_max_session_rate 10
#HELP haproxy_process_current_ssl_rate Current number of SSL sessions per second over last elapsed second.
#TYPE haproxy_process_current_ssl_rate gauge
haproxy_process_current_ssl_rate 0
#HELP haproxy_process_limit_ssl_rate Configured maximum number of SSL sessions per second.
#TYPE haproxy_process_limit_ssl_rate gauge
haproxy_process_limit_ssl_rate 0
#HELP haproxy_process_max_ssl_rate Maximum observed number of SSL sessions per second.
#TYPE haproxy_process_max_ssl_rate gauge
haproxy_process_max_ssl_rate 0
#HELP haproxy_process_current_frontend_ssl_key_rate Current frontend SSL Key computation per second over last elapsed second.
#TYPE haproxy_process_current_frontend_ssl_key_rate gauge
haproxy_process_current_frontend_ssl_key_rate 0
#HELP haproxy_process_max_frontend_ssl_key_rate Maximum observed frontend SSL Key computation per second.
#TYPE haproxy_process_max_frontend_ssl_key_rate gauge
haproxy_process_max_frontend_ssl_key_rate 0
#HELP haproxy_process_frontent_ssl_reuse SSL session reuse ratio (percent).
#TYPE haproxy_process_frontent_ssl_reuse gauge
haproxy_process_frontent_ssl_reuse 0
#HELP haproxy_process_current_backend_ssl_key_rate Current backend SSL Key computation per second over last elapsed second.
#TYPE haproxy_process_current_backend_ssl_key_rate gauge
haproxy_process_current_backend_ssl_key_rate 0
#HELP haproxy_process_max_backend_ssl_key_rate Maximum observed backend SSL Key computation per second.
#TYPE haproxy_process_max_backend_ssl_key_rate gauge
haproxy_process_max_backend_ssl_key_rate 0
#HELP haproxy_process_ssl_cache_lookups_total Total number of SSL session cache lookups.
#TYPE haproxy_process_ssl_cache_lookups_total counter
haproxy_process_ssl_cache_lookups_total 0
#HELP haproxy_process_ssl_cache_misses_total Total number of SSL session cache misses.
#TYPE haproxy_process_ssl_cache_misses_total counter
haproxy_process_ssl_cache_misses_total 0
#HELP haproxy_process_http_comp_bytes_in_total Number of bytes per second over last elapsed second, before http compression.
#TYPE haproxy_process_http_comp_bytes_in_total counter
haproxy_process_http_comp_bytes_in_total 0
#HELP haproxy_process_http_comp_bytes_out_total Number of bytes per second over last elapsed second, after http compression.
#TYPE haproxy_process_http_comp_bytes_out_total counter
haproxy_process_http_comp_bytes_out_total 0
#HELP haproxy_process_limit_http_comp Configured maximum input compression rate in bytes.
#TYPE haproxy_process_limit_http_comp gauge
haproxy_process_limit_http_comp 0
#HELP haproxy_process_current_zlib_memory Current memory used for zlib in bytes.
#TYPE haproxy_process_current_zlib_memory gauge
haproxy_process_current_zlib_memory 0
#HELP haproxy_process_max_zlib_memory Configured maximum amount of memory for zlib in bytes.
#TYPE haproxy_process_max_zlib_memory gauge
haproxy_process_max_zlib_memory 0
#HELP haproxy_process_current_tasks Current number of tasks.
#TYPE haproxy_process_current_tasks gauge
haproxy_process_current_tasks 60
#HELP haproxy_process_current_run_queue Current number of tasks in the run-queue.
#TYPE haproxy_process_current_run_queue gauge
haproxy_process_current_run_queue 1
#HELP haproxy_process_idle_time_percent Idle to total ratio over last sample (percent).
#TYPE haproxy_process_idle_time_percent gauge
haproxy_process_idle_time_percent 100
#HELP haproxy_process_stopping Non zero means stopping in progress.
#TYPE haproxy_process_stopping gauge
haproxy_process_stopping 0
#HELP haproxy_process_jobs Current number of active jobs (listeners, sessions, open devices).
#TYPE haproxy_process_jobs gauge
haproxy_process_jobs 6
#HELP haproxy_process_unstoppable_jobs Current number of active jobs that can't be stopped during a soft stop.
#TYPE haproxy_process_unstoppable_jobs gauge
haproxy_process_unstoppable_jobs 0
#HELP haproxy_process_listeners Current number of active listeners.
#TYPE haproxy_process_listeners gauge
haproxy_process_listeners 4
#HELP haproxy_process_active_peers Current number of active peers.
#TYPE haproxy_process_active_peers gauge
haproxy_process_active_peers 0
#HELP haproxy_process_connected_peers Current number of connected peers.
#TYPE haproxy_process_connected_peers gauge
haproxy_process_connected_peers 0
#HELP haproxy_process_dropped_logs_total Total number of dropped logs.
#TYPE haproxy_process_dropped_logs_total counter
haproxy_process_dropped_logs_total 0
#HELP haproxy_process_busy_polling_enabled Non zero if the busy polling is enabled.
#TYPE haproxy_process_busy_polling_enabled gauge
haproxy_process_busy_polling_enabled 0
#HELP haproxy_frontend_status Current status of the service (frontend: 0=STOP, 1=UP, 2=FULL - backend/server: 0=DOWN, 1=UP).
#TYPE haproxy_frontend_status gauge
haproxy_frontend_status{proxy="health_check_http_url"} 1
haproxy_frontend_status{proxy="ft_redis_master"} 1
haproxy_frontend_status{proxy="metrics"} 1
#HELP haproxy_frontend_current_sessions Current number of active sessions.
#TYPE haproxy_frontend_current_sessions gauge
haproxy_frontend_current_sessions{proxy="health_check_http_url"} 0
haproxy_frontend_current_sessions{proxy="ft_redis_master"} 0
haproxy_frontend_current_sessions{proxy="metrics"} 2
#HELP haproxy_frontend_max_sessions Maximum observed number of active sessions.
#TYPE haproxy_frontend_max_sessions gauge
haproxy_frontend_max_sessions{proxy="health_check_http_url"} 3
haproxy_frontend_max_sessions{proxy="ft_redis_master"} 16
haproxy_frontend_max_sessions{proxy="metrics"} 2
#HELP haproxy_frontend_limit_sessions Configured session limit.
#TYPE haproxy_frontend_limit_sessions gauge
haproxy_frontend_limit_sessions{proxy="health_check_http_url"} 32726
haproxy_frontend_limit_sessions{proxy="ft_redis_master"} 32726
haproxy_frontend_limit_sessions{proxy="metrics"} 32726
#HELP haproxy_frontend_sessions_total Total number of sessions.
#TYPE haproxy_frontend_sessions_total counter
haproxy_frontend_sessions_total{proxy="health_check_http_url"} 33
haproxy_frontend_sessions_total{proxy="ft_redis_master"} 164
haproxy_frontend_sessions_total{proxy="metrics"} 2
#HELP haproxy_frontend_limit_session_rate Configured limit on new sessions per second.
#TYPE haproxy_frontend_limit_session_rate gauge
haproxy_frontend_limit_session_rate{proxy="health_check_http_url"} 0
haproxy_frontend_limit_session_rate{proxy="ft_redis_master"} 0
haproxy_frontend_limit_session_rate{proxy="metrics"} 0
#HELP haproxy_frontend_max_session_rate Maximum observed number of sessions per second.
#TYPE haproxy_frontend_max_session_rate gauge
haproxy_frontend_max_session_rate{proxy="health_check_http_url"} 1
haproxy_frontend_max_session_rate{proxy="ft_redis_master"} 9
haproxy_frontend_max_session_rate{proxy="metrics"} 1
#HELP haproxy_frontend_connections_rate_max Maximum observed number of connections per second.
#TYPE haproxy_frontend_connections_rate_max gauge
haproxy_frontend_connections_rate_max{proxy="health_check_http_url"} 1
haproxy_frontend_connections_rate_max{proxy="ft_redis_master"} 9
haproxy_frontend_connections_rate_max{proxy="metrics"} 1
#HELP haproxy_frontend_connections_total Total number of connections.
#TYPE haproxy_frontend_connections_total counter
haproxy_frontend_connections_total{proxy="health_check_http_url"} 33
haproxy_frontend_connections_total{proxy="ft_redis_master"} 164
haproxy_frontend_connections_total{proxy="metrics"} 2
#HELP haproxy_frontend_bytes_in_total Current total of incoming bytes.
#TYPE haproxy_frontend_bytes_in_total counter
haproxy_frontend_bytes_in_total{proxy="health_check_http_url"} 3729
haproxy_frontend_bytes_in_total{proxy="ft_redis_master"} 0
haproxy_frontend_bytes_in_total{proxy="metrics"} 0
#HELP haproxy_frontend_bytes_out_total Current total of outgoing bytes.
#TYPE haproxy_frontend_bytes_out_total counter
haproxy_frontend_bytes_out_total{proxy="health_check_http_url"} 5610
haproxy_frontend_bytes_out_total{proxy="ft_redis_master"} 0
haproxy_frontend_bytes_out_total{proxy="metrics"} 0
#HELP haproxy_frontend_requests_denied_total Total number of denied requests.
#TYPE haproxy_frontend_requests_denied_total counter
haproxy_frontend_requests_denied_total{proxy="health_check_http_url"} 0
haproxy_frontend_requests_denied_total{proxy="ft_redis_master"} 0
haproxy_frontend_requests_denied_total{proxy="metrics"} 0
#HELP haproxy_frontend_responses_denied_total Total number of denied responses.
#TYPE haproxy_frontend_responses_denied_total counter
haproxy_frontend_responses_denied_total{proxy="health_check_http_url"} 0
haproxy_frontend_responses_denied_total{proxy="ft_redis_master"} 0
haproxy_frontend_responses_denied_total{proxy="metrics"} 0
#HELP haproxy_frontend_request_errors_total Total number of request errors.
#TYPE haproxy_frontend_request_errors_total counter
haproxy_frontend_request_errors_total{proxy="health_check_http_url"} 0
haproxy_frontend_request_errors_total{proxy="ft_redis_master"} 0
haproxy_frontend_request_errors_total{proxy="metrics"} 0
#HELP haproxy_frontend_denied_connections_total Total number of requests denied by "tcp-request connection" rules.
#TYPE haproxy_frontend_denied_connections_total counter
haproxy_frontend_denied_connections_total{proxy="health_check_http_url"} 0
haproxy_frontend_denied_connections_total{proxy="ft_redis_master"} 0
haproxy_frontend_denied_connections_total{proxy="metrics"} 0
#HELP haproxy_frontend_denied_sessions_total Total number of requests denied by "tcp-request session" rules.
#TYPE haproxy_frontend_denied_sessions_total counter
haproxy_frontend_denied_sessions_total{proxy="health_check_http_url"} 0
haproxy_frontend_denied_sessions_total{proxy="ft_redis_master"} 0
haproxy_frontend_denied_sessions_total{proxy="metrics"} 0
#HELP haproxy_frontend_failed_header_rewriting_total Total number of failed header rewriting warnings.
#TYPE haproxy_frontend_failed_header_rewriting_total counter
haproxy_frontend_failed_header_rewriting_total{proxy="health_check_http_url"} 0
haproxy_frontend_failed_header_rewriting_total{proxy="ft_redis_master"} 0
haproxy_frontend_failed_header_rewriting_total{proxy="metrics"} 0
#HELP haproxy_frontend_http_requests_rate_max Maximum observed number of HTTP requests per second.
#TYPE haproxy_frontend_http_requests_rate_max gauge
haproxy_frontend_http_requests_rate_max{proxy="health_check_http_url"} 1
haproxy_frontend_http_requests_rate_max{proxy="metrics"} 1
#HELP haproxy_frontend_http_requests_total Total number of HTTP requests received.
#TYPE haproxy_frontend_http_requests_total counter
haproxy_frontend_http_requests_total{proxy="health_check_http_url"} 33
haproxy_frontend_http_requests_total{proxy="metrics"} 2
#HELP haproxy_frontend_http_responses_total Total number of HTTP responses.
#TYPE haproxy_frontend_http_responses_total counter
haproxy_frontend_http_responses_total{proxy="health_check_http_url",code="1xx"} 0
haproxy_frontend_http_responses_total{proxy="metrics",code="1xx"} 0
haproxy_frontend_http_responses_total{proxy="health_check_http_url",code="2xx"} 33
haproxy_frontend_http_responses_total{proxy="metrics",code="2xx"} 0
haproxy_frontend_http_responses_total{proxy="health_check_http_url",code="3xx"} 0
haproxy_frontend_http_responses_total{proxy="metrics",code="3xx"} 0
haproxy_frontend_http_responses_total{proxy="health_check_http_url",code="4xx"} 0
haproxy_frontend_http_responses_total{proxy="metrics",code="4xx"} 0
haproxy_frontend_http_responses_total{proxy="health_check_http_url",code="5xx"} 0
haproxy_frontend_http_responses_total{proxy="metrics",code="5xx"} 0
haproxy_frontend_http_responses_total{proxy="health_check_http_url",code="other"} 0
haproxy_frontend_http_responses_total{proxy="metrics",code="other"} 0
#HELP haproxy_frontend_intercepted_requests_total Total number of intercepted HTTP requests.
#TYPE haproxy_frontend_intercepted_requests_total counter
haproxy_frontend_intercepted_requests_total{proxy="health_check_http_url"} 33
haproxy_frontend_intercepted_requests_total{proxy="metrics"} 0
#HELP haproxy_frontend_http_cache_lookups_total Total number of HTTP cache lookups.
#TYPE haproxy_frontend_http_cache_lookups_total counter
haproxy_frontend_http_cache_lookups_total{proxy="health_check_http_url"} 0
haproxy_frontend_http_cache_lookups_total{proxy="metrics"} 0
#HELP haproxy_frontend_http_cache_hits_total Total number of HTTP cache hits.
#TYPE haproxy_frontend_http_cache_hits_total counter
haproxy_frontend_http_cache_hits_total{proxy="health_check_http_url"} 0
haproxy_frontend_http_cache_hits_total{proxy="metrics"} 0
#HELP haproxy_frontend_http_comp_bytes_in_total Total number of HTTP response bytes fed to the compressor.
#TYPE haproxy_frontend_http_comp_bytes_in_total counter
haproxy_frontend_http_comp_bytes_in_total{proxy="health_check_http_url"} 0
haproxy_frontend_http_comp_bytes_in_total{proxy="metrics"} 0
#HELP haproxy_frontend_http_comp_bytes_out_total Total number of HTTP response bytes emitted by the compressor.
#TYPE haproxy_frontend_http_comp_bytes_out_total counter
haproxy_frontend_http_comp_bytes_out_total{proxy="health_check_http_url"} 0
haproxy_frontend_http_comp_bytes_out_total{proxy="metrics"} 0
#HELP haproxy_frontend_http_comp_bytes_bypassed_total Total number of bytes that bypassed the HTTP compressor (CPU/BW limit).
#TYPE haproxy_frontend_http_comp_bytes_bypassed_total counter
haproxy_frontend_http_comp_bytes_bypassed_total{proxy="health_check_http_url"} 0
haproxy_frontend_http_comp_bytes_bypassed_total{proxy="metrics"} 0
#HELP haproxy_frontend_http_comp_responses_total Total number of HTTP responses that were compressed.
#TYPE haproxy_frontend_http_comp_responses_total counter
haproxy_frontend_http_comp_responses_total{proxy="health_check_http_url"} 0
haproxy_frontend_http_comp_responses_total{proxy="metrics"} 0
#HELP haproxy_backend_status Current status of the service (frontend: 0=STOP, 1=UP, 2=FULL - backend/server: 0=DOWN, 1=UP).
#TYPE haproxy_backend_status gauge
haproxy_backend_status{proxy="health_check_http_url"} 1
haproxy_backend_status{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_status{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_status{proxy="check_if_redis_is_master_2"} 1
haproxy_backend_status{proxy="bk_redis_master"} 1
#HELP haproxy_backend_current_sessions Current number of active sessions.
#TYPE haproxy_backend_current_sessions gauge
haproxy_backend_current_sessions{proxy="health_check_http_url"} 0
haproxy_backend_current_sessions{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_current_sessions{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_current_sessions{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_current_sessions{proxy="bk_redis_master"} 0
#HELP haproxy_backend_max_sessions Maximum observed number of active sessions.
#TYPE haproxy_backend_max_sessions gauge
haproxy_backend_max_sessions{proxy="health_check_http_url"} 0
haproxy_backend_max_sessions{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_max_sessions{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_max_sessions{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_max_sessions{proxy="bk_redis_master"} 5
#HELP haproxy_backend_limit_sessions Configured session limit.
#TYPE haproxy_backend_limit_sessions gauge
haproxy_backend_limit_sessions{proxy="health_check_http_url"} 3273
haproxy_backend_limit_sessions{proxy="check_if_redis_is_master_0"} 1
haproxy_backend_limit_sessions{proxy="check_if_redis_is_master_1"} 1
haproxy_backend_limit_sessions{proxy="check_if_redis_is_master_2"} 1
haproxy_backend_limit_sessions{proxy="bk_redis_master"} 3273
#HELP haproxy_backend_sessions_total Total number of sessions.
#TYPE haproxy_backend_sessions_total counter
haproxy_backend_sessions_total{proxy="health_check_http_url"} 0
haproxy_backend_sessions_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_sessions_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_sessions_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_sessions_total{proxy="bk_redis_master"} 164
#HELP haproxy_backend_max_session_rate Maximum observed number of sessions per second.
#TYPE haproxy_backend_max_session_rate gauge
haproxy_backend_max_session_rate{proxy="health_check_http_url"} 0
haproxy_backend_max_session_rate{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_max_session_rate{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_max_session_rate{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_max_session_rate{proxy="bk_redis_master"} 9
#HELP haproxy_backend_last_session_seconds Number of seconds since last session assigned to server/backend.
#TYPE haproxy_backend_last_session_seconds gauge
haproxy_backend_last_session_seconds{proxy="health_check_http_url"} -1
haproxy_backend_last_session_seconds{proxy="check_if_redis_is_master_0"} -1
haproxy_backend_last_session_seconds{proxy="check_if_redis_is_master_1"} -1
haproxy_backend_last_session_seconds{proxy="check_if_redis_is_master_2"} -1
haproxy_backend_last_session_seconds{proxy="bk_redis_master"} 0
#HELP haproxy_backend_current_queue Current number of queued requests.
#TYPE haproxy_backend_current_queue gauge
haproxy_backend_current_queue{proxy="health_check_http_url"} 0
haproxy_backend_current_queue{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_current_queue{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_current_queue{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_current_queue{proxy="bk_redis_master"} 0
#HELP haproxy_backend_max_queue Maximum observed number of queued requests.
#TYPE haproxy_backend_max_queue gauge
haproxy_backend_max_queue{proxy="health_check_http_url"} 0
haproxy_backend_max_queue{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_max_queue{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_max_queue{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_max_queue{proxy="bk_redis_master"} 0
#HELP haproxy_backend_connection_attempts_total Total number of connection establishment attempts.
#TYPE haproxy_backend_connection_attempts_total counter
haproxy_backend_connection_attempts_total{proxy="health_check_http_url"} 0
haproxy_backend_connection_attempts_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_connection_attempts_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_connection_attempts_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_connection_attempts_total{proxy="bk_redis_master"} 164
#HELP haproxy_backend_connection_reuses_total Total number of connection reuses.
#TYPE haproxy_backend_connection_reuses_total counter
haproxy_backend_connection_reuses_total{proxy="health_check_http_url"} 0
haproxy_backend_connection_reuses_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_connection_reuses_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_connection_reuses_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_connection_reuses_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_bytes_in_total Current total of incoming bytes.
#TYPE haproxy_backend_bytes_in_total counter
haproxy_backend_bytes_in_total{proxy="health_check_http_url"} 3729
haproxy_backend_bytes_in_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_bytes_in_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_bytes_in_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_bytes_in_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_bytes_out_total Current total of outgoing bytes.
#TYPE haproxy_backend_bytes_out_total counter
haproxy_backend_bytes_out_total{proxy="health_check_http_url"} 5610
haproxy_backend_bytes_out_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_bytes_out_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_bytes_out_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_bytes_out_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_http_queue_time_average_seconds Avg. queue time for last 1024 successful connections.
#TYPE haproxy_backend_http_queue_time_average_seconds gauge
haproxy_backend_http_queue_time_average_seconds{proxy="health_check_http_url"} 0
haproxy_backend_http_queue_time_average_seconds{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_http_queue_time_average_seconds{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_http_queue_time_average_seconds{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_http_queue_time_average_seconds{proxy="bk_redis_master"} 0
#HELP haproxy_backend_http_connect_time_average_seconds Avg. connect time for last 1024 successful connections.
#TYPE haproxy_backend_http_connect_time_average_seconds gauge
haproxy_backend_http_connect_time_average_seconds{proxy="health_check_http_url"} 0
haproxy_backend_http_connect_time_average_seconds{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_http_connect_time_average_seconds{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_http_connect_time_average_seconds{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_http_connect_time_average_seconds{proxy="bk_redis_master"} 0
#HELP haproxy_backend_http_response_time_average_seconds Avg. response time for last 1024 successful connections.
#TYPE haproxy_backend_http_response_time_average_seconds gauge
haproxy_backend_http_response_time_average_seconds{proxy="health_check_http_url"} 0
haproxy_backend_http_response_time_average_seconds{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_http_response_time_average_seconds{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_http_response_time_average_seconds{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_http_response_time_average_seconds{proxy="bk_redis_master"} 0
#HELP haproxy_backend_http_total_time_average_seconds Avg. total time for last 1024 successful connections.
#TYPE haproxy_backend_http_total_time_average_seconds gauge
haproxy_backend_http_total_time_average_seconds{proxy="health_check_http_url"} 0
haproxy_backend_http_total_time_average_seconds{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_http_total_time_average_seconds{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_http_total_time_average_seconds{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_http_total_time_average_seconds{proxy="bk_redis_master"} 0
#HELP haproxy_backend_requests_denied_total Total number of denied requests.
#TYPE haproxy_backend_requests_denied_total counter
haproxy_backend_requests_denied_total{proxy="health_check_http_url"} 0
haproxy_backend_requests_denied_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_requests_denied_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_requests_denied_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_requests_denied_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_responses_denied_total Total number of denied responses.
#TYPE haproxy_backend_responses_denied_total counter
haproxy_backend_responses_denied_total{proxy="health_check_http_url"} 0
haproxy_backend_responses_denied_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_responses_denied_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_responses_denied_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_responses_denied_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_connection_errors_total Total number of connection errors.
#TYPE haproxy_backend_connection_errors_total counter
haproxy_backend_connection_errors_total{proxy="health_check_http_url"} 0
haproxy_backend_connection_errors_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_connection_errors_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_connection_errors_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_connection_errors_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_response_errors_total Total number of response errors.
#TYPE haproxy_backend_response_errors_total counter
haproxy_backend_response_errors_total{proxy="health_check_http_url"} 0
haproxy_backend_response_errors_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_response_errors_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_response_errors_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_response_errors_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_retry_warnings_total Total number of retry warnings.
#TYPE haproxy_backend_retry_warnings_total counter
haproxy_backend_retry_warnings_total{proxy="health_check_http_url"} 0
haproxy_backend_retry_warnings_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_retry_warnings_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_retry_warnings_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_retry_warnings_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_redispatch_warnings_total Total number of redispatch warnings.
#TYPE haproxy_backend_redispatch_warnings_total counter
haproxy_backend_redispatch_warnings_total{proxy="health_check_http_url"} 0
haproxy_backend_redispatch_warnings_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_redispatch_warnings_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_redispatch_warnings_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_redispatch_warnings_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_failed_header_rewriting_total Total number of failed header rewriting warnings.
#TYPE haproxy_backend_failed_header_rewriting_total counter
haproxy_backend_failed_header_rewriting_total{proxy="health_check_http_url"} 0
haproxy_backend_failed_header_rewriting_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_failed_header_rewriting_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_failed_header_rewriting_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_failed_header_rewriting_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_client_aborts_total Total number of data transfers aborted by the client.
#TYPE haproxy_backend_client_aborts_total counter
haproxy_backend_client_aborts_total{proxy="health_check_http_url"} 0
haproxy_backend_client_aborts_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_client_aborts_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_client_aborts_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_client_aborts_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_server_aborts_total Total number of data transfers aborted by the server.
#TYPE haproxy_backend_server_aborts_total counter
haproxy_backend_server_aborts_total{proxy="health_check_http_url"} 0
haproxy_backend_server_aborts_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_server_aborts_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_server_aborts_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_server_aborts_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_weight Service weight.
#TYPE haproxy_backend_weight gauge
haproxy_backend_weight{proxy="health_check_http_url"} 0
haproxy_backend_weight{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_weight{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_weight{proxy="check_if_redis_is_master_2"} 3
haproxy_backend_weight{proxy="bk_redis_master"} 1
#HELP haproxy_backend_active_servers Current number of active servers.
#TYPE haproxy_backend_active_servers gauge
haproxy_backend_active_servers{proxy="health_check_http_url"} 0
haproxy_backend_active_servers{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_active_servers{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_active_servers{proxy="check_if_redis_is_master_2"} 3
haproxy_backend_active_servers{proxy="bk_redis_master"} 1
#HELP haproxy_backend_backup_servers Current number of backup servers.
#TYPE haproxy_backend_backup_servers gauge
haproxy_backend_backup_servers{proxy="health_check_http_url"} 0
haproxy_backend_backup_servers{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_backup_servers{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_backup_servers{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_backup_servers{proxy="bk_redis_master"} 0
#HELP haproxy_backend_check_up_down_total Total number of UP->DOWN transitions.
#TYPE haproxy_backend_check_up_down_total counter
haproxy_backend_check_up_down_total{proxy="health_check_http_url"} 0
haproxy_backend_check_up_down_total{proxy="check_if_redis_is_master_0"} 1
haproxy_backend_check_up_down_total{proxy="check_if_redis_is_master_1"} 1
haproxy_backend_check_up_down_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_check_up_down_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_check_last_change_seconds Number of seconds since the last UP<->DOWN transition.
#TYPE haproxy_backend_check_last_change_seconds gauge
haproxy_backend_check_last_change_seconds{proxy="health_check_http_url"} 102
haproxy_backend_check_last_change_seconds{proxy="check_if_redis_is_master_0"} 101
haproxy_backend_check_last_change_seconds{proxy="check_if_redis_is_master_1"} 101
haproxy_backend_check_last_change_seconds{proxy="check_if_redis_is_master_2"} 102
haproxy_backend_check_last_change_seconds{proxy="bk_redis_master"} 102
#HELP haproxy_backend_downtime_seconds_total Total downtime (in seconds) for the service.
#TYPE haproxy_backend_downtime_seconds_total counter
haproxy_backend_downtime_seconds_total{proxy="health_check_http_url"} 102
haproxy_backend_downtime_seconds_total{proxy="check_if_redis_is_master_0"} 101
haproxy_backend_downtime_seconds_total{proxy="check_if_redis_is_master_1"} 101
haproxy_backend_downtime_seconds_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_downtime_seconds_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_loadbalanced_total Total number of times a service was selected, either for new sessions, or when redispatching.
#TYPE haproxy_backend_loadbalanced_total counter
haproxy_backend_loadbalanced_total{proxy="health_check_http_url"} 0
haproxy_backend_loadbalanced_total{proxy="check_if_redis_is_master_0"} 0
haproxy_backend_loadbalanced_total{proxy="check_if_redis_is_master_1"} 0
haproxy_backend_loadbalanced_total{proxy="check_if_redis_is_master_2"} 0
haproxy_backend_loadbalanced_total{proxy="bk_redis_master"} 0
#HELP haproxy_backend_http_requests_total Total number of HTTP requests received.
#TYPE haproxy_backend_http_requests_total counter
haproxy_backend_http_requests_total{proxy="health_check_http_url"} 0
#HELP haproxy_backend_http_responses_total Total number of HTTP responses.
#TYPE haproxy_backend_http_responses_total counter
haproxy_backend_http_responses_total{proxy="health_check_http_url",code="1xx"} 0
haproxy_backend_http_responses_total{proxy="health_check_http_url",code="2xx"} 0
haproxy_backend_http_responses_total{proxy="health_check_http_url",code="3xx"} 0
haproxy_backend_http_responses_total{proxy="health_check_http_url",code="4xx"} 0
haproxy_backend_http_responses_total{proxy="health_check_http_url",code="5xx"} 0
haproxy_backend_http_responses_total{proxy="health_check_http_url",code="other"} 0
#HELP haproxy_backend_http_cache_lookups_total Total number of HTTP cache lookups.
#TYPE haproxy_backend_http_cache_lookups_total counter
haproxy_backend_http_cache_lookups_total{proxy="health_check_http_url"} 0
#HELP haproxy_backend_http_cache_hits_total Total number of HTTP cache hits.
#TYPE haproxy_backend_http_cache_hits_total counter
haproxy_backend_http_cache_hits_total{proxy="health_check_http_url"} 0
#HELP haproxy_backend_http_comp_bytes_in_total Total number of HTTP response bytes fed to the compressor.
#TYPE haproxy_backend_http_comp_bytes_in_total counter
haproxy_backend_http_comp_bytes_in_total{proxy="health_check_http_url"} 0
#HELP haproxy_backend_http_comp_bytes_out_total Total number of HTTP response bytes emitted by the compressor.
#TYPE haproxy_backend_http_comp_bytes_out_total counter
haproxy_backend_http_comp_bytes_out_total{proxy="health_check_http_url"} 0
#HELP haproxy_backend_http_comp_bytes_bypassed_total Total number of bytes that bypassed the HTTP compressor (CPU/BW limit).
#TYPE haproxy_backend_http_comp_bytes_bypassed_total counter
haproxy_backend_http_comp_bytes_bypassed_total{proxy="health_check_http_url"} 0
#HELP haproxy_backend_http_comp_responses_total Total number of HTTP responses that were compressed.
#TYPE haproxy_backend_http_comp_responses_total counter
haproxy_backend_http_comp_responses_total{proxy="health_check_http_url"} 0
#HELP haproxy_server_status Current status of the service (frontend: 0=STOP, 1=UP, 2=FULL - backend/server: 0=DOWN, 1=UP).
#TYPE haproxy_server_status gauge
haproxy_server_status{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_status{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_status{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_status{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_status{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_status{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_status{proxy="check_if_redis_is_master_2",server="R0"} 1
haproxy_server_status{proxy="check_if_redis_is_master_2",server="R1"} 1
haproxy_server_status{proxy="check_if_redis_is_master_2",server="R2"} 1
haproxy_server_status{proxy="bk_redis_master",server="R0"} 0
haproxy_server_status{proxy="bk_redis_master",server="R1"} 0
haproxy_server_status{proxy="bk_redis_master",server="R2"} 1
#HELP haproxy_server_current_sessions Current number of active sessions.
#TYPE haproxy_server_current_sessions gauge
haproxy_server_current_sessions{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_current_sessions{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_current_sessions{proxy="bk_redis_master",server="R0"} 0
haproxy_server_current_sessions{proxy="bk_redis_master",server="R1"} 0
haproxy_server_current_sessions{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_max_sessions Maximum observed number of active sessions.
#TYPE haproxy_server_max_sessions gauge
haproxy_server_max_sessions{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_max_sessions{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_max_sessions{proxy="bk_redis_master",server="R0"} 0
haproxy_server_max_sessions{proxy="bk_redis_master",server="R1"} 0
haproxy_server_max_sessions{proxy="bk_redis_master",server="R2"} 3
#HELP haproxy_server_limit_sessions Configured session limit.
#TYPE haproxy_server_limit_sessions gauge
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_limit_sessions{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_limit_sessions{proxy="bk_redis_master",server="R0"} 0
haproxy_server_limit_sessions{proxy="bk_redis_master",server="R1"} 0
haproxy_server_limit_sessions{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_sessions_total Total number of sessions.
#TYPE haproxy_server_sessions_total counter
haproxy_server_sessions_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_sessions_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_sessions_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_sessions_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_sessions_total{proxy="bk_redis_master",server="R2"} 164
#HELP haproxy_server_max_session_rate Maximum observed number of sessions per second.
#TYPE haproxy_server_max_session_rate gauge
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_max_session_rate{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_max_session_rate{proxy="bk_redis_master",server="R0"} 0
haproxy_server_max_session_rate{proxy="bk_redis_master",server="R1"} 0
haproxy_server_max_session_rate{proxy="bk_redis_master",server="R2"} 9
#HELP haproxy_server_last_session_seconds Number of seconds since last session assigned to server/backend.
#TYPE haproxy_server_last_session_seconds gauge
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_0",server="R0"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_0",server="R1"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_0",server="R2"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_1",server="R0"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_1",server="R1"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_1",server="R2"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_2",server="R0"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_2",server="R1"} -1
haproxy_server_last_session_seconds{proxy="check_if_redis_is_master_2",server="R2"} -1
haproxy_server_last_session_seconds{proxy="bk_redis_master",server="R0"} -1
haproxy_server_last_session_seconds{proxy="bk_redis_master",server="R1"} -1
haproxy_server_last_session_seconds{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_current_queue Current number of queued requests.
#TYPE haproxy_server_current_queue gauge
haproxy_server_current_queue{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_current_queue{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_current_queue{proxy="bk_redis_master",server="R0"} 0
haproxy_server_current_queue{proxy="bk_redis_master",server="R1"} 0
haproxy_server_current_queue{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_max_queue Maximum observed number of queued requests.
#TYPE haproxy_server_max_queue gauge
haproxy_server_max_queue{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_max_queue{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_max_queue{proxy="bk_redis_master",server="R0"} 0
haproxy_server_max_queue{proxy="bk_redis_master",server="R1"} 0
haproxy_server_max_queue{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_queue_limit Configured maxqueue for the server (0 meaning no limit).
#TYPE haproxy_server_queue_limit gauge
haproxy_server_queue_limit{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_queue_limit{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_queue_limit{proxy="bk_redis_master",server="R0"} 0
haproxy_server_queue_limit{proxy="bk_redis_master",server="R1"} 0
haproxy_server_queue_limit{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_bytes_in_total Current total of incoming bytes.
#TYPE haproxy_server_bytes_in_total counter
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_bytes_in_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_bytes_in_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_bytes_in_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_bytes_in_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_bytes_out_total Current total of outgoing bytes.
#TYPE haproxy_server_bytes_out_total counter
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_bytes_out_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_bytes_out_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_bytes_out_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_bytes_out_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_http_queue_time_average_seconds Avg. queue time for last 1024 successful connections.
#TYPE haproxy_server_http_queue_time_average_seconds gauge
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_http_queue_time_average_seconds{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_http_queue_time_average_seconds{proxy="bk_redis_master",server="R0"} 0
haproxy_server_http_queue_time_average_seconds{proxy="bk_redis_master",server="R1"} 0
haproxy_server_http_queue_time_average_seconds{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_http_connect_time_average_seconds Avg. connect time for last 1024 successful connections.
#TYPE haproxy_server_http_connect_time_average_seconds gauge
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_http_connect_time_average_seconds{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_http_connect_time_average_seconds{proxy="bk_redis_master",server="R0"} 0
haproxy_server_http_connect_time_average_seconds{proxy="bk_redis_master",server="R1"} 0
haproxy_server_http_connect_time_average_seconds{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_http_response_time_average_seconds Avg. response time for last 1024 successful connections.
#TYPE haproxy_server_http_response_time_average_seconds gauge
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_http_response_time_average_seconds{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_http_response_time_average_seconds{proxy="bk_redis_master",server="R0"} 0
haproxy_server_http_response_time_average_seconds{proxy="bk_redis_master",server="R1"} 0
haproxy_server_http_response_time_average_seconds{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_http_total_time_average_seconds Avg. total time for last 1024 successful connections.
#TYPE haproxy_server_http_total_time_average_seconds gauge
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_http_total_time_average_seconds{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_http_total_time_average_seconds{proxy="bk_redis_master",server="R0"} 0
haproxy_server_http_total_time_average_seconds{proxy="bk_redis_master",server="R1"} 0
haproxy_server_http_total_time_average_seconds{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_connection_attempts_total Total number of connection establishment attempts.
#TYPE haproxy_server_connection_attempts_total counter
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_connection_attempts_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_connection_attempts_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_connection_attempts_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_connection_attempts_total{proxy="bk_redis_master",server="R2"} 164
#HELP haproxy_server_connection_reuses_total Total number of connection reuses.
#TYPE haproxy_server_connection_reuses_total counter
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_connection_reuses_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_connection_reuses_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_connection_reuses_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_connection_reuses_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_responses_denied_total Total number of denied responses.
#TYPE haproxy_server_responses_denied_total counter
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_responses_denied_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_responses_denied_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_responses_denied_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_responses_denied_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_connection_errors_total Total number of connection errors.
#TYPE haproxy_server_connection_errors_total counter
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_connection_errors_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_connection_errors_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_connection_errors_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_connection_errors_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_response_errors_total Total number of response errors.
#TYPE haproxy_server_response_errors_total counter
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_response_errors_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_response_errors_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_response_errors_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_response_errors_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_retry_warnings_total Total number of retry warnings.
#TYPE haproxy_server_retry_warnings_total counter
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_retry_warnings_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_retry_warnings_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_retry_warnings_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_retry_warnings_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_redispatch_warnings_total Total number of redispatch warnings.
#TYPE haproxy_server_redispatch_warnings_total counter
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_redispatch_warnings_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_redispatch_warnings_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_redispatch_warnings_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_redispatch_warnings_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_failed_header_rewriting_total Total number of failed header rewriting warnings.
#TYPE haproxy_server_failed_header_rewriting_total counter
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_failed_header_rewriting_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_failed_header_rewriting_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_failed_header_rewriting_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_failed_header_rewriting_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_client_aborts_total Total number of data transfers aborted by the client.
#TYPE haproxy_server_client_aborts_total counter
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_client_aborts_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_client_aborts_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_client_aborts_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_client_aborts_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_server_aborts_total Total number of data transfers aborted by the server.
#TYPE haproxy_server_server_aborts_total counter
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_server_aborts_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_server_aborts_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_server_aborts_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_server_aborts_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_weight Service weight.
#TYPE haproxy_server_weight gauge
haproxy_server_weight{proxy="check_if_redis_is_master_0",server="R0"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_0",server="R1"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_0",server="R2"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_1",server="R0"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_1",server="R1"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_1",server="R2"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_2",server="R0"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_2",server="R1"} 1
haproxy_server_weight{proxy="check_if_redis_is_master_2",server="R2"} 1
haproxy_server_weight{proxy="bk_redis_master",server="R0"} 1
haproxy_server_weight{proxy="bk_redis_master",server="R1"} 1
haproxy_server_weight{proxy="bk_redis_master",server="R2"} 1
#HELP haproxy_server_check_failures_total Total number of failed check (Only counts checks failed when the server is up).
#TYPE haproxy_server_check_failures_total counter
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_0",server="R0"} 1
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_0",server="R1"} 1
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_0",server="R2"} 1
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_1",server="R0"} 1
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_1",server="R1"} 1
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_1",server="R2"} 1
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_check_failures_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_check_failures_total{proxy="bk_redis_master",server="R0"} 1
haproxy_server_check_failures_total{proxy="bk_redis_master",server="R1"} 1
haproxy_server_check_failures_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_check_up_down_total Total number of UP->DOWN transitions.
#TYPE haproxy_server_check_up_down_total counter
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_0",server="R0"} 1
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_0",server="R1"} 1
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_0",server="R2"} 1
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_1",server="R0"} 1
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_1",server="R1"} 1
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_1",server="R2"} 1
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_check_up_down_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_check_up_down_total{proxy="bk_redis_master",server="R0"} 1
haproxy_server_check_up_down_total{proxy="bk_redis_master",server="R1"} 1
haproxy_server_check_up_down_total{proxy="bk_redis_master",server="R2"} 0
#HELP haproxy_server_downtime_seconds_total Total downtime (in seconds) for the service.
#TYPE haproxy_server_downtime_seconds_total counter
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_0",server="R0"} 101
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_0",server="R1"} 101
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_0",server="R2"} 101
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_1",server="R0"} 101
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_1",server="R1"} 101
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_1",server="R2"} 101
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_downtime_seconds_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_downtime_seconds_total{proxy="bk_redis_master",server="R0"} 100
haproxy_server_downtime_seconds_total{proxy="bk_redis_master",server="R1"} 100
haproxy_server_downtime_seconds_total{proxy="bk_redis_master",server="R2"} 0
# HELP haproxy_server_check_last_change_seconds Number of seconds since the last UP<->DOWN transition.
# TYPE haproxy_server_check_last_change_seconds gauge
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_0",server="R0"} 101
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_0",server="R1"} 101
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_0",server="R2"} 101
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_1",server="R0"} 101
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_1",server="R1"} 101
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_1",server="R2"} 101
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_2",server="R0"} 102
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_2",server="R1"} 102
haproxy_server_check_last_change_seconds{proxy="check_if_redis_is_master_2",server="R2"} 102
haproxy_server_check_last_change_seconds{proxy="bk_redis_master",server="R0"} 100
haproxy_server_check_last_change_seconds{proxy="bk_redis_master",server="R1"} 100
haproxy_server_check_last_change_seconds{proxy="bk_redis_master",server="R2"} 102
# HELP haproxy_server_current_throttle Current throttle percentage for the server, when slowstart is active, or no value if not in slowstart.
# TYPE haproxy_server_current_throttle gauge
haproxy_server_current_throttle{proxy="check_if_redis_is_master_0",server="R0"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_0",server="R1"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_0",server="R2"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_1",server="R0"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_1",server="R1"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_1",server="R2"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_2",server="R0"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_2",server="R1"} 100
haproxy_server_current_throttle{proxy="check_if_redis_is_master_2",server="R2"} 100
haproxy_server_current_throttle{proxy="bk_redis_master",server="R0"} 100
haproxy_server_current_throttle{proxy="bk_redis_master",server="R1"} 100
haproxy_server_current_throttle{proxy="bk_redis_master",server="R2"} 100
# HELP haproxy_server_loadbalanced_total Total number of times a service was selected, either for new sessions, or when redispatching.
# TYPE haproxy_server_loadbalanced_total counter
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_0",server="R0"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_0",server="R1"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_0",server="R2"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_1",server="R0"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_1",server="R1"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_1",server="R2"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_2",server="R0"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_2",server="R1"} 0
haproxy_server_loadbalanced_total{proxy="check_if_redis_is_master_2",server="R2"} 0
haproxy_server_loadbalanced_total{proxy="bk_redis_master",server="R0"} 0
haproxy_server_loadbalanced_total{proxy="bk_redis_master",server="R1"} 0
haproxy_server_loadbalanced_total{proxy="bk_redis_master",server="R2"} 0

[chart/redis-ha][BUG]

Describe the bug
When installing or upgrading to chart 4.10.9, the upgrade fails with:

"Error: UPGRADE FAILED: template: redis-ha/templates/redis-ha-announce-service.yaml:51:26: executing "redis-ha/templates/redis-ha-announce-service.yaml" at <.Values.exporter.portName>: can't evaluate field Values in type int"

I have added portName: exporter-port to the values.yaml.

It is related to this commit:
2387d16#diff-acd5d40eb79ed4d6802c58c274eca0ba8c3223045ef2fa0a22949c9b59383b50

It should probably be
{{ $root.Values.exporter.portName }}
instead of {{ .Values.exporter.portName }}.

This looks like a scoping issue: the error ("can't evaluate field Values in type int") suggests the dot has been rebound inside a range loop over the replica indices, so chart values have to be reached through the captured $root (or the built-in $) rather than "." — see the sketch below.
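
A minimal sketch of the pattern, assuming the announce-service template ranges over replica indices; the layout, the exporter.port value, and the service shape below are illustrative, not copied from the chart:

{{- /* Sketch only: inside range, "." is rebound to the loop element,
       so chart values must be reached via the captured $root (or $). */}}
{{- $root := . }}
{{- range $i := until (int .Values.replicas) }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ template "redis-ha.fullname" $root }}-announce-{{ $i }}
spec:
  ports:
    - name: {{ $root.Values.exporter.portName }}
      port: {{ $root.Values.exporter.port }}
{{- end }}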

To Reproduce
Steps to reproduce the behavior:

  1. Install the new chart v4.10.9 with exporter and metrics enabled

Expected behavior
Expect the installation to work.

[chart/redis-ha][BUG] redis-ha-statefulset writes invalid labels

Describe the bug
redis-ha-statefulset breaks string labels that look like booleans. The cause is that it writes labels without quotes: https://github.com/DandyDeveloper/charts/blob/master/charts/redis-ha/templates/redis-ha-statefulset.yaml#L35

To Reproduce
Steps to reproduce the behavior:

  1. Set a string label:
    redis-ha:
      labels:
        dns: "true"
    
  2. Try to deploy redis-ha
  3. redis-ha transforms that to a boolean label:
    labels:
      app: redis-ha
      dns: true
    
  4. Deployment fails with an error similar to:
    kapp: Error: Applying update statefulset/foo-redis-ha-server (apps/v1) namespace: foo:
      Updating resource statefulset/foo-redis-ha-server (apps/v1) namespace: foo:
        StatefulSet in version "v1" cannot be handled as a StatefulSet: v1.StatefulSet.Spec:     v1.StatefulSetSpec.Template: v1.PodTemplateSpec.ObjectMeta: v1.ObjectMeta.Labels:
          ReadString: expects " or n, but found t, error found in #10 byte of ...|y","dns":true,"kapp.|..., bigger context ...|":"redis-ha","dns":true,"kapp.k14s.io/association":"v1.43bdfa7e6010c97|... (reason: BadRequest)
    

Expected behavior

Quote the labels when writing them so that strings can't turn into booleans.
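
A minimal sketch of the quoting fix, assuming the custom labels come from .Values.labels; the surrounding lines are illustrative, not the chart's exact template:

{{- /* Sketch only: quoting the rendered value keeps "true" a string. */}}
      labels:
        app: redis-ha
{{- range $key, $value := .Values.labels }}
        {{ $key }}: {{ $value | quote }}
{{- end }}

Rendering the whole map with toYaml should work as well, since the YAML marshaller quotes strings that would otherwise parse as booleans.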

[chart/redis-ha][REQUEST] Support ipv6

Is your feature request related to a problem? Please describe.
I have an IPv6 Kubernetes cluster and tried to install Redis with the redis-ha Helm chart.
The install failed; here are the issues:

  • the redis:6.0.7-alpine image doesn't bind IPv6
  • HAProxy doesn't bind IPv6

Describe the solution you'd like

  • use the redis:6.0.7 image instead of redis:6.0.7-alpine
  • in the templates/_configs.tpl file, change every bind line like bind :8888 to bind ipv6@:8888 (see the sketch below)

It seems enough to make IPv6 configurable in the values file.
In the future, IPv4/IPv6 dual-stack support should also be considered.
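
A minimal sketch of what the HAProxy section of _configs.tpl could emit, assuming a hypothetical ipv6.enabled values flag and a redis.port value; the frontend name and structure are illustrative (bk_redis_master is taken from the existing backend naming):

frontend ft_redis_master
{{- if .Values.ipv6.enabled }}
    bind ipv6@:{{ .Values.redis.port }}
{{- else }}
    bind :{{ .Values.redis.port }}
{{- end }}
    default_backend bk_redis_master

HAProxy also has a v4v6 bind option, which may be useful when dual-stack support is added later.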

[chart/redis-ha][BUG] PodDisruptionBudget applies to test pods

Describe the bug
The PodDisruptionBudget selector matches the labels not only on the StatefulSet pods but also on the two test pods.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy redis-ha with these values:
redis-ha:
  haproxy:
    enabled: true
  labels:
    dns: "true"
  metrics:
    enabled: true
  podDisruptionBudget:
    # maxUnavailable: 1
    minAvailable: 2
  replicas: 3
  2. Check the pod labels:
$ kubectl get pods --show-labels | grep redis-ha
samproxy-redis-ha-configmap-test             0/1     Completed   0          1h    app=redis-ha,foo/app=samproxy,release=samproxy
samproxy-redis-ha-server-0                   2/2     Running     0          1h    app=redis-ha,foo/app=samproxy,release=samproxy,samproxy-redis-ha=replica
samproxy-redis-ha-server-1                   2/2     Running     0          1h    app=redis-ha,foo/app=samproxy,release=samproxy,samproxy-redis-ha=replica
samproxy-redis-ha-server-2                   2/2     Running     0          12d   app=redis-ha,foo/app=samproxy,release=samproxy,samproxy-redis-ha=replica
samproxy-redis-ha-service-test               0/1     Completed   0          1h    app=redis-ha,foo/app=samproxy,release=samproxy
  3. Check the PodDisruptionBudget:
$ kubectl describe pdb samproxy-redis-ha-pdb                                   
Name:           samproxy-redis-ha-pdb
Namespace:      opentelemetry
Min available:  2
Selector:       app=redis-ha,foo/app=samproxy,release=samproxy
Status:
    Allowed disruptions:  1
    Current:              3
    Desired:              2
    Total:                5
Events:                   <none>

Expected behavior
The total pods for the budget should be 3, not 5.

Additional context
We use kapp to deploy helm templates. I have a possibly related feature request for custom labels: #40
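
A minimal sketch of a tighter selector, keyed on the replica-only label visible on the server pods above; the helper names and apiVersion are illustrative (older clusters need policy/v1beta1), and this is not the chart's actual PDB template:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ template "redis-ha.fullname" . }}-pdb
spec:
  minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
  selector:
    matchLabels:
      app: redis-ha
      release: {{ .Release.Name }}
      {{ template "redis-ha.fullname" . }}: replica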

[chart/redis-ha][BUG] - incorrect master IP being returned by sentinel on startup

Describe the bug
The sentinels return the IP of the old master when the current master pod is killed and the replacement pod starts up before a new master has been elected. This leads to a timeout when attempting to connect, and finally results in a forced failover.

To Reproduce
Steps to reproduce the behavior:

  1. Kill current master pod
  2. Exec onto new pending pod -> config-init container
  3. Run redis-cli -h app-redis-redis-ha -p 26379 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
  4. The IP returned will be that of the old master pod (attempting to run redis-cli -h "$MASTER" ping will time out, as expected)

Expected behavior
On pod startup, the command above should return the IP of the newly elected master, so the pod is added to the cluster correctly (without the forced failover needing to occur, as that throws errors on most clients).

Additional context
Perhaps add a pause here to ensure that the correct IP is returned?
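
For example, a minimal sketch of how the config-init script rendered from _configs.tpl could wait until the reported master actually answers before using it; the retry loop, 5s interval, and busybox timeout are assumptions rather than the chart's current logic, and the group name mymaster is taken from the reproduction command above:

identify_master() {
    MASTER=""
    while true; do
        MASTER="$(redis-cli -h {{ template "redis-ha.fullname" . }} -p 26379 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
        # Only accept the address once the candidate actually answers PING.
        if [ -n "$MASTER" ] && [ "$(timeout 3 redis-cli -h "$MASTER" ping)" = "PONG" ]; then
            break
        fi
        echo "Sentinel reported master '$MASTER' not reachable yet, retrying in 5s.."
        sleep 5
    done
}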

[chart/redis-ha] Non-templated use of masterGroupName in _configs.tpl

masterGroupName supports templating; however, there is one use of it that does not. That is here:
https://github.com/DandyDeveloper/charts/blob/master/charts/redis-ha/templates/_configs.tpl#L120

identify_master() {
    echo "Identifying redis master (get-master-addr-by-name).."
    echo "  using sentinel ({{ template "redis-ha.fullname" . }}), sentinel group name ({{ .Values.redis.masterGroupName }})"
    echo "  $(date).."
<SNIP>

As you can see in the second echo above, the bare value is referenced rather than the redis-ha.masterGroupName definition.
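
A minimal sketch of the fix, assuming the helper takes the root context the same way redis-ha.fullname does:

identify_master() {
    echo "Identifying redis master (get-master-addr-by-name).."
    echo "  using sentinel ({{ template "redis-ha.fullname" . }}), sentinel group name ({{ template "redis-ha.masterGroupName" . }})"
    echo "  $(date).."
<SNIP>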

[chart/redis-ha] Support external servicemonitor

Is your feature request related to a problem? Please describe.
I want to use the redis-ha chart with an external ServiceMonitor.
Currently I can't differentiate between the two default services that get created because they have the same labels.
As a result, my Grafana dashboard also pulls metrics from the redis-ha-announce services.

Describe the solution you'd like
I want to have a label (either per default or manually settable) to be able to find the correct redis service with a labelSelector.

Describe alternatives you've considered
Create a label when the exporter is enabled, regardless of the servicemonitor.
With this approach the current label servicemonitor: enabled should probably be renamed, or an additional label should be provided.
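
A minimal sketch of an external ServiceMonitor keyed on such a label; the exporter: enabled label and the exporter-port port name are hypothetical, not something the chart provides today:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-ha-exporter
spec:
  selector:
    matchLabels:
      app: redis-ha
      exporter: enabled   # hypothetical label to single out the non-announce service
  endpoints:
    - port: exporter-port
      interval: 30s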

[chart/redis-ha-4.5.3][BUG] sha1sum is not defined

Describe the bug
Error: parse error in "redis-ha/templates/redis-ha-statefulset.yaml": template: redis-ha/templates/redis-ha-statefulset.yaml:107: function "sha1sum" not defined

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Additional context
Add any other context about the problem here.

[chart/redis-ha][REQUEST] Custom labels for all pods

Is your feature request related to a problem? Please describe.
My team assigns network policies and associates pod ownership with our team using labels. redis-ha offers custom labels for some pods but not all. For example, redis-ha-statefulset pods support custom labels.

These do not:

  • redis-haproxy-deployment
  • test-redis-ha-configmap
  • test-redis-ha-pod

Describe the solution you'd like
The chart should apply the same custom labels to all pods including:

  • redis-haproxy-deployment
  • test-redis-ha-configmap
  • test-redis-ha-pod

Describe alternatives you've considered
Manually editing pods or deployments after creation is a bad idea.

Standard labels allow for network policy flexibility but not for associating team ownership on a multi-team cluster.
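
A minimal sketch of how the haproxy deployment's pod template could reuse the same user-supplied labels; the surrounding structure and label values are illustrative, not the chart's exact template:

  template:
    metadata:
      labels:
        app: redis-ha-haproxy
        release: {{ .Release.Name }}
{{- if .Values.labels }}
{{ toYaml .Values.labels | indent 8 }}
{{- end }}

The same pattern could be applied to the test-redis-ha-configmap and test-redis-ha-pod manifests.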
