haproxytech / helm-charts

Helm chart for HAProxy Kubernetes Ingress Controller

License: Apache License 2.0

Languages: Shell 43.76%, Mustache 56.24%
Topics: helm-charts, kubernetes, ingress-controller, kubernetes-cluster, haproxy

helm-charts's Introduction


HAProxy Helm Charts


This repository hosts official HAProxy Technologies Helm Charts for deploying HAProxy Load Balancer and Ingress controller on Kubernetes.

Changelogs

Changelogs for the Helm charts in this repository are maintained automatically at ArtifactHub, separately for HAProxy and the Ingress Controller.

Changelogs for the packaged projects are available separately for HAProxy and the HAProxy Technologies Ingress Controller, with release notes and other documentation available at their respective project pages.

Before you begin

Set up a Kubernetes cluster

The quickest way to set up a Kubernetes cluster is with Azure Kubernetes Service, AWS Elastic Kubernetes Service or Google Kubernetes Engine, using their respective quick-start guides.

For setting up Kubernetes on other cloud platforms or bare-metal servers, refer to the Kubernetes getting started guide.

Install Helm

Get the latest Helm release.

Add Helm chart repo

Once you have Helm installed, add the repo as follows:

helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update

HAProxy Helm charts can also be found on ArtifactHub.

Search and install charts

helm search repo haproxytech/
helm install my-release haproxytech/<chart>

NOTE: For instructions on how to install a chart, follow the instructions in its README.md.

Contributing

We welcome all contributions. Please refer to the guidelines on how to make a contribution.

helm-charts's People

Contributors

aep, aiharos, brianrudolf-ep, dawnflash, denismaggior8, dkorunic, efx-pxt1, flabatut, hdurand0710, infusible, ivanmatmati, jbertozzi, jgranieczny, jkyberneees, kirrmann, knatsakis, mecampbellsoup, memberit, mo3m3n, mohsenmottaghi, mugioka, oktalz, prometherion, pupseba, rufusnufus, sharifm-informatica, tomkeur, ufou, yaworski-joseph-bah, yellowmegaman


helm-charts's Issues

Disable defaultBackend?

Is there a way to disable creating the defaultBackend templates? I have another ingress controller providing the default backend and would like to avoid the creation of extra objects and resources.
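For illustration, a minimal values.yaml sketch of what such a toggle could look like; the defaultBackend.enabled and controller.defaultBackendService keys are assumed here and may not exist in the chart as-is:

defaultBackend:
  enabled: false                 # hypothetical toggle to skip rendering the default backend objects
controller:
  defaultBackendService: "other-namespace/other-default-backend"   # hypothetical: reuse the backend provided by the other controller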

[kubernetes-ingress] Allow existing secret for imagePullSecret

Problem

Currently registry credentials must be specified in plaintext in values.yaml

controller:
  imageCredentials:
    registry: ...
    username: ...
    password: ...

Suggestion

Allow specifying an existing registry secret so that credentials can be managed outside of values.yaml

controller:
  image:
    repository: ...
    tag: ...
    pullPolicy: ...
    pullSecrets: [ ... ]

[kubernetes-ingress] extraVolumes and extraVolumeMounts for main container

Problem

The kubernetes-ingress helm chart does not expose extraVolumes and extraVolumeMounts for the main container. This makes it impossible to run in a cluster with readOnlyRootFilesystem, as the following directories are not writable and need to be mounted as emptyDir volumes:

  • /etc/haproxy
  • /tmp
  • /var/state/haproxy

The current chart includes extraVolumes, but it applies only to extra containers.

Solution

Expose extraVolumes and extraVolumeMounts for the main container.
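A sketch of how the requested values could look once exposed, assuming hypothetical controller.extraVolumes / controller.extraVolumeMounts keys that apply to the main container:

controller:
  extraVolumes:                  # hypothetical key for the main container
    - name: etc-haproxy
      emptyDir: {}
    - name: tmp
      emptyDir: {}
    - name: state-haproxy
      emptyDir: {}
  extraVolumeMounts:             # hypothetical key for the main container
    - name: etc-haproxy
      mountPath: /etc/haproxy
    - name: tmp
      mountPath: /tmp
    - name: state-haproxy
      mountPath: /var/state/haproxy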

defaultTLSSecret is only valid if it is in the Release Namespace

Currently defaultTLSSecret is only valid if the user provides the name of a secret in the release namespace, as we can see here:

- --default-ssl-certificate={{ .Release.Namespace }}/{{ .Values.controller.defaultTLSSecret.secret }}

However, the controller can fetch the secret from any namespace, provided that the input is in the namespace/secretName format.
This is related to haproxytech/kubernetes-ingress#300, so I am opening this issue to ask whether it is possible for a user to provide a secret in the 'namespace/secretName' format.
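A sketch of how the template could handle both formats, assuming it may simply check for a '/' in the provided value (this is not the chart's current code):

{{- if contains "/" .Values.controller.defaultTLSSecret.secret }}
          - --default-ssl-certificate={{ .Values.controller.defaultTLSSecret.secret }}
{{- else }}
          - --default-ssl-certificate={{ .Release.Namespace }}/{{ .Values.controller.defaultTLSSecret.secret }}
{{- end }}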

Document changes

It would be nice to have a changelog for the helm charts, or to have their changes documented on the release page.

Currently we have kubernetes-ingress installed in version 1.12.1 and see that an upgrade to 1.15.1 is available, but we don't know what changed and if or how it affects our setup.

loadbalancerIP has no effect

Maybe I am understanding this parameter completely wrong...

I have to set the listening IP of the ingress to a certain IP address. The load balancer used is MetalLB, and the IP in question is configured to route traffic into the cluster.

In my understanding, loadbalancerIP sets the IP used by the ingress, but this seems wrong. The external IP used by the ingress is the IP of the node where the ingress controller was started, no matter what the parameter is set to.

So either my interpretation of loadbalancerIP is wrong, or it has no effect.
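For reference, a minimal values.yaml sketch of where the address is expected, assuming the chart exposes controller.service.loadBalancerIP and that MetalLB honours spec.loadBalancerIP on the generated Service (the address below is only an example):

controller:
  service:
    type: LoadBalancer
    loadBalancerIP: 192.0.2.10   # example address; must come from the MetalLB pool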

Standalone haproxy application doesn't pick up custom haproxy config in chart 1.7.0

Hi Team. When we try to deploy haproxy using helm chart 1.7.0, it doesn't pick up the config given in values.yaml. From the chart configuration, I can see that the custom haproxy config should be stored as a ConfigMap and mounted into the pod at /usr/local/etc/haproxy/haproxy.cfg via deployment.yaml. This is not happening in this version. You can use the steps below to replicate the issue.

helm install --values haproxy/values.yaml haproxy-level-1 haproxytech/haproxy --debug 

helm install --values haproxy/values.yaml haproxy-level-1 haproxytech/haproxy --debug --version 1.6.0
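For comparison, a minimal sketch of the custom config being passed to the standalone haproxy chart, assuming the top-level config key from the chart README (the backend target is a placeholder):

config: |
  global
    log stdout format raw local0
  defaults
    mode http
    timeout connect 5s
    timeout client 50s
    timeout server 50s
  frontend fe_main
    bind :80
    default_backend be_main
  backend be_main
    server app1 example-app.default.svc.cluster.local:8080 check   # placeholder service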

Cannot install haproxy with "admin" role in namespace

I am not a cluster admin, but I have admin rights in a namespace.
I am trying to install the haproxy 1.1.4 chart.
I got a Helm error like:
Unable to continue with install: could not get information about the resource: podsecuritypolicies.policy "my-haproxy" is forbidden: User "myname" cannot get resource "podsecuritypolicies" in API group "policy" at the cluster scope

https://kubernetes.io/docs/concepts/policy/pod-security-policy/ states that a PodSecurityPolicy is a cluster-wide resource, so I am unable to install this chart without being a cluster admin.

I should be able to turn off use of the PodSecurityPolicy resource.
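A minimal values.yaml sketch of such a switch, assuming a hypothetical podSecurityPolicy.enabled toggle (the key name is not confirmed by the chart):

podSecurityPolicy:
  enabled: false   # hypothetical: skip rendering the cluster-scoped PodSecurityPolicy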

Support for dnsPolicy

It would be nice to introduce support for dnsPolicy.

When using hostNetwork=true, the pod should switch to dnsPolicy: ClusterFirstWithHostNet (see the pod DNS policy documentation for details). With hostNetwork=true the pod gets the DNS configuration of the node and is not able to resolve internal services, for example kubernetes.default.svc.cluster.local.
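A sketch of the requested values, assuming a hypothetical controller.dnsPolicy key next to the chart's DaemonSet host-network option (key names assumed):

controller:
  kind: DaemonSet
  dnsPolicy: ClusterFirstWithHostNet   # hypothetical key, not yet in the chart
  daemonset:
    useHostNetwork: true               # key name assumed from the chart's DaemonSet options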

Cannot start the ingress as LoadBalancer instead of NodePort

Hello,

I am setting up a cluster on bare metal, and based on the documentation it should be possible to run HAProxy as the ingress controller without using NodePort.
I want HAProxy to bind to the host's ports 80 and 443.

I am running the following command to do so:

$ helm install haproxy-controller --namespace haproxy-controller -f haproxy-ingress-values.yaml haproxytech/kubernetes-ingress --set service.type=LoadBalancer --set kind=DaemonSet

but getting the following result:

NAME: haproxy-controller
LAST DEPLOYED: Sun Mar 14 17:01:29 2021
NAMESPACE: haproxy-controller
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
HAProxy Kubernetes Ingress Controller has been successfully installed.

Controller image deployed is: "haproxytech/kubernetes-ingress:1.5.3".
Your controller is of a "Deployment" kind. Your controller service is running as a "NodePort" type.

Even with the override of the service type it is still spinning up as NodePort.
Am I missing something?

Please advise.
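One possible cause, offered only as a guess: the chart nests its options under controller, so top-level --set service.type / --set kind may simply be ignored. A values.yaml sketch of the nested form:

controller:
  kind: DaemonSet
  service:
    type: LoadBalancer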

The release namespace of the defaultTLSSecret is missing

Trying to deploy with a custom defaultTLSSecret, I noticed that the release namespace is missing, so the container crashes.

{{- if and .Values.controller.defaultTLSSecret.enabled -}}
{{- if .Values.controller.defaultTLSSecret.secret }}
          - --default-ssl-certificate={{ .Values.controller.defaultTLSSecret.secret }}
{{- else }}
          - --default-ssl-certificate={{ .Release.Namespace }}/{{ template "kubernetes-ingress.defaultTLSSecret.fullname" . }}
{{- end }}
{{- end }}
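Until that is fixed, a hedged workaround sketch is to include the namespace in the value itself, assuming the controller accepts the namespace/secretName form:

controller:
  defaultTLSSecret:
    enabled: true
    secret: "my-namespace/my-tls-secret"   # namespace included explicitly; placeholder names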

[kubernetes-ingress] allow disabling startupProbe

Hi,

Problem

Current chart compatibility is limited to k8s 1.16+ due to the use of startupProbe. We have production clusters running an older version of k8s (1.14) which can't use the haproxy chart.

Suggestion

  • Allow disabling the startupProbe in values.yaml so that the chart can be used in older clusters (a rough sketch of such a toggle follows below).
  • Include the minimum required k8s version in README.md
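A minimal values.yaml sketch of such a toggle, with a hypothetical controller.startupProbe.enabled key (not confirmed by the chart):

controller:
  startupProbe:
    enabled: false   # hypothetical: skip rendering the startupProbe block on k8s < 1.16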

Workaround

Currently we deploy Traefik controller instead.

Thank you

Latest haproxy chart is not released

There were changes made to the haproxy chart that add extraVolumes, but it seems the chart hasn't been released with a new version.
Related commit: 272db4d

As of now the latest version of the haproxy chart is 1.3.0 and it doesn't include the above-mentioned changes.

tcp services not added to haproxy config

Hi,

We are just trying haproxy out using this chart (latest release) on k8s 1.16.8. Specifically, we wish to use it to deploy a TCP service with the leastconn load-balancing algorithm alongside pod-maxconn. However, even though we have added controller.config and can see the ConfigMap created, the port does not appear in haproxy.cfg. Is this supposed to work, or do we have to override haproxy.cfg somehow? Maybe I missed something in the docs about this?

Please let me know if you need more info.
Thanks

ConfigMap:

apiVersion: v1
data:
  "1935": default/rtmp-broadcast:1935
kind: ConfigMap
metadata:
  creationTimestamp: "2020-05-03T16:02:22Z"
  labels:
    app.kubernetes.io/instance: rtmp
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kubernetes-ingress
    app.kubernetes.io/version: 1.4.2
    helm.sh/chart: kubernetes-ingress-1.1.3
  name: rtmp-kubernetes-ingress
  namespace: default
  resourceVersion: "141774"
  selfLink: /api/v1/namespaces/default/configmaps/rtmp-kubernetes-ingress
  uid: 43b28d96-7c94-4a50-a962-de2e10dca1e0

Service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    haproxy.org/load-balance: leastconn
    haproxy.org/pod-maxconn: "3"
  creationTimestamp: "2020-05-03T15:28:17Z"
  labels:
    app.kubernetes.io/instance: rtmp
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kubernetes-ingress
    app.kubernetes.io/version: 1.4.2
    helm.sh/chart: kubernetes-ingress-1.1.3
  name: rtmp-kubernetes-ingress
  namespace: default
  resourceVersion: "142650"
  selfLink: /api/v1/namespaces/default/services/rtmp-kubernetes-ingress
  uid: ef2be52a-03bb-40fc-b09a-c2893e03d4a5
spec:
  clusterIP: 10.109.6.30
  externalTrafficPolicy: Cluster
  loadBalancerIP: 1.2.3.4
  ports:
  - name: http
    nodePort: 30299
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31449
    port: 443
    protocol: TCP
    targetPort: https
  - name: stat
    nodePort: 30126
    port: 1024
    protocol: TCP
    targetPort: stat
  - name: rtmp-tcp
    nodePort: 30746
    port: 1935
    protocol: TCP
    targetPort: rtmp
  selector:
    app.kubernetes.io/instance: rtmp
    app.kubernetes.io/name: kubernetes-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 1.2.3.4
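For what it's worth, a hedged sketch of how this is usually wired through the chart: the TCP ConfigMap is handed to the controller explicitly and the port is exposed on the controller Service. The controller.extraArgs key and the --configmap-tcp-services flag are assumed from the controller documentation; controller.service.tcpPorts matches the template quoted in the next issue:

controller:
  extraArgs:
    - --configmap-tcp-services=default/rtmp-kubernetes-ingress   # point the controller at the TCP ConfigMap
  service:
    tcpPorts:
      - name: rtmp
        port: 1935
        targetPort: 1935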

Setting the service port to a different port than that of the controller container is not possible

The controller-deployment.yaml declares the container ports for the ingress controller with the help of the (TCP) ports defined for the service. This is the definition in the template file:

          {{- range .Values.controller.service.tcpPorts }}
            - name: {{ .name }}-tcp
              containerPort: {{ .port }}
              protocol: TCP
          {{- end }}

Shouldn't the containerPort be mapped to .targetPort?

As it is now, it is not possible to use a different port on the service than on the controller.
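A sketch of the suggested mapping, letting targetPort win when set and falling back to port otherwise:

          {{- range .Values.controller.service.tcpPorts }}
            - name: {{ .name }}-tcp
              containerPort: {{ .targetPort | default .port }}
              protocol: TCP
          {{- end }}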

Feature Request: stick-table annotation

Hi everyone,

What's the best way to create a stick-table backend? I see that I can use a backend-config-snippet, but how would I go about creating a stick-table that will be available for access across all backends in the config?

I kind of think it would be nice if there was an annotation just for creating stick-table backends.
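In the meantime, a hedged sketch of the backend-config-snippet route on a backend Service (the annotation name comes from other issues here; the stick-table parameters are illustrative, and this gives a per-backend table rather than the shared one asked about):

apiVersion: v1
kind: Service
metadata:
  name: my-app                       # placeholder backend service
  annotations:
    haproxy.org/backend-config-snippet: |
      stick-table type ip size 100k expire 30m
      stick on src
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080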

Timeouts are set to a non-zero value: 'client', 'connect', 'server'.

Hi,

I have just installed HAProxy Kubernetes Ingress Controller version 1.7.0 with values version 1.17.4.

The only values I have overridden are:

controller:
  unprivileged: true
  ingressClass: "haproxy"
  PodDisruptionBudget:
    enable: true
    minAvailable: 1
  defaultTLSSecret:
    secret: mysecret
  publishService:
    enabled: false
  service:
    type: LoadBalancer

It looks like it also starts, but I'm getting the following logs:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-aux-cfg: executing...
[cont-init.d] 01-aux-cfg: exited 0.
[cont-init.d] done.
[services.d] starting services
[WARNING]  (210) : config : missing timeouts for frontend 'https'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING]  (210) : config : missing timeouts for frontend 'http'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING]  (210) : config : missing timeouts for frontend 'healthz'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING]  (210) : config : missing timeouts for frontend 'stats'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING]  (210) : Removing incomplete section 'peers localinstance' (no peer named 'haproxy-kubernetes-ingress-77769c9fdd-t4p2p').
[NOTICE]   (210) : New worker #1 (227) forked
2021/10/13 14:48:19


2021/10/13 14:48:19 HAProxy Ingress Controller v1.7.0 ca16d5f
2021/10/13 14:48:19 Build from: https://github.com/haproxytech/kubernetes-ingress
2021/10/13 14:48:19 Build date: 2021-10-08T12:20:39

2021/10/13 14:48:19 ConfigMap: ingress/haproxy-kubernetes-ingress
2021/10/13 14:48:19 Ingress class: haproxy
2021/10/13 14:48:19 Empty Ingress class: false
2021/10/13 14:48:19 Publish service:
2021/10/13 14:48:19 Default backend service: ingress/haproxy-kubernetes-ingress-default-backend
2021/10/13 14:48:19 Default ssl certificate: ingress/mysecret
2021/10/13 14:48:19 Frontend HTTP listening on: 0.0.0.0:80
2021/10/13 14:48:19 Frontend HTTPS listening on: 0.0.0.0:443
2021/10/13 14:48:19 Controller sync period: 5s

2021/10/13 14:48:19 Running on haproxy-kubernetes-ingress-77769c9fdd-t4p2p
[services.d] done.
2021/10/13 14:48:19 haproxy.go:35 Running with HAProxy version 2.4.7-b5e51a5 2021/10/04 - https://haproxy.org/
2021/10/13 14:48:19 haproxy.go:44 Starting HAProxy with /etc/haproxy/haproxy.cfg
2021/10/13 14:48:19 controller.go:117 Running on Kubernetes version: v1.19.13 linux/amd64
2021/10/13 14:48:19 INFO    crmanager.go:75 Global CR defined in API core.haproxy.org
2021/10/13 14:48:19 INFO    crmanager.go:75 Defaults CR defined in API core.haproxy.org
2021/10/13 14:48:19 INFO    crmanager.go:75 Backend CR defined in API core.haproxy.org
2021/10/13 14:48:25 INFO    monitor.go:221 Auxiliary HAProxy config '/etc/haproxy/haproxy-aux.cfg' updated
2021/10/13 14:48:25 INFO    ingress.go:132 Setting http default backend to 'ingress-haproxy-kubernetes-ingress-default-backend-http'
2021/10/13 14:48:25 INFO    controller.go:248 HAProxy reloaded
[WARNING]  (210) : Reexecuting Master process
[WARNING]  (210) : config: Can't get version of the global server state file '/var/state/haproxy/global'.
[WARNING]  (227) : Proxy https stopped (cumulated conns: FE: 0, BE: 0).
[WARNING]  (227) : Proxy http stopped (cumulated conns: FE: 0, BE: 0).
[WARNING]  (227) : Proxy healthz stopped (cumulated conns: FE: 0, BE: 0).
[WARNING]  (227) : Proxy stats stopped (cumulated conns: FE: 0, BE: 0).
[WARNING]  (227) : Stopping frontend GLOBAL in 0 ms.
[NOTICE]   (210) : New worker #1 (251) forked
[WARNING]  (210) : Former worker #1 (227) exited with code 0 (Exit)
2021/10/13 15:10:10 INFO    controller.go:248 HAProxy reloaded
[WARNING]  (210) : Reexecuting Master process
[WARNING]  (210) : config: Can't get version of the global server state file '/var/state/haproxy/global'.
[WARNING]  (251) : Proxy healthz stopped (cumulated conns: FE: 261, BE: 0).
[WARNING]  (251) : Proxy http stopped (cumulated conns: FE: 1085, BE: 0).
[WARNING]  (251) : Proxy https stopped (cumulated conns: FE: 1033, BE: 0).
[WARNING]  (251) : Proxy stats stopped (cumulated conns: FE: 1062, BE: 0).
[WARNING]  (251) : Stopping frontend GLOBAL in 0 ms.
[WARNING]  (251) : Stopping backend ingress-haproxy-kubernetes-ingress-default-backend-http in 0 ms.
[NOTICE]   (210) : New worker #1 (267) forked
[WARNING]  (210) : Former worker #1 (251) exited with code 0 (Exit)

Can anyone help me?

Thx in advance!

Jeff
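For reference, a hedged values.yaml sketch that sets the timeouts the warning asks for through the controller ConfigMap; the option names timeout-client, timeout-connect and timeout-server are assumed from the controller documentation:

controller:
  config:
    timeout-client: 50s
    timeout-connect: 5s
    timeout-server: 50s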

Can't scale deployment more than 1 replica

If I set replicaCount to 1, I have a successful deployment and everything is functional.
However, if I set it to 2 or more, one pod will start up, and the others will eventually go into CrashLoopBackOff because they fail their health check.

Upon further investigation (shell on the pod, logs, shell on an ephemeral pod to confirm, etc.), it turns out that the other pods simply don't start haproxy. It's not listening on any ports, and an inspection of top shows it's not running.

Logs from the bad pod do not indicate any error:

2020/05/05 08:46:43
 _   _    _    ____
| | | |  / \  |  _ \ _ __ _____  ___   _
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |
|  _  |/ ___ \|  __/| | | (_) >  <| |_| |
|_| |_/_/   \_\_|   |_|  \___/_/\_\\__, |
 _  __     _                       |___/             ___ ____
| |/ /   _| |__   ___ _ __ _ __   ___| |_ ___  ___  |_ _/ ___|
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __|  | | |
| . \ |_| | |_) |  __/ |  | | | |  __/ ||  __/\__ \  | | |___
|_|\_\__,_|_.__/ \___|_|  |_| |_|\___|\__\___||___/ |___\____|


2020/05/05 08:46:43 HAProxy Ingress Controller v1.4.2 46127a7

2020/05/05 08:46:43 Build from: git@github.com:haproxytech/kubernetes-ingress.git
2020/05/05 08:46:43 Build date: 2020-04-03T20:10:18

2020/05/05 08:46:43 ConfigMap: haproxy-ingress/haproxy-ingress-kubernetes-ingress
2020/05/05 08:46:43 Ingress class: internal
2020/05/05 08:46:43 Publish service: haproxy-ingress/haproxy-ingress-kubernetes-ingress
2020/05/05 08:46:43 Default backend service: haproxy-ingress/haproxy-ingress-kubernetes-ingress-default-backend
2020/05/05 08:46:43 Default ssl certificate: haproxy-ingress/haproxy-default-tls
2020/05/05 08:46:43 Running with  HA-Proxy version 2.0.14 2020/04/02 - https://haproxy.org/
2020/05/05 08:46:43 Starting HAProxy with /etc/haproxy/haproxy.cfg
2020/05/05 08:46:43 Running on haproxy-ingress-kubernetes-ingress-659b898986-hjpk9
[WARNING] 125/084643 (24) : Can't open server state file '/var/state/haproxy/global': No such file or directory
2020/05/05 08:46:43 Running on Kubernetes version: v1.17.4+k3s1 linux/amd64

Compared to the healthy pod:

2020/05/05 06:59:20
 _   _    _    ____
| | | |  / \  |  _ \ _ __ _____  ___   _
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |
|  _  |/ ___ \|  __/| | | (_) >  <| |_| |
|_| |_/_/   \_\_|   |_|  \___/_/\_\\__, |
 _  __     _                       |___/             ___ ____
| |/ /   _| |__   ___ _ __ _ __   ___| |_ ___  ___  |_ _/ ___|
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __|  | | |
| . \ |_| | |_) |  __/ |  | | | |  __/ ||  __/\__ \  | | |___
|_|\_\__,_|_.__/ \___|_|  |_| |_|\___|\__\___||___/ |___\____|


2020/05/05 06:59:20 HAProxy Ingress Controller v1.4.2 46127a7

2020/05/05 06:59:20 Build from: git@github.com:haproxytech/kubernetes-ingress.git
2020/05/05 06:59:20 Build date: 2020-04-03T20:10:18

2020/05/05 06:59:20 ConfigMap: haproxy-ingress/haproxy-ingress-kubernetes-ingress
2020/05/05 06:59:20 Ingress class: internal
2020/05/05 06:59:20 Publish service: haproxy-ingress/haproxy-ingress-kubernetes-ingress
2020/05/05 06:59:20 Default backend service: haproxy-ingress/haproxy-ingress-kubernetes-ingress-default-backend
2020/05/05 06:59:20 Default ssl certificate: haproxy-ingress/haproxy-default-tls
2020/05/05 06:59:20 Running with  HA-Proxy version 2.0.14 2020/04/02 - https://haproxy.org/
2020/05/05 06:59:20 Starting HAProxy with /etc/haproxy/haproxy.cfg
2020/05/05 06:59:20 Running on haproxy-ingress-kubernetes-ingress-659b898986-42qmr
[WARNING] 125/065920 (23) : Can't open server state file '/var/state/haproxy/global': No such file or directory
2020/05/05 06:59:20 Running on Kubernetes version: v1.17.4+k3s1 linux/amd64
[NOTICE] 125/065920 (35) : New worker #1 (36) forked
2020/05/05 06:59:27 Confiugring default_backend haproxy-ingress-kubernetes-ingress-default-backend from ingress DefaultService
2020/05/05 06:59:27 successful update of LoadBalancer status of ingress home-assistant/home-assistant
2020/05/05 06:59:27 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:27 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:27 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:27 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:27 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:28 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:28 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:28 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:29 successful update of LoadBalancer status of ingress ***/***
2020/05/05 06:59:29 HAProxy reloaded
[NOTICE] 125/065929 (48) : New worker #1 (49) forked
2020/05/05 08:46:04 Modified: ***/*** - SRV_rGwUZ - ready
2020/05/05 08:46:04 Modified: ***/*** - SRV_AAnIs - maint
2020/05/05 08:46:04 Modified: ***/*** - SRV_P8INE - ready
2020/05/05 08:46:04 Modified: ***/***- SRV_klqWx - maint
2020/05/05 09:00:45 Modified: ***/*** - SRV_rGwUZ - maint
2020/05/05 09:00:45 Modified: ***/*** - SRV_P8INE - maint
2020/05/05 09:00:54 Modified: ***/*** - SRV_zYmkr - ready
2020/05/05 09:00:54 Modified: ***/*** - SRV_38MlW - ready

Here's my full helm values:

---
kubernetes-ingress:
  controller:
    kind: Deployment
    replicaCount: 1
    ingressClass: internal
    defaultTLSSecret:
      enabled: true
      secret: haproxy-ingress/haproxy-default-tls
    service:
      type: NodePort
      labels:
        app: haproxy-ingress
    publishService:
      enabled: true
    nodeSelector:
      kubernetes.io/arch: amd64
  defaultBackend:
    replicaCount: 1
    nodeSelector:
      kubernetes.io/arch: amd64

haproxy template files daemonset.yaml and deployment.yaml do not allow configuring metadata.annotations using values

Both the daemonset.yaml and deployment.yaml template files do not allow configuring metadata.annotations using values.

What we have:

metadata:
  name: {{ include "haproxy.fullname" . }}
  labels:
    {{- include "haproxy.labels" . | nindent 4 }}
spec:

What we expect:

metadata:
  name: {{ include "haproxy.fullname" . }}
  labels:
    {{- include "haproxy.labels" . | nindent 4 }}
  {{- if .Values.deploymentAnnotations }}
  annotations:
{{ toYaml .Values.deploymentAnnotations | indent 4 }}
  {{- end }}
spec:

pod-maxconn doesn't work

Chart version: 1.12.3

I'm pretty sure my config is correct. I can see the annotation on the service.

# helm values

controller:
  service:
    type: LoadBalancer
    annotations:
      haproxy.org/pod-maxconn: 5 # testing low number

My understanding is that with the above config, no pod will receive more than 5 concurrent connections; any further connections will be queued up until there's an open slot. However, as I test it, I can make hundreds of concurrent connections to each pod.
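One thing worth checking, offered only as a guess: haproxy.org/pod-maxconn is a backend option, so it generally belongs on the Service of the application being load-balanced rather than on the controller's own LoadBalancer Service, and annotation values must be quoted strings. A sketch:

apiVersion: v1
kind: Service
metadata:
  name: my-app                      # placeholder application service
  annotations:
    haproxy.org/pod-maxconn: "5"    # quoted: annotation values must be strings
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080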

Support for extra labels on deployments and replicasets

The ability to add extra labels, beyond the standard chart labels, to deployments and replicasets.
A specific use case is kube-monkey, which requires labels on the deployment in addition
to the pods (which are already catered for).

feature request: Allow specifying existing configmap for haproxy.cfg

Right now haproxy.cfg can only be specified via the config: block in values.yaml. The issue I have with this is that values.yaml cannot use any Helm templating, so if I want to generate a config dynamically, e.g.

backend be_main
  server {{ .Release.Name }}-mybackend

it's not possible to do that.

If there were a config option to reuse an existing ConfigMap instead of the chart-generated one, you could generate a custom ConfigMap in the root chart and pass it through.

[kubernetes-ingress] Unable to start with unprivileged config

Problem

Pod cannot start when setting unprivileged flag in values file

2020/12/21 08:25:18 HAProxy Ingress Controller v1.4.10 6f9c630

2020/12/21 08:25:18 Build from: git@github.com:haproxytech/kubernetes-ingress.git
2020/12/21 08:25:18 Build date: 2020-11-20T10:33:03

2020/12/21 08:25:18 ConfigMap: default/haproxy-kubernetes-ingress
2020/12/21 08:25:18 Ingress class:
2020/12/21 08:25:18 Publish service:
2020/12/21 08:25:18 Default backend service: default/haproxy-kubernetes-ingress-default-backend
2020/12/21 08:25:18 Default ssl certificate: default/haproxy-kubernetes-ingress-default-cert
2020/12/21 08:25:18 Controller sync period: 5s
2020/12/21 08:25:18 PANIC   controller.go:241 mkdir /etc/haproxy/certs: permission denied

Reproduce

helm install haproxy haproxytech/kubernetes-ingress --set controller.unprivileged=true

Missing support for priorityClassName and HPA

Hi, I'm using the HAProxy helm chart and there are 2 important features which I believe are missing:

  • Support for HPA (when using the Deployment controller)
  • Support for priorityClassName

Are you planning to add these features? A rough sketch of the desired values follows below.
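A minimal sketch of the desired values, with hypothetical controller.priorityClassName and controller.autoscaling keys (neither is confirmed by the chart):

controller:
  priorityClassName: system-cluster-critical   # hypothetical key
  autoscaling:                                  # hypothetical HPA block
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 80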

Support lifecycle additions on controller - for hooks

Be great if the chart had support for lifecycle to add hooks to the controller.
Personal example being a preStop for graceful shutdown.

controller:
  lifecycle:
    # graceful shutdown
    preStop:
      exec:
        command: ["/bin/sh", "-c", "kill -USR1 $(pidof haproxy); while killall -0 haproxy; do sleep 1; done"]

But it would be generally useful to support.

haproxy.org/ssl-certificate annotation doesn't work

The haproxy.org/ssl-certificate annotation is ignored on the haproxy ingress deployed via DaemonSet.
Error:
https/v4: SSL handshake failure
Ingress controller version:
haproxytech/kubernetes-ingress:1.6.1

config ingress:

apiVersion:  networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    haproxy.org/server-ssl: "true"
    haproxy.org/ssl-redirect: "false" 
    haproxy.org/server-proto: "h2"
    haproxy.org/ssl-certificate: "default/proxmox-secret"
    haproxy.org/ingress.class: "haproxy"
  name: proxmox-ingress
  namespace: default
spec:
  ingressClassName: "haproxy"
  rules:
  - host: proxmox.romanovs.ml
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: proxmox-ext
            port:
              number: 8006

If I add the block:

  tls:
  - hosts:
    - proxmox.romanovs.ml
    secretName: proxmox-secret

it works.

Question: how to discover LB IP address...

Hey folks,

I'm completely new to HAProxy and I'm trying to use it as an ingress controller in my k8s cluster.

Here is my setup:

  • 2 app deployments (each with a ClusterIP service - service1 & service2) in the default namespace
  • an ingress with 2 host rules (<root domain> -> service1; <sub>.<root domain> -> service2) also in the default namespace
  • haproxy's ingress controller deployed (through helm) in another namespace (system, but not kube-system):
helm install haproxy-ingress-controller haproxytech/kubernetes-ingress \
  --namespace system \
  --set controller.kind=DaemonSet \
  --set controller.ingressClass=haproxy

HAProxy IC does seem to discover the ingress I've deployed (I can see the services listed on the stats page haproxy-ingress-controller-service:1024), but I can't figure out what my hosted zones (root and sub domains) should point to... I tried adding A records in my domain's hosted zones pointing to the NodePort service deployed by the helm chart (to its IP), but it doesn't work - curl times out when making a request to any of my domains (DNS has synced up, it's been hours already).

Apologies for the noob question and please let me know if you need any more info about my case.

Thanks.

kubernetes-ingress upgrade to 1.16.x not working

Hi,
we have the kubernetes-ingress helm chart running in v1.15.4 on a bare-metal Kubernetes cluster with MetalLB (don't know if this matters).
Now, when we upgrade to any 1.16.x version, adding an ingress gets stuck initializing. Only reverting back to 1.15.4 gives us a working ingress again.
I could not find any changes required for the upgrade. Maybe you have an idea why this happens.

Our values.yaml is pretty simple:

controller:
  service:
    type: LoadBalancer
  resources:
    requests:
      memory: 192Mi

HAProxy: allow configuring Environment variables

I need to adapt to dynamic IPv6 prefixes on a DSL line.

The current solution (without helm) is to have a ConfigMap with IPv6Prefix=ffff:ffff:ffff:ffff that is updated by a daemon running on the Kubernetes node. This ConfigMap is imported into an environment variable, and the HAProxy config file uses it to listen on the right addresses.

I'd like to use something similar with Helm; the easiest way seems to be allowing a YAML snippet to be inserted into the env: section of the HAProxy container.

Helm values.yaml could contain something like this:

env:
- name: IPv6Prefix
  valueFrom:
    configMapKeyRef:
      name: nafets-ipv6
      key: IPv6Prefix

ConfigMap TCP Services, option to add multiple server under backend

Currently, we can define TCP services in the haproxy ingress controller as:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp
  namespace: default
data:
  3306:              # Port where the frontend is going to listen to.
    tcp/mysql:3306   # Kubernetes service to use for the backend.
  389:
    tcp/ldap:389:ssl # ssl option will enable ssl offloading for target service.
  6379:
    tcp/redis:6379

If I have a service which exposes multiple TCP ports, there is no option to add multiple backends here. For example, when using the haproxy deployment instead of the ingress, I was able to do it as below:

backend dummy_tcp
   option tcp-check
   server-template demo_1883 10 demo.svc.cluster.local:1883 check
   server-template demo_1884 10 demo.svc.cluster.local:1884 check
   server-template demo_1885 10 demo.svc.cluster.local:1885 check

Is there any way of doing the same in the ingress controller?

RBAC too permissive for secrets and unused resources

Using the latest Helm chart versions I noticed these overly permissive verbs for Secret resources:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2020-06-30T08:42:50Z"
  labels:
    app.kubernetes.io/instance: haproxy-tech-ingress-1
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kubernetes-ingress
    app.kubernetes.io/version: 1.4.5
    helm.sh/chart: kubernetes-ingress-1.1.6
  name: haproxy-tech-ingress-1-kubernetes-ingress
  resourceVersion: "619239126"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/haproxy-tech-ingress-1-kubernetes-ingress
  uid: 28564610-27f7-4506-b757-1c3edd926f74
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes # not needed
  - pods # not needed
  - services
  - namespaces
  - events
  - serviceaccounts # not needed
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
  - create # not needed
  - patch  # not needed
  - update # not needed
- apiGroups:
  - extensions
  resources:
  - ingresses
  - ingresses/status
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - networking.k8s.io/v1beta1
  resources:
  - ingresses
  - ingresses/status
  verbs:
  - get
  - list
  - watch

I checked out the HAProxy Tech Kubernetes Ingress Controller and it seems that .rules[1] (tl;dr: the one regarding Secrets) can be dropped.

As well, since the Node, Pod, and ServiceAccount resources aren't used by the code-base, they could be dropped.

Feature request: initContainer for performance tuning for `kubernetes-ingress` chart

This is a feature request, quite similar to this one raised for the stable NGINX Ingress Controller.

Here at the Namecheap, Inc. Cloud Team we're starting to evaluate this Ingress Controller: we run several Ingress Controller classes at scale, each one counting up to 20k virtual hosts.

According to our benchmarks, we used to do something like the following to tune performance at the initContainer level (i.e. before the Pod is able to serve traffic):

sysctl -w net.core.somaxconn=redacted
sysctl -w net.ipv4.ip_local_port_range="redacted redacted"
sysctl -w net.ipv4.tcp_rmem="redacted   redacted   redacted"
sysctl -w net.ipv4.tcp_wmem="redacted   redacted   redacted"
sysctl -w net.ipv4.tcp_slow_start_after_idle=redacted
sysctl -w net.ipv4.tcp_mtu_probing=redacted
sysctl -w net.netfilter.nf_conntrack_max=redacted

It would be great to add a new list-of-Container key (.controller.initContainers) following the core/v1 API spec.

Implementing it should be quite easy: just toYaml .Values.initContainers with the proper indent. Furthermore, it is cross-compatible with both deployment strategies (Deployment or DaemonSet).

Desired values.yaml

controller:
  name: controller
  image:
    repository: haproxytech/kubernetes-ingress    # can be changed to use CE or EE Controller images
    tag: "{{ .Chart.AppVersion }}"
    pullPolicy: IfNotPresent

  initContainers:
  - name: sysctl
    image: "busybox:1.31.1-musl"
    command:
    - /bin/sh
    - -xc
    - |-
      sysctl -w net.core.somaxconn=redacted
      sysctl -w net.ipv4.ip_local_port_range="redacted redacted"
      sysctl -w net.ipv4.tcp_rmem="redacted   redacted   redacted"
      sysctl -w net.ipv4.tcp_wmem="redacted   redacted   redacted"
      sysctl -w net.ipv4.tcp_slow_start_after_idle=redacted
      sysctl -w net.ipv4.tcp_mtu_probing=redacted
      sysctl -w net.netfilter.nf_conntrack_max=redacted
    securityContext:
      privileged: true

Unable to deploy kubernetes-ingress >= 1.17.6 against kubernetes v1.17.x

Description

  • Starting with kubernetes-ingress 1.17.6, when trying to deploy a release against kubernetes v1.17.x, the following exception occurs:
  Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: unable to recognise "": no matches for kind "IngressClass" in version "networking.k8s.io/v1"

Versions

kubernetes: v1.17.16
helm: v3.5.4

Unclear how to use SSL termination

Thanks a lot for providing haproxy for Kubernetes as a standalone chart and not only as an ingress controller!

After reading https://github.com/haproxytech/helm-charts/blob/master/haproxy/README.md it is unclear to me how to terminate SSL traffic with it. Usually you would just add
bind :443 ssl crt /etc/foo.pem
but how am I supposed to do it with this helm chart? I would guess through Kubernetes secrets, but how do I reference them in the config? Or does this not work with SSL traffic at all?
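A hedged sketch of one way this could be wired, assuming the haproxy chart exposes config plus extraVolumes/extraVolumeMounts (the volume keys and secret layout are assumptions; HAProxy's crt expects certificate and key combined in one file):

config: |
  frontend fe_https
    bind :443 ssl crt /etc/ssl/my-cert/tls.pem
    default_backend be_main
extraVolumes:                        # key name assumed
  - name: my-cert
    secret:
      secretName: my-tls-secret      # placeholder secret holding a combined cert+key pem under the tls.pem key
extraVolumeMounts:                   # key name assumed
  - name: my-cert
    mountPath: /etc/ssl/my-cert
    readOnly: true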

haproxy: feature request: support for configurable volumes

It'd be helpful to have a way to mount volumes in the haproxy Deployment and DaemonSet objects. I saw a similar GH issue but it was specifically for the ingress chart whereas I'm using the bare haproxy chart.

This would be useful to allow us to mount secrets or configmaps into the pod, for things like TLS certificates (e.g. server name host:port ssl crt /path/to/mounted/file.crt).

[kubernetes-ingress] Missing resource permissions on ClusterRole

Version

  • version: 1.17.0
  • appVersion: 1.7.0

Problem

Pod failed to start with the following errors:

2021/10/10 02:59:39 HAProxy Ingress Controller v1.7.0 ca16d5f
2021/10/10 02:59:39 Build from: https://github.com/haproxytech/kubernetes-ingress
2021/10/10 02:59:39 Build date: 2021-10-08T12:20:39

2021/10/10 02:59:39 ConfigMap: ns-ext/haproxy-kubernetes-ingress
2021/10/10 02:59:39 Ingress class:
2021/10/10 02:59:39 Empty Ingress class: false
2021/10/10 02:59:39 Publish service: ns-ext/haproxy-kubernetes-ingress
2021/10/10 02:59:39 Default backend service:
2021/10/10 02:59:39 Default ssl certificate:
2021/10/10 02:59:39 Frontend HTTP listening on: 0.0.0.0:80
2021/10/10 02:59:39 Frontend HTTPS listening on: 0.0.0.0:443
2021/10/10 02:59:39 Controller sync period: 5s

2021/10/10 02:59:39 Running on haproxy-kubernetes-ingress-59dc44d9c-7rxmd
2021/10/10 02:59:39 haproxy.go:35 Running with HAProxy version 2.4.7-b5e51a5 2021/10/04 - https://haproxy.org/
2021/10/10 02:59:39 haproxy.go:44 Starting HAProxy with /etc/haproxy/haproxy.cfg
[NOTICE]   (214) : New worker #1 (249) forked
2021/10/10 02:59:39 controller.go:117 Running on Kubernetes version: v1.22.2+k3s2 linux/amd64
E1010 02:59:39.486212     216 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:ns-ext:haproxy-kubernetes-ingress" cannot list resource "pods" in API group "" in the namespace "ns-ext"
E1010 02:59:39.488285     216 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:ns-ext:haproxy-kubernetes-ingress" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope

Workaround

Downgrade image tag to 1.6.7

Kubernetes-ingress chart should create ingressclass object if controller.ingressClass is not null

The preferred way to specify the ingress class is now through an IngressClass object instead of an annotation - https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class. As stated in haproxytech/kubernetes-ingress#307, haproxy already supports this, but the IngressClass currently has to be created manually. I tried to create the object below and it indeed works.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: haproxy
spec:
  controller: haproxy.org/ingress-controller

Ideally this object should be installed with the helm chart if controller.ingressClass is not null. Can this be implemented in this chart?
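A hedged template sketch of how the chart could render it conditionally (not the chart's current code):

{{- if .Values.controller.ingressClass }}
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: {{ .Values.controller.ingressClass }}
spec:
  controller: haproxy.org/ingress-controller
{{- end }}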

[controller image] Building HAProxy image with Lua module(s)

Hi guys -

How would you suggest going about building an HAProxy image with either:

  1. a Lua module installable as a luarock, e.g. luarocks install pgmoon; or
  2. Custom lua module that I wrote locally (think COPY ./my-lua-module /src/ in a Dockerfile) or some custom GitHub repo (think RUN git clone https://github.com/foo/some-lua-module in Dockerfile)

In my case I have a fork of pgmoon that is not yet merged upstream, which I would like to have available in my HAProxy image for some custom Lua scripting.

Protect stats page with basic auth

Hi, the routes my HAProxy serves are all public, but I want the stats page behind basic auth.

Seems like all I need is this config

frontend stats
 # other default config
  stats auth hi:mom

but I can't find a way to get my stats auth hi:mom config there. There's a global-config-snippet and a backend-config-snippet, but no frontend-config-snippet.

HAProxy errors when I put stats auth hi:mom in the global-config-snippet.

[ConfigMap] global-config-snippet value not written to haproxy.cfg file

Hi guys -

I am using the haproxytech/kubernetes-ingress Helm chart as a dependency in my chart.

Here are relevant values from my values.yaml file:

kubernetes-ingress:
  controller:
    image:
      repository: localhost:5000/kubernetes-ingress
      tag: latest
    config:
      global-config-snippet: |
        lua-load test.lua
    logging:
      traffic:
        address: stdout
        format: raw
        facility: daemon

You can see the Helm template here.

Here is how the resultant ConfigMap looks after I install my chart:

k8s-services git:(mc/cloud-charts) ✗ k get configmaps gatekeeper-kubernetes-ingress -o json | jq -r .data
{
  "global-config-snippet": "lua-load test.lua\n",
  "syslog-server": "address:stdout, facility:daemon, format:raw,"
}

So, things look fine to me - however, when I check the controller image's /etc/haproxy/haproxy.cfg file, I don't see the above global-config-snippet written to this file:

k8s-services git:(mc/cloud-charts) ✗ k exec -it gatekeeper-kubernetes-ingress-67846778b5-gtxhj -- /bin/sh
/ # cat /etc/haproxy/haproxy.cfg | grep lua-load

This seems like a bug, no?
