centrifugal / helm-charts
Official Centrifugo Helm chart for Kubernetes
License: MIT License
Describe the bug
Version of Helm and Kubernetes:
Helm Version:
$ helm version
version.BuildInfo{Version:"v3.13.3", GitCommit:"c8b948945e52abba22ff885446a1486cb5fd3474", GitTreeState:"clean", GoVersion:"go1.20.11"}
Kubernetes Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"26+", GitVersion:"v1.26.13-eks-508b6b3", GitCommit:"26dbbc7674e3d15b579cec35cdd6aede55c1d426", GitTreeState:"clean", BuildDate:"2024-01-29T20:58:43Z", GoVersion:"go1.20.13", Compiler:"gc", Platform:"linux/amd64"}
Which version of the chart:
9.1.2
What happened:
I wasn't able to connect Centrifugo to AWS MemoryDB because the chart lacks a redis_user environment variable that should be referenced from the secret.
What you expected to happen:
Centrifugo should be able to work with AWS MemoryDB just as it works with any other Redis or Redis Cluster setup.
How to reproduce it (as minimally and precisely as possible):
Try connecting your Centrifugo instance to memorydb by using the suggested env vars such as: CENTRIFUGO_REDIS_ADDRESS and CENTRIFUGO_REDIS_PASSWORD
<!--
This could be something like:
values.yaml (only put values which differ from the defaults)
key: value
helm install my-release centrifugal/centrifugo --version version --values values.yaml
-->
Anything else we need to know:
By adding that missing env var, referenced from the secret, you would be able to connect to any Redis cluster (including MemoryDB) using only env vars referenced from the secret.
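The missing entry could look like this in the deployment template (a sketch only; the `centrifugo.secretName` helper and the `redis_user` secret key are assumptions modeled on the chart's existing password handling):

```yaml
# Hypothetical addition to templates/deployment.yaml,
# next to the existing CENTRIFUGO_REDIS_PASSWORD entry
- name: CENTRIFUGO_REDIS_USER
  valueFrom:
    secretKeyRef:
      name: {{ include "centrifugo.secretName" . }}  # assumed helper name
      key: redis_user
```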
Hi, it is not clear from the documentation how to configure TLS with helm deployment. Is there any example that you can provide?
Thanks,
Describe the bug
When making use of the 'autoscaling' in a Helm chart for a Kubernetes deployment, the output is incorrect.
Version of Helm and Kubernetes:
Helm Version:
$ helm version
version.BuildInfo{Version:"v3.12.3", GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e", GitTreeState:"clean", GoVersion:"go1.20.7"}
Kubernetes Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:14:41Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.6", GitCommit:"b39bf148cd654599a52e867485c02c4f9d28b312", GitTreeState:"clean", BuildDate:"2022-09-22T05:53:51Z", GoVersion:"go1.18.6", Compiler:"gc", Platform:"linux/arm64"}
Which version of the chart:
latest centrifugo-11.0.2
What happened:
When templating the chart with autoscaling enabled:
helm template cent ./charts/centrifugo --set autoscaling.enabled=true \
--set autoscaling.minReplicas=1 \
--set autoscaling.maxReplicas=3 \
--set autoscaling.cpu.enabled=true \
--set autoscaling.cpu.targetCPUUtilizationPercentage=80 \
--set autoscaling.memory.enabled=true \
--set autoscaling.memory.targetMemoryUtilizationPercentage=80 \
-a autoscaling/v1 \
--debug --dry-run
The resulting HorizontalPodAutoscaler is like this
apiVersion: "autoscaling/v1"
kind: HorizontalPodAutoscaler
metadata:
  name: cent-centrifugo
  namespace: default
  labels:
    helm.sh/chart: centrifugo-11.0.1
    app.kubernetes.io/name: centrifugo
    app.kubernetes.io/instance: cent
    app.kubernetes.io/version: "5.0.2"
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cent-centrifugo
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        test: |
          [autoscaling/v2beta2]
        name: cpu
        targetAverageUtilization:
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization:
helm template cent ./charts/centrifugo --set autoscaling.enabled=true \
--set autoscaling.minReplicas=1 \
--set autoscaling.maxReplicas=3 \
--set autoscaling.cpu.enabled=true \
--set autoscaling.cpu.targetCPUUtilizationPercentage=80 \
--set autoscaling.memory.enabled=true \
--set autoscaling.memory.targetMemoryUtilizationPercentage=80 \
-a autoscaling/v2 \
--debug --dry-run
apiVersion: "autoscaling/v2"
kind: HorizontalPodAutoscaler
metadata:
  name: cent-centrifugo
  namespace: default
  labels:
    helm.sh/chart: centrifugo-11.0.1
    app.kubernetes.io/name: centrifugo
    app.kubernetes.io/instance: cent
    app.kubernetes.io/version: "5.0.2"
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cent-centrifugo
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        test: |
          [autoscaling/v2]
        name: cpu
        target:
          type: Utilization
          averageUtilization:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization:
As you can see, the utilization values are missing in both API versions (autoscaling/v2 and autoscaling/v2beta2).
How to reproduce it:
See above
Anything else we need to know:
The issue was introduced by PR #64, which changed the position of the utilization values in values.yaml but not in the actual template: it still refers to .Values.autoscaling.targetCPUUtilizationPercentage instead of .Values.autoscaling.cpu.targetCPUUtilizationPercentage.
Another issue: the template uses targetCPUUtilizationPercentage for the memory target utilization:
{{- if .Values.autoscaling.memory.enabled }}
- type: Resource
  resource:
    name: memory
    {{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
    target:
      type: Utilization
      averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- else }}
    targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
{{- end }}
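A minimal sketch of a corrected memory block, reading the value from .Values.autoscaling.memory instead of the CPU key (the final key name used by the chart is an assumption based on the values.yaml layout described above):

```yaml
{{- if .Values.autoscaling.memory.enabled }}
- type: Resource
  resource:
    name: memory
    {{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
    target:
      type: Utilization
      averageUtilization: {{ .Values.autoscaling.memory.targetMemoryUtilizationPercentage }}
    {{- else }}
    targetAverageUtilization: {{ .Values.autoscaling.memory.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
```

The CPU block would need the analogous rename to .Values.autoscaling.cpu.targetCPUUtilizationPercentage.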
Describe the bug
While you can configure the health handler path for Centrifugo through the config key health_handler_prefix, the liveness and readiness probes' paths are hard-coded as /health in the deployment template:
https://github.com/centrifugal/helm-charts/blob/master/charts/centrifugo/templates/deployment.yaml#L154-L165
Version of Helm and Kubernetes:
Helm Version:
$ helm version
version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.18.7"}
Kubernetes Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-03-01T02:23:41Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.15-eks-a5565ad", GitCommit:"af0b16a331376384fe84445fe2f8d4cb468b4148", GitTreeState:"clean", BuildDate:"2023-06-16T17:35:37Z", GoVersion:"go1.19.10", Compiler:"gc", Platform:"linux/amd64"}
Which version of the chart:
10.0.3
What happened:
Liveness / readiness probes are failing when a different health path is configured.
What you expected to happen:
Be able to configure the liveness and readiness probe paths, or, if the value for health_handler_prefix is not empty, use that value for the probe paths too.
How to reproduce it (as minimally and precisely as possible):
values.yaml
config:
  health_handler_prefix: /some-prefix/health
helm install my-release centrifugal/centrifugo --version version --values values.yaml
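One way the template could honor the prefix, sketched under the assumption that the probes use an httpGet check against the container's port (the port name here is illustrative, not necessarily the chart's actual one):

```yaml
livenessProbe:
  httpGet:
    path: {{ .Values.config.health_handler_prefix | default "/health" }}
    port: internal
readinessProbe:
  httpGet:
    path: {{ .Values.config.health_handler_prefix | default "/health" }}
    port: internal
```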
In the full values I don't see history settings; is it possible to add them?
https://github.com/centrifugal/helm-charts/blob/master/charts/centrifugo/values.yaml
Describe the bug
When making use of the 'deploymentStrategy' in a Helm chart for a Kubernetes deployment, the output format is incorrect. The 'strategy' and 'type' fields, under spec strategy, appear on the same line.
Version of Helm and Kubernetes:
Helm Version:
$ helm version
version.BuildInfo{Version:"v3.12.3", GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e", GitTreeState:"clean", GoVersion:"go1.20.7"}
Kubernetes Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:14:41Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.6", GitCommit:"b39bf148cd654599a52e867485c02c4f9d28b312", GitTreeState:"clean", BuildDate:"2022-09-22T05:53:51Z", GoVersion:"go1.18.6", Compiler:"gc", Platform:"linux/arm64"}
Which version of the chart:
latest centrifugo-11.0.1
What happened:
When using deploymentStrategy, the resulting output had 'strategy' and 'type' on the same line under spec, which makes the output invalid:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: centrifugo
      app.kubernetes.io/instance: cent
  strategy: type: RollingUpdate
What you expected to happen:
I expected 'strategy' and 'type' to be on separate lines, like so:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: centrifugo
      app.kubernetes.io/instance: cent
  strategy:
    type: RollingUpdate
How to reproduce it:
Add deploymentStrategy to the values.yaml file, or run:
helm template cent ./charts/centrifugo --set deploymentStrategy.type=RollingUpdate --debug
Anything else we need to know:
The issue seems to be due to the use of 'indent' instead of 'nindent' in this piece of the Helm chart:
{{- if .Values.deploymentStrategy }}
strategy:
{{- toYaml .Values.deploymentStrategy | indent 4 }}
{{- end }}
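With nindent instead of indent, toYaml's output starts on a fresh, correctly indented line, so 'type' lands below 'strategy'. A sketch of the corrected block:

```yaml
{{- if .Values.deploymentStrategy }}
  strategy:
    {{- toYaml .Values.deploymentStrategy | nindent 4 }}
{{- end }}
```

nindent prepends a newline before indenting, which is exactly what is missing when the template keeps 'type' on the same line as 'strategy'.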
Looks like we have a problem with port names which prevents Istio from properly passing WS traffic:
https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/
Needs investigation to confirm the issue, since Istio should select the protocol automatically: https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection
Is your feature request related to a problem? Please describe.
If .Values.existingSecret is used and all the sensitive data is created outside of Helm, I need to be able to restart the pods when any changes to the secret are detected. One option is to use Reloader, which triggers a rollout when it notices changes to the secret.
Describe the solution you'd like
To enable this, there should be an option to add custom annotations to the Deployment.
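For example, with a hypothetical deploymentAnnotations value (the key name is an assumption; reloader.stakater.com/auto is Reloader's documented annotation), the values.yaml could look like this:

```yaml
# values.yaml (hypothetical key, not currently in the chart)
deploymentAnnotations:
  reloader.stakater.com/auto: "true"
```

The deployment template would then render these under metadata.annotations, e.g. via toYaml.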
Is your feature request related to a problem? Please describe.
Enhance the HPA template and improve the autoscaling so it can scale on Prometheus metrics.
Describe the solution you'd like
Add the ability to use autoscalingTemplate
Is your feature request related to a problem? Please describe.
It should be possible to expose the internalService (port 9000) via an ingress.
Describe the solution you'd like
Create a dedicated ingress for the internalService (port 9000).
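Such an ingress might be rendered roughly like this (a sketch; the host and the internal service name are placeholders, not values the chart currently exposes):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: centrifugo-internal
spec:
  rules:
    - host: centrifugo-internal.example.com  # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: centrifugo-internal  # assumed internal service name
                port:
                  number: 9000
```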
Describe the bug
After deploying the Helm chart in DO Kubernetes, I can't access the admin panel, API, or any other service; I always get the message "404 page not found".
Version of Helm and Kubernetes:
Helm Version:
{
  "Version":"v3.9.0",
  "GitCommit":"7ceeda6c585217a19a1131663d8cd1f7d641b2a7",
  "GitTreeState":"clean",
  "GoVersion":"go1.17.5"
}
Kubernetes Version:
Client Version: version.Info
{
  "Major":"1",
  "Minor":"24",
  "GitVersion":"v1.24.0",
  "GitCommit":"4ce5a8954017644c5420bae81d72b09b735c21f0",
  "GitTreeState":"clean",
  "BuildDate":"2022-05-03T13:46:05Z",
  "GoVersion":"go1.18.1",
  "Compiler":"gc",
  "Platform":"windows/amd64"
}
Kustomize Version: v4.5.4
Server Version: version.Info
{
  "Major":"1",
  "Minor":"23",
  "GitVersion":"v1.23.9",
  "GitCommit":"c1de2d70269039fe55efb98e737d9a29f9155246",
  "GitTreeState":"clean",
  "BuildDate":"2022-07-13T14:19:57Z",
  "GoVersion":"go1.17.11",
  "Compiler":"gc",
  "Platform":"linux/amd64"
}
Which version of the chart:
9.0.0
What happened:
When accessing https://host.com:8000 I always get the message "404 page not found".
What you expected to happen:
Access https://host.com:8000 and load the admin UI.
How to reproduce it (as minimally and precisely as possible):
Deploy the latest Helm chart with ingress. I tried port forwarding and got the same error.
Anything else we need to know:
When I logged into the pod and ran "./centrifugo --health" I got:
2022-07-28 16:38:55 [FTL] ListenAndServe: listen tcp: address :tcp://10.245.143.84:8000: too many colons in address
Does the chart ignore the metrics.serviceMonitor.release option?
values.yaml:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: "kube-prometheus-stack"
    release: "kube-prometheus-stack"
helm install centrifugo -f values.yaml centrifugal/centrifugo --debug --dry-run > test.yaml
test.yaml:
kind: ServiceMonitor
metadata:
  name: centrifugo
  namespace: kube-prometheus-stack
  labels:
    helm.sh/chart: centrifugo-7.2.1
    app.kubernetes.io/name: centrifugo
    app.kubernetes.io/instance: centrifugo
    app.kubernetes.io/version: "3.1.0"
    app.kubernetes.io/managed-by: Helm
spec:
  endpoints:
    - port: internal
      interval: 30s
  selector:
    matchLabels:
      app.kubernetes.io/name: centrifugo
      app.kubernetes.io/instance: centrifugo
  namespaceSelector:
    matchNames:
      - default
Sorry if this is a misunderstanding of the docs, but I was expecting it here:
metadata:
  name: centrifugo
  namespace: kube-prometheus-stack
  labels:
    helm.sh/chart: centrifugo-7.2.1
    app.kubernetes.io/name: centrifugo
    app.kubernetes.io/instance: centrifugo
    app.kubernetes.io/version: "3.1.0"
    app.kubernetes.io/managed-by: Helm
    release: "kube-prometheus-stack"
It works with metrics.serviceMonitor.additionalLabels instead.
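For reference, the working configuration described above would place the label under additionalLabels, roughly like this:

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: "kube-prometheus-stack"
    additionalLabels:
      release: "kube-prometheus-stack"
```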
Centrifugo v3 should be released very soon. We need to adapt the Helm chart for it soon after release.
v3 status is tracked in centrifugal/centrifugo#447
Describe the bug
The config sections for Redis in the 'scaling with redis' section are no longer accurate (at least for the Redis Sentinel example).
An incorrect password-less Redis instance is created, and the config flags for Centrifugo v4.0.0 are incorrect for Redis in Sentinel mode.
Which version of the chart:
v9.0.0
What happened:
Followed the example 'scaling with redis' documentation, and Centrifugo would not connect to Redis.
What you expected to happen:
Following the example starts a minimal working centrifugo cluster
How to reproduce it (as minimally and precisely as possible):
values.yaml
config:
  engine: redis
  redis_master_name: <REDIS MASTER SET NAME>
  redis_sentinels: <REDIS SENTINELS SERVICE>
helm install centrifugo-ex centrifugal/centrifugo --version version --values values.yaml
Centrifugo will try to connect to a Redis instance on localhost, because these config keys are not used in v4.0.0, so it will fail to start.
Suggested Fix
Redis needs different flags for the password-less example given. Replace:
helm install redis bitnami/redis --set usePassword=false --set cluster.enabled=true --set sentinel.enabled=true
with
helm install redis bitnami/redis --set auth.enabled=false --set cluster.enabled=true --set sentinel.enabled=true
That launches a correct password-less Redis Sentinel cluster.
The config keys for Redis Sentinel are also incorrect; they are not recognised or used by Centrifugo.
Replace config.redis_master_name
with config.redis_sentinel_master_name
and config.redis_sentinels
with config.redis_sentinel_address
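Putting both key renames together, the values.yaml from the reproduction becomes (placeholders kept from the original):

```yaml
config:
  engine: redis
  redis_sentinel_master_name: <REDIS MASTER SET NAME>
  redis_sentinel_address: <REDIS SENTINELS SERVICE>
```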
Hi,
I am passing secrets using existingSecret. It looks like this:
admin_password=somepass
admin_secret=somesecret
api_key=somekey
token_hmac_secret_key=sometoken
What is expected? =) These 4 values in the container's env.
What actually happens? admin_password and admin_secret are passed correctly; api_key and token_hmac_secret_key are missing.
What's the root cause? The following if statements expect a value in .Values.secrets.apiKey and .Values.secrets.tokenHmacSecretKey, and the same applies to every subsequent secret:
https://github.com/centrifugal/helm-charts/blob/master/charts/centrifugo/templates/deployment.yaml#L73
https://github.com/centrifugal/helm-charts/blob/master/charts/centrifugo/templates/deployment.yaml#L80
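A sketch of how the guard could also accept an existing secret (the or-condition and the helper name are assumptions, not the chart's current code):

```yaml
{{- if or .Values.secrets.apiKey .Values.existingSecret }}
- name: CENTRIFUGO_API_KEY
  valueFrom:
    secretKeyRef:
      name: {{ include "centrifugo.secretName" . }}  # assumed helper name
      key: api_key
{{- end }}
```

The same change would apply to the token_hmac_secret_key block and any other secret-backed env vars.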
Describe the bug
The README file states that the existingSecret must contain the keys described "below", which are in camel case. However, the chart extracts secret values with snake-case keys.
Version of Helm and Kubernetes:
Helm Version:
$ helm version
version.BuildInfo{Version:"v3.9.0", GitCommit:"7ceeda6c585217a19a1131663d8cd1f7d641b2a7", GitTreeState:"clean", GoVersion:"go1.18.2"}
Kubernetes Version:
$ kubectl version
Client Version: v1.24.0
Kustomize Version: v4.5.4
Server Version: v1.21.12-eks-a64ea69
Which version of the chart: 7.4.2
What happened: Got an error with secret value fetching, since my keys were in camel case and the chart expects snake case.
What you expected to happen: I don't think the choice of case matters, as long as it is consistent. If camel case is used under the secrets property, camel case should be expected in the existingSecret as well.
How to reproduce it (as minimally and precisely as possible): Deploy the chart with an existingSecret that has the same keys as under the secrets property.
Anything else we need to know: Not really, thanks for the chart and making our lives easier. This is nitpicking, and from the container logs the solution was obvious!