ot-container-kit / helm-charts

A repository of Helm charts built with security and other best practices.

Home Page: https://ot-container-kit.github.io/helm-charts

Mustache 97.88% Smarty 2.12%
helm artifact cncf github kubernetes openshift redis redis-operator

helm-charts's Introduction

A Helm repository with a variety of charts that help people deploy stacks inside a Kubernetes cluster following security and other best practices. One of the main motivations for creating these charts is that anyone can deploy a stack or application inside a Kubernetes cluster without getting into the complexity.

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repo as follows:

helm repo add ot-helm https://ot-container-kit.github.io/helm-charts

You can then run helm search repo ot-helm to see the charts.

Helm Charts List

Currently supported Helm charts:

Prerequisites

  • Kubernetes >=1.15.X
  • Helm >=3.0.X

Installing Helm

Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.

To install Helm, refer to the Helm install guide and ensure that the helm binary is in the PATH of your shell.

Adding Repo

helm repo add ot-helm https://ot-container-kit.github.io/helm-charts

Please refer to the Quick Start guide if you wish to get running in just a few commands; otherwise, the Using Helm guide provides detailed instructions on how to use the Helm client to manage packages on your Kubernetes cluster.

Useful Helm Client Commands:

  • View available charts: helm search repo
  • Install a chart: helm install my-release ot-helm/<package-name>
  • Upgrade your application: helm upgrade

Contact Information

This project is managed by OpsTree Solutions. For any queries or suggestions, you can reach out to us at [email protected].

Join our Slack Channel: #redis-operator.

helm-charts's People

Contributors

adbucur, amkartashov, armujahid, ashwani-opstree, deepakgupta97, dm3ch, dpersson, estork09, genofire, iamabhishek-dubey, m4r1u2, olivier-st-pierre, raphaelzoellner, rishops, sadath-12, salleman33, sandy724, shubham-cmyk, tripathishikha1, udit-purplle, whzghb, xdvpser, youvegotmoxie


helm-charts's Issues

Redis-cluster-0.7.0 failed to enable ServiceMonitor

What happened?

# cat value.yaml
serviceMonitor:
  enabled: true

# helm upgrade redis-cluster ot-helm/redis-cluster -f value.yaml
Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(ServiceMonitor.spec.selector): unknown field "app" in com.coreos.monitoring.v1.ServiceMonitor.spec.selector, ValidationError(ServiceMonitor.spec.selector): unknown field "redis_setup_type" in com.coreos.monitoring.v1.ServiceMonitor.spec.selector, ValidationError(ServiceMonitor.spec.selector): unknown field "role" in com.coreos.monitoring.v1.ServiceMonitor.spec.selector]

Reason?

# helm template redis-cluster ot-helm/redis-cluster -f value.yaml
...
spec:
  selector:
    matchLabels:
    app: redis-cluster-follower
    redis_setup_type: follower
    role: follower
...
spec:
  selector:
    matchLabels:
    app: redis-cluster-leader
    redis_setup_type: leader
    role: leader
...

It should be:

...
spec:
  selector:
    matchLabels:
      app: redis-cluster-follower
      redis_setup_type: cluster
      role: follower
...
spec:
  selector:
    matchLabels:
      app: redis-cluster-leader
      redis_setup_type: cluster
      role: leader
...
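A chart-side fix would be to render the selector labels with proper indentation, e.g. via `nindent`. A minimal sketch (the helper name `redis-cluster.selectorLabels` is hypothetical, not necessarily the chart's actual helper):

```yaml
# templates/servicemonitor.yaml — hypothetical excerpt, not the chart's actual template
spec:
  selector:
    matchLabels:
      {{- include "redis-cluster.selectorLabels" . | nindent 6 }}
```

`nindent 6` prefixes every rendered label line with six spaces, so the labels land under matchLabels instead of beside it.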

Redis-Operator: CRDs not updated/installed when upgrading from 0.5.0 to newer versions

When upgrading the redis-operator from 0.5.0 to 0.6.0 the rediscluster CRD is not added.

Release "redis-operator" has been upgraded. Happy Helming!
NAME: redis-operator
LAST DEPLOYED: Thu Sep  2 12:08:33 2021
NAMESPACE: redis-operator
STATUS: deployed
REVISION: 2
TEST SUITE: None

The running operator pod:

$ kubectl describe pod redis-operator-6f7dd898d9-bhf92 -n redis-operator
Name:         redis-operator-6f7dd898d9-bhf92
Namespace:    redis-operator
Priority:     0
Node:         worker-3/10.100.0.44
Start Time:   Thu, 02 Sep 2021 12:14:13 +0200
Labels:       name=redis-operator
              pod-template-hash=6f7dd898d9
Annotations:  cni.projectcalico.org/podIP: 10.42.2.110/32
              cni.projectcalico.org/podIPs: 10.42.2.110/32
Status:       Running
IP:           10.42.2.110
IPs:
  IP:           10.42.2.110
Controlled By:  ReplicaSet/redis-operator-6f7dd898d9
Containers:
  redis-operator:
    Container ID:  docker://18689117ef5b777a7a19ae4037bfddc8ad99c74e48b15d90fd2ef75998e18730
    Image:         quay.io/opstree/redis-operator:v0.6.0
    Image ID:      docker-pullable://quay.io/opstree/redis-operator@sha256:c7d9f2f991818aeb2f6de1faafb08922fe88d2f4b3b0e4e023039d1c34f9f1d9
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --leader-elect
    State:          Running
      Started:      Thu, 02 Sep 2021 12:17:12 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Sep 2021 12:16:05 +0200
      Finished:     Thu, 02 Sep 2021 12:16:26 +0200
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:        100m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from redis-operator-token-cqjd2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  redis-operator-token-cqjd2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  redis-operator-token-cqjd2
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m5s                 default-scheduler  Successfully assigned redis-operator/redis-operator-6f7dd898d9-bhf92 to worker-3
  Normal   Pulled     3m3s                 kubelet            Successfully pulled image "quay.io/opstree/redis-operator:v0.6.0" in 1.252186747s
  Normal   Pulled     2m39s                kubelet            Successfully pulled image "quay.io/opstree/redis-operator:v0.6.0" in 1.249386139s
  Normal   Pulled     2m4s                 kubelet            Successfully pulled image "quay.io/opstree/redis-operator:v0.6.0" in 1.299940511s
  Normal   Started    73s (x4 over 3m2s)   kubelet            Started container redis-operator
  Normal   Pulled     73s                  kubelet            Successfully pulled image "quay.io/opstree/redis-operator:v0.6.0" in 1.231768981s
  Warning  BackOff    22s (x6 over 2m17s)  kubelet            Back-off restarting failed container
  Normal   Pulling    7s (x5 over 3m4s)    kubelet            Pulling image "quay.io/opstree/redis-operator:v0.6.0"
  Normal   Created    6s (x5 over 3m3s)    kubelet            Created container redis-operator
  Normal   Pulled     6s                   kubelet            Successfully pulled image "quay.io/opstree/redis-operator:v0.6.0" in 1.264089192s

Installed opstree CRDs:

$ kubectl get crd | grep "opstreelabs"
redis.redis.redis.opstreelabs.in                      2021-09-02T10:10:58Z

Same issue when trying to move directly to 0.7.0.

Maybe it's an issue with how Helm handles the crds folder? (Helm installs CRDs from crds/ only on initial install and does not touch them on upgrade.)

Helm version:

$ helm version --short
v3.6.3+gd506314

The only ways I got the CRDs were:

  1. Uninstalling the old version of the operator, including deleting the CRDs, and then installing the new version (not an option for clusters with running Redis instances).
  2. Installing the CRDs manually (not nice, but it seems to work).

Cannot connect to cluster using python redis client with password

Hi, I can log in to the cluster using redis-cli with a password.
But when using the Python client to log in, I get this error:

redis.exceptions.ResponseError: AUTH called without any password configured for the default user. Are you sure your configuration is correct?

I found this article saying that "requirepass" is not set in the redis.conf file.

I can fix this by using the YAML config file described here, but I'm wondering if we can do this via the helm chart?

Thank you
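If the chart forwards extra Redis configuration to the operator, a values sketch along these lines might work. This assumes an externalConfig block exists in the chart's values.yaml (mentioned in a later issue but undocumented), so verify the exact keys before relying on it:

```yaml
# value.yaml sketch — the externalConfig key names are assumptions
externalConfig:
  enabled: true
  data: |
    requirepass my-secret-password
```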

Failed to install redis standalone

When I try to install redis following the documentation, I get this error.

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Redis.spec.kubernetesConfig): unknown field "serviceType" in in.opstreelabs.redis.redis.v1beta1.Redis.spec.kubernetesConfig

The above error is caused by running the following command:

helm upgrade redis ot-helm/redis -f redis-values.yaml --install --namespace ot-operators

redis-operator chart v0.13 has error in CRDs

While trying to install redis-operator version 0.13.0, I receive the following error:

Helm install failed: failed to parse CRDs from crds/redis.yaml: error parsing : error converting YAML to JSON: yaml: line 1920: could not find expected ':'

redis-operator 0.14.0 uses 0.12.0 images

Hi Guys,

This may be as designed, yet I think it looks more like a typo: the redis-operator chart v0.14.0 has image tag v0.12.0.

Also seems like redis-sentinel images aren't public. Is that intentional?

Thank you for the great work you do and take care,
Atanas

Override services label selector or ignore helm related labels

As per this comment, I am opening this issue here: OT-CONTAINER-KIT/redis-operator#461 (comment)

I've written a helm chart which heavily relies on Bitnami's common helm chart for naming and selectors.

This includes the labels on the RedisCluster resource, which seem to be used on the Service resource as well.

The result is that the Service has a helm.sh/chart: redis-cluster-<chart-version> label.

Is your feature request related to a problem? Please describe.

Describe the solution you'd like
In an ideal world, I'd like to be able to override the complete set of labels used as the selector on the Service object via .spec.kubernetesConfig.service.labels of the RedisCluster resource.

Describe alternatives you've considered
Manually defining a template declaration for the labels to use, which would not include helm.sh/chart as a key.

What version of redis-operator are you using?
redis-operator version: v0.11.0; however, I checked the current CRD and it does not look like I can set labels manually on the Service object.

Additional context

appVersion in redis-X

Maybe the appVersion in e.g. redis-cluster should reflect the Redis image version (not the redis-operator version)...

Add CRD TLS options to helm charts

Hi, thanks for all the great work you're doing!

Would it be a good idea to add the TLS options that are available in the CRD to the helm charts as well?
TLS options in the CRD reference: https://ot-redis-operator.netlify.app/docs/crd-reference/redis-api/#tlsconfig

E.g. something like:

redisCluster:
  redisTLS:
    secretName: redis-tls-cert
    ca: ca.key
    cert: tls.crt
    key: tls.key

or

redisCluster:
  TLS:
    ca: ca.key
    cert: tls.crt
    key: tls.key
    secret:
      secretName: redis-tls-cert

TLS setup

Hi,

I want to add the TLS option while deploying redis-cluster and loading the certificates and the key from a secret.
Can you please add an example of how to set it up?

Thanks!!
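Based on the TLSConfig fields in the CRD reference (https://ot-redis-operator.netlify.app/docs/crd-reference/redis-api/#tlsconfig), a RedisCluster manifest sketch could look like the following; field names and casing should be verified against your operator's CRD version:

```yaml
# RedisCluster sketch — verify field names against your operator's CRD
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisCluster
metadata:
  name: redis-cluster
spec:
  TLS:
    ca: ca.key
    cert: tls.crt
    key: tls.key
    secret:
      secretName: redis-tls-cert
```

The secret (redis-tls-cert here) would hold the CA, certificate, and key, generated e.g. by cert-manager.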

Getting error in logging-operator.

Hi Team,

I am getting the following error in logging-operator while deploying in an EKS cluster.

1.6663438138635333e+09 INFO controller-runtime.metrics Metrics server is starting to listen {"addr": ":8080"}
1.6663438138642874e+09 INFO setup starting manager
1.666343813864724e+09 INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
1.6663438138647642e+09 INFO Starting server {"kind": "health probe", "addr": "[::]:8081"}
I1021 09:16:53.864836 1 leaderelection.go:248] attempting to acquire leader lease logging/c5f74071.logging.opstreelabs.in...
E1021 09:16:53.866937 1 leaderelection.go:330] error retrieving resource lock logging/c5f74071.logging.opstreelabs.in: configmaps "c5f74071.logging.opstreelabs.in" is forbidden: User "system:serviceaccount:logging:loggin-operator" cannot get resource "configmaps" in API group "" in the namespace "logging"
E1021 09:16:57.690997 1 leaderelection.go:330] error retrieving resource lock logging/c5f74071.logging.opstreelabs.in: configmaps "c5f74071.logging.opstreelabs.in" is forbidden: User "system:serviceaccount:logging:loggin-operator" cannot get resource "configmaps" in API group "" in the namespace "logging"
Please help me with this.

failed to install redis

Hi,
I am opening a new issue because the previous one (#26) has been closed; the problem is not fixed.
The redis-operator CRD doesn't contain serviceType, but the Redis template does! I don't see anything in recent commits about this problem...

Redis-Operator "controller.rediscluster Reconciler error"

Hi,

I have installed the latest redis-operator 0.10.0 via helm. After deploying a new redis cluster via helm, I received the following error in the operator, and only leader pods were created.

1.6565810734977188e+09 ERROR controller.rediscluster Reconciler error {"reconciler group": "redis.redis.opstreelabs.in", "reconciler kind": "RedisCluster", "name": "redis-cluster", "namespace": "default", "error": "poddisruptionbudgets.policy "redis-cluster-leader" is forbidden: User "system:serviceaccount:default:redis-operator" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default""}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227

Looks like the operator is missing the required RBAC permission (get on poddisruptionbudgets in the policy API group).

Pod restart failing when use redis-cluster

Hi

I am using your helm chart to set up a Redis cluster. We noticed two issues when a pod is restarted.

Scenario 1: PV is dynamically created

When the pod is restarted, it stays in Pending state because the earlier pod's PVC (redis-cluster-follower-0) is still attached to the dynamically created PV.
Solution: Manually edit the finalizers section of the PVC (redis-cluster-follower-0) so a new PV is created and attached to the PVC; then the pod comes up.
finalizers:
- kubernetes.io/pvc-protection

Scenario 2: PV created before installing the helm chart.
In this scenario, when the pod is restarted it comes up successfully without any issue, but the cluster is broken between the nodes and we see the error below in the operator logs.

2021-11-03T19:51:57.827Z INFO controller_redis Pod Counted successfully {"Request.RedisManager.Namespace": "dft-qa", "Request.RedisManager.Name": "dft-qa-redis-cluster", "Count": 0, "Container Name": "dft-qa-redis-cluster-leader"}
2021-11-03T19:52:17.636Z ERROR controller_redis Could not execute command {"Request.RedisManager.Namespace": "dft-qa", "Request.RedisManager.Name": "dft-qa-redis-cluster", "Command": ["redis-cli", "--cluster", "add-node", "100.67.1.166:6379", "100.67.1.165:6379", "--cluster-slave"], "Output": "", "Error": "", "error": "Internal error occurred: error executing command in container: container is not created or running"}

Solution: Manually delete all files (dump.rdb, appendonly.aof, nodes.conf) under the persistent volume and restart the pod.

Do you have any solution for this?

Thanks

Affinity not working?

I'm not sure if this works.

I get the following error when I specify it and try to upgrade the helm chart:

Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(RedisCluster.spec.redisFollower): unknown field "affinity" in in.opstreelabs.redis.redis.v1beta1.RedisCluster.spec.redisFollower, ValidationError(RedisCluster.spec.redisLeader): unknown field "affinity" in in.opstreelabs.redis.redis.v1beta1.RedisCluster.spec.redisLeader]

Implementation of ACL configfile

I can't find a way to integrate an external aclfile to configure users using the helm chart for a redis-cluster. Is there a way to integrate it? I thought about using the externalConfig parameter in values.yaml, but there's no documentation for this parameter.
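For reference, the ACL file itself is plain text with one user rule per line, enabled through the aclfile directive in redis.conf; the username and password below are placeholders:

```
# redis.conf — point Redis at the ACL file
aclfile /etc/redis/users.acl

# users.acl — one user rule per line
user app on >s3cretpass ~app:* +@read +@write
```

The open question in this issue is how to get such a file mounted and referenced through the chart.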

Redis operator is crashlooping after upgrade to v0.14.1 helm chart

I have a test setup with one cluster. I had to install version 0.12.0 for both the operator and the cluster nodes, because version 0.13 was not installable due to #64. I have kept the operator up to date, as the issue was with updating the cluster only. Now, after upgrading to 0.14.1 for the operator and 0.14.0 for the cluster chart, the cluster is running normally, but the operator is crashlooping.
here is the log:
redis-operator-f565f7b8b-cvxhf_redis-operator(2).log
I can see two types of errors:

  1. CRD issues with the new sentinel feature
  2. changes in the statefulset which should be fixable by recreating the sts

Redis cluster installation error when using redis-operator 0.10.2

Hi,

I've installed the latest redis-operator chart:

$ helm ls -aA

NAME                   	NAMESPACE        	REVISION	UPDATED                               	STATUS  	CHART                          	APP VERSION
redis-operator         	redis-operator   	1       	2022-04-27 13:14:30.963308 +0300 EEST 	deployed	redis-operator-0.10.2          	0.10.0

But the cluster installation command is failing then:

$ helm install test ot-helm/redis-cluster --set redisCluster.clusterSize=3 --namespace redis-operator

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(RedisCluster.spec.kubernetesConfig): missing required field "serviceType" in in.opstreelabs.redis.redis.v1beta1.RedisCluster.spec.kubernetesConfig

Maybe you can introduce some tests to ensure the new version is usable?

Unable to connect to Redis External Load Balancer Service

I have created an EKS cluster with the Managed Node Groups.

Recently, I have deployed Redis as an external Load Balancer service.
I am trying to set up an authenticated connection to it via NodeJS and Python microservices, but I am getting a connection timeout error.
However, I am able to enter into the deployed redis container and execute the redis commands.
Also, I was able to do the same when I deployed Redis on GKE.

Have I missed some network configurations to allow traffic from external resources?
The subnets which the EKS node is using are all public.
Also, while creating the Amazon EKS node role, I have attached 3 policies to this role as suggested in the doc -

  • AmazonEKSWorkerNodePolicy
  • AmazonEC2ContainerRegistryReadOnly
  • AmazonEKS_CNI_Policy

It was also mentioned that -
We recommend assigning the policy to the role associated to the Kubernetes service account instead of assigning it to this role.
Will attaching this to the Kubernetes service account, solve my problem ?
Also, here is the guide that I used for deploying redis -
https://ot-container-kit.github.io/redis-operator/guide/setup.html#redis-standalone

Failed to install redis chart

Hi,
I got this error :
Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Redis.spec.kubernetesConfig): unknown field "serviceType" in in.opstreelabs.redis.redis.v1beta1.Redis.spec.kubernetesConfig

Thanks

Default installation for redis-setup raise error validating data error

Hello, these are the commands I am trying:

kubectl create namespace redis-operator
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm repo update
helm upgrade redis-operator ot-helm/redis-operator --install --namespace redis-operator
helm upgrade redis-cluster ot-helm/redis-setup \
  --set setupMode="cluster" --set cluster.size=3 \
  --install --namespace redis-operator

The last one raises:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in Redis.metadata.labels.app.kubernetes.io/version

Please help

Timeline for release supporting v0.13.0

I was just curious as to the timeline for the helm chart to support v0.13.0 as there are some features we'd like to implement.

Is there a pre-release/devel branch anywhere that work can be tracked or leveraged?

redis-operator chart has broken CRDs

Redis-operator v0.10.0 has unresolved git conflicts in its CRD definitions (one example: https://github.com/OT-CONTAINER-KIT/helm-charts/blob/main/charts/redis-operator/crds/redis-cluster.yaml#L3527).

This causes errors during installation:

❯ helm upgrade redis-operator ot-helm/redis-operator --install --namespace redis-operator
Release "redis-operator" does not exist. Installing it now.
Error: failed to install CRD crds/redis-cluster.yaml: error parsing : error converting YAML to JSON: yaml: line 3428: could not find expected ':'

@iamabhishek-dubey FYI, I see that you are the author of the v0.10.0 commit in master, so maybe you would be interested.

Error: failed to download "ot-helm/redis-operator"

Following the documentation instructions:
helm upgrade redis-operator ot-helm/redis-operator --install --namespace redis-operator does not work, because it looks like the latest redis-operator-0.9.0 is not available as a release on GitHub. I can see Redis 0.9.0 but not Redis Operator.

Error: failed to fetch https://github.com/OT-CONTAINER-KIT/helm-charts/releases/download/redis-operator-0.9.0/redis-operator-0.9.0.tgz : 404 Not Found

Both redis-cluster and redis fail to install as documented.

Goal

Install Redis Operator to the GCP sandbox cluster and record steps.

Operator

Following Redis Operator Installation

Release "redis-operator" does not exist. Installing it now.
NAME: redis-operator
LAST DEPLOYED: Mon Jun 28 14:53:06 2021
NAMESPACE: redis-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None

Listing releases matching ^redis-operator$
redis-operator  redis-operator  1               2021-06-28 14:53:06.986523827 +0300 MSK deployed        redis-operator-0.5.0    0.5.0      


UPDATED RELEASES:
NAME             CHART                    VERSION
redis-operator   ot-helm/redis-operator     0.5.0

denis@L560:~/huma/phoenix-server-deployments/k8s/cluster$ kubectl create secret generic redis-secret \
>         --from-literal=password=password -n operator-sandbox
secret/redis-secret created
denis@L560:~/huma/phoenix-server-deployments/k8s/cluster$ helm upgrade redis-cluster ot-helm/redis-cluster \
>   --set redisCluster.clusterSize=3 --install --namespace operator-sandbox
Release "redis-cluster" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "RedisCluster" in version "redis.redis.opstreelabs.in/v1beta1"
denis@L560:~/huma/phoenix-server-deployments/k8s/cluster$ helm upgrade redis ot-helm/redis --install --namespace operator-sandbox
Release "redis" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Redis.spec): unknown field "kubernetesConfig" in in.opstreelabs.redis.redis.v1beta1.Redis.spec, ValidationError(Redis.spec): missing required field "global" in in.opstreelabs.redis.redis.v1beta1.Redis.spec, ValidationError(Redis.spec): missing required field "mode" in in.opstreelabs.redis.redis.v1beta1.Redis.spec, ValidationError(Redis.spec): missing required field "service" in in.opstreelabs.redis.redis.v1beta1.Redis.spec]

@iamabhishek-dubey hey I am stuck 👎
have you any clue for me?
Thanks in advance!

PDB is misdefined for redis cluster

When trying to create the default Redis cluster configuration as described here: https://ot-container-kit.github.io/redis-operator/guide/setup.html#redis-cluster, only the leaders get created, as the operator is attempting to create a PDB with both maxUnavailable and minAvailable set, and failing.

redis-operator-8488b858d6-5bchd redis-operator {"level":"error","ts":1666968931.3251204,"logger":"controller.rediscluster","msg":"Reconciler error","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisCluster","name":"redis-cluster","namespace":"kong","error":"PodDisruptionBudget.policy \"redis-cluster-leader\" is invalid: spec: Invalid value: policy.PodDisruptionBudgetSpec{MinAvailable:(*intstr.IntOrString)(0xc00743b800), Selector:(*v1.LabelSelector)(0xc00743b820), MaxUnavailable:(*intstr.IntOrString)(0xc00743b840)}: minAvailable and maxUnavailable cannot be both set","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}

Reference for only one of those two being allowed in a PDB: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
"You can specify only one of maxUnavailable and minAvailable in a single PodDisruptionBudget"
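For illustration, a valid PodDisruptionBudget sets only one of the two fields; a minimal sketch matching the names in the error above:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-cluster-leader
spec:
  maxUnavailable: 1   # set either maxUnavailable or minAvailable, never both
  selector:
    matchLabels:
      app: redis-cluster-leader
```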

cluster ends up attempting to form with addresses on k8s and fails in 0.12 charts, but works in 0.10 charts

What am I doing: installing redis-operator/redis-cluster on an on-prem rke2 cluster.
The earlier v0.10 setup works fine; it is able to find and successfully use pod IP addresses.
The current 0.12 charts have this problem. I wanted to use 0.12 for the promised ease of setting persistenceEnabled = no.

BTW, how do I turn persistence off in the 0.10 charts? Is it not supported? Is there another way to affect what ends up in the redis.conf files?

cluster formation fails with this error:

{
  "level": "error",
  "ts": 1670004561.9619787,
  "logger": "controller_redis",
  "msg": "Could not execute command",
  "Request.RedisManager.Namespace": "redis12",
  "Request.RedisManager.Name": "redis-cluster",
  "Command": [
    "redis-cli",
    "--cluster",
    "create",
    "redis-cluster-leader-0.redis-cluster-leader-headless.redis12.svc:6379",
    "redis-cluster-leader-1.redis-cluster-leader-headless.redis12.svc:6379",
    "redis-cluster-leader-2.redis-cluster-leader-headless.redis12.svc:6379",
    "--cluster-yes"
  ],
  "Output": "",
  "Error": "Could not connect to Redis at redis-cluster-leader-2.redis-cluster-leader-headless.redis12.svc:6379: Name does not resolve\n",
  "error": "command terminated with exit code 1",
  "stacktrace": "redis-operator/k8sutils.ExecuteRedisClusterCommand
      /workspace/k8sutils/redis.go:116
    redis-operator/controllers.(*RedisClusterReconciler).Reconcile
      /workspace/controllers/rediscluster_controller.go:123
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
      /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
      /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
      /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266
    sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
      /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"
}

Helm chart for Redis standalone deployed but nothing happens

I have deployed a new Redis standalone using this chart, and we are using a custom Redis image from here: https://hub.docker.com/r/redislabs/redismod

We want most of the Redis modules installed together, so we are using the above custom Redis image.

After deployment we get a success message:

NAME: redis
LAST DEPLOYED: Fri Mar 18 12:49:40 2022
NAMESPACE: ai-db
STATUS: deployed
REVISION: 1
TEST SUITE: None

But nothing happens: no new deployments or pods get created.

helm ls lists the release :

NAME          	NAMESPACE	REVISION	UPDATED                               	STATUS  	CHART                	APP VERSION
redis         	ai-db    	1       	2022-03-18 12:49:40.886558 +0530 IST  	deployed	redis-0.9.0          	0.9.0

Any idea what is happening with deployment?

redisCluster.leaderServiceType should maybe be redisCluster.leader.serviceType

redisCluster.leaderServiceType ---> redisCluster.leader.serviceType

---
redisCluster:
  clusterSize: 3
  image: quay.io/opstree/redis
  tag: v6.2.5
  imagePullPolicy: IfNotPresent
  # redisSecret:
  #   secretName: redis-secret
  #   secretKey: password
  leader:
    replicas: 3
    serviceType: ClusterIP

And I have tried both --set redisCluster.leaderServiceType and --set redisCluster.leader.serviceType, and the service type is still ClusterIP:

root@node01:/var/lib/sealos/data/default/rootfs# kubectl -n redis-operator  get svc
NAME                              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
redis-cluster-follower            ClusterIP   10.96.0.207   <none>        6379/TCP,9121/TCP   9m51s
redis-cluster-follower-headless   ClusterIP   None          <none>        6379/TCP            9m51s
redis-cluster-leader              ClusterIP   10.96.0.50    <none>        6379/TCP,9121/TCP   9m51s
redis-cluster-leader-headless     ClusterIP   None          <none>        6379/TCP            9m51s

Failed to install redis chart

  Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Redis.spec.kubernetesConfig): unknown field "serviceType" in in.opstreelabs.redis.redis.v1beta1.Redis.spec.kubernetesConfig

Redis-Operator 0.10.2 Error

redis-operator:0.10.2
redis-cluster:0.10.0
When I install redis-cluster, this error occurs:

ERROR   controller-runtime.manager.controller.rediscluster  Reconciler error        {"reconciler group": "redis.redis.opstreelabs.in", "reconciler kind": "RedisCluster", "name": "redis-cluster", "namespace": "redis", "error": "Service \"redis-cluster-leader-headless\" is invalid: [spec.ipFamily: Invalid value: \"null\": field is immutable, spec.ipFamily: Required value]"}

What do I need to do about this error?

I get unknown field "selector" when uncommenting the storageSpec

Hi,

I am getting the following error:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Redis.spec.storage.volumeClaimTemplate): 
unknown field "selector" in in.opstreelabs.redis.redis.v1beta1.Redis.spec.storage.volumeClaimTemplate

when I run: helm upgrade --install redis ot-helm/redis-setup --dry-run --namespace redis-operator -f ./values.yaml

Here is my slightly modified values.yaml; I am just trying to enable storageSpec:

---
name: redis-cluster

setupMode: standalone

cluster: {}

global:
  image: quay.io/opstree/redis
  tag: v2.0
  imagePullPolicy: IfNotPresent
  password: "Opstree@1234"
  resources:
    {}
    # requests:
    #   cpu: 100m
    #   memory: 128Mi
    # limits:
    #   cpu: 100m
    #   memory: 128Mi

exporter:
  enabled: true
  image: quay.io/opstree/redis-exporter
  tag: "2.0"
  imagePullPolicy: IfNotPresent
  resources:
    {}
    # requests:
    #   cpu: 100m
    #   memory: 128Mi
    # limits:
    #   cpu: 100m
    #   memory: 128Mi

# priorityClassName: "-"

nodeSelector:
  {}
  # memory: medium

storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: standard
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    selector: {}

securityContext:
  {}
  # runAsUser: 1000

affinity:
  {}
  # nodeAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #     - matchExpressions:
  #       - key: disktype
  #         operator: In
  #         values:
  #         - ssd

Thank you for your charts!

Support multi-arch images

Hi 🙂

Sorry if this isn't the right repo for this type of issue. Hoping it is 🤞

I noticed that for the redis-operator images you're building latest for amd64, then building separate images for arm64 support. Unless I'm mistaken, a better way to build for multiple architectures is to tag them the same and upload the multiple variants to the same registry under the same tag (a multi-arch manifest). Most registries will handle this automatically.

For an example, see the cert-manager images:

(screenshot: cert-manager image tags on the registry, each tag covering multiple architectures)

Each of their images supports 5 different architectures for the same tag, which means I can tell Karpenter to use either amd64 or arm64 for the same image tag and not run into problems.

Would you be open to implementing this for the redis-operator and redis images?
