
Comments (78)

sreedharbukya avatar sreedharbukya commented on May 29, 2024 39

I got the same issue. Please check the following first. I was not even able to list the release with the usual command

helm list -n <name-space>

This returned an empty list, which is odd behavior from Helm.

kubectl config get-contexts

Make sure your context is set to the correct Kubernetes cluster.

Then the next step is

helm history <release> -n <name-space> --kube-context <kube-context-name>

Then apply the rollback based on the revision from the command above.

helm rollback <release> <revision> -n <name-space> --kube-context <kube-context-name>

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024 33

Yes.

The helm-controller is scheduled to go through the same refactoring round as the source-controller recently did, in which the reconciliation logic in the broadest sense will be improved and long-standing issues will be taken care of. I expect to start on this at the beginning of next week.

from helm-controller.

monotek avatar monotek commented on May 29, 2024 19

Use the following command to list charts in all namespaces, including the ones where an installation is in progress.

helm list -Aa

from helm-controller.

alex-berger avatar alex-berger commented on May 29, 2024 9

This problem still persists. It is very annoying that we cannot rely on GitOps to eventually converge a cluster to the expected state, as it gets stuck on affected HelmRelease objects, which remain in Helm upgrade failed: another operation (install/upgrade/rollback) is in progress. To phrase it in other words, a FluxCD-based GitOps setup with HelmReleases is not self-healing and needs a lot of (unpredictable) manual intervention, running helm rollback ... && flux reconcile hr ... commands in order to fix things.

Is there anything that prevents us from adding a new feature to the helm-controller to detect stuck (locked) HelmReleases and automatically fix them by rolling them back, immediately followed by a reconciliation?
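For reference, a rough sketch of the manual intervention described above (release, revision and namespace are placeholders):

helm history <release> -n <namespace>              # find the last revision whose status is "deployed"
helm rollback <release> <revision> -n <namespace>  # clears the stuck pending-* state
flux reconcile hr <release> -n <namespace>         # let the helm-controller retry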

from helm-controller.

sbernheim avatar sbernheim commented on May 29, 2024 5

I'm adding this comment to restate/clarify the remaining problem so it will hopefully be easier to identify and resolve.

Helm obtains a mutex lock on the chart install, so any HelmRelease resources under reconciliation at the moment the Helm Controller crashes (or runs out of memory, or the node on which it is running crashes) will get stuck in a deadlocked PENDING state, as nothing will subsequently remove the lock. When the next Helm Controller pod starts, it attempts to reconcile the HelmRelease for the deadlocked chart and encounters the Helm upgrade failed: another operation (install/upgrade/rollback) is in progress error.

See helm/helm#4558 and helm/helm#9180 for more details about the Helm mutex lock issue.

From @hiddeco's earlier comment - assuming that it will be some time before this is fixed in Helm itself, the likely workaround within Helm Controller would be:

Detect the pending state in the controller, assume that we are the sole actors over the release (and can safely ignore the pessimistic lock), and fall back to the configured remediation strategy for the release to attempt to perform an automatic rollback (or uninstall).

@stefanprodan's PR #239 mitigates a crash condition that would leave releases stuck in the PENDING state, but does not provide a resolution path for those releases.
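As a side note, a quick way to confirm a release is stuck in this state (assuming the default Secrets storage backend; release and namespace names are placeholders) is to check the status Helm has recorded:

helm status <release> -n <namespace>   # STATUS shows pending-install / pending-upgrade / pending-rollback
kubectl get secrets -n <namespace> -l owner=helm,name=<release> \
  -o custom-columns=NAME:.metadata.name,STATUS:.metadata.labels.status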

from helm-controller.

rustrial avatar rustrial commented on May 29, 2024 4

Same here, and we constantly have to manually apply helm rollback ... && flux reconcile ... to fix it. What about adding a flag to HelmRelease to opt in to a self-healing approach, where the helm-controller would recognise HelmReleases in this state and automatically apply a rollback to them?

from helm-controller.

mjnagel avatar mjnagel commented on May 29, 2024 4

We've experienced issues where some of our releases get stuck on:

"Helm upgrade failed: another operation (install/upgrade/rollback) is in progress"

@sharkztex this is the same problem I commonly see. The workarounds I know of are:

# example w/ kiali
HR_NAME=kiali
HR_NAMESPACE=kiali
kubectl get secrets -n ${HR_NAMESPACE} | grep ${HR_NAME}
# example output:
sh.helm.release.v1.kiali.v1                                       helm.sh/release.v1                    1      18h
sh.helm.release.v1.kiali.v2                                       helm.sh/release.v1                    1      17h
sh.helm.release.v1.kiali.v3                                       helm.sh/release.v1                    1      17m
# Delete the most recent one:
kubectl delete secret -n ${HR_NAMESPACE} sh.helm.release.v1.${HR_NAME}.v3

# suspend/resume the hr
flux suspend hr -n ${HR_NAMESPACE} ${HR_NAME}
flux resume hr -n ${HR_NAMESPACE} ${HR_NAME}

Alternatively you can use helm rollback:

HR_NAME=kiali
HR_NAMESPACE=kiali

# Run a helm history command to get the latest release before the issue (should show deployed)
helm history ${HR_NAME} -n ${HR_NAMESPACE} 
# Use that revision in this command
helm rollback ${HR_NAME} <revision> -n ${HR_NAMESPACE} 
flux reconcile hr bigbang -n ${HR_NAMESPACE} 

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024 2
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
spec:
  releaseName: kube-prometheus-stack
  chart:
    spec:
      chart: kube-prometheus-stack
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
      version: "14.0.1"
  interval: 1h0m0s
  timeout: 30m
  install:
    remediation:
      retries: 3
  values:
    kubeStateMetrics:
      enabled: false

We've disabled kube-state-metrics on the kube-prometheus-stack chart, but got the same result.

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024 2

@niclarkin thank you so much for testing this. After running my own tests last night on several clusters I have bumped the default deadline to 30 seconds; my tests were focused on CNI upgrades and API network failures. I've also changed the flag names. I'll add these flags to all toolkit controllers and the new defaults will be available in the next flux release.

from helm-controller.

monotek avatar monotek commented on May 29, 2024 2

I've re-set up my whole fluxcd repo, inspired by the helm example repo (https://github.com/fluxcd/flux2-kustomize-helm-example).

I now have several kustomizations for infra tools, google config connector, cert-manager and monitoring charts, which depend on each other and also make use of health checks. My monitoring namespace charts have some helm dependencies, so the other monitoring charts are installed after the kube-prometheus-stack chart, hoping this would lower the pressure on the k8s API.

Nevertheless, the k8s API dies shortly after kube-prometheus-stack is installed, the kustomize-controller is restarted, and the HelmRelease stays forever in this state:

k -n monitoring get helmreleases.helm.toolkit.fluxcd.io 
NAME                           READY   STATUS                                                                             AGE
kube-prometheus-stack          False   Helm upgrade failed: another operation (install/upgrade/rollback) is in progress   106m
prometheus-blackbox-exporter   False   dependency 'monitoring/kube-prometheus-stack' is not ready                         106m
prometheus-mysql-exporter      False   HelmChart 'infra/monitoring-prometheus-mysql-exporter' is not ready                106m
prometheus-postgres-exporter   False   dependency 'monitoring/kube-prometheus-stack' is not ready                         106m

Therefore the kustomize health check, testing for the kube-prometheus-stack HelmRelease to be ready, is also stuck.

 k get kustomizations.kustomize.toolkit.fluxcd.io 
NAME           READY     STATUS                                                            AGE
cert-manager   True      Applied revision: main/1e59bc6d254a87ecb3b9f1e273840054603b8bd9   120m
cnrm-system    True      Applied revision: main/1e59bc6d254a87ecb3b9f1e273840054603b8bd9   120m
flux-system    True      Applied revision: main/1e59bc6d254a87ecb3b9f1e273840054603b8bd9   120m
infra          True      Applied revision: main/1e59bc6d254a87ecb3b9f1e273840054603b8bd9   120m
monitoring     Unknown   reconciliation in progress                                        120m
remote-apps    False     dependency 'flux-system/monitoring' is not ready                  120m
tls-cert       True      Applied revision: main/1e59bc6d254a87ecb3b9f1e273840054603b8bd9   120m

So again it does not look like a kustomize-controller problem, but it would still be nice if we could recover from that without doing something by hand.

Deleting the HelmRelease and running flux reconcile kustomization monitoring solved it.

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024 2

@marcocaberletti we fixed the OOM issues, we'll do a flux release today. Please see: https://github.com/fluxcd/helm-controller/blob/main/CHANGELOG.md#0122

from helm-controller.

Cobesz avatar Cobesz commented on May 29, 2024 2

I got the same issue. Please check the following first. I was not even able to list the release with the usual command

helm list -n <name-space>

This returned an empty list, which is odd behavior from Helm.

kubectl config get-contexts

Make sure your context is set to the correct Kubernetes cluster.

Then the next step is

helm history <release> -n <name-space> --kube-context <kube-context-name>

Then apply the rollback based on the revision from the command above.

helm rollback <release> <revision> -n <name-space> --kube-context <kube-context-name>

This saved my Christmas Eve dinner, thank you so much!

from helm-controller.

davidkarlsen avatar davidkarlsen commented on May 29, 2024 1

Try k describe helmreleases <therelease> and look at the events. In my case I believe it was caused by:

Events:
  Type    Reason  Age                  From             Message
  ----    ------  ----                 ----             -------
  Normal  info    47m (x3 over 47m)    helm-controller  HelmChart 'flux-system/postgres-operator-postgres-operator' is not ready
  Normal  error   26m (x4 over 42m)    helm-controller  reconciliation failed: Helm upgrade failed: timed out waiting for the condition

I did a helm upgrade by hand, and then it reconciled in flux too.

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024 1

Same here; from what I could see, the flux controllers are crashing while reconciling the HelmReleases and the charts stay in pending status.

❯ helm list -Aa
NAME              	NAMESPACE   	REVISION	UPDATED                                	STATUS         	CHART                       	APP VERSION
flagger           	istio-system	1       	2021-03-10 20:53:41.632527436 +0000 UTC	deployed       	flagger-1.6.4               	1.6.4
flagger-loadtester	istio-system	1       	2021-03-10 20:53:41.523101293 +0000 UTC	deployed       	loadtester-0.18.0           	0.18.0
istio-operator    	istio-system	1       	2021-03-10 20:54:52.180338043 +0000 UTC	deployed       	istio-operator-1.7.0
loki              	monitoring  	1       	2021-03-10 20:53:42.29377712 +0000 UTC 	pending-install	loki-distributed-0.26.0     	2.1.0
prometheus-adapter	monitoring  	1       	2021-03-10 20:53:50.218395164 +0000 UTC	pending-install	prometheus-adapter-2.12.1   	v0.8.3
prometheus-stack  	monitoring  	1       	2021-03-10 21:08:35.889548922 +0000 UTC	pending-install	kube-prometheus-stack-14.0.1	0.46.0
tempo             	monitoring  	1       	2021-03-10 20:53:42.279556436 +0000 UTC	pending-install	tempo-distributed-0.8.5     	0.6.0

And the helm releases:

Every 5.0s: kubectl get helmrelease -n monitoring                                                                                                                                    tardis.Home: Wed Mar 10 21:14:39 2021

NAME                 READY   STATUS                                                                             AGE
loki                 False   Helm upgrade failed: another operation (install/upgrade/rollback) is in progress   20m
prometheus-adapter   False   Helm upgrade failed: another operation (install/upgrade/rollback) is in progress   20m
prometheus-stack     False   Helm upgrade failed: another operation (install/upgrade/rollback) is in progress   16m
tempo                False   Helm upgrade failed: another operation (install/upgrade/rollback) is in progress   20m

After deleting a helmrelease so that it can be recreated, the kustomize-controller crashes:

kustomize-controller-689774778b-rqhsq manager E0310 21:17:29.520573       6 leaderelection.go:361] Failed to update lock: Put "https://10.0.0.1:443/apis/coordination.k8s.io/v1/namespaces/flux-system/leases/7593cc5d.fluxcd.io": context deadline exceeded
kustomize-controller-689774778b-rqhsq manager I0310 21:17:29.520663       6 leaderelection.go:278] failed to renew lease flux-system/7593cc5d.fluxcd.io: timed out waiting for the condition
kustomize-controller-689774778b-rqhsq manager {"level":"error","ts":"2021-03-10T21:17:29.520Z","logger":"setup","msg":"problem running manager","error":"leader election lost"}

helm uninstall for the pending-install releases seems to solve the problem sometimes, but most of the time the controllers are still crashing:

helm-controller-75bcfd86db-4mj8s manager E0310 22:20:31.375402       6 leaderelection.go:361] Failed to update lock: Put "https://10.0.0.1:443/apis/coordination.k8s.io/v1/namespaces/flux-system/leases/5b6ca942.fluxcd.io": context deadline exceeded
helm-controller-75bcfd86db-4mj8s manager I0310 22:20:31.375495       6 leaderelection.go:278] failed to renew lease flux-system/5b6ca942.fluxcd.io: timed out waiting for the condition
helm-controller-75bcfd86db-4mj8s manager {"level":"error","ts":"2021-03-10T22:20:31.375Z","logger":"setup","msg":"problem running manager","error":"leader election lost"}
- helm-controller-75bcfd86db-4mj8s › manager
+ helm-controller-75bcfd86db-4mj8s › manager
helm-controller-75bcfd86db-4mj8s manager {"level":"info","ts":"2021-03-10T22:20:41.976Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
helm-controller-75bcfd86db-4mj8s manager {"level":"info","ts":"2021-03-10T22:20:41.977Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
helm-controller-75bcfd86db-4mj8s manager {"level":"info","ts":"2021-03-10T22:20:41.977Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
helm-controller-75bcfd86db-4mj8s manager {"level":"info","ts":"2021-03-10T22:20:41.977Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
helm-controller-75bcfd86db-4mj8s manager {"level":"info","ts":"2021-03-10T22:20:41.977Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
helm-controller-75bcfd86db-4mj8s manager {"level":"info","ts":"2021-03-10T22:20:41.977Z","logger":"setup","msg":"starting manager"}
helm-controller-75bcfd86db-4mj8s manager I0310 22:20:41.977697       7 leaderelection.go:243] attempting to acquire leader lease flux-system/5b6ca942.fluxcd.io...
helm-controller-75bcfd86db-4mj8s manager {"level":"info","ts":"2021-03-10T22:20:41.977Z","msg":"starting metrics server","path":"/metrics"}
helm-controller-75bcfd86db-4mj8s manager I0310 22:21:12.049163       7 leaderelection.go:253] successfully acquired lease flux-system/5b6ca942.

from helm-controller.

pjastrzabek avatar pjastrzabek commented on May 29, 2024 1

For all folks who do not experience helm-controller crashes: could you try adding a bigger timeout to the HelmRelease?
timeout: 30m

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024 1

For the others in this issue:

The problem you are all running into has been around in Helm for a while, and is most of the time related to Helm not properly restoring/updating the release state after timeouts that may happen during the rollout of a release.

The reason you are seeing this more frequently compared to earlier versions of Helm is due to the introduction of helm/helm#7322 in v3.4.x.

There are three options that would eventually resolve this for you all:

  1. Rely on it being fixed in Helm core; an attempt is being made in helm/helm#9180, but it will likely take some time before there is consensus there about what the actual fix would look like.
  2. Detect the pending state in the controller, assume that we are the sole actors over the release (and can safely ignore the pessimistic lock), and fall back to the configured remediation strategy for the release to attempt to perform an automatic rollback (or uninstall).
  3. Patch the Helm core, as others have done in e.g. werf/3p-helm@ea7631b, so that it is suited to our needs. I am however not a big fan of maintaining forks, and much more in favor of helping fix it upstream.

Until we have opted for one of those options (likely option 2), and provided your issue isn't due to the controller(s) crashing, you may want to set your timeout to a more generous value as already suggested in #149 (comment). That should minimize the chances of running into helm/helm#4558.
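For anyone unsure where that timeout lives: it is spec.timeout on the HelmRelease (as in the manifests shared earlier in this thread). As a hypothetical one-off test you could also patch it directly, although in a GitOps setup the change belongs in Git:

kubectl patch helmrelease <release> -n <namespace> --type=merge -p '{"spec":{"timeout":"30m"}}'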

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024 1

Not sure if it's of interest, and sorry for the spam, but it might give a clue about what is happening.

I updated the cluster from 1.18.14 to 1.19.7, added a new node to have more resources, and killed all pods in kube-system so that tunnelfront was also restarted (it had already been restarted when updating the cluster).

I was able to install the helm release, but the controllers crashed just the same, leaving it in pending-install again.
Installing it manually with ❯ helm install manual-prometheus -n monitoring prometheus-community/kube-prometheus-stack worked just fine.

❯ helm list -Aa
NAME                   	NAMESPACE   	REVISION	UPDATED                                	STATUS         	CHART                       	APP VERSION
manual-prometheus      	monitoring  	1       	2021-03-12 09:53:43.33732 +0000 UTC    	deployed       	kube-prometheus-stack-14.0.1	0.46.0
kube-prometheus-stack  	monitoring  	1       	2021-03-12 09:32:23.673133881 +0000 UTC	pending-install	kube-prometheus-stack-14.0.1	0.46.0

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024 1

@mfamador @florinherbert note that kube-prometheus-stack comes with an exporter called kube-state-metrics that runs tons of queries against the API. My guess is that it DDOSes the Kubernetes API in such a way that both AKS and kubeadm control planes are crashing 🙃

from helm-controller.

artem-nefedov avatar artem-nefedov commented on May 29, 2024 1

Started to run into this with flux 0.19.1. The only thing that seems to work as a fix is manually deleting the helmrelease.
My setup is: Kustomization (interval: 2m0s) creates -> HelmRelease (interval: 5m0s, infinite retries).

from helm-controller.

marcocaberletti avatar marcocaberletti commented on May 29, 2024 1

Same issue for me after the upgrade to flux v0.21.0, on 11/02 in the plot below.
I notice a huge increase in memory usage for the helm-controller, so the pod is often OOM-killed and Helm releases stay in pending-upgrade status.

[plot: helm-controller memory usage]

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024 1

Helm itself places a lock when it starts an upgrade; if you kill Helm while it is doing so, it leaves the lock in place, preventing any further upgrade operations. Doing a rollback is very expensive and can have grave consequences for charts with StatefulSets or charts that contain hooks which perform DB migrations and other state-altering operations. We'll need to find a way to remove the lock without affecting the deployed workloads.

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

Is this happening so often that it would be possible to enable the --log-level=debug flag for a while so we get better insight into what exactly Helm does?
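For anyone willing to try: assuming a default flux-system install, the flag can be changed on the controller Deployment:

kubectl -n flux-system edit deployment helm-controller
# in the container args, change:
#   - --log-level=info
# to:
#   - --log-level=debug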

from helm-controller.

seaneagan avatar seaneagan commented on May 29, 2024

Not sure if it's related, but one potential source of releases getting stuck in pending-* would be non-graceful termination of the controller pods while a release action (install/upgrade/rollback) is in progress. I see that controller-runtime has some support for that; not sure if we need to do anything to integrate with or test that, but it seems like at least lengthening the default termination grace period (currently 10 seconds) may make sense.
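As a rough sketch (the flux-system namespace and the 600-second value are assumptions, not a recommendation), the grace period could be raised with:

kubectl -n flux-system patch deployment helm-controller --type=merge \
  -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":600}}}}'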

from helm-controller.

seaneagan avatar seaneagan commented on May 29, 2024

I also think it would be useful if Helm separated the deployment status from the wait status, and allowed running the wait as standalone functionality, and thus recovery from waits that failed or were interrupted. I'll try to get an issue created for that.

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

Not sure if it's related, but one potential source of releases getting stuck in pending-* would be non-graceful termination of the controller pods while a release action (install/upgrade/rollback) is in progress.

Based on feedback from another user, it does not seem to be related to pod restarts all the time, but I am still waiting on logs to confirm this. I tried to build in some behavior to detect a "stuck version" in #166.

Technically, without Helm offering full support for a context that can be cancelled, the graceful shutdown period would always require a configuration value equal to the highest timeout a HelmRelease has. I tried to advocate for this (the context support) in helm/helm#7958, but due to the implementation difficulties this never got off the ground and ended up as a request to create a HIP.

from helm-controller.

Athosone avatar Athosone commented on May 29, 2024

We got the same issue at the company I work for.
We created a small tool that does helm operations. The issue occurs when the tool updates itself: there is a race condition, and if the old pod dies before helm is able to update the status of the release, we end up in the exact same state.

We discovered fluxcd and we wanted to use it.
I wonder how Flux handles this?

from helm-controller.

brianpham avatar brianpham commented on May 29, 2024

Running into this same issue when updating datadog in one of our clusters. Any suggestions on how to handle this?

{"level":"error","ts":"2021-02-01T18:54:59.609Z","logger":"controller.helmrelease","msg":"Reconciler error","reconciler group":"helm.toolkit.fluxcd.io","reconciler kind":"HelmRelease","name":"datadog","namespace":"datadog","error":"Helm upgrade failed: another operation (install/upgrade/rollback) is in progress"}

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

Can you provide additional information on the state of the release (as provided by helm), and what happened during the upgrade attempt (did for example the controller pod restart)?

from helm-controller.

brianpham avatar brianpham commented on May 29, 2024

The controller pod did not restart. I just see a bunch of the errors above in the helm-controller log messages.

I did notice this, though, when digging into the history a bit: it tried upgrading the chart on the 26th and failed, which was probably when I saw the error message that another operation was in progress.

➜  git:(main) helm history datadog --kube-context -n datadog                
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION      
1               Fri Jan 22 23:19:33 2021        superseded      datadog-2.6.12  7               Install complete 
2               Fri Jan 22 23:29:34 2021        deployed        datadog-2.6.12  7               Upgrade complete 
3               Tue Jan 26 04:13:46 2021        pending-upgrade datadog-2.6.13  7               Preparing upgrade

I was able to do a rollback to revision 2 and then ran the reconcile, and it seems to have gone through just now.

from helm-controller.

monotek avatar monotek commented on May 29, 2024

I see this constantly on GKE at the moment.

Especially if I try to recreate a cluster from scratch.
All the flux pods are dying constantly because the k8s API can't be reached (kubectl connections are refused too).

{"level":"error","ts":"2021-02-22T14:16:27.377Z","logger":"setup","msg":"problem running manager","error":"leader election lost"}

The helm-controller therefore also can't reach the source-controller:

Events:
  Type    Reason  Age                   From             Message
  ----    ------  ----                  ----             -------
  Normal  info    37m (x2 over 37m)     helm-controller  HelmChart 'infra/monitoring-kube-prometheus-stack' is not ready
  Normal  error   32m (x12 over 33m)    helm-controller  reconciliation failed: Helm upgrade failed: another operation (install/upgrade/rollback) is in progress
  Normal  error   32m (x13 over 33m)    helm-controller  Helm upgrade failed: another operation (install/upgrade/rollback) is in progress
  Normal  error   28m (x3 over 28m)     helm-controller  Helm upgrade failed: another operation (install/upgrade/rollback) is in progress
  Normal  error   28m (x3 over 28m)     helm-controller  reconciliation failed: Helm upgrade failed: another operation (install/upgrade/rollback) is in progress
  Normal  error   25m                   helm-controller  Get "http://source-controller.flux-system.svc.cluster.local./helmchart/infra/monitoring-kube-prometheus-stack/kube-prometheus-stack-13.5.0.tgz": dial tcp 10.83.240.158:80: connect: connection refused
  Normal  error   16m (x12 over 16m)    helm-controller  reconciliation failed: Helm upgrade failed: another operation (install/upgrade/rollback) is in progress
  Normal  error   5m50s (x18 over 17m)  helm-controller  Helm upgrade failed: another operation (install/upgrade/rollback) is in progress
  Normal  info    33s                   helm-controller  HelmChart 'infra/monitoring-kube-prometheus-stack' is not ready

Not sure if flux is the cause, flooding the k8s API until some limits are reached?
I'm trying another master version (from 1.18.14-gke.1600 to 1.18.15-gke.1500) now. Let's see if it helps.
Edit: the update did not help

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

@monotek can you try setting the --concurrent flag on the helm-controller to a lower value (e.g. 2)?
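For anyone else wanting to try this, assuming the default flux-system install:

kubectl -n flux-system edit deployment helm-controller
# add to the container args:
#   - --concurrent=2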

from helm-controller.

monotek avatar monotek commented on May 29, 2024

I'm using the fluxcd terraform provider.
Does it support altering this value?

My pod args look like:

  - args:
    - --events-addr=http://notification-controller/
    - --watch-all-namespaces=true
    - --log-level=info
    - --log-encoding=json
    - --enable-leader-election

So I guess the default value of 4 is used?

I've changed it via "kubectl edit deploy" for now.
Should I do this for the other controllers too?

The cluster has kind of settled, as there have been no new fluxcd pod restarts today.
The last installation of a helm chart worked flawlessly, even without setting the value.

I'll give feedback on whether adjusting the value helps if we get an unstable master API again.

Thanks for your help :)

from helm-controller.

zkl94 avatar zkl94 commented on May 29, 2024

This is also happening in flux2. It seems to be the same problem and it is happening very frequently. I have to delete these failed helmreleases to recreate them, and sometimes the recreation doesn't even work.

Before the helmreleases failed, I wasn't modifying them in VCS. But somehow they failed all of a sudden.

{"level":"debug","ts":"2021-02-27T09:40:37.480Z","logger":"events","msg":"Normal","object":{"kind":"HelmRelease","namespace":"chaos-mesh","name":"chaos-mesh","uid":"f29fe041-67c6-4e87-9d31-ae4b74a056a0","apiVersion":"helm.toolkit.fluxcd.io/v2beta1","resourceVersion":"278238"},"reason":"info","message":"Helm upgrade has started"} {"level":"debug","ts":"2021-02-27T09:40:37.498Z","logger":"controller.helmrelease","msg":"preparing upgrade for chaos-mesh","reconciler group":"helm.toolkit.fluxcd.io","reconciler kind":"HelmRelease","name":"chaos-mesh","namespace":"chaos-mesh"} {"level":"debug","ts":"2021-02-27T09:40:37.584Z","logger":"events","msg":"Normal","object":{"kind":"HelmRelease","namespace":"chaos-mesh","name":"chaos-mesh","uid":"f29fe041-67c6-4e87-9d31-ae4b74a056a0","apiVersion":"helm.toolkit.fluxcd.io/v2beta1","resourceVersion":"278238"},"reason":"error","message":"Helm upgrade failed: another operation (install/upgrade/rollback) is in progress"} {"level":"debug","ts":"2021-02-27T09:40:37.585Z","logger":"events","msg":"Normal","object":{"kind":"HelmRelease","namespace":"chaos-mesh","name":"chaos-mesh","uid":"f29fe041-67c6-4e87-9d31-ae4b74a056a0","apiVersion":"helm.toolkit.fluxcd.io/v2beta1","resourceVersion":"277102"},"reason":"error","message":"reconciliation failed: Helm upgrade failed: another operation (install/upgrade/rollback) is in progress"} {"level":"info","ts":"2021-02-27T09:40:37.712Z","logger":"controller.helmrelease","msg":"reconcilation finished in 571.408644ms, next run in 5m0s","reconciler group":"helm.toolkit.fluxcd.io","reconciler kind":"HelmRelease","name":"chaos-mesh","namespace":"chaos-mesh"} {"level":"error","ts":"2021-02-27T09:40:37.712Z","logger":"controller.helmrelease","msg":"Reconciler error","reconciler group":"helm.toolkit.fluxcd.io","reconciler kind":"HelmRelease","name":"chaos-mesh","namespace":"chaos-mesh","error":"Helm upgrade failed: another operation (install/upgrade/rollback) is in progress","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:252\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:215\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"}

from helm-controller.

iacou avatar iacou commented on May 29, 2024

Also saw this today... there was no previous revision to roll back to, so I had to delete the helmrelease and start the reconciliation again.

from helm-controller.

avacaru avatar avacaru commented on May 29, 2024

I have encountered the same problem multiple times on different clusters. To fix the HelmRelease state I applied the workaround from this issue comment: helm/helm#8987 (comment), as deleting the HelmRelease could have unexpected consequences.

Some background that might be helpful in identifying the problem:

  • As part of a Jenkins pipeline I am upgrading the cluster (control plane and nodes) from 1.17 to 1.18, and immediately after that is finished I apply updated HelmRelease manifests -> reconciliation starts. Some manifests bring updates to existing releases, some bring in new releases (no previous Helm secret exists).
  • The helm-controller pod did not restart.

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024

For all folks who do not experience helm-controller crashes: could you try adding a bigger timeout to the HelmRelease?
timeout: 30m

(Now I realize that I missed the NOT; the suggestion was only for the folks who are NOT experiencing crashes, so clearly not meant for me :D ...)

Adding timeout: 30m to the HelmRelease with pending-install didn't prevent the controllers from crashing.

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
spec:
  chart:
    spec:
      chart: kube-prometheus-stack
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
      version: 14.0.1
  install:
    remediation:
      retries: 3
  interval: 1h0m0s
  releaseName: kube-prometheus-stack
  timeout: 30m

After ❯ helm uninstall kube-prometheus-stack -n monitoring, all controllers start crashing (this is Azure AKS):

Every 5.0s: kubectl get pods -n flux-system                    tardis.Home: Thu Mar 11 22:05:35 2021

NAME                                           READY   STATUS             RESTARTS   AGE
helm-controller-5cf7d96887-nz9rm               0/1     CrashLoopBackOff   15         23h
image-automation-controller-686ffd758c-b9vwd   0/1     CrashLoopBackOff   29         27h
image-reflector-controller-85796d5c4d-dtvjq    1/1     Running            28         27h
kustomize-controller-689774778b-rqhsq          0/1     CrashLoopBackOff   30         26h
notification-controller-769876bb9f-cb25k       0/1     CrashLoopBackOff   25         27h
source-controller-c55db769d-fwc7h              0/1     Error              30         27h

and the release gets pending:

❯ helm list -a -n monitoring
NAME              	NAMESPACE 	REVISION	UPDATED                                	STATUS         	CHART                       	APP VERSION
kube-prometheus-stack  	monitoring	1       	2021-03-11 22:08:23.473944235 +0000 UTC	pending-install	kube-prometheus-stack-14.0.1	0.46.0

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024

@mfamador your issue is different; based on “ Failed to update lock: Put "https://10.0.0.1:443/apis/coordination.k8s.io/v1/namespaces/flux-system/leases/5b6ca942.fluxcd.io": context deadline exceeded” I would say that your AKS network is broken or the Azure proxy for the Kubernetes API is crashing. Please reach out to Azure support, as this is not something we can fix for you.

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024

You're probably right @stefanprodan, I'll reach out to them, but to clarify: this is a brand-new AKS cluster, which has been destroyed and recreated from scratch multiple times, always ending up with Flux v2 crashing, most of the time when installing the kube-prometheus-stack helm chart, other times with Loki or Tempo. We've been creating several AKS clusters and we're only seeing this when using Flux 2, so I find it hard to believe that it's an AKS problem.

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024

@mfamador if Flux leader election times out then I don't see how any other controller would work; we don't do anything special here, and leader election is implemented with upstream Kubernetes libraries. Check out the AKS FAQ; it seems that Azure has serious architectural issues, as they use some proxy called tunnelfront or aks-link that you need to restart from time to time 😱 https://docs.microsoft.com/en-us/azure/aks/troubleshooting

Check whether the tunnelfront or aks-link pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command. If it isn't, force deletion of the pod and it will restart.

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

If on AKS the cluster API for some reason becomes overwhelmed by the requests (that should be cached, sane, and not cause much pressure on an average cluster), another thing you may want to try is to trim down on the concurrent processing for at least the Helm releases / helm-controller by tweaking the --concurrent flag as described in #149 (comment).

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024

Thanks, @stefanprodan and @hiddeco, I'll give it a try

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024

@hiddeco, the controllers are still crashing after setting --concurrent=1 on helm-controller, I'll try with another AKS version

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

Then I think it is as @stefanprodan describes, and probably related to some CNI/tunnel front issue in AKS.

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024

Thanks, @hiddeco. Yes, I think you might be right; that raises many concerns about using AKS in production. I'll try another CNI to see if it gets better.

from helm-controller.

florinherbert avatar florinherbert commented on May 29, 2024

We have the same problem on an on-prem 1.20 kubeadm-installed cluster (same kube-prometheus-stack; other helm charts like elastic or redis worked). What we can see is that Kubernetes components are restarting (kube-apiserver, kube-controller-manager, kube-scheduler).

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024

We have the same problem on an on-prem 1.20 kubeadm-installed cluster (same kube-prometheus-stack; other helm charts like elastic or redis worked)

If you use the Helm CLI to install kube-prometheus-stack does it also DDOS the Kubernetes API?

from helm-controller.

florinherbert avatar florinherbert commented on May 29, 2024

We have the same problem on an on-prem 1.20 kubeadm-installed cluster (same kube-prometheus-stack; other helm charts like elastic or redis worked)

If you use the Helm CLI to install kube-prometheus-stack does it also DDOS the Kubernetes API?

Yes it does, so it seems to be a different situation from #149 (comment); we are checking.

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024

That's a good guess! But I am wondering if that wouldn't also happen using Flux v1, and we've been using it with this exact same helm chart on other AKS clusters without any issues.
I've also just noticed that although helm install seems to work perfectly and the helm releases get status deployed, it's probably also crashing the masters, since the Flux v2 controllers crash just the same while the chart is being installed.
That being said, I believe it's not an issue with the Flux controllers, but perhaps they could be more resilient and handle some unresponsiveness from the API server? Or does this not make any sense?

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024

but perhaps they could be more resilient and handle some unresponsiveness from the API server?

We use the same leader election as kube-controller-manager, and as you can see that crashes too; kubelet does retries at the pod level. I don't see why we should diverge from what upstream Kubernetes does.

from helm-controller.

mfamador avatar mfamador commented on May 29, 2024

Yes, sure. I'm not sure though that the kube-controller-manager is really crashing, since the masters are hidden from me on Azure. I've done a few helm uninstalls and installs, crashing the flux controllers, but from what I could see in Azure's logs the container ID of kube-controller-manager remains the same.

from helm-controller.

florinherbert avatar florinherbert commented on May 29, 2024

In our case we tracked the problem down to etcd (it wasn't even getting as far as running kube-state-metrics); then the helm-controller worked as expected.

from helm-controller.

monotek avatar monotek commented on May 29, 2024

If kube-state-metrics is the reason for the unresponsive k8s API, we should be able to test it by disabling all collectors when installing it. As kube-prometheus-stack uses the kube-state-metrics helm chart as a dependency, configuration should be possible via: https://github.com/kubernetes/kube-state-metrics/blob/master/charts/kube-state-metrics/values.yaml#L114

from helm-controller.

niclarkin avatar niclarkin commented on May 29, 2024

We have been seeing the flavor of this problem in which the HelmController and SourceController are restarting. I wanted to share our investigations in case they are helpful to others.

Our scenario is one of system congestion during a large install of multiple HelmReleases. A lot of images are being pulled, the Kubernetes API is being heavily used, and API requests are timing out. What we want in this scenario is for the install to eventually succeed after a few retries but what we are getting is a stuck system that needs manual intervention.

For us, the problem is not intermittent or caused by a fundamentally broken cluster - it happens every time we spin up the charts. It's not caused by kube-state-metrics in our case - that container hasn't even finished pulling by the time we hit the issue.

A seeming workaround to avoid the controller restarts is to disable leader election. (This can be done with kubectl edit deployment on the helm-controller and source-controller deployments, removing the --enable-leader-election flag, as shown below.) It would be useful to know if anyone has any better workarounds.
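Concretely, that is (assuming the default flux-system namespace):

kubectl -n flux-system edit deployment helm-controller    # remove the --enable-leader-election arg
kubectl -n flux-system edit deployment source-controller  # remove the --enable-leader-election arg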

We also needed to set install remediation retries. Sooner or later during the busy startup period, the Kubernetes API fails in a way that causes the Helm install to fail. Install remediation deals with that. Just setting a long timeout in the HelmRelease didn't seem to help - we still saw HelmReleases go into "helm install failed" with an etcdserver timeout error.

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024

@niclarkin can you please give #239 a try? You can increase the leader election renewal deadline with --leader-election-deadline.

from helm-controller.

niclarkin avatar niclarkin commented on May 29, 2024

@niclarkin can you please give #239 a try? You can increase the leader election renewal deadline with --leader-election-deadline.

Thanks for an impressively swift response!

Your fix looks good to me. I didn't need to set --leader-election-deadline. If I understood correctly, your change also increases the default timeouts, and it seems these new defaults are now tolerant enough for our setup.

The method I used to test was:

  • Enable leader election on helm-controller.
  • Confirm I can repro the stuck install problem and see the log leaderelection.go ... error retrieving resource lock flux-system/5b6ca942.fluxcd.io ... context deadline exceeded
  • kubectl edit deployment to replace ghcr.io/fluxcd/helm-controller:v0.8.1 with docker.io/stefanprodan/helm-controller:le-config-7622dd9
  • Clean out my system, rerun install successfully, confirm no helm-controller restarts via kubectl get pods.

We'd still like one of the full fixes discussed earlier in #149 (comment), but #239 seems like a good way to mitigate the problem and reduce occurrence of restarts on heavily loaded clusters.

from helm-controller.

Cretezy avatar Cretezy commented on May 29, 2024

(Also posted this on helm/helm#8987):

One additional thing: ideally in our CD, with the --wait option, we'd like to wait for the current ongoing rollout to be finished.

Use case is:

  • User pushes commit A
  • CD starts
  • User pushes commit B
  • CD starts
  • A starts Helm upgrade
  • B starts Helm upgrade shortly after

Expected:

  • B waits for A to be done, then runs

Current:

  • B fails its upgrade since A is still upgrading

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

@Cretezy I do not see how this relates to the helm-controller, as we do not run repeated updates in parallel for a single HelmRelease, but queue them.

from helm-controller.

Cretezy avatar Cretezy commented on May 29, 2024

@hiddeco You're right, I'm a little unsure how I got to this repo. I thought I was still in the helm/ repos, sorry!

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

The Helm PR linked in the comment above was merged a couple of minutes ago. Once it has ended up in a release and in the controller, we need to re-evaluate the situation.

from helm-controller.

glen-uc avatar glen-uc commented on May 29, 2024

We haven't encountered this issue in flux 0.16.2 but started seeing this frequently after upgrading to 0.19.1

Basically, we have set up an ImageAutomation Object for our repo and we have configured a webhook receiver for each of our ECR images

In our staging environment, we build and push a bunch of images to the ECR repo (one at a time); after each push to ECR we trigger the image webhook, which in turn triggers ImageAutomation and applies the latest version of the image.

What we noticed is that if we push multiple images to ECR one after another in a short interval, some of our helm releases give the
Helm upgrade failed: another operation (install/upgrade/rollback) is in progress error.

Currently --concurrent is set to 4 (the default); we are trying to set it to 2 to see if it resolves the above error.

Note : Our helm controller is not crashing (it has zero restarts)

from helm-controller.

sbernheim avatar sbernheim commented on May 29, 2024

We haven't encountered this issue in flux 0.16.2 but started seeing this frequently after upgrading to 0.19.1

@glen-uc - Would you be able to run flux version while your kubeconfig is connected to the affected cluster and post back the version of Helm Controller you're running?

from helm-controller.

glen-uc avatar glen-uc commented on May 29, 2024

@sbernheim Here is the output of flux version and flux check

flux: v0.20.1
helm-controller: v0.12.1
image-automation-controller: v0.16.0
image-reflector-controller: v0.13.0
kustomize-controller: v0.16.0
notification-controller: v0.18.1
source-controller: v0.17.1
► checking prerequisites
✗ flux 0.20.1 <0.21.0 (new version is available, please upgrade)
✔ Kubernetes 1.21.3 >=1.19.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.12.1
✔ image-automation-controller: deployment ready
► ghcr.io/fluxcd/image-automation-controller:v0.16.0
✔ image-reflector-controller: deployment ready
► ghcr.io/fluxcd/image-reflector-controller:v0.13.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.16.0
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.18.1
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.17.1
✔ all checks passed

from helm-controller.

takirala avatar takirala commented on May 29, 2024

I also encountered this issue when I tried to upgrade flux. I don't see this issue on flux 0.17.2, but when I upgraded to 0.18.0 (and also tried upgrading to 0.21.1) I observed the same behavior (the helm-controller restarted and some of the releases went into the Helm upgrade failed: another operation (install/upgrade/rollback) is in progress state).

I did not try the workaround specified in #239, but I did try changing the concurrency to 1, and it did not help.

Is there a known release that uses the flux v1beta2 API (so I presume above 0.17.2) that does not face this bug? It looks like #239 is not part of any release yet.

from helm-controller.

marcocaberletti avatar marcocaberletti commented on May 29, 2024

Great! Thanks!

from helm-controller.

kishoregv avatar kishoregv commented on May 29, 2024

We have experienced this behavior when the helm upgrade fails and the helm-controller fails to update the status on the HelmRelease. The status update failure is due to the HelmRelease object having been modified.

At that point the HelmRelease is stuck in "another operation is in progress" even though there is no other operation pending.

Maybe there should be a retry on the status updates in the helm-controller.

from helm-controller.

stevejr avatar stevejr commented on May 29, 2024

@hiddeco / @stefanprodan - is it possible to get an update on this issue?

from helm-controller.

artem-nefedov avatar artem-nefedov commented on May 29, 2024

That's great news. I sometimes feel that Helm is a second-class citizen in the Flux world; it's about time it got some love.

Don't get me wrong, I love Flux. But sometimes I do miss ArgoCD's approach where everything is managed through the same object kind, "Application", meaning Helm gets all the same benefits as any other deployment.

from helm-controller.

mjnagel avatar mjnagel commented on May 29, 2024

Just going to throw some context here on where I'm seeing this issue...

It seems consistently tied to a helm-controller crash during reconciliation that leaves the sh.helm.release.x secret in a bad state (i.e. the secret still shows a reconciliation in progress despite nothing happening). Originally some of the crashes we saw were due to OOM (since we were setting lower limits); bumping the limits partially resolved the issue. Currently the crashes/issues seem related to the k8s API being overwhelmed by new resources/changes. We have roughly 30 helmreleases, some of which have large amounts of manifests (and certainly slow down kube API connections while being applied/reconciling). The logs in the helm-controller seem to indicate an inability to reach the kube API, resulting in the crash:

Failed to update lock: Put "https://<API IP>:443/apis/coordination.k8s.io/v1/namespaces/flux-system/leases/helm-controller-leader-election": context deadline exceeded

Not sure if this context is helpful or repetitive - I'd assume this behavior is already being tracked in relation to the refactoring mentioned above? I can provide additional logs and observations if that would help.
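For anyone trying to confirm the same pattern, the crash/OOM history of the controller is visible with standard kubectl commands (the pod name is a placeholder):

kubectl -n flux-system get pods                                   # check the RESTARTS column for helm-controller
kubectl -n flux-system describe pod <helm-controller-pod> | grep -A5 'Last State'   # OOMKilled shows up here
kubectl -n flux-system logs deploy/helm-controller --previous     # logs from the crashed container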

from helm-controller.

sharkztex avatar sharkztex commented on May 29, 2024

We've experienced issues where some of our releases get stuck on:

"Helm upgrade failed: another operation (install/upgrade/rollback) is in progress"

The only way we know how to get past this is by deleting the release. Would the refactoring of the helm-controller address this, or is there an alternative way to get the release rolled out without having to delete it?

helm-controller: v0.22.1
image-automation-controller: v0.23.2
image-reflector-controller: v0.19.1
kustomize-controller: v0.26.1
notification-controller: v0.24.0
source-controller: v0.24.4

from helm-controller.

cgeisel avatar cgeisel commented on May 29, 2024

I'm currently experiencing this error, but I'm seeing an endless loop of helm release upgrades every few seconds.

➜  ~ flux version
flux: v0.30.2
helm-controller: v0.21.0
kustomize-controller: v0.25.0
notification-controller: v0.23.5
source-controller: v0.24.4

I'm new to flux and helm, so I may be missing something obvious, but the output of some of the suggested commands does not show the expected results.

As you can see, one of my releases has 8702 revisions and counting.

➜  ~ helm list -A
NAME            	NAMESPACE     	REVISION	UPDATED                                	STATUS  	CHART                               	APP VERSION
089606735968    	ci-python-8aa8	1       	2022-08-26 17:09:01.725954063 +0000 UTC	deployed	irsa-service-account-0.1.0          	1.0.0
284950274094    	ci-python-8aa8	8702    	2022-09-19 23:30:38.61301278 +0000 UTC 	deployed	irsa-service-account-0.1.0          	1.0.0
karpenter       	karpenter     	1       	2022-06-02 20:13:18.670847828 +0000 UTC	deployed	karpenter-0.9.1                     	0.9.1
prd-default-east	gitlab-runner 	10      	2022-09-14 00:15:24.978728955 +0000 UTC	deployed	gitlab-runner-0.43.1                	15.2.1
splunk-connect  	splunk-connect	1       	2022-06-30 22:10:40.269231296 +0000 UTC	deployed	splunk-connect-for-kubernetes-1.4.15	1.4.15

As you can also see, the revision has gone up in between running these two commands:

➜  ~ helm history 284950274094 -n flux-system
REVISION	UPDATED                 	STATUS    	CHART                     	APP VERSION	DESCRIPTION
8697    	Mon Sep 19 23:28:36 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8698    	Mon Sep 19 23:28:38 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8699    	Mon Sep 19 23:29:37 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8700    	Mon Sep 19 23:29:38 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8701    	Mon Sep 19 23:30:37 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8702    	Mon Sep 19 23:30:38 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8703    	Mon Sep 19 23:31:37 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8704    	Mon Sep 19 23:31:39 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8705    	Mon Sep 19 23:32:37 2022	superseded	irsa-service-account-0.1.0	1.0.0      	Upgrade complete
8706    	Mon Sep 19 23:32:39 2022	deployed  	irsa-service-account-0.1.0	1.0.0      	Upgrade complete

There's no "good" revision to rollback to in the output. The latest one has status deployed but will be replaced shortly by a new revision.

In addition, when trying to suspend the helm release, I get this error message.

➜  ~ flux suspend hr -n flux-system 284950274094
✗ no HelmRelease objects found in flux-system namespace

Which confuses me, since the helm history command seems to think those revisions are part of the flux-system namespace.
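In case it helps anyone landing here: the HelmRelease object can live in a different namespace than the one Helm stores its release secrets in, so it is worth searching all namespaces for it (using the release name from the output above):

flux get helmreleases -A | grep 284950274094
kubectl get helmreleases.helm.toolkit.fluxcd.io -A | grep 284950274094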

from helm-controller.

danports avatar danports commented on May 29, 2024

I see this problem frequently, mainly with Helm upgrades that take a while to complete - e.g. kube-prometheus-stack, which takes 5-10 minutes to upgrade in one of my clusters. I almost never have this problem with upgrades that take only 1-2 minutes.

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024

@alex-berger @danports the locking issue seems to happen only if the helm-controller is OOM-killed (due to out of memory) or SIGKILLed (if the node where it's running dies without evicting pods first). Is this the case for you?

from helm-controller.

danports avatar danports commented on May 29, 2024

I'll have to take a deeper look at logs/metrics to confirm, but that wouldn't surprise me, since I've been having intermittent OOM issues on the nodes where helm-controller runs. If that's the case, this seems less like a Flux bug and more like a node stability issue on my end, though a self-healing feature along the lines of what @alex-berger suggested would be nice.

from helm-controller.

alex-berger avatar alex-berger commented on May 29, 2024

Actually, I doubt this only happens on OOMKILL. We have very dynamic clusters, with Karpenter constantly replacing nodes (for good reasons), and thus it happens very often that the helm-controller Pods are evicted (and terminated) in the middle of long-running HelmReleases. Note that long-running HelmReleases are not uncommon; with high-availability setups, rolling upgrades of Deployments, DaemonSets and especially StatefulSets can take dozens of minutes or even several hours.

My educated guess is that this problem happens whenever there are HelmReleases in progress and the helm-controller Pod is forcefully terminated (SIGTERM, SIGKILL, OOMKILL, or any other unhandled signal). I am convinced that we have to anticipate this, though we can still try to improve the helm-controller (or Helm) to reduce the probability that it happens. Anyway, as this can still happen, we should have an automatic recovery capability built into the helm-controller (or Helm) to make sure such HelmReleases are automatically recovered.

After all, GitOps should not have to rely on humans waiting for pager calls just to manually run helm rollback ... && flux reconcile hr ....

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

After #620, shutdown signals should now be handled properly; for OOMKILL we cannot gracefully shut down in time, for which #628 will be an option until unlocking can be handled safely.

from helm-controller.

stefanprodan avatar stefanprodan commented on May 29, 2024

Actually, I doubt this only happens on OOMKILL. We have very dynamic clusters, with Karpenter constantly replacing nodes (for good reasons), and thus it happens very often that the helm-controller Pods are evicted (and terminated) in the middle of long-running HelmReleases. Note that long-running HelmReleases are not uncommon; with high-availability setups, rolling upgrades of Deployments, DaemonSets and especially StatefulSets can take dozens of minutes or even several hours.

Until we figure out how to recover the Helm storage at restart, I suggest you move the helm-controller to EKS Fargate or some dedicated node outside of Karpenter; this would allow the helm-controller to perform hours-long upgrades without interruption.
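A minimal sketch of the dedicated-node option (the node label used here is an assumption; use whatever label your stable node group carries, and in a GitOps setup this change belongs in Git):

kubectl -n flux-system patch deployment helm-controller --type=merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-group":"stable"}}}}}'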

from helm-controller.

alex-berger avatar alex-berger commented on May 29, 2024

Moving the helm-controller to EKS Fargate might mitigate the problem (a bit), and this is actually what we are currently working on.

However, as we are using Cilium (CNI), this also needs changes to the NetworkPolicy objects deployed by FluxCD; in particular the allow-egress policy must be patched to allow traffic from (non-Cilium-managed) Pods running on EKS Fargate to the (Cilium-managed) Pods still running on EC2. We achieved this by changing it to something like this:

spec:
  podSelector: {}
  ingress:
    - {} # All ingress
  egress:
    - {} # All egress
  policyTypes:
    - Ingress
    - Egress

As you can see, the workaround using EKS Fargate weakens security, as we had to open up the NetworkPolicy quite a bit. In our case we can take the risk, as we only run trusted workloads on those clusters. But for other users this might be a no-go.

from helm-controller.

hiddeco avatar hiddeco commented on May 29, 2024

Closing this in favor of #644. Thank you all!

from helm-controller.
