helm-controller's Introduction

helm-controller

NOTE: this repository was recently (2020-10-06) moved out of the github.com/rancher org to github.com/k3s-io, in support of the acceptance of K3s as a CNCF sandbox project.


A simple way to manage helm charts (v2 and v3) with Custom Resource Definitions in k8s.

Manifests and Deploying

The ./manifests folder contains useful YAML manifests for deploying and developing the Helm Controller. This simple YAML deployment creates a HelmChart CRD + a Deployment using the rancher/helm-controller container. The YAML might need some modifications for your environment, so read below about Namespaced vs Cluster-scoped deployments and how to use them properly.

Namespaced Deploys

Use the deploy-namespaced.yaml to create a namespace and add the Helm Controller and CRD to that namespace, locking the Helm Controller down to only see changes to CRDs within that namespace. The namespace defaults to helm-controller, so update the YAML to your needs before running kubectl create.
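
For example, assuming you run kubectl from the repository root and have already adjusted the namespace:

kubectl create -f ./manifests/deploy-namespaced.yaml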

Cluster Scoped Deploys

If you'd like your Helm Controller to watch the entire cluster for HelmChart CRD changes, use the deploy-cluster-scoped.yaml deploy manifest. By default it will add the helm-controller to the kube-system namespace, so update metadata.namespace for your needs.
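
For example, after updating metadata.namespace if needed:

kubectl create -f ./manifests/deploy-cluster-scoped.yaml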

Uninstalling

To remove the Helm Controller, run kubectl delete and pass the deployment YAML used to create the Deployment via the -f parameter.
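
For example, if you deployed the cluster-scoped manifest:

kubectl delete -f ./manifests/deploy-cluster-scoped.yaml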

Developing and Building

The Helm Controller is easy to get running locally; follow the instructions for your needs. It requires a running k8s server, the CRD, etc. When you have a working k8s cluster, you can use ./manifests/crd.yaml to create the CRD and ./manifests/example-helmchart.yaml to run the stable/traefik helm chart.
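
For example:

kubectl apply -f ./manifests/crd.yaml
kubectl apply -f ./manifests/example-helmchart.yaml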

Locally

Building and running natively will start a daemon which will watch a local k8s API. See Manifests section above about how to create the CRD and Objects using the provided manifests.

go build -o ./bin/helm-controller
./bin/helm-controller --kubeconfig $HOME/.kube/config

docker/k8s

An easy way to get started with docker/k8s is to install Docker for Windows/Mac and use the included k8s cluster. Once it is functioning, you can build locally and get a Docker container to pull the Helm Controller container and run it in k8s. Use make to launch a Linux container and build a container image. Use the ./manifests/deploy-*.yaml definitions to get it into your cluster and update containers.image to point to your locally built image, e.g. image: rancher/helm-controller:dev
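
A rough sketch of that workflow (assuming the default make target produces the image and you use the cluster-scoped manifest; names and tags are placeholders to adjust for your setup):

make                                          # build the binary and container image in a Linux build container
# edit ./manifests/deploy-cluster-scoped.yaml so containers.image points at your
# locally built image, e.g. image: rancher/helm-controller:dev, then:
kubectl apply -f ./manifests/deploy-cluster-scoped.yaml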

Options and Usage

Use ./bin/helm-controller help to get full usage details. When running outside of a k8s Pod, the most important options are --kubeconfig or --masterurl; without one of them it will not run. All options have corresponding environment variables you can use.

Testing

go test ./...

License

Copyright (c) 2019 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

helm-controller's People

Contributors

aiyengar2, bert-r, brandond, c3y1huang, ci-jie, dbaker-rh, dependabot[bot], dweomer, erikwilson, galal-hussein, gbonnefille, github-actions[bot], ibrokethecloud, ibuildthecloud, jtyr, luthermonson, macedogm, manuelbuil, matttrach, mkoperator, piotrminkina, raulcabello, smbecker, strongmonkey, tedyst

helm-controller's Issues

klipper-helm image needs updating

The klipper-helm image has become stale:

  • Based on Alpine 3.8 which went end-of-support 2020-05-01
  • Built with golang 1.11 which has not been supported since 1.13 came out
  • Packages old versions of helm v2/v3

How to specify `--debug` to helm charts so installing with bad configuration is easier

Hello,

How can I enable --debug in the helm controller so that I can debug failing charts?

For example I install apps like this:

helm upgrade --install dashboard-auth-proxy k8s-at-home/oauth2-proxy \
        --version=4.0.1 \
        --namespace system-apps \
        -f oauth2-values.yaml \
        --debug

This is one error I got with debug that I did not get with helm-controller:

client.go:108: [debug] creating 5 resource(s)
Error: Ingress.extensions "dashboard-auth-proxy-oauth2-proxy" is invalid: spec.rules[0].host: Invalid value: "map[host: xxxxxx paths:[/]]": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
helm.go:94: [debug] Ingress.extensions "dashboard-auth-proxy-oauth2-proxy" is invalid: spec.rules[0].host: Invalid value: "map[host:xxxxxx paths:[/]]": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Helm controller swallowed the error and gave me nothing.
If there is a YAML issue I get the full output and the error printed.

I don't know if other params, like --dry-run, make sense.

Thanks,

Can't upgrade helm charts after deployment

I'm using flux to deploy a HelmChart CRD into my k3s setup. The initial deploy works fine, but when I update the HelmChart manifest with a new version of a chart, the controller doesn't detect this as an upgrade. It attempts to do a new install of the chart with the same deployment name, which fails, using either v2 or v3 helm, with:
Error: cannot re-use a name that is still in use

Is this a feature that is planned to be added?

Frequent job cycling deploying new revisions

The controller is periodically (every 10 mins?) kicking off the install job again, triggering a new upgrade and revision of the chart even though the version is the same as the one previously deployed.

I see this periodically in the controller log:
1 controller.go:135] error syncing 'helm-controller/traefik': handler helm-controller: DesiredSet - Replace Wait batch/v1, Kind=Job helm-controller/helm-install-traefik for helm-controller helm-controller/traefik, requeuing

Edit CRD Causes Errors

If you add a CRD like the following

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik-test
  namespace: kube-system
spec:
  chart: stable/traefik
  version: 1.64.0
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"

Let the helm jobs finish and you will have a new traefik pod. Now edit the CR and change the version to 1.65.0. The onchange comes through and it tries to spin up a job, but since the job already exists from the previous run it tries to rewrite the existing one; some job fields are immutable, so you will get something like the following:

E0716 12:58:30.843044       1 controller.go:118] error syncing 'helm-controller/grafana': handler helm-controller: failed to update helm-controller/helm-install-grafana batch/v1, Kind=Job for helm-controller helm-controller/grafana: Job.batch "helm-install-grafana" is invalid: spec.template: Invalid value: <<<large blob of job data>>>  field is immutable, requeuing

We need to handle edits a bit more gracefully and figure out how to do official upgrades to charts.

Helm-Controller doesn't support booleans correctly

With this manifest, Helm-Controller parses the key acme.staging as the string "false", not as a boolean. Because of this, Helm-Controller will use the argument --set-string instead of --set (which would treat false as a boolean).

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.77.1.tgz
  set:
    acme.staging: "false"

Controller deletes the release during the upgrade process without warning

Hello,

We experienced a release being deleted during an upgrade; is this expected?

elastic/cloud-on-k8s#4734 (comment)

kubectl logs -f helm-install-elastic-operator-jzwnf -n elastic-system 
CHART="${CHART//%\{KUBERNETES_API\}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}}"
set +v -x
+ cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
+ update-ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
+ [[ '' != \t\r\u\e ]]
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ helm_v2 init --skip-refresh --client-only --stable-repo-url https://charts.helm.sh/stable/
+ tiller --listen=127.0.0.1:44134 --storage=secret
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://charts.helm.sh/stable/ 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
++ jq -r '.Releases | length'
++ helm_v2 ls --all '^elastic-operator$' --output json
[main] 2021/08/04 12:19:18 Starting Tiller v2.17.0 (tls=false)
[main] 2021/08/04 12:19:18 GRPC listening on 127.0.0.1:44134
[main] 2021/08/04 12:19:18 Probes listening on :44135
[main] 2021/08/04 12:19:18 Storage driver is Secret
[main] 2021/08/04 12:19:18 Max history per release is 0
[storage] 2021/08/04 12:19:18 listing all releases with filter
+ V2_CHART_EXISTS=
+ [[ '' == \1 ]]
+ [[ '' == \v\2 ]]
+ [[ -n '' ]]
+ shopt -s nullglob
+ helm_content_decode
+ set -e
+ ENC_CHART_PATH=/chart/elastic-operator.tgz.base64
+ CHART_PATH=/elastic-operator.tgz
+ [[ ! -f /chart/elastic-operator.tgz.base64 ]]
+ return
+ [[ install != \d\e\l\e\t\e ]]
+ helm_repo_init
+ grep -q -e 'https\?://'
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
+ [[ eck-operator == stable/* ]]
+ [[ -n https://helm.elastic.co ]]
+ helm_v3 repo add elastic-operator https://helm.elastic.co
"elastic-operator" has been added to your repositories
+ helm_v3 repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "elastic-operator" chart repository
Update Complete. ⎈Happy Helming!⎈
+ helm_update install --namespace elastic-system --repo https://helm.elastic.co --version 1.7.0
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
++ tr '[:upper:]' '[:lower:]'
++ jq -r '"\(.[0].app_version),\(.[0].status)"'
++ helm_v3 ls --all -f '^elastic-operator$' --namespace elastic-system --output json
+ LINE=1.7.0,failed
+ IFS=,
+ read -r INSTALLED_VERSION STATUS _
+ VALUES=
+ [[ install = \d\e\l\e\t\e ]]
+ [[ 1.7.0 =~ ^(|null)$ ]]
+ [[ failed =~ ^(pending-install|pending-upgrade|pending-rollback)$ ]]
+ [[ failed == \d\e\p\l\o\y\e\d ]]
+ [[ failed =~ ^(deleted|failed|null|unknown)$ ]]
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
+ helm_v3 uninstall elastic-operator --namespace elastic-system
release "elastic-operator" uninstalled
+ echo Deleted
+ helm_v3 install --namespace elastic-system --repo https://helm.elastic.co --version 1.7.0 elastic-operator eck-operator
Deleted
NAME: elastic-operator
LAST DEPLOYED: Wed Aug  4 12:19:23 2021
NAMESPACE: elastic-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Inspect the operator logs by running the following command:
   kubectl logs -n elastic-system sts/elastic-operator
+ exit

synchronisation of charts periodically

Is there any flag in helmcharts.helm.cattle.io that can say: check every 5 minutes for a new helm release and apply it automatically?
We are looking at a partially connected environment where,
whenever a new release is pushed to the helm server, it automatically propagates to the edge.

Add support for self-signed SSL certificates / adding a trusted CA to the jobs images

When specifying the helm chart via

chart: https:///k8s-prod/cert-manager-v1.2.0.tgz

but using a self-signed certificate for that server, the helm chart download fails.

As a workaround we had to add the tgz via chartContent:, but it would be much nicer if we could add a "trust this CA" or "trust the host's CA" option.

Or is there any other way we could address downloading helm charts from an external HTTPS source, or even from a registry with authentication?
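
For reference, a minimal sketch of the chartContent workaround mentioned above (chart name, namespaces, and archive path are placeholders):

# base64-encode the chart archive into a single line
CHART_B64=$(base64 -w0 cert-manager-v1.2.0.tgz)
# embed it in the HelmChart so no HTTPS download is needed
cat <<EOF | kubectl apply -f -
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chartContent: ${CHART_B64}
  targetNamespace: cert-manager
EOF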

Improve support for boolean values

When using a HelmChart with a set like:

  set:
    concurrent: 10
    checkInterval: 30
    rbac.create: true

The controller fails with:

E0616 15:09:55.015104    3015 reflector.go:153] github.com/rancher/helm-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1.HelmChart: v1.HelmChartList.Items: []v1.HelmChart: v1.HelmChart.Spec: v1.HelmChartSpec.Set: unmarshalerDecoder: json: cannot unmarshal bool into Go value of type int32, error found in #10 byte of ...|eate":true,...

I can work around this issue by forcing the value to a string:

set:
  concurrent: 10
  checkInterval: 30
  rbac.create: "true"

I suspect something goes wrong when converting a YAML boolean into JSON.

PS: I'm using K3OS v0.10.2.

HelmChart api object - comma problem

Hi,
I am using a k3s cluster, version v1.20.4+k3s1, and I am trying to apply a rancher HelmChart using this controller. It works, however I have a problem when I try to use a comma value in set. I am aware that helm requires commas to be escaped, but I guess that somehow when helm-install starts, it does something weird to this setting.

My Helm chart object looks like this:

kind: HelmChart
metadata:
  creationTimestamp: "2021-04-22T09:47:03Z"
  finalizers:
  - wrangler.cattle.io/helm-controller
  generation: 3
  name: rancher
  namespace: kube-system
  resourceVersion: "2177"
  uid: bd42926e-355a-49d6-ab28-b15331067363
spec:
  chart: rancher
  repo: https://releases.rancher.com/server-charts/stable
  set:
    hostname: rancher-test.com
    http_proxy: http://example-wproxy.test.com:8080
    https_proxy: https://example-wproxy.test.com:8443
    ingress.tls.source: secret
    no_proxy: 127.0.0.0/8\\,10.0.1.0/8
  targetNamespace: cattle-system
status:
  jobName: helm-install-rancher

and helm-install logs look like this:

helm_v3 install --namespace cattle-system --repo https://releases.rancher.com/server-charts/stable --set-string hostname=rancher-test.com --set-string http_proxy=http://example-wproxy.test.com:8080 --set-string https_proxy=https://example-wproxy.test.com:8443 --set-string ingress.tls.source=secret --set-string rancher rancher
Error: must either provide a name or specify --generate-name

From the job logs it seems that the no_proxy part is missing; there is nothing after the last --set-string

And when I don't escape the comma I get an error about it:

helm_v3 install --namespace cattle-system --repo https://releases.rancher.com/server-charts/stable --set-string hostname=rancher-test.com --set-string http_proxy=http://example-wproxy.test.com:8080 --set-string https_proxy=https://example-wproxy.test.com:8443 --set-string ingress.tls.source=secret --set-string no_proxy=127.0.0.0/8,10.0.1.0/8 rancher rancher

Error: failed parsing --set-string data: key map "10" has no value

I also tried adding different numbers of \ and " characters and more, unfortunately with no success.

Is this a bug or is there maybe a workaround for this problem?

Too broad permissions granted to helm charts

I'm trying to narrow down the permissions needed to run helm-controller fully scoped to a namespace, but I hit a wall when I found:

E0907 17:29:05.730579       1 controller.go:117] error syncing 'helm-controller/traefik': handler helm-controller: failed to create helm-helm-controller-traefik rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding for helm-controller helm-controller/traefik: clusterrolebindings.rbac.authorization.k8s.io "helm-helm-controller-traefik" is forbidden: user "system:serviceaccount:helm-controller:helm-controller" (groups=["system:serviceaccounts" "system:serviceaccounts:helm-controller" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:

this is due to this line: https://github.com/rancher/helm-controller/blob/3e223ca9dc94607ea9b7deb7f632551230c4db32/pkg/helm/controller.go#L271

which grants cluster-admin to every chart. This should rather be a namespaced Role, don't you agree?

HelmChart delete leaves orphans

Earlier I had issues getting helm charts to delete. Now, with a fresh install, I am experiencing the opposite issue. Too buggy for use now.

Here is the chart:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: openfaas
  namespace: openfaas
spec:
  chart: openfaas
  repo: 'https://openfaas.github.io/faas-netes'
  targetNamespace: openfaas
  valuesContent: |
    ---
    basic_auth: true
    faasIdler:
      dryRun: 'false'
    faasnetes:
      imagePullPolicy: IfNotPresent
      httpProbe: true
      readTimeout: 5m
      writeTimeout: 5m
    functionNamespace: openfaas-fn
    gateway:
      replicas: 2
      readTimeout: 15m
      scaleFromZero: true
      upstreamTimeout: 14m55s
      writeTimeout: 15m
    operator:
      create: true
    queueWorker:
      replicas: 2
      ackWait: 15m

Here is the log for the delete job. It exits very quickly and the job is never marked as completed.

2020-03-06T13:22:41.73077854Z + '[' helm_v3 == helm_v3 ']
2020-03-06T13:22:41.730808742Z + [[ openfaas == stable/* ]]
2020-03-06T13:22:41.730817716Z + '[' -n https://openfaas.github.io/faas-netes ']'
2020-03-06T13:22:41.730857079Z + helm_v3 repo add openfaas https://openfaas.github.io/faas-netes
2020-03-06T13:22:42.881154731Z "openfaas" has been added to your repositories
2020-03-06T13:22:42.884731895Z + helm_v3 repo update
2020-03-06T13:22:42.95460054Z Hang tight while we grab the latest from your chart repositories...
2020-03-06T13:22:43.420767524Z ...Successfully got an update from the "openfaas" chart repository
2020-03-06T13:22:43.420848117Z Update Complete. ⎈  Happy Helming!⎈
2020-03-06T13:22:43.424436598Z + helm_update delete
2020-03-06T13:22:43.424484672Z + '[' helm_v3 == helm_v3 ']'
2020-03-06T13:22:43.425375666Z ++ helm_v3 ls --all -f '^openfaas$' --output json
2020-03-06T13:22:43.425719158Z ++ jq -r '"\(.[0].app_version),\(.[0].status)"'
2020-03-06T13:22:43.425870562Z ++ tr '[:upper:]' '[:lower:]'
2020-03-06T13:22:43.557990486Z + LINE=,deployed
2020-03-06T13:22:43.558996474Z ++ echo ,deployed
2020-03-06T13:22:43.559189469Z ++ cut -f1 -d,
2020-03-06T13:22:43.56083381Z + INSTALLED_VERSION=
2020-03-06T13:22:43.561694435Z ++ echo ,deployed
2020-03-06T13:22:43.561859876Z ++ cut -f2 -d,
2020-03-06T13:22:43.563022109Z + STATUS=deployed
2020-03-06T13:22:43.563067569Z + '[' -e /config/values.yaml ']
2020-03-06T13:22:43.563080318Z + VALUES='--values /config/values.yaml'
2020-03-06T13:22:43.563095289Z + '[' delete = delete ']
2020-03-06T13:22:43.563126807Z + '[' -z '' ']
2020-03-06T13:22:43.563143487Z + exit

Initial Implementation definition of done

Definition of Done:

The readme must explain how to use this as a stand-alone controller in Kubernetes

The project must have a deploy directory that contains Kubernetes yaml files that can be used to:

  • Create the HelmChart CRD
  • Launch the Helm controller as a Kubernetes deployment
    User should just have to do kubectl apply -f <name of file in the deploy directory> to get this up and running

The project must include a working example that can walk a user through deploying this controller and using it to successfully deploy a helm chart

The project must have functional and passing Drone CI with unit tests for the controller

The project must be integrated with our drone publishing infrastructure so that when a git tag is pushed to the repository, an image named rancher/helm-controller:<git tag name> is created and pushed to dockerhub.

User must be able to run the controller such that it functions against only 1 specific namespace or against all namespaces

can't use private repos

Trying to use the HelmChart resource, there is no username/password field to access private repositories.
I remember once seeing advice to use:
spec.repo: https://:@

This flow isn't working - I think it's a bug in helm itself, and I opened a ticket: helm/helm#9762
Is it possible to add dedicated username/password fields, so this can work even if helm doesn't change?

helm deploy chart failed

  1. The first chart deployment failed
  2. After that, deploying the chart always fails

error log:

+ LINE=1.0,failed
++ echo 1.0,failed
++ cut -f1 -d,
+ INSTALLED_VERSION=1.0
++ echo 1.0,failed
++ cut -f2 -d,
+ STATUS=failed
+ '[' -e /config/values.yaml ']'
+ VALUES='--values /config/values.yaml'
+ '[' install = delete ']'
+ '[' -z 1.0 ']'
+ '[' failed = deployed ']'
+ '[' failed = failed ']'
+ '[' helm_v3 == helm_v3 ']'
+ helm_v3 uninstall dlake-default
Error: uninstall: Release not loaded: xxxx: release: not found

https://github.com/rancher/klipper-helm/blob/de4e6777f49295c83c004054c85a80439b068a41/entry#L44
Should this be changed to $HELM uninstall $NAME || true ?

Chart deployment issues

Hello,

How do I install charts using this procedure? Unfortunately for me all the chart deployments are failing with a similar error related to DNS resolution :(

here is my chart file

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx
  namespace : test
spec:
  chart: nginx
  repo: https://helm.nginx.com/stable
  targetNamespace: test

here are the full error logs

k logs helm-install-nginx-6c94r -n test
CHART=$(sed -e "s/%{KUBERNETES_API}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/g" <<< "${CHART}")
set +v -x
+ cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
+ update-ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
+ '[' '' '!=' true ']'
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ + helm_v2 init --skip-refresh --client-onlytiller
--listen=127.0.0.1:44134 --storage=secret
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
++ helm_v2 ls --all '^nginx$' --output json
++ jq -r '.Releases | length'
[main] 2020/12/14 20:28:06 Starting Tiller v2.16.10 (tls=false)
[main] 2020/12/14 20:28:06 GRPC listening on 127.0.0.1:44134
[main] 2020/12/14 20:28:06 Probes listening on :44135
[main] 2020/12/14 20:28:06 Storage driver is Secret
[main] 2020/12/14 20:28:06 Max history per release is 0
[storage] 2020/12/14 20:28:06 listing all releases with filter
+ EXIST=
+ '[' '' == 1 ']'
+ '[' '' == v2 ']'
+ shopt -s nullglob
+ helm_content_decode
+ set -e
+ ENC_CHART_PATH=/chart/nginx.tgz.base64
+ CHART_PATH=/nginx.tgz
+ '[' '!' -f /chart/nginx.tgz.base64 ']'
+ return
+ '[' install '!=' delete ']'
+ helm_repo_init
+ grep -q -e 'https\?://'
+ '[' helm_v3 == helm_v3 ']'
+ [[ nginx-ingress == stable/* ]]
+ '[' -n https://helm.nginx.com/stable ']'
+ helm_v3 repo add nginx https://helm.nginx.com/stable
**Error: looks like "https://helm.nginx.com/stable" is not a valid chart repository or cannot be reached: Get "https://helm.nginx.com/stable/index.yaml": dial tcp: lookup helm.nginx.com on 10.43.0.10:53: server misbehaving**

Also I noticed that the stable repo being used is still the old one and should be changed to https://charts.helm.sh/stable

HelmChartConfig deletion is not tracked

It appears the controller currently silently ignores the deletion of HelmChartConfig resources. It seems the controller should track those, and redeploy (upgrade?) the corresponding Helm release to reflect the new desired state. Since the controller does track updates, a workaround is to update the deployed HelmChartConfig resource to one with "empty" values. But properly tracking deletions would be better, as it would match the intuitive expectations / principle of least surprise.

Support `valuesContent` from secret

In some instances, it is necessary to provide secret values to Helm. To aid with secret storage with GitOps, it would be nice to be able to support the supplying of values (either entire valuesContent or individual values) via secrets (secretRef)

Namespace deletion hanging

Hello all.

I'm experiencing a namespace termination hang when using the helmchart.helm.cattle.io CRD.
I'm applying the following helm resource:

---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rabbitmq
  namespace: candio-helm-controller-issue
spec:
  chart: https://charts.bitnami.com/bitnami/rabbitmq-8.18.0.tgz
  # https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq/#installing-the-chart
  valuesContent: |-
    replicaCount: 1
    auth:
      username: rabbit
      password: password
    persistence:
      enabled: true
      accessMode: ReadWriteOnce
      ## If you change this value, you might have
      ## to adjust `rabbitmq.diskFreeLimit` as well.
      size: 8Gi
    service:
      managerPortEnabled: false
    metrics:
      enabled: true
    volumePermissions:
      enabled: false
    clustering:
      forceBoot: true

This file creates the resource and works perfectly fine, but the problem arises when trying to delete the namespace.

root@8:/home/broker# kubectl get pods -n candio-helm-controller-issue
NAME                             READY   STATUS      RESTARTS   AGE
helm-install-rabbitmq--1-sb88h   0/1     Completed   0          38s
rabbitmq-0                       1/1     Running     0          36s
root@dev-office-inference-8:/home/agot/broker# kubectl delete ns candio-helm-controller-issue
namespace "candio-helm-controller-issue" deleted
^C
root@8:/home/broker# kubectl get ns
NAME                           STATUS        AGE
broker                         Active        3d23h
candio-helm-controller-issue   Terminating   3m8s
root@8:/home/broker# kubectl api-resources --verbs=list --namespaced -o name \
>   | xargs -n 1 kubectl get --show-kind --ignore-not-found -n candio-helm-controller-issue
NAME                                AGE
helmchart.helm.cattle.io/rabbitmq   108s

K3s version

root@8:/home/broker# k3s --version
k3s version v1.22.2+k3s2 (3f5774b4)
go version go1.16.8

What should be the exact behavior?
Is there any documentation related to the CRD and the deletion process?

Any help here will be appreciated.

Helm controller in K3s repeatedly fails to update spec

In K3s v0.6.1 when using the following manifest:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rancher
  namespace: kube-system
spec:
  chart: rancher
  repo: https://releases.rancher.com/server-charts/stable/
  targetNamespace: cattle-system
  version: 2.2.3
  valuesContent: |
    hostname: "hostname"
    replicas: 1
    ingress:
      tls:
        source: rancher
E0628 01:04:44.904045   11305 controller.go:118] error syncing 'kube-system/rancher': handler helm-controller: failed to update kube-system/helm-install-rancher batch/v1, Kind=Job for helm-controller kube-system/rancher: Job.batch "helm-install-rancher" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"207bb58c-993e-11e9-a412-0242ac110002", "helmcharts.helm.cattle.io/chart":"rancher", "job-name":"helm-install-rancher"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"values", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc00bdc2940), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"helm", Image:"rancher/klipper-helm:v0.1.5", Command:[]string(nil), Args:[]string{"install", "--name", "rancher", "rancher", "--namespace", "cattle-system", "--repo", "https://releases.rancher.com/server-charts/stable/", "--version", "2.2.3"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"NAME", Value:"rancher", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"VERSION", Value:"2.2.3", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"REPO", Value:"https://releases.rancher.com/server-charts/stable/", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"VALUES_HASH", Value:"af557d270b396dbbd18244bb0940bbcccb99703c01a1566ef66f6d6afaa9bd13", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"NO_PROXY", Value:",10.42.0.0/16,10.43.0.0/16", ValueFrom:(*core.EnvVarSource)(nil)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"values", ReadOnly:false, MountPath:"/config", SubPath:"", 
MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc009505e38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"helm-rancher", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00e8fcaf0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}: field is immutable, requeuing

MountVolume.SetUp failed for volume "content" error if chart.Spec.ChartContent not set

Issue
If chart.Spec.ChartContent is empty (as it is for most charts, which come from a repo), an empty ConfigMap "chart-content-%s" is created, causing the helm-install job to try to mount this empty ConfigMap as a volume, leading to an error.

setContentConfigMap() checks for nil; however, a skeleton ConfigMap is returned by contentConfigMap() anyway, causing the check to pass and the volume mount to be added to the job.Spec.

Proposed Fix:
Check in setContentConfigMap() whether ConfigMap.Data is empty and skip adding the volume mount to the job.

func contentConfigMap(chart *helmv1.HelmChart) *core.ConfigMap {
	configMap := &core.ConfigMap{
		TypeMeta: meta.TypeMeta{
			APIVersion: "v1",
			Kind:       "ConfigMap",
		},
		ObjectMeta: meta.ObjectMeta{
			Name:      fmt.Sprintf("chart-content-%s", chart.Name),
			Namespace: chart.Namespace,
		},
		Data: map[string]string{},
	}

	if chart.Spec.ChartContent != "" {
		key := fmt.Sprintf("%s.tgz.base64", chart.Name)
		configMap.Data[key] = chart.Spec.ChartContent
	}

	return configMap
}
func setContentConfigMap(job *batch.Job, chart *helmv1.HelmChart) *core.ConfigMap {
	configMap := contentConfigMap(chart)
	if configMap == nil {
		return nil
	}

	job.Spec.Template.Spec.Volumes = append(job.Spec.Template.Spec.Volumes, []core.Volume{
		{
			Name: "content",

[feature] validate the output of the helm run against the cluster

The helm controller right now hashes the values and won't ever re-run helm, even if a resource originally created by the helm run (e.g. a service) is missing. For context, Flux v2 does this.

I would expect/like to see that, if I'm using this controller, it has additional logic to ensure the result of the helm chart is always applied, regardless of whether valuesContent has changed. This way I can depend on it being the source of truth. As it stands right now, once applied, if someone modifies a resource it created, it won't undo the modifications until the valuesContent or some other element of the HelmChart is modified to trigger a helm run.

Since this can be coupled with the k3s auto-deploy manifests, this would be a necessary change to make it a real GitOps source-of-truth-like model, even though we aren't employing the git side of things here.

This is related to a problem/feature with the auto-deploy manifests as well k3s-io/k3s#3711

Condition on subchart not respected

I'm trying to deploy the Sonarqube chart with this (partial) HelmChart:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: sonarqube
  namespace: kube-system
spec:
  chart: sonarqube
  repo: https://oteemo.github.io/charts
  targetNamespace: default
  valuesContent: |-
    database.type: h2
    postgresql.enabled: false
    mysql.enabled: false
    plugins:
      install:
        - https://github.com/mc1arke/sonarqube-community-branch-plugin/releases/download/1.1.1/sonarqube-community-branch-plugin-1.1.1.jar

But the postgresql subchart is deployed, even though it has a condition in requirements.yaml:

dependencies:
- name: postgresql
  version: 8.6.4
  repository: https://charts.bitnami.com/bitnami
  condition: postgresql.enabled

What am I doing wrong?

Switch to using K3s for CI instead of RKE

Once k3s-io/k3s#1445 is closed, we should switch CI over to using a build of K3s in Docker to test helm-controller, instead of using RKE1 as we currently do. This would provide a more directly comparable environment to how helm-controller is most frequently run.

Cluster Scoped CRD doesn't work

When attempting to use the recommended cluster-scoped CRD, nothing will deploy. The initial error is that the service account for the install job doesn't work. Looking at the logs I see the following error:

E1103 18:28:32.943432 1 controller.go:135] error syncing 'docker-registry': handler helm-controller: failed to create helm--docker-registry rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding for helm-controller docker-registry: ClusterRoleBinding.rbac.authorization.k8s.io "helm--docker-registry" is invalid: subjects[0].namespace: Required value, handler helm-controller: an empty namespace may not be set when a resource name is provided, requeuing

I believe it is attempting to use the namespace from the HelmChart resource, but because it is cluster scoped, no namespace exists.

This is running kubernetes 1.18.6

This is running v0.6.0 of the helm-controller.

[BUG] "pax_global_header" error due to helm v3.0.0

On k3os v0.10.0 I'm getting the following error:

+ helm_v3 install --set server.dev.enabled=true vault https://github.com/hashicorp/vault-helm/archive/v0.5.0.tar.gz
Error: chart illegally contains content outside the base directory: "pax_global_header"

Most likely fixed in v3.0.1 through helm/helm#7085

Create targetNamespace if it does not exist

Helm v2 defaults to creating the namespace if it doesn't exist when installing a chart.

Helm v3, on the other hand, does not do this, as it seems to be common praxis in the k8s ecosystem not to error out on a missing namespace. However, it seems as though helm v3 added a --create-namespace flag that could be used to create the namespace. It's available since 3.2.0 if I understood it correctly.

Would you accept a PR that uses that flag as the default?
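
For illustration, this is what the flag looks like with plain helm v3.2.0+ (release, repo, and chart names are placeholders):

# create the target namespace at install time if it does not exist yet
helm install my-release example-repo/example-chart \
  --namespace my-namespace \
  --create-namespace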

helm_v3 repo remove stable - Error: no repositories configured

I think this is related to k3s-io/klipper-helm#7.
Only this time there is no repo installed.

It's curious that it installed the chart the first time.
It failed after I updated it and now it keeps on failing.

I'm using

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-10T11:48:48Z", GoVersion:"go1.15.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4+k3s1", GitCommit:"2532c10faad43e2b6e728fdcc01662dc13d37764", GitTreeState:"clean", BuildDate:"2020-11-18T22:11:18Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
helm-install-cert-manager-n6c5n          0/1     CrashLoopBackOff   13         45m
helm-install-haproxy-ingress-d9rw5       0/1     CrashLoopBackOff   7          13m
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: haproxy-ingress
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/haproxy-ingress-0.11.0.tgz
  helmVersion: v3
  targetNamespace: ingress-controller
  valuesContent: |-
    controller:
      hostNetwork: true
      kind: DaemonSet
      config:
        - name: use-proxy-protocol
          value: true
 k logs -f helm-install-haproxy-ingress-d9rw5 
CHART=$(sed -e "s/%{KUBERNETES_API}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/g" <<< "${CHART}")
set +v -x
+ cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
+ update-ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ tiller --listen=127.0.0.1:44134 --storage=secret
+ helm_v2 init --skip-refresh --client-only
[main] 2020/11/24 21:21:04 Starting Tiller v2.16.8 (tls=false)
[main] 2020/11/24 21:21:04 GRPC listening on 127.0.0.1:44134
[main] 2020/11/24 21:21:04 Probes listening on :44135
[main] 2020/11/24 21:21:04 Storage driver is Secret
[main] 2020/11/24 21:21:04 Max history per release is 0
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
++ helm_v2 ls --all '^haproxy-ingress$' --output json
++ jq -r '.Releases | length'
[storage] 2020/11/24 21:21:04 listing all releases with filter
+ EXIST=
+ '[' '' == 1 ']'
+ '[' '' == v2 ']'
+ shopt -s nullglob
+ helm_content_decode
+ set -e
+ ENC_CHART_PATH=/chart/haproxy-ingress.tgz.base64
+ CHART_PATH=/haproxy-ingress.tgz
+ '[' '!' -f /chart/haproxy-ingress.tgz.base64 ']'
+ return
+ '[' install '!=' delete ']'
+ helm_repo_init
+ grep -q -e 'https\?://'
chart path is a url, skipping repo update
+ echo 'chart path is a url, skipping repo update'
+ helm_v3 repo remove stable
Error: no repositories configured
+ true
+ return
+ helm_update install --namespace ingress-controller
+ '[' helm_v3 == helm_v3 ']'
++ helm_v3 ls --all-namespaces --all -f '^haproxy-ingress$' --output json
++ jq -r '"\(.[0].app_version),\(.[0].status)"'
++ tr '[:upper:]' '[:lower:]'
+ LINE=v0.11,failed
++ echo v0.11,failed
++ cut -f1 -d,
+ INSTALLED_VERSION=v0.11
++ echo v0.11,failed
++ cut -f2 -d,
+ STATUS=failed
+ VALUES=
+ for VALUES_FILE in /config/*.yaml
+ VALUES=' --values /config/values-01_HelmChart.yaml'
+ '[' install = delete ']'
+ '[' -z v0.11 ']'
+ '[' failed = deployed ']'
+ '[' failed = failed ']'
+ '[' helm_v3 == helm_v3 ']'
+ helm_v3 uninstall haproxy-ingress
Error: uninstall: Release not loaded: haproxy-ingress: release: not found
k logs -f helm-install-cert-manager-n6c5n 
CHART=$(sed -e "s/%{KUBERNETES_API}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/g" <<< "${CHART}")
set +v -x
+ cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
+ update-ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ helm_v2 init --skip-refresh --client-only
+ tiller --listen=127.0.0.1:44134 --storage=secret
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
++ helm_v2 ls --all '^cert-manager$' --output json
++ jq -r '.Releases | length'
[main] 2020/11/24 21:30:08 Starting Tiller v2.16.8 (tls=false)
[main] 2020/11/24 21:30:08 GRPC listening on 127.0.0.1:44134
[main] 2020/11/24 21:30:08 Probes listening on :44135
[main] 2020/11/24 21:30:08 Storage driver is Secret
[main] 2020/11/24 21:30:08 Max history per release is 0
[storage] 2020/11/24 21:30:08 listing all releases with filter
+ EXIST=
+ '[' '' == 1 ']'
+ '[' '' == v2 ']'
+ shopt -s nullglob
+ helm_content_decode
+ set -e
+ ENC_CHART_PATH=/chart/cert-manager.tgz.base64
+ CHART_PATH=/cert-manager.tgz
+ '[' '!' -f /chart/cert-manager.tgz.base64 ']'
+ return
+ '[' install '!=' delete ']'
+ helm_repo_init
+ grep -q -e 'https\?://'
+ echo 'chart path is a url, skipping repo update'
+ helm_v3 repo remove stable
chart path is a url, skipping repo update
Error: no repositories configured
+ true
+ return
+ helm_update install --namespace cert-manager
+ '[' helm_v3 == helm_v3 ']'
++ helm_v3 ls --all-namespaces --all -f '^cert-manager$' --output json
++ jq -r '"\(.[0].app_version),\(.[0].status)"'
++ tr '[:upper:]' '[:lower:]'
+ LINE=v1.1.0,failed
++ echo v1.1.0,failed
++ cut -f1 -d,
+ INSTALLED_VERSION=v1.1.0
++ echo v1.1.0,failed
++ cut -f2 -d,
+ STATUS=failed
+ VALUES=
+ for VALUES_FILE in /config/*.yaml
+ VALUES=' --values /config/values-01_HelmChart.yaml'
+ '[' install = delete ']'
+ '[' -z v1.1.0 ']'
+ '[' failed = deployed ']'
+ '[' failed = failed ']'
+ '[' helm_v3 == helm_v3 ']'
+ helm_v3 uninstall cert-manager
Error: uninstall: Release not loaded: cert-manager: release: not found

ability to add private registries

Currently helm-controller supports the default registry and anything passed through the spec.

Is it possible to add support for private helm registries?

e.g. we currently use AWS S3 for the registry with an IAM role;
this requires the s3 plugin for helm: https://github.com/hypnoglow/helm-s3.git

Could this be done in a modular way so we can potentially pass in the klipper image as a value? https://github.com/rancher/helm-controller/blob/d5f5c830231110722f14d446d3b2038e5cdf1532/pkg/helm/controller.go#L40

RBAC errors while deploying helm chart

I tried to deploy the factorio chart to a cluster using the "cluster scope" manifest. I tried a few different helm charts, nearly all deployed fine, but I ran into trouble with stable/factorio.

E github.com/rancher/helm-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1.HelmChart: helmcharts.k3s.cattle.io is forbidden: User "system:serviceaccount:default:default" cannot list helmcharts.k3s.cattle.io at the cluster scope
E k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User "system:serviceaccount:default:default" cannot list serviceaccounts at the cluster scope
E k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list jobs.batch at the cluster scope

I didn't see an option to set the service account being used for deployments. While I could alter the default role used, I think it'd be better to allow defining the service account to be used.

Upgrading a helm deployment that failed during first deployment --> Error: cannot re-use a name that is still in use

This is related to longhorn/longhorn#2292 - where longhorn has a problem deploying successfully with the cis-1.5 profile active on RKE2.

In case the first helm chart deployment fails and an adjustment is made to the HelmChart, the "upgrade" is done via "install" instead of "upgrade --install".

+ helm_v3 install --namespace longhorn-system longhorn /longhorn.tgz --values /config/values-01_HelmChart.yaml
Error: cannot re-use a name that is still in use

I thought that this was fixed in #32, but this is RKE2 with a much newer image... so probably another issue?

Prevent execution of helm uninstall during deletion of helmchart object

We are moving helmchart / helm-controller based installations into fleet, and due to that we have to delete the existing helmchart objects.
Unfortunately the deletion of a helmchart object causes the deployment to be deleted too, which is not what we want. We want to keep the installed deployments, as the "uninstall" of the CRDs or longhorn will lead to data loss.

-> How can we prevent the finalizer on a helmchart object from being executed, causing the CRDs and longhorn to be uninstalled?

Multiline strings can not be set

Using multiline strings in settings like:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: linkerd2
  namespace: kube-system
spec:
  chart: linkerd2
  version: 2.7.1
  repo: https://helm.linkerd.io/stable
  set:
    global.identityTrustAnchorsPEM: |
      -----BEGIN CERTIFICATE-----
      MIIBljCCATugAwIBAgIQIrPA+fc+QTyBy6HYDQ53KzAKBggqhkjOPQQDAjApMScw
      JQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMjAwNTI1
      ...
      -----END CERTIFICATE-----

results in Error: bad flag syntax: -----END, because the string argument in the command contains spaces at the line breaks:

helm_v3 install --repo https://helm.linkerd.io/stable --version 2.7.1 --set-string global.identityTrustAnchorsPEM=-----BEGIN CERTIFICATE----- MIIBljCCATugAwIBAgIQIrPA+fc+QTyBy6HYDQ53KzAKBggqhkjOPQQDAjApMScw JQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMjAwNTI1...

Cannot delete charts

As the title says. I have several charts that just won't delete. kubectl hangs after saying a chart was deleted (but it is not).
No job is started to delete the chart.

fatal error: "sweep increased allocation count" when installing chart

When I tried to install this chart, I got the following error:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: prometheus-node-exporter
  namespace: kube-system
spec:
  chart: stable/prometheus-node-exporter
  version: 1.9.1
  targetNamespace: monitoring
  set:
    prometheus.monitor.enabled: "true"

Log from installation pod:

$ kubectl -n kube-system logs helm-install-prometheus-node-exporter-c96ql
CHART=$(sed -e "s/%{KUBERNETES_API}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/g" <<< "${CHART}")
set +v -x
+ cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
+ update-ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ helm_v2 init --skip-refresh --client-only
+ tiller --listen=127.0.0.1:44134 --storage=secret
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
++ jq -r '.Releases | length'
++ helm_v2 ls --all '^prometheus-node-exporter$' --output json
[main] 2020/05/15 21:02:45 Starting Tiller v2.12.3 (tls=false)
[main] 2020/05/15 21:02:45 GRPC listening on 127.0.0.1:44134
[main] 2020/05/15 21:02:45 Probes listening on :44135
[main] 2020/05/15 21:02:45 Storage driver is Secret
[main] 2020/05/15 21:02:45 Max history per release is 0
[storage] 2020/05/15 21:02:45 listing all releases with filter
+ EXIST=
+ '[' '' == 1 ']'
+ '[' '' == v2 ']'
+ helm_repo_init
+ grep -q -e 'https\?://'
+ '[' helm_v3 == helm_v3 ']'
+ [[ stable/prometheus-node-exporter == stable/* ]]
+ helm_v3 repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
+ helm_v3 repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
+ '[' -n '' ']'
+ helm_update install --namespace monitoring --version 1.9.1 --set prometheus.monitor.enabled=true
+ '[' helm_v3 == helm_v3 ']'
++ tr '[:upper:]' '[:lower:]'
++ jq -r '"\(.[0].app_version),\(.[0].status)"'
++ helm_v3 ls --all -f '^prometheus-node-exporter$' --output json
+ LINE=null,null
++ echo null,null
++ cut -f1 -d,
+ INSTALLED_VERSION=null
++ echo null,null
++ cut -f2 -d,
+ STATUS=null
+ '[' -e /config/values.yaml ']'
+ '[' install = delete ']'
+ '[' -z null ']'
+ '[' -z 1.9.1 ']'
+ '[' null = 1.9.1 ']'
+ '[' null = failed ']'
+ '[' null = deleted ']'
+ helm_v3 install --namespace monitoring --version 1.9.1 --set prometheus.monitor.enabled=true prometheus-node-exporter stable/prometheus-node-exporter
runtime: nelems=256 nalloc=13 previous allocCount=12 nfreed=65535
fatal error: sweep increased allocation count

goroutine 3 [running]:
runtime.throw(0x1946414, 0x20)
	/usr/local/go/src/runtime/panic.go:774 +0x72 fp=0xc00005d660 sp=0xc00005d630 pc=0x42e652
runtime.(*mspan).sweep(0x7f3bfe8a4108, 0xc00007a000, 0x455900)
	/usr/local/go/src/runtime/mgcsweep.go:328 +0x8c6 fp=0xc00005d740 sp=0xc00005d660 pc=0x4234f6
runtime.sweepone(0x19f15e8)
	/usr/local/go/src/runtime/mgcsweep.go:136 +0x285 fp=0xc00005d7a8 sp=0xc00005d740 pc=0x4229c5
runtime.bgsweep(0xc00007a000)
	/usr/local/go/src/runtime/mgcsweep.go:73 +0xba fp=0xc00005d7d8 sp=0xc00005d7a8 pc=0x42268a
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00005d7e0 sp=0xc00005d7d8 pc=0x45b9d1
created by runtime.gcenable
	/usr/local/go/src/runtime/mgc.go:210 +0x5c

goroutine 1 [runnable]:
reflect.Value.assignTo(0x16fb260, 0xc003a394b0, 0x194, 0x193a968, 0x19, 0x16fb260, 0x0, 0x16fb260, 0xc003a39480, 0x194)
	/usr/local/go/src/reflect/value.go:2370 +0x438
reflect.Value.SetMapIndex(0x16fcfa0, 0xc000241050, 0x15, 0x16fb260, 0xc003a39480, 0x194, 0x16fb260, 0xc003a394b0, 0x194)
	/usr/local/go/src/reflect/value.go:1672 +0x1dc
gopkg.in/yaml%2ev2.(*decoder).setMapIndex(0xc0000b05a0, 0xc004b76af0, 0x16fcfa0, 0xc000241050, 0x15, 0x16fb260, 0xc003a39480, 0x194, 0x16fb260, 0xc003a394b0, ...)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:686 +0x271
gopkg.in/yaml%2ev2.(*decoder).mapping(0xc0000b05a0, 0xc004b75e30, 0x16fb260, 0xc003a38fd0, 0x194, 0x16fb260)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:673 +0x54b
gopkg.in/yaml%2ev2.(*decoder).unmarshal(0xc0000b05a0, 0xc004b75e30, 0x16fb260, 0xc003a38fd0, 0x194, 0x194)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:368 +0x1af
gopkg.in/yaml%2ev2.(*decoder).sequence(0xc0000b05a0, 0xc004b1e770, 0x16fb260, 0xc00528e7d0, 0x194, 0x16fb260)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:605 +0x25c
gopkg.in/yaml%2ev2.(*decoder).unmarshal(0xc0000b05a0, 0xc004b1e770, 0x16fb260, 0xc00528e7d0, 0x194, 0x194)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:370 +0x183
gopkg.in/yaml%2ev2.(*decoder).mapping(0xc0000b05a0, 0xc0004e8310, 0x16fb260, 0xc005197ed0, 0x194, 0x16fb260)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:672 +0x48d
gopkg.in/yaml%2ev2.(*decoder).unmarshal(0xc0000b05a0, 0xc0004e8310, 0x16fb260, 0xc005197ed0, 0x194, 0x194)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:368 +0x1af
gopkg.in/yaml%2ev2.(*decoder).mapping(0xc0000b05a0, 0xc0004e8070, 0x16fb260, 0xc0004fa010, 0x194, 0x16fb260)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:672 +0x48d
gopkg.in/yaml%2ev2.(*decoder).unmarshal(0xc0000b05a0, 0xc0004e8070, 0x16fb260, 0xc0004fa010, 0x194, 0x0)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:368 +0x1af
gopkg.in/yaml%2ev2.(*decoder).document(0xc0000b05a0, 0xc0004e8000, 0x16fb260, 0xc0004fa010, 0x194, 0x978870)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:380 +0x7c
gopkg.in/yaml%2ev2.(*decoder).unmarshal(0xc0000b05a0, 0xc0004e8000, 0x16fb260, 0xc0004fa010, 0x194, 0x194)
	/go/pkg/mod/gopkg.in/[email protected]/decode.go:356 +0x247
gopkg.in/yaml%2ev2.unmarshal(0xc000614000, 0x804205, 0x804405, 0x16541e0, 0xc0004fa010, 0x16fb200, 0x0, 0x0)
	/go/pkg/mod/gopkg.in/[email protected]/yaml.go:148 +0x35c
gopkg.in/yaml%2ev2.Unmarshal(0xc000614000, 0x804205, 0x804405, 0x16541e0, 0xc0004fa010, 0x40bf56, 0xc0000f6050)
	/go/pkg/mod/gopkg.in/[email protected]/yaml.go:81 +0x58
sigs.k8s.io/yaml.yamlToJSON(0xc000614000, 0x804205, 0x804405, 0xc00172b568, 0x19f00e8, 0x0, 0x1b7f040, 0x281fd20, 0x7f3c00d9c6d0, 0x0)
	/go/pkg/mod/sigs.k8s.io/[email protected]/yaml.go:124 +0x73
sigs.k8s.io/yaml.yamlUnmarshal(0xc000614000, 0x804205, 0x804405, 0x17c0e40, 0xc0000f6050, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/pkg/mod/sigs.k8s.io/[email protected]/yaml.go:53 +0x119
sigs.k8s.io/yaml.Unmarshal(...)
	/go/pkg/mod/sigs.k8s.io/[email protected]/yaml.go:36
helm.sh/helm/v3/pkg/repo.loadIndex(0xc000614000, 0x804205, 0x804405, 0x804205, 0x804405, 0x0)
	/home/circleci/helm.sh/helm/pkg/repo/index.go:284 +0x9b
helm.sh/helm/v3/pkg/repo.LoadIndexFile(0xc0000419b0, 0x2e, 0x2, 0xc0000419b0, 0x2e)
	/home/circleci/helm.sh/helm/pkg/repo/index.go:102 +0x87
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).ResolveChartVersion(0xc00172bb28, 0x7ffc57a7463b, 0x1f, 0x7ffc57a745f6, 0x5, 0x1c, 0xc00046e518, 0x0)
	/home/circleci/helm.sh/helm/pkg/downloader/chart_downloader.go:219 +0x329
helm.sh/helm/v3/pkg/downloader.(*ChartDownloader).DownloadTo(0xc00172bb28, 0x7ffc57a7463b, 0x1f, 0x7ffc57a745f6, 0x5, 0xc0004b0d40, 0x1c, 0x0, 0x4a39c5, 0x191bc31, ...)
	/home/circleci/helm.sh/helm/pkg/downloader/chart_downloader.go:87 +0x74
helm.sh/helm/v3/pkg/action.(*ChartPathOptions).LocateChart(0xc00016b7a8, 0x7ffc57a7463b, 0x1f, 0xc000403720, 0x7ffc57a74622, 0x18, 0x7ffc57a7463b, 0x1f)
	/home/circleci/helm.sh/helm/pkg/action/install.go:669 +0x428
main.runInstall(0xc000459d80, 0x2, 0x8, 0xc00016b7a0, 0xc00023a960, 0x1b7f060, 0xc0000ae008, 0x191b839, 0x4, 0x5de1d0)
	/home/circleci/helm.sh/helm/cmd/helm/install.go:160 +0x16e
main.newInstallCmd.func1(0xc0000bea00, 0xc000459d80, 0x2, 0x8, 0x0, 0x0)
	/home/circleci/helm.sh/helm/cmd/helm/install.go:115 +0x83
github.com/spf13/cobra.(*Command).execute(0xc0000bea00, 0xc000459b80, 0x8, 0x8, 0xc0000bea00, 0xc000459b80)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc00052e000, 0x1bbd060, 0xc000464000, 0x7ffc57a745e1)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main()
	/home/circleci/helm.sh/helm/cmd/helm/helm.go:75 +0x204

goroutine 18 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x281ed40)
	/go/pkg/mod/k8s.io/[email protected]/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/pkg/mod/k8s.io/[email protected]/klog.go:411 +0xd6

goroutine 9 [syscall]:
os/signal.signal_recv(0x45b9d6)
	/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41

A brief search suggests this is a race condition.

recent uninitialized toleration preventing rke2 from coming up fully

Looks like bad syntax for the Equals operator:

E0816 07:16:58.188461      24 controller.go:135] error syncing 'kube-system/rke2-kube-proxy': handler helm-controller: failed to create kube-system/helm-install-rke2-kube-proxy batch/v1, Kind=Job for helm-controller kube-system/rke2-kube-proxy: Job.batch "helm-install-rke2-kube-proxy" is invalid: spec.template.spec.tolerations[1].operator: Unsupported value: "=": supported values: "Equal", "Exists", requeuing
E0816 07:16:59.296654      24 controller.go:135] error syncing 'kube-system/rke2-coredns': handler helm-controller: failed to create kube-system/helm-install-rke2-coredns batch/v1, Kind=Job for helm-controller kube-system/rke2-coredns: Job.batch "helm-install-rke2-coredns" is invalid: spec.template.spec.tolerations[1].operator: Unsupported value: "=": supported values: "Equal", "Exists", requeuing
E0816 07:16:59.678604      24 controller.go:135] error syncing 'kube-system/rke2-canal': handler helm-controller: failed to create kube-system/helm-install-rke2-canal batch/v1, Kind=Job for helm-controller kube-system/rke2-canal: Job.batch "helm-install-rke2-canal" is invalid: spec.template.spec.tolerations[1].operator: Unsupported value: "=": supported values: "Equal", "Exists", requeuing

--wait support

Is it possible to force the HelmChart CRD to wait for the helm chart to actually be installed? That way I can kubectl apply -f crd.yml and know that when the command is done, the helm chart is fully installed.
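
As a hedged workaround (not a controller feature), you can wait on the install job the controller creates, which is named helm-install-<chart name> in the HelmChart's namespace; the chart name and namespace below are examples:

kubectl apply -f crd.yml
# block until the controller's install job for the chart completes
kubectl wait --for=condition=complete --timeout=10m job/helm-install-traefik -n kube-system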
