stakater / reloader

A Kubernetes controller to watch changes in ConfigMaps and Secrets and do rolling upgrades on Pods with their associated Deployments, StatefulSets, DaemonSets and DeploymentConfigs – [✩Star] if you're using it!

Home Page: https://docs.stakater.com/reloader/

License: Apache License 2.0

Makefile 0.86% Go 98.32% Dockerfile 0.29% Mustache 0.53%
kubernetes openshift configmap secrets pods deployments daemonset statefulsets k8s watch-changes

reloader's People

Contributors

ahmedwaleedmalik, ahsan-storm, alexanderldavis, alexconlin, aliartiza75, avihuly, bnallapeta, ctrought, daniel-butler-irl, deschmih, faizanahmad055, gciria, hanzala1234, hussnain612, itaispiegel, jkroepke, kahootali, karl-johan-grahn, katainaka0503, muneebaijaz, patrickspies, rasheedamir, renovate[bot], sheryarbutt, stakater-user, talha0324, tete17, usamaahmadkhan, vladlosev, waseem-h


reloader's Issues

Only reload pods on specific environment variables

Hi :) Been using reloader for a while and it works great, thanks.

Is there any way to set Reloader to watch specific environment variables?

Our use case is that we have one ConfigMap shared by a number of different pods, and we don't want Reloader reloading all of them when a value changes in the ConfigMap that isn't related to a given pod.

Currently in Helm we're using envFrom: configMapRef: x, rather than specifying exact environment variables.
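For reference, a minimal sketch of the pattern described above; the container and ConfigMap names are placeholders:

containers:
  - name: app                   # hypothetical container name
    envFrom:
      - configMapRef:
          name: shared-config   # hypothetical shared ConfigMap consumed by several pods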

Support for openshift deploymentconfig

First of all, thanks for the great documentation.
I have a question:
Can we extend support to OpenShift DeploymentConfigs?
It would be great if this worked well for DeploymentConfigs.
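If support were added, one would presumably annotate a DeploymentConfig the same way Deployments are annotated today; a hypothetical sketch, not something the controller is confirmed to support here:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app                            # hypothetical name
  annotations:
    reloader.stakater.com/auto: "true"    # same annotation used for Deployments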

crashing with "rpc error: code = Unknown desc = Error: No such container:"

I deployed Reloader in my minikube and I'm seeing:

time="2018-12-17T07:51:10Z" level=info msg="Starting Controller"
time="2018-12-17T07:51:10Z" level=info msg="Starting Controller"
time="2018-12-17T07:52:35Z" level=info msg="Changes detected in myconfig of type 'CONFIGMAP' in namespace: default"
time="2018-12-17T07:52:35Z" level=info msg="Updated reloader of type Deployment in namespace: default"
rpc error: code = Unknown desc = Error: No such container: b92e64652c9555ec1cf70db9e8c1e0816269723798f297acee64bae2b64946a48c8590bf0681

and then it crashes.

Add support for envFrom

Currently Reloader only supports env vars referencing a ConfigMap like this:

- name: SPECIAL_TYPE_KEY
  valueFrom:
    configMapKeyRef:
      name: special-config
      key: SPECIAL_TYPE

and not like this

envFrom:
  - configMapRef:
      name: special-config

Add support for this as soon as possible

wrong permissions on deployment config

Hi,

I deployed Reloader in OpenShift with the Helm chart. It looks like it detects that it is running on OpenShift, because it tries to list DeploymentConfigs, but it can't because it doesn't have permission to do so.

time="2019-08-03T11:08:36Z" level=error msg="Failed to list deploymentConfigs deploymentconfigs.apps.openshift.io is forbidden: User \"system:serviceaccount:reloader:reloader\" cannot list resource \"deploymentconfigs\" in API group \"apps.openshift.io\" in the namespace \"openshift-kube-scheduler\""
time="2019-08-03T11:08:36Z" level=error msg="Failed to list deploymentConfigs deploymentconfigs.apps.openshift.io is forbidden: User \"system:serviceaccount:reloader:reloader\" cannot list resource \"deploymentconfigs\" in API group \"apps.openshift.io\" in the namespace \"openshift-kube-apiserver\""
time="2019-08-03T11:08:36Z" level=error msg="Failed to list deploymentConfigs deploymentconfigs.apps.openshift.io is forbidden: User \"system:serviceaccount:reloader:reloader\" cannot list resource \"deploymentconfigs\" in API group \"apps.openshift.io\" in the namespace \"openshift-kube-apiserver\""

Disable monitoring changes to Secrets

I want to use Reloader, but don't want it to monitor Secrets, just ConfigMaps (as I don't want to give it access to read Secrets in my cluster). Seems like a pretty straightforward change - happy to open a PR.
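As an illustration of the ConfigMap-only idea, a sketch of RBAC rules with the secrets entries dropped; adapted from the chart's ClusterRole shown further down this page, not the actual chart output:

rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch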

Reloader With SealedSecrets

Hi, would it be possible to make Reloader restart pods when a SealedSecret changes? As of now it works great with ConfigMap and Secret changes, but we want it to restart pods on an update of a SealedSecret as well.

Using reloader manifests with kustomize build prints error log.

We tried to install Reloader via Kustomize.
Then kustomize build . printed the following log:

2019/11/22 22:42:48 nil value at `env.valueFrom.configMapKeyRef.name` ignored in mutation attempt
2019/11/22 22:42:48 nil value at `env.valueFrom.secretKeyRef.name` ignored in mutation attempt

kustomize build . succeeded and the result is correct, so maybe these logs are just warnings.
Are these logs caused by Reloader's manifests? How can we suppress them?

Our settings

reloader/kustomization.yml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - github.com/stakater/Reloader//deployments/kubernetes?ref=v0.0.49

commonAnnotations:
  reloader.stakater.com/auto: "true"

namespace: ops

usage

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - ../reloader 

Reloader broken with Kubernetes 1.16

After deploying Reloader on an on-prem Kubernetes 1.16 cluster, I've seen many logs like:
Failed to list [kind] the server could not find the requested resource

I've reproduced the issue on a simpler and more controlled minikube setup.

Using minikube 1.3.1 (with Kubernetes 1.15), it works fine.
Using minikube 1.4.0 (with Kubernetes 1.16), it produces the same error.

Manifests are available there to reproduce if needed.

I assume it's linked to deprecations in Kubernetes 1.16: see https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

DaemonSet, Deployment, StatefulSet, and ReplicaSet (in the extensions/v1beta1 and apps/v1beta2 API groups)
Migrate to use the apps/v1 API, available since v1.9. Existing persisted data can be retrieved/updated via the apps/v1 API.

And I guess to fix it you would have to (at least) change clients.KubernetesClient.ExtensionsV1beta1().[Kind] in https://github.com/stakater/Reloader/blob/master/internal/pkg/callbacks/rolling_upgrade.go

Direct installation into k8s not working (since 0.39)

Hi,

I tried to install per instructions today (v0.40):

kubectl  apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

but this results in an error:

Error from server (Invalid): error when creating "https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml": Deployment.apps "RELEASE-NAME-reloader" is invalid: [metadata.name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), spec.template.spec.containers[0].name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name',  or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'), spec.template.spec.serviceAccountName: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')]
    default: Error from server (Invalid): error when creating "https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml": ClusterRoleBinding.rbac.authorization.k8s.io "RELEASE-NAME-reloader-role-binding" is invalid: subjects[0].name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Error from server (Invalid): error when creating "https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml": ServiceAccount "RELEASE-NAME-reloader" is invalid: metadata.name: Invalid value: "RELEASE-NAME-reloader": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

I think this is coming from changes after the 0.38 release, when names in reloader.yaml were changed from reloader to RELEASE-NAME-reloader.

Helm - broken chart and multiple issues

Hi,

The current helm chart has multiple issues:

  • the reloader level in the values file completely breaks backwards compatibility
  • some resources use reloader-name instead of reloader-fullname
  • missing nodeSelector on the deployment
  • values specify a default name for the service account instead of leaving it empty, to fall back on reloader-fullname

As a last note, it would be great if the reloader chart would be moved to Helm Hub, in order to expedite reviews and updates.

Log is printing detected changes of ConfigMaps that are not supposed to be monitored

The log is constantly printing 'Changes detected ...' for a ConfigMap whose changes I don't want monitored.

I have only one Deployment with the relevant annotation, so Reloader should only monitor the changes of its ConfigMap.
But the Reloader log is printing detected changes for a ConfigMap that is constantly changing, even though I don't want its changes monitored by Reloader.

time="2019-04-24T14:35:31Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:35:41Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:35:51Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:01Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:11Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:21Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:32Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:42Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:36:52Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:03Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:13Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:23Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:33Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:43Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:37:54Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:04Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:14Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:24Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:34Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:45Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:38:55Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:05Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:15Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:25Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:36Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:46Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:39:56Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:06Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:17Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:27Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:37Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:47Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:40:57Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:07Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:18Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:28Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:38Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:48Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:41:59Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:09Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:19Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:29Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:39Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:42:50Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:00Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:10Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:20Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:31Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:41Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:43:51Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:01Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:11Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:22Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:32Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:42Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:44:52Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:03Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:13Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:23Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:34Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:44Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:45:54Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:04Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:15Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:25Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:35Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:45Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:46:55Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:06Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:16Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:26Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:36Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:46Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:47:57Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:07Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:17Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:27Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:38Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:48Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:48:58Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:08Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:18Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:29Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:39Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:49Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:49:59Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:10Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:20Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:30Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:40Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:50:50Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:01Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:11Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:21Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:31Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:41Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:51:52Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:02Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:12Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:22Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:33Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:43Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:52:53Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"
time="2019-04-24T14:53:03Z" level=info msg="Changes detected in 'cluster-autoscaler-status' of type 'CONFIGMAP' in namespace 'kube-system'"

With that, it's difficult for me to see the detected changes for the ConfigMap that I actually want to be monitored.

Race condition for kube-system namespace

Describe the bug
Logs are full of changes observed in the kube-system namespace; on cluster deployments it's cluster-autoscaler-status in this case. From what I observed this adds quite a few GB per month just in logs, and it also slightly delays updates to the resources we actually want to watch.

To Reproduce
Deploy Reloader to Google Kubernetes Engine

Expected behavior
I think Reloader should not watch certain namespaces; there should be an option to ignore them. I can work on it, I just need your input. :)


Additional context

Deployed on managed Kubernetes cluster.

Coalescing changes in multiple configmaps

In our setup we have many services modelled as deployments, and each of them consumes settings from multiple configmaps and secrets (typically one for the "bigger picture" settings, and then one more specialized for the service itself).

We also roll out changes to the Kubernetes resources through our CI setup (Travis + https://github.com/Collaborne/kubernetes-bootstrap).

Together this leads to situations where multiple ConfigMaps update at the same time, and Reloader seems to trigger multiple redeployments. As an idea: it could be nice to collect the updates for a deployment over a short window and only trigger a single redeployment.

(For a somewhat unrelated reason I actually implemented our own reloader now with that built-in and moved away from reloader, so this is merely a "might be interesting for you guys to think about" feedback issue.)

Errors in pod logs on startup - failed to list deployments

I installed the stable release with default settings on a Kubernetes cluster using Helm. Annotated my Deployments as per the instructions but I'm not seeing any rolling updates when I change a ConfigMap. When I checked the reloader pod logs I found this:

time="2019-12-13T15:46:02Z" level=info msg="Environment:Kubernetes"
time="2019-12-13T15:46:02Z" level=info msg="Starting Reloader"
time="2019-12-13T15:46:02Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2019-12-13T15:46:02Z" level=info msg="Starting Controller to watch resource type: secrets"
time="2019-12-13T15:46:02Z" level=info msg="Starting Controller to watch resource type: configMaps"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list deployments the server could not find the requested resource"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list daemonSets the server could not find the requested resource"
time="2019-12-13T15:46:02Z" level=error msg="Failed to list statefulSets the server could not find the requested resource"

Then the last 3 lines just repeat periodically.

I'm wondering if it's an RBAC issue, but the ClusterRole and ClusterRoleBinding seem to be there.

Any help would be greatly appreciated.

Reloader is not detecting any changes in config maps.

Hi,
This application is a perfect solution to the nightmare. Thank you for building this.

But the thing is, I am not able to make it work.
I deployed using the Helm chart with --set reloader.watchGlobally=false --namespace dev and without any args. I have Istio enabled in the namespace, so I had to add sidecar.istio.io/inject: "false", otherwise it was giving a 172.17.0.1 connection refused error.

But for whatever reason, Reloader is not detecting any changes in ConfigMaps.

reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Environment: Kubernetes"
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Starting Reloader"
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Starting Controller to watch resource type: configMaps"
reloader-reloader-59f546bbbc-lvb7k reloader-reloader time="2019-12-29T10:58:19Z" level=info msg="Starting Controller to watch resource type: secrets"

Add support for reloading application running in a container instead of rolling update

Suggested by Marton Szucs in the Slack channel:

For applications that support updating configuration at runtime, like pgbouncer or nginx, it would be nice to just reload the application inside the running container instead of restarting the whole Pod via a rolling update.

This is how I do it manually:

  • Change configmap/secret and apply it to the cluster
  • Exec into the container that has mounted the configmap/secret as a file.
  • Check if the mounted file is updated inside the container
  • Reload the application with kill -SIGHUP 1 or nginx -s reload or some other application-specific command.

Questions:

  • Will it be safe?
  • Look into how the nginx ingress controller does it right now. It never does a rolling update but still picks up new configs!

Prometheus endpoint to monitor Reloader's metrics

To monitor Reloader, it would be good if Reloader had a Prometheus endpoint so metrics could be fetched.

In my use case, the following metrics are wanted:

  • How many times reloader tried to reload Deployments (or other resources)
  • How many times reloader succeeded in reloading Deployments (or other resources)
  • How many times reloader failed to reload Deployments (or other resources)

Reloader losing connection / kubernetes api connection refused / http2 GOAWAY

Hi there!
First up, thanks for this great tool!

We are experiencing some issues with Reloader over the long run. We used the default config:

---
# Source: reloader/templates/role.yaml


---
# Source: reloader/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
    group: com.stakater.platform
    provider: stakater
    version: v0.0.38
    
  name: reloader
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: reloader
      release: "RELEASE-NAME"
  template:
    metadata:
      labels:
        app: reloader
        chart: "reloader-v0.0.38"
        release: "RELEASE-NAME"
        heritage: "Tiller"
        group: com.stakater.platform
        provider: stakater
        version: v0.0.38
        
    spec:
      containers:
      - env:
        image: "stakater/reloader:v0.0.38"
        imagePullPolicy: IfNotPresent
        name: reloader
        args:
      serviceAccountName: reloader

---
# Source: reloader/templates/clusterrole.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
  name: reloader-role
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - deployments
      - daemonsets
    verbs:
      - list
      - get
      - update
      - patch

---
# Source: reloader/templates/rolebinding.yaml


---
# Source: reloader/templates/clusterrolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
  name: reloader-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-role
subjects:
  - kind: ServiceAccount
    name: reloader
    namespace: default

---
# Source: reloader/templates/serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: reloader
    chart: "reloader-v0.0.38"
    release: "RELEASE-NAME"
    heritage: "Tiller"
  name: reloader

This works fine for the first few days. However, after a few days, changes to ConfigMaps and Secrets aren't detected and we see the following in the logs:

→ kubectl logs -f reloader-795497cbbc-msg47
time="2019-10-09T14:10:23Z" level=info msg="Environment: Kubernetes"
time="2019-10-09T14:10:23Z" level=info msg="Starting Reloader"
time="2019-10-09T14:10:23Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2019-10-09T14:10:23Z" level=info msg="Starting Controller to watch resource type: configMaps"
time="2019-10-09T14:10:23Z" level=info msg="Starting Controller to watch resource type: secrets"
E1017 03:54:22.864403       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=6329, ErrCode=NO_ERROR, debug=""
E1017 03:54:22.883831       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=6329, ErrCode=NO_ERROR, debug=""
E1017 03:54:22.939076       1 reflector.go:322] github.com/stakater/Reloader/internal/pkg/controller/controller.go:77: Failed to watch *v1.ConfigMap: Get https://10.11.32.1:443/api/v1/configmaps?resourceVersion=33794470&timeoutSeconds=324&watch=true: dial tcp 10.11.32.1:443: connect: connection refused

The last entry keeps repeating.
I checked, we can reach the Kubernetes API at https://10.11.32.1:443 from within the container via curl.

Any clues what that's about?

Cheers from Hamburg

Support for ~~openshift~~ init containers?

Is there any way currently to have this work with OpenShift's DeploymentConfigs?


Edit: I switched over to a regular deployment and am still having issues getting Reloader to see it.

I'm curious if you see anything in the deployment below that would prevent it from being picked up.

oc describe deployment manifest-config-map
Name:                   manifest-config-map
Namespace:              nx-lowest
CreationTimestamp:      Fri, 15 Feb 2019 18:56:09 -0800
Labels:                 app=review
                        name=manifest-config-map
Annotations:            configmap.reloader.stakater.com/reload=content-manifest
                        deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"configmap.reloader.stakater.com/reload":"content-manifest"},"labels":{...
Selector:               name=manifest-config-map
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  name=manifest-config-map
  Init Containers:
   process-index:
    Image:  busybox
    Port:   <none>
    Command:
      sh
      -c
      ...  

    Environment:  <none>
    Mounts:
      /content-manifest.json from content-manifest-volume (rw)
      /original/index.html from nginx-config-volume (rw)
      /processed from processed-index-volume (rw)
  Containers:
   manifest-config-map:
    Image:        quay.sys.com/digital/nx:lowest-manifest-config-map-latest
    Port:         8443/TCP
    Liveness:     http-get http://:8443/healthcheck delay=0s timeout=1s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8443/web/secure/consumer delay=10s timeout=2s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/nginx/nginx.conf from nginx-config-volume (rw)
      /usr/share/nginx/html/index.html from processed-index-volume (rw)
  Volumes:
   nginx-config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      manifest-config-map-nginx-conf
    Optional:  false
   content-manifest-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      content-manifest
    Optional:  false
   processed-index-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  manifest-config-map-7dbc6fcc4c (3/3 replicas created)
NewReplicaSet:   <none>
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  11m   deployment-controller  Scaled up replica set manifest-config-map-7dbc6fcc4c to 3

Check for pod annotations too

Hi

I often run into trouble because I use charts that I would like to automatically reload, but they almost never allow configuring the annotations of the Deployment / StatefulSet. However, they almost always allow editing the annotations of the pod. Thus it would be easier if Reloader also checked the annotations of the pod (when not found on the Deployment / StatefulSet), as sketched below.

Do you see any objection to such a feature?
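To illustrate the request, a hypothetical sketch of where the annotation would sit if pod template annotations were honoured; this is the requested behaviour, not something the controller is documented to support here:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        reloader.stakater.com/auto: "true"  # requested: also read from the pod template when absent on the workload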

Create a doc which compares Reloader vs. k8s-trigger-controller

Create a separate README in the doc directory of the repo which talks about the differences between k8s-trigger-controller & ConfigmapController, and why we created Reloader:

  • only support for deployments
  • what else?
  • we have a more efficient way to do hash calculation?

Reloader overrides my dnsConfig

Helm chart version: 1.1.3

I have set up dnsConfig in my Deployment to override the ndots parameter. But when Reloader rolls out a new revision, the dnsConfig setting gets lost.

My original deployment manifest - kgdepoyaml pushowl-backend-gunicorn

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: env-staging
    deployment.kubernetes.io/revision: "6"
  creationTimestamp: "2019-10-14T09:13:14Z"
  generation: 73
  labels:
    app: gunicorn
    env: staging
    release: pushowl-backend
  name: pushowl-backend-gunicorn
  namespace: pushowl-backend
  resourceVersion: "5683180"
  selfLink: /apis/extensions/v1beta1/namespaces/pushowl-backend/deployments/pushowl-backend-gunicorn
  uid: d9feff65-ee62-11e9-8b9b-02b54c8e79ec
spec:
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: gunicorn
      env: staging
      release: pushowl-backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/path: /
        prometheus.io/port: "9106"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: gunicorn
        env: staging
        release: pushowl-backend
    spec:
      containers:
      - command:
        - sh
        - bin/start_gunicorn.sh
        envFrom:
        - configMapRef:
            name: env-default
        - configMapRef:
            name: env-staging
        image: 684417159526.dkr.ecr.us-east-1.amazonaws.com/pushowl-backend:13100634024a9e92b8a014ec15d46bd5bfe575c7
        imagePullPolicy: IfNotPresent
        name: pushowl-backend-gunicorn
        ports:
        - containerPort: 8000
          name: gunicorn
          protocol: TCP
        - containerPort: 9106
          name: prometheus
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-multiproc-dir
      dnsConfig:
        options:
        - name: ndots
          value: "1"
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: prometheus-multiproc-dir
status:
  availableReplicas: 4
  conditions:
  - lastTransitionTime: "2019-10-15T05:53:58Z"
    lastUpdateTime: "2019-10-15T05:53:58Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-10-14T09:13:14Z"
    lastUpdateTime: "2019-10-15T05:55:05Z"
    message: ReplicaSet "pushowl-backend-gunicorn-75b494bb47" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 73
  readyReplicas: 4
  replicas: 4
  updatedReplicas: 4

Then I change some value in my ConfigMap: kubectl edit configmap/env-staging

My new deployment then becomes - kgdepoyaml pushowl-backend-gunicorn

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: env-staging
    deployment.kubernetes.io/revision: "7"
  creationTimestamp: "2019-10-14T09:13:14Z"
  generation: 75
  labels:
    app: gunicorn
    env: staging
    release: pushowl-backend
  name: pushowl-backend-gunicorn
  namespace: pushowl-backend
  resourceVersion: "5684388"
  selfLink: /apis/extensions/v1beta1/namespaces/pushowl-backend/deployments/pushowl-backend-gunicorn
  uid: d9feff65-ee62-11e9-8b9b-02b54c8e79ec
spec:
  progressDeadlineSeconds: 600
  replicas: 7
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: gunicorn
      env: staging
      release: pushowl-backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/path: /
        prometheus.io/port: "9106"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: gunicorn
        env: staging
        release: pushowl-backend
    spec:
      containers:
      - command:
        - sh
        - bin/start_gunicorn.sh
        env:
        - name: STAKATER_ENV_STAGING_CONFIGMAP
          value: 45a547270324d5feded44fe0938dfbdfbbf6250e
        envFrom:
        - configMapRef:
            name: env-default
        - configMapRef:
            name: env-staging
        image: 684417159526.dkr.ecr.us-east-1.amazonaws.com/pushowl-backend:13100634024a9e92b8a014ec15d46bd5bfe575c7
        imagePullPolicy: IfNotPresent
        name: pushowl-backend-gunicorn
        ports:
        - containerPort: 8000
          name: gunicorn
          protocol: TCP
        - containerPort: 9106
          name: prometheus
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-multiproc-dir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: prometheus-multiproc-dir
status:
  availableReplicas: 4
  conditions:
  - lastTransitionTime: "2019-10-14T09:13:14Z"
    lastUpdateTime: "2019-10-15T05:59:20Z"
    message: ReplicaSet "pushowl-backend-gunicorn-66cc987457" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2019-10-15T05:59:23Z"
    lastUpdateTime: "2019-10-15T05:59:23Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  observedGeneration: 75
  readyReplicas: 4
  replicas: 7
  unavailableReplicas: 3
  updatedReplicas: 7

Please note that the dnsConfig section is gone in the new Deployment.

Add HA support in Reloader

Is it possible to have two replicas running at the same time? I mean, are they able to coordinate so that, when a ConfigMap/Secret changes, only one of them takes care of reloading the associated Deployment/StatefulSet/etc.?

The use case is being able to deploy reloader with HA guarantees in a K8S cluster.

Thanks. This project is really useful.

Remove obsolete(?) file build/package/reloader

The file was added in release 0.0.1 (067f09b), and it does not seem to be used anywhere. At least I was able to build the docker images and deploy to a local kubernetes cluster even though I deleted the file.

The presence of a binary ELF file in the repo freaks out our security people to no end.

[SUPPORT] reloader not working

Hello, I need your help.
I followed the documentation but it does not work.
I have a Deployment with ConfigMaps, and the Reloader container is also running.
When the ConfigMap is updated (and the change appears in the Deployment's mounted volume path), Reloader does not reload the Deployment to reflect the configuration changes.

All the setup I did was to add the annotation to the Deployment:
reloader.stakater.com/auto: "true"
Besides that, I have the Reloader container and my deployment's service container.

Am I missing some other configuration? I am using the defaults.

Thanks
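For reference, a minimal sketch of where the annotation is expected to sit, on the Deployment's own metadata rather than the pod template; the Deployment name is hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                        # hypothetical name
  annotations:
    reloader.stakater.com/auto: "true"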

Helm chart

A public Helm chart of Reloader is not available; I have to do helm repo add to add the chart. Can you please make your Helm chart's YAML public?

Thanks

Doesn't seem to work on 1.12.6 EKS

Version 0.0.27 does nothing: Reloader comes up and that's it. It only shows:
time="2019-05-07T15:08:49Z" level=info msg="Starting Reloader"
time="2019-05-07T15:08:49Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2019-05-07T15:08:49Z" level=info msg="Starting Controller to watch resource type: secrets"
time="2019-05-07T15:08:49Z" level=info msg="Starting Controller to watch resource type: configMaps"

Version 0.0.26 writes log messages like: level=info msg="Changes detected in 'prometheus-alert-rules' of type 'CONFIGMAP' in namespace 'monitoring'"
but the pod doesn't restart.

Any idea?

Error in logs "spec.template.spec.dnsConfig: Required value: must provide `dnsConfig` when `dnsPolicy` is None"

Hello!
We are facing an issue with reloader running on k8s 1.13.5 on DigitalOcean:

time="2019-06-28T09:28:33Z" level=error msg="Update for 'web' of type 'Deployment' in namespace 'test' failed with error Deployment.apps \"web\" is invalid: spec.template.spec.dnsConfig: Required value: must provide `dnsConfig` when `dnsPolicy` is None"

While dnsConfig is provided in corresponding Deployment:

    spec:
      dnsConfig:
        nameservers:
          - 10.0.0.1
        searches:
          - svc.cluster.local
        options:
          - name: ndots
            value: "5"
      dnsPolicy: "None" 

Please advise.

Correct documentation for in repo helm chart

Since this has transitioned to using an in-repo Helm chart, the Helm chart documentation should be checked and corrected. The current Helm chart in this repo seems to be built from several layers of templating, which makes it harder to understand and the correctness of this documentation all the more essential.

Add support for whitelisting resourceNames in clusterrole.yaml

Hello,

Very useful project and helpful documentation! I was wondering if there would be interest in adding a change which would allow folks to explicitly limit Reloader's RBAC access to named resources. For example in clusterrole.yaml...

rules:
  - apiGroups:
      - ""
    resources:      
      - secrets
{{- if .Values.reloader.resourceNames}}
    resourceNames: {{ .Values.reloader.resourceNames }}
{{- end }}
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - ""
    resources:      
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "extensions"
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch

I'm not sure if it would make sense to do this for all of the resources or only secrets. For our use case we are mainly concerned about secrets. Thanks!

Charlie
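To make the proposal concrete, a hypothetical values.yaml snippet that could drive the template above; the key name comes from the template, the Secret name is an assumption, and a real implementation would probably want toYaml for multi-element lists:

reloader:
  resourceNames: ["my-app-secret"]   # hypothetical Secret to restrict RBAC access to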

Documentation disambiguation regarding restart vs rolling upgrade

In the headline of the github project page, one can read:

A Kubernetes controller to watch changes in ConfigMap and Secrets and then restart pods for Deployment, StatefulSet, DaemonSet and

However, further down the docs mention that a rolling upgrade is performed.

e.g.

Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.

Which one is the actual case?

Permission Issue

Hi,

I followed the documentation and added the annotations to detect changes for the configmaps like so

metadata:
  annotations:
    reloader.stakater.com/auto: "true"
    configmap.reloader.stakater.com/reload: "adapter-config"

But when I apply changes to my ConfigMap, the pod doesn't reload. I investigated the Reloader logs and I get the following error:

Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:my-ns:reloader-reloader" cannot list resource "configmaps" in API group "" at the cluster scope

Did anyone face a similar issue before?

Thanks,
Mark
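For context, the error above means the reloader-reloader ServiceAccount lacks cluster-scoped read access to ConfigMaps. A minimal sketch of the kind of ClusterRole and ClusterRoleBinding that would grant it; the object names are assumptions based on the error message, and the chart's actual manifests may differ:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reloader-reloader-role             # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: reloader-reloader-role-binding     # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-reloader-role
subjects:
  - kind: ServiceAccount
    name: reloader-reloader                # from the error message
    namespace: my-ns                       # from the error message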

Getting Issue While Using Reloader With ConfigMap

Hi Team,
I used Reloader for a ConfigMap in my deployment file, and as per the documentation, any changes made to the ConfigMap should be reflected in the running pod. But we are not seeing that change in the pod.
Here is my deployment file:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: "nginx-configmap"
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config1
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
      volumes:
      - name: nginx-config1
        configMap:
          name: nginx-configmap
Here is the ConfigMap:
apiVersion: v1
data:
  default.conf: |
    upstream backend {
        server 172.27.15.8:8081;
        server 192.0.0.1 backup;
    }
    server {
        location / {
            proxy_pass http://backend;
        }
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2019-01-18T05:33:31Z
  name: nginx-configmap
  namespace: default
  resourceVersion: "9436168"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-configmap
  uid: 971e503d-1ae2-11e9-9773-a08cfdc9b1ab


Please help me with this.
