fluxcd / image-reflector-controller
GitOps Toolkit controller that scans container registries
Home Page: https://fluxcd.io
License: Apache License 2.0
We have applied the image-reflector-controller with the default controller.yaml generated by kustomize build /config/default
to our GKE cluster.
Unfortunately this results in controller error logs, because several permissions are missing from both the image-reflector-manager-role ClusterRole and the image-reflector-leader-election-role Role.
We made the controller work by adding the patch verb to the image-reflector-leader-election-role and by adding the following to the image-reflector-manager-role:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - list
  - watch
  - get
Does anyone else face this issue?
In a given ImagePolicy object you may want to select a subset of the images, e.g., those marked as destined for a particular environment (dev-*).
You might also want to filter, in an ImageRepository, which images are available to policies; for example, when some images are known to be faulty or cause problems for scanning.
I don't think this needs a big up-front design, but it might pay to think (then write) about it.
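As a starting point for that discussion, here is a sketch of what tag filtering on an ImagePolicy could look like. The filterTags field below is purely illustrative and not part of the v1alpha1 API; the field name and shape are assumptions:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1alpha1
kind: ImagePolicy
metadata:
  name: myapp-dev
spec:
  imageRepositoryRef:
    name: myapp
  # hypothetical: only tags matching the pattern are
  # considered by the policy below
  filterTags:
    pattern: '^dev-.*'
  policy:
    semver:
      range: '>=1.0.0'
```

Filtering at the ImageRepository level (excluding known-bad images from scanning) could follow the same pattern.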
We use Nexus Repository Manager for storing Docker images internally, and the path to images contains multiple slashes (e.g., https://ourrepo.ourdomain.com:8083/repository/internal/myimage). It looks like when we configure the ImageRepository, it parses the registry as ourrepo.ourdomain.com:8083, not the full path ourrepo.ourdomain.com:8083/repository/internal.
apiVersion: image.toolkit.fluxcd.io/v1alpha1
kind: ImageRepository
metadata:
  name: myimage
  namespace: infraops
spec:
  image: ourrepo.ourdomain.com:8083/repository/internal/myimage
  interval: 1m0s
  secretRef:
    name: regcred
Error received:
me@myhost:~/Fluxv2-InfraOps/base$ flux get image repository -n infraops
NAME READY MESSAGE LAST SCAN SUSPENDED
myimage False auth for "ourrepo.ourdomain.com:8083" not found in secret infraops/regcred False
Our regcred secret is defined like this:
{"auths":{"ourrepo.ourdomain.com:8083/repository/internal":{"username":"REDACT","password":"REDACT","email":"REDACT","auth":"REDACT"}}}
This repo URL structure works fine for our deployment (and this also worked correctly for us in fluxv1 because it was just an annotation on the deployment).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myimage
  namespace: infraops
  labels:
    app: myimage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myimage
  template:
    metadata:
      labels:
        app: myimage
    spec:
      containers:
      - name: myimage
        image: ourrepo.ourdomain.com:8083/repository/internal/myimage:latest
        ports:
        - name: http-port
          containerPort: 80
      imagePullSecrets:
      - name: regcred
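The error above comes down to a key mismatch between the secret and the lookup: the controller looks up credentials by registry host only, while this secret is keyed by host plus path. A minimal sketch of the mismatch (the helper is hypothetical, not the controller's actual code):

```python
import json


def find_auth(dockerconfigjson: str, registry: str):
    """Look up credentials the way the controller does: keyed by the
    registry host derived from the image reference, with no path."""
    auths = json.loads(dockerconfigjson)["auths"]
    return auths.get(registry)


# A secret keyed by host + path is never matched, because only
# "ourrepo.ourdomain.com:8083" is derived from the image reference.
secret = '{"auths": {"ourrepo.ourdomain.com:8083/repository/internal": {"username": "u"}}}'
print(find_auth(secret, "ourrepo.ourdomain.com:8083"))  # None -> "auth ... not found"
```

Keying the auths map by the bare host (ourrepo.ourdomain.com:8083) makes the lookup succeed.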
I am using several images from the linuxserver repositories on ghcr.io, and the image reflector is only returning 1000 tags for them even when there are more than 1000 tags. This is preventing those repositories from matching newer tags and being updated by the automation controller. I configured authentication on one of the repos but the 1000 tag limit is still being hit.
$ flux get image repository tautulli
NAME READY MESSAGE LAST SCAN SUSPENDED
tautulli True successful scan, found 1000 tags 2021-05-08T09:22:11+08:00 False
If I use curl on the API tags list URL for tautulli (https://ghcr.io/v2/linuxserver/tautulli/tags/list) I get 1088 tags returned.
$ curl -s -H "Authorization: Bearer $(echo <token> | base64)" https://ghcr.io/v2/linuxserver/tautulli/tags/list | jq '.tags | length'
1088
The php repository on Docker Hub returns over 5000 tags, so it might be something specific to GHCR.
$ flux get image repository php
NAME READY MESSAGE LAST SCAN SUSPENDED
php True successful scan, found 5033 tags 2021-05-08T09:22:03+08:00 False
Running image-reflector-controller v0.9.1
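A hard 1000-tag ceiling is consistent with missing pagination: the registry v2 tags list endpoint may split results into pages linked by an RFC 5988 Link header, and a client that never follows rel="next" sees only the first page. A sketch of following the pagination (the fetch callback and helper names are assumptions, not the controller's code):

```python
import re


def next_link(link_header):
    """Extract the URL with rel="next" from a Link header, or None
    when there are no more pages."""
    if not link_header:
        return None
    m = re.search(r'<([^>]+)>\s*;\s*rel="next"', link_header)
    return m.group(1) if m else None


def all_tags(fetch, url):
    """Accumulate tags across pages. `fetch(url)` is assumed to return
    (json_body, link_header); stopping after the first call reproduces
    the 1000-tag limit."""
    tags = []
    while url:
        body, link = fetch(url)
        tags.extend(body["tags"])
        url = next_link(link)
    return tags
```

If GHCR caps pages at 1000 entries while Docker Hub returns everything in one response, that would explain why only the GHCR repositories are affected.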
This is the convention for gitops-toolkit.
The ImageRepository uses ref.Context().RegistryStr() to get the container registry auth from the map in the secret.
For Docker Hub, this value is index.docker.io. Unfortunately, many .dockerconfigjson files contain the full URL https://index.docker.io/v2 as shown in the k8s docs.
I can change my Docker secret for now, but it might be a "gotcha" for other users.
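One way to remove the gotcha would be to normalize the auths keys before comparing, so that both https://index.docker.io/v1/ and index.docker.io match. A sketch of such a normalization (hypothetical helper, not what the controller does today):

```python
def registry_key(host_or_url):
    """Reduce a .dockerconfigjson auths key to a bare registry host:
    strip the scheme, then drop any path such as /v1/ or /v2."""
    key = host_or_url
    for scheme in ("https://", "http://"):
        if key.startswith(scheme):
            key = key[len(scheme):]
    return key.split("/", 1)[0]


print(registry_key("https://index.docker.io/v1/"))  # index.docker.io
```

With this applied to both the secret's keys and RegistryStr(), secrets created in either format would work.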
controller-runtime has helpers for this: see for example https://github.com/fluxcd/kustomize-controller/blob/8bb4f4c80b4d257c32e2a36349d9687de966af0d/main.go#L158
I.e., how Flux v1 does it.
Running flux (0.6.1) in a cluster using two Raspberry Pis.
All controllers but the image-reflector-controller start up fine.
The image-reflector-controller fails to allocate memory for the Badger database.
Logs:
kubectl -n flux-system logs image-reflector-controller-6bbbcc5c76-ttwcq
{"level":"info","ts":"2021-01-16T10:10:43.238Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
badger 2021/01/16 10:10:43 INFO: All 0 tables opened in 0s
{"level":"error","ts":"2021-01-16T10:10:43.302Z","logger":"setup","msg":"unable to open the Badger database","error":"Mmap value log file. Path=/data/000000.vlog. Error=cannot allocate memory"}
Flux-system pods
kubectl get pods -n flux-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
source-controller-bb458c944-kj76r 1/1 Running 0 15h 10.42.0.24 rpi1 <none> <none>
helm-controller-77c9c759fb-z5f95 1/1 Running 10 15h 10.42.1.41 rpi2 <none> <none>
notification-controller-6fbbc586b7-4kkff 1/1 Running 10 15h 10.42.1.42 rpi2 <none> <none>
kustomize-controller-64b894fb67-8fnkq 1/1 Running 10 15h 10.42.1.44 rpi2 <none> <none>
image-automation-controller-cd59d8b74-hh82w 1/1 Running 10 15h 10.42.1.43 rpi2 <none> <none>
image-reflector-controller-5d98f74bc-6sxdr 0/1 CrashLoopBackOff 7647 27d 10.42.1.40 rpi2 <none> <none>
image-reflector-controller-6bbbcc5c76-ttwcq 0/1 CrashLoopBackOff 186 15h 10.42.1.45 rpi2 <none> <none>
Flux:
flux --version
flux version 0.6.1
Cluster memory/cpus
kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
rpi1 712m 17% 793Mi 81%
rpi2 491m 12% 619Mi 63%
Raspberry Pi details:
uname -a
Linux rpi1 4.19.97-v7+ #1294 SMP Thu Jan 30 13:15:58 GMT 2020 armv7l GNU/Linux
uname -a
Linux rpi2 4.19.97-v7+ #1294 SMP Thu Jan 30 13:15:58 GMT 2020 armv7l GNU/Linux
The general pattern within GitOps Toolkit is that resources can be suspended (don't act on it), and those running on a schedule can be run on demand (act on it now). See, for example, https://toolkit.fluxcd.io/components/kustomize/kustomization/#reconciliation
This applies at least to ImageRepository. I don't think it applies to ImagePolicy, since it just reflects a calculation, rather than an ongoing process.
I.e., do the same for this controller as fluxcd/image-automation-controller#28 did.
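Following the Kustomization convention, a suspendable ImageRepository could look like the sketch below. The suspend field here mirrors what Kustomization already has and is an assumption for this controller, not an existing field at the time of writing:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1alpha1
kind: ImageRepository
metadata:
  name: myapp
spec:
  image: docker.io/example/myapp
  interval: 1m0s
  # hypothetical: stop scanning without deleting the object,
  # matching the suspend convention used by Kustomization
  suspend: true
```

On-demand reconciliation would then follow the same annotation-trigger mechanism the other toolkit controllers use.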
Adding a Docker Hub registry secret through the kubectl create secret docker-registry
command, as recommended by the docs, does not produce a working secret for image automation; you need to specify the registry in another, unexpected format. Adding a secret the default way results in the following error message on repository scanning:
❯ flux get images repository
NAME READY MESSAGE LAST SCAN SUSPENDED
docker-repo False auth for "index.docker.io" not found in secret flux-system/dockerhub False
This is because using the default kubectl command to create the registry secret:
kubectl create secret docker-registry testregistry --docker-username="username" --docker-password="password" --docker-email="email"
results in this (decoded) secret:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "blabla",
      "password": "blabla",
      "email": "[email protected]",
      "auth": "YmxhYmxhOmJsYWJsYQ=="
    }
  }
}
To create a working secret, you need to specify the registry explicitly:
kubectl create secret docker-registry testregistry --docker-username="username" --docker-password="password" --docker-email="email" --docker-server="index.docker.io"
Which results in:
{
  "auths": {
    "index.docker.io": {
      "username": "blabla",
      "password": "blabla",
      "email": "[email protected]",
      "auth": "YmxhYmxhOmJsYWJsYQ=="
    }
  }
}
Which solves the issue.
Repository: https://github.com/cwijnekus/flux-bugreport
flux bootstrap github \
--owner=$GITHUB_USER \
--repository=flux-bugreport \
--branch=main \
--path=./clusters/kind-local \
--personal --components-extra=image-reflector-controller,image-automation-controller \
--token-auth
Add the non-functioning Docker Hub secret:
kubectl create secret docker-registry testregistry --docker-username="username" --docker-password="password" --docker-email="email"
Add the functioning Docker Hub secret:
kubectl create secret docker-registry testregistry-working --docker-username="username" --docker-password="password" --docker-email="email" --docker-server="index.docker.io"
Check the flux output for image repository scanning:
❯ flux get images repository
NAME READY MESSAGE LAST SCAN SUSPENDED
test-private-repo-notworking False auth for "index.docker.io" not found in secret flux-system/testregistry False
test-private-repo-working True successful scan, found 1 tags 2021-03-17T10:23:04+01:00 False
Expected behavior is for https://index.docker.io/v1/ to also work as a registry entry, since that is the default provided by Kubernetes. And/or provide a better error message; it was confusing because I thought I had added an entry for that registry. And/or update the documentation to mention this specific case of adding --docker-server to the command when creating the secret.
Below please provide the output of the following commands:
❯ flux --version
flux version 0.9.0
❯ flux check
► checking prerequisites
✗ flux 0.9.0 <0.9.1 (new version is available, please upgrade)
✔ kubectl 1.20.2 >=1.18.0-0
✔ Kubernetes 1.20.2 >=1.16.0-0
► checking controllers
✔ helm-controller: healthy
► ghcr.io/fluxcd/helm-controller:v0.8.0
✔ image-automation-controller: healthy
► ghcr.io/fluxcd/image-automation-controller:v0.6.1
✔ image-reflector-controller: healthy
► ghcr.io/fluxcd/image-reflector-controller:v0.7.0
✔ kustomize-controller: healthy
► ghcr.io/fluxcd/kustomize-controller:v0.9.1
✔ notification-controller: healthy
► ghcr.io/fluxcd/notification-controller:v0.9.0
✔ source-controller: healthy
► ghcr.io/fluxcd/source-controller:v0.9.0
✔ all checks passed
❯ kubectl -n flux-system get all
NAME READY STATUS RESTARTS AGE
pod/helm-controller-c85c67f98-8xs7z 1/1 Running 0 32m
pod/image-automation-controller-7d7bbb68c7-gr7v8 1/1 Running 0 32m
pod/image-reflector-controller-7848db879b-wzjfk 1/1 Running 0 32m
pod/kustomize-controller-5857688c67-77vdx 1/1 Running 0 32m
pod/notification-controller-d9464dbdf-gxqcn 1/1 Running 0 32m
pod/source-controller-798bd8fffb-5z979 1/1 Running 0 32m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/notification-controller ClusterIP 10.96.93.45 <none> 80/TCP 32m
service/source-controller ClusterIP 10.96.124.57 <none> 80/TCP 32m
service/webhook-receiver ClusterIP 10.96.47.9 <none> 80/TCP 32m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/helm-controller 1/1 1 1 32m
deployment.apps/image-automation-controller 1/1 1 1 32m
deployment.apps/image-reflector-controller 1/1 1 1 32m
deployment.apps/kustomize-controller 1/1 1 1 32m
deployment.apps/notification-controller 1/1 1 1 32m
deployment.apps/source-controller 1/1 1 1 32m
NAME DESIRED CURRENT READY AGE
replicaset.apps/helm-controller-c85c67f98 1 1 1 32m
replicaset.apps/image-automation-controller-7d7bbb68c7 1 1 1 32m
replicaset.apps/image-reflector-controller-7848db879b 1 1 1 32m
replicaset.apps/kustomize-controller-5857688c67 1 1 1 32m
replicaset.apps/notification-controller-d9464dbdf 1 1 1 32m
replicaset.apps/source-controller-798bd8fffb 1 1 1 32m
❯ kubectl -n flux-system logs deploy/source-controller
{"level":"info","ts":"2021-03-17T08:55:46.135Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:46.136Z","logger":"setup","msg":"starting manager"}
I0317 08:55:46.136804 7 leaderelection.go:243] attempting to acquire leader lease flux-system/305740c0.fluxcd.io...
{"level":"info","ts":"2021-03-17T08:55:46.136Z","msg":"starting metrics server","path":"/metrics"}
I0317 08:55:46.144518 7 leaderelection.go:253] successfully acquired lease flux-system/305740c0.fluxcd.io
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"setup","msg":"starting file server"}
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"controller.gitrepository","msg":"Starting EventSource","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"controller.helmchart","msg":"Starting EventSource","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"controller.helmrepository","msg":"Starting EventSource","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmRepository","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"controller.helmchart","msg":"Starting EventSource","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"controller.helmrepository","msg":"Starting Controller","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmRepository"}
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"controller.bucket","msg":"Starting EventSource","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"Bucket","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.237Z","logger":"controller.helmchart","msg":"Starting EventSource","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.bucket","msg":"Starting Controller","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"Bucket"}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.helmchart","msg":"Starting EventSource","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.helmchart","msg":"Starting Controller","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart"}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.gitrepository","msg":"Starting Controller","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository"}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.gitrepository","msg":"Starting workers","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","worker count":2}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.bucket","msg":"Starting workers","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"Bucket","worker count":2}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.helmchart","msg":"Starting workers","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","worker count":2}
{"level":"info","ts":"2021-03-17T08:55:46.338Z","logger":"controller.helmrepository","msg":"Starting workers","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmRepository","worker count":2}
{"level":"info","ts":"2021-03-17T08:55:58.047Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 544.61233ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T08:56:58.379Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 331.213798ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T08:57:58.697Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 317.96382ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T08:58:59.017Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 318.926006ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T08:59:59.340Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 321.130802ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:00:59.643Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 299.507794ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:02:00.032Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 388.893544ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:03:00.510Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 477.220177ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:04:00.922Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 411.704089ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:05:01.644Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 721.666712ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:06:02.051Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 406.40257ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:07:02.414Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 362.302249ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:08:02.793Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 375.597042ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:09:03.149Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 354.534896ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:10:03.478Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 327.681681ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:11:03.793Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 313.836725ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:12:04.241Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 447.071035ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:13:04.668Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 426.303934ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:14:04.974Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 306.076274ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:15:05.327Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 351.696463ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:16:05.637Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 309.975561ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:17:05.971Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 331.316217ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:18:06.280Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 308.800785ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:18:21.198Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 377.734472ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:19:06.604Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 322.920319ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:20:07.239Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 634.17857ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:21:07.575Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 335.165483ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:21:53.586Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 358.363014ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:22:07.967Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 392.47694ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:23:08.283Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 313.910174ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:24:08.592Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 308.657598ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:25:08.927Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 334.087373ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:26:09.241Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 313.671732ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:27:09.568Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 325.333795ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:28:09.883Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 315.14254ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T09:29:10.256Z","logger":"controller.gitrepository","msg":"Reconciliation finished in 371.762399ms, next run in 1m0s","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"GitRepository","name":"flux-system","namespace":"flux-system"}
❯ kubectl -n flux-system logs deploy/kustomize-controller
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":"2021-03-17T08:55:45.867Z","logger":"setup","msg":"starting manager"}
I0317 08:55:45.867754 7 leaderelection.go:243] attempting to acquire leader lease flux-system/7593cc5d.fluxcd.io...
{"level":"info","ts":"2021-03-17T08:55:45.868Z","msg":"starting metrics server","path":"/metrics"}
I0317 08:55:45.902972 7 leaderelection.go:253] successfully acquired lease flux-system/7593cc5d.fluxcd.io
{"level":"info","ts":"2021-03-17T08:55:45.968Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:45.968Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.069Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-03-17T08:55:46.169Z","logger":"controller.kustomization","msg":"Starting Controller","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization"}
{"level":"info","ts":"2021-03-17T08:55:46.169Z","logger":"controller.kustomization","msg":"Starting workers","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","worker count":4}
{"level":"info","ts":"2021-03-17T08:55:57.501Z","logger":"controller.kustomization","msg":"Source 'GitRepository/flux-system' not found","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system"}
{"level":"info","ts":"2021-03-17T08:56:00.124Z","logger":"controller.kustomization","msg":"Kustomization applied in 515.791273ms","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"configured","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"configured","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"configured","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagepolicies.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagerepositories.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imageupdateautomations.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/image-automation-controller":"configured","deployment.apps/image-reflector-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"con
figured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"configured","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"configured","namespace/flux-system":"configured","networkpolicy.networking.k8s.io/allow-scraping":"configured","networkpolicy.networking.k8s.io/allow-webhooks":"configured","networkpolicy.networking.k8s.io/deny-ingress":"configured","service/notification-controller":"configured","service/source-controller":"configured","service/webhook-receiver":"configured","serviceaccount/helm-controller":"configured","serviceaccount/image-automation-controller":"configured","serviceaccount/image-reflector-controller":"configured","serviceaccount/kustomize-controller":"configured","serviceaccount/notification-controller":"configured","serviceaccount/source-controller":"configured"}}
{"level":"info","ts":"2021-03-17T08:56:00.129Z","logger":"controller.kustomization","msg":"Reconciliation finished in 2.082281639s, next run in 10m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","revision":"main/f50658d4c1a578529f42379a16dd7964cd79402e"}
{"level":"info","ts":"2021-03-17T09:05:58.728Z","logger":"controller.kustomization","msg":"Kustomization applied in 390.173069ms","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagepolicies.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagerepositories.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imageupdateautomations.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/image-automation-controller":"configured","deployment.apps/image-reflector-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"config
ured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","networkpolicy.networking.k8s.io/deny-ingress":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/image-automation-controller":"unchanged","serviceaccount/image-reflector-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
{"level":"info","ts":"2021-03-17T09:05:58.733Z","logger":"controller.kustomization","msg":"Reconciliation finished in 1.227999272s, next run in 10m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","revision":"main/f50658d4c1a578529f42379a16dd7964cd79402e"}
{"level":"info","ts":"2021-03-17T09:15:59.942Z","logger":"controller.kustomization","msg":"Kustomization applied in 371.373715ms","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagepolicies.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagerepositories.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imageupdateautomations.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/image-automation-controller":"configured","deployment.apps/image-reflector-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"config
ured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","networkpolicy.networking.k8s.io/deny-ingress":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/image-automation-controller":"unchanged","serviceaccount/image-reflector-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
{"level":"info","ts":"2021-03-17T09:15:59.947Z","logger":"controller.kustomization","msg":"Reconciliation finished in 1.211428827s, next run in 10m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","revision":"main/f50658d4c1a578529f42379a16dd7964cd79402e"}
{"level":"info","ts":"2021-03-17T09:18:07.544Z","logger":"controller.kustomization","msg":"Kustomization applied in 481.996158ms","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"configured","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"configured","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"configured","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagepolicies.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagerepositories.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imageupdateautomations.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/image-automation-controller":"configured","deployment.apps/image-reflector-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"con
figured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"configured","imagerepository.image.toolkit.fluxcd.io/test-private-repo-notworking":"created","imagerepository.image.toolkit.fluxcd.io/test-private-repo-working":"created","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"configured","namespace/flux-system":"configured","networkpolicy.networking.k8s.io/allow-scraping":"configured","networkpolicy.networking.k8s.io/allow-webhooks":"configured","networkpolicy.networking.k8s.io/deny-ingress":"configured","service/notification-controller":"configured","service/source-controller":"configured","service/webhook-receiver":"configured","serviceaccount/helm-controller":"configured","serviceaccount/image-automation-controller":"configured","serviceaccount/image-reflector-controller":"configured","serviceaccount/kustomize-controller":"configured","serviceaccount/notification-controller":"configured","serviceaccount/source-controller":"configured"}}
{"level":"info","ts":"2021-03-17T09:18:07.591Z","logger":"controller.kustomization","msg":"Reconciliation finished in 1.31305s, next run in 10m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","revision":"main/fcfdf82fca6a67478289392197f28f66a7b127d5"}
{"level":"info","ts":"2021-03-17T09:18:23.953Z","logger":"controller.kustomization","msg":"Kustomization applied in 377.85931ms","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagepolicies.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagerepositories.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imageupdateautomations.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/image-automation-controller":"configured","deployment.apps/image-reflector-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configu
red","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","imagerepository.image.toolkit.fluxcd.io/test-private-repo-notworking":"unchanged","imagerepository.image.toolkit.fluxcd.io/test-private-repo-working":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","networkpolicy.networking.k8s.io/deny-ingress":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/image-automation-controller":"unchanged","serviceaccount/image-reflector-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
{"level":"info","ts":"2021-03-17T09:18:23.957Z","logger":"controller.kustomization","msg":"Reconciliation finished in 1.121467999s, next run in 10m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","revision":"main/fcfdf82fca6a67478289392197f28f66a7b127d5"}
{"level":"info","ts":"2021-03-17T09:21:56.355Z","logger":"controller.kustomization","msg":"Kustomization applied in 379.732857ms","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagepolicies.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagerepositories.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imageupdateautomations.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/image-automation-controller":"configured","deployment.apps/image-reflector-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"config
ured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","imagerepository.image.toolkit.fluxcd.io/test-private-repo-notworking":"unchanged","imagerepository.image.toolkit.fluxcd.io/test-private-repo-working":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","networkpolicy.networking.k8s.io/deny-ingress":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/image-automation-controller":"unchanged","serviceaccount/image-reflector-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
{"level":"info","ts":"2021-03-17T09:21:56.360Z","logger":"controller.kustomization","msg":"Reconciliation finished in 1.112196542s, next run in 10m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","revision":"main/fcfdf82fca6a67478289392197f28f66a7b127d5"}
{"level":"info","ts":"2021-03-17T09:26:01.164Z","logger":"controller.kustomization","msg":"Kustomization applied in 416.643139ms","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagepolicies.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imagerepositories.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/imageupdateautomations.image.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/image-automation-controller":"configured","deployment.apps/image-reflector-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"config
ured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","imagerepository.image.toolkit.fluxcd.io/test-private-repo-notworking":"unchanged","imagerepository.image.toolkit.fluxcd.io/test-private-repo-working":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","networkpolicy.networking.k8s.io/deny-ingress":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/image-automation-controller":"unchanged","serviceaccount/image-reflector-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
{"level":"info","ts":"2021-03-17T09:26:01.169Z","logger":"controller.kustomization","msg":"Reconciliation finished in 1.220409742s, next run in 10m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","revision":"main/fcfdf82fca6a67478289392197f28f66a7b127d5"}
… so you can create ImageRepository and ImagePolicy objects from flux.
Hi, I set up Image Auto Update following this guide: https://toolkit.fluxcd.io/guides/image-update/
I use EKS, a private ECR registry, and Bitbucket. Components such as the Kustomize, Helm, and Source controllers work fine, but Image Auto Update shows errors. Here are the details:
Result of command:
flux get image repository
flux get image policy
flux get image update
Image update shows errors, so I guess Flux can't update the image in my deployment.
Here is a summary of the YAML content.
Image Auto Update:
# ImageRepository to tell Flux which container registry to scan for new tags:
apiVersion: image.toolkit.fluxcd.io/v1alpha1
kind: ImageRepository
metadata:
  name: nginx-test
  namespace: flux-system
spec:
  image: xxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/fluxcd/nginx
  interval: 1m0s
  secretRef:
    name: ecr-credentials
---
# ImagePolicy to tell Flux which semver range (or other filter) to use when selecting tags:
apiVersion: image.toolkit.fluxcd.io/v1alpha1
kind: ImagePolicy
metadata:
  name: nginx-test
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: nginx-test
  filterTags:
    pattern: '^master-.+'
  policy:
    alphabetical:
      order: asc
---
# ImageUpdateAutomation to tell Flux which Git repository to write image updates to:
apiVersion: image.toolkit.fluxcd.io/v1alpha1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  checkout:
    branch: master
    gitRepositoryRef:
      name: flux-system
  commit:
    authorEmail: [email protected]
    authorName: fluxcdbot
    messageTemplate: '[ci skip] update image'
  interval: 1m0s
  update:
    strategy: Setters
ECR cron:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ecr-credentials-sync
  namespace: flux-system
rules:
  - apiGroups: [""]
    resources:
      - secrets
    verbs:
      - delete
      - create
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ecr-credentials-sync
  namespace: flux-system
subjects:
  - kind: ServiceAccount
    name: ecr-credentials-sync
roleRef:
  kind: Role
  name: ecr-credentials-sync
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ecr-credentials-sync
  namespace: flux-system
  # Uncomment and edit if using IRSA
  # annotations:
  #   eks.amazonaws.com/role-arn: <role arn>
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ecr-credentials-sync
  namespace: flux-system
spec:
  suspend: false
  schedule: 0 */6 * * *
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ecr-credentials-sync
          restartPolicy: Never
          volumes:
            - name: token
              emptyDir:
                medium: Memory
          initContainers:
            - image: amazon/aws-cli
              name: get-token
              imagePullPolicy: IfNotPresent
              # You will need to set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment
              # variables if not using IRSA. It is recommended to store the values in a Secret
              # and load them into the container using envFrom.
              envFrom:
                - secretRef:
                    name: aws-credentials
              env:
                - name: REGION
                  value: ap-southeast-1 # change this if the ECR repo is in a different region
              volumeMounts:
                - mountPath: /token
                  name: token
              command:
                - /bin/sh
                - -ce
                - aws ecr get-login-password --region ${REGION} > /token/ecr-token
          containers:
            - image: bitnami/kubectl
              name: create-secret
              imagePullPolicy: IfNotPresent
              env:
                - name: SECRET_NAME
                  value: ecr-credentials
                - name: ECR_REGISTRY
                  value: xxxxx.dkr.ecr.ap-southeast-1.amazonaws.com # fill in the account id and region
              volumeMounts:
                - mountPath: /token
                  name: token
              command:
                - /bin/bash
                - -ce
                - |-
                  kubectl delete secret --ignore-not-found $SECRET_NAME
                  kubectl create secret docker-registry $SECRET_NAME \
                    --docker-server="$ECR_REGISTRY" \
                    --docker-username=AWS \
                    --docker-password="$(</token/ecr-token)"
AWS-credentials secret
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials
  namespace: flux-system
type: Opaque
data:
  # NB: values under `data` must be base64-encoded (use `stringData` for plain text)
  AWS_ACCESS_KEY_ID: xxxxxxxxxxxxxxxxx
  AWS_SECRET_ACCESS_KEY: xxxxxxxxxxxxxxxx
Could you show me why these errors happen? Thanks.
There's a flag (default true 😢) for watching all namespaces, vs watching just the namespace the controller is deployed to.
For example, see
https://github.com/fluxcd/kustomize-controller/blob/main/main.go#L96
https://github.com/fluxcd/kustomize-controller/blob/main/main.go#L101
We have Docker tags like v{semver}-{commit-offset}-{gitHash} generated by git describe. The issue is that once stable tags of the format v{semver} get pushed to the repository, tags that should be more recent are no longer recognized.
v0.0.0-1-abcd (first tag)
v0.0.1 (second recognized tag)
v0.0.1-1-abcd (not recognized even though it is the latest)
Here is an example of the imagePolicy spec:
policy:
  semver:
    range: '>=0.0.0-0'
filterTags:
  pattern: '^v.*'
Let me know if I'm doing something wrong or need to provide more details.
I've also tested the case with the "github.com/Masterminds/semver/v3" package; here is a link to the Go playground: https://play.golang.org/p/M3UfEk1O8tZ
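The behaviour is consistent with SemVer 2.0.0 precedence: v0.0.1-1-abcd parses as a pre-release of 0.0.1, which sorts below the stable release regardless of when it was pushed. A minimal precedence sketch in Python (illustrative only, not the controller's code; the identifier comparison is deliberately simplified):

```python
# Minimal SemVer precedence sketch. Per SemVer 2.0.0, a pre-release
# version sorts BELOW the associated release, so v0.0.1-1-abcd < v0.0.1
# even if it was pushed later.

def parse(tag):
    core, _, pre = tag.lstrip("v").partition("-")
    return tuple(int(x) for x in core.split(".")), pre

def precedes(a, b):
    """True if tag a has lower precedence than tag b."""
    (ca, pa), (cb, pb) = parse(a), parse(b)
    if ca != cb:
        return ca < cb
    if pa and not pb:   # pre-release < release of the same core version
        return True
    if not pa and pb:
        return False
    return pa < pb      # simplified pre-release identifier comparison

assert precedes("v0.0.0-1-abcd", "v0.0.1")
assert precedes("v0.0.1-1-abcd", "v0.0.1")  # newer build, lower precedence
```

This is also why a range like `>=0.0.0-0` still selects `v0.0.1` over `v0.0.1-1-abcd`: the range admits pre-releases, but the stable tag outranks them within the same core version.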
Hello there,
I have an authentication problem between EKS and ECR via Flux v2.
I am currently authenticating by generating a password with the command:
aws ecr get-login --no-include-email
I use that password to create a secret:
kubectl create secret docker-registry ecr --docker-server=my-id-aws.dkr.ecr.us-east-1.amazonaws.com --docker-username=AWS --docker-password=my-key-gen== -n flux-system
After 12 hours the secret expires and Flux v2 can no longer pull images.
P.S. I have attached the AmazonEC2ContainerRegistryReadOnly policy to the EKS node role.
My error now: GET https://my-id-aws.dkr.ecr.us-east-1.amazonaws.com/v2/image/tags/list?n=1000: DENIED: Your authorization token has expired. Reauthenticate and try again.
With regexp filtering, it's possible to extract a value from each tag for the policy to consider. This covers a use case like tagging images dev-<version>, where you want the latest version but only from tags prefixed with dev-.
This creates the possibility of duplicates: with a pattern like (dev-)?v(<version>), for example, you might extract the same value from both dev-v1.0 and v1.0. This makes behaviour unpredictable, since the tag selected by the policy may depend on the order in which the original tags are encountered.
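A quick sketch of the collision, with Python's `re` standing in for the controller's regex engine and a hypothetical pattern of the shape described above:

```python
import re

# Hypothetical pattern that can extract the same value from two different tags.
pattern = re.compile(r"^(dev-)?v(?P<version>.*)$")

tags = ["dev-v1.0", "v1.0"]
extracted = [pattern.match(t).group("version") for t in tags]

# Both tags yield "1.0", so a policy keyed on the extracted value
# cannot tell which original tag it should select.
assert extracted == ["1.0", "1.0"]
```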
Following the example from the docs.
Image policy:
apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImagePolicy
metadata:
  name: app
spec:
  imageRepositoryRef:
    name: app-repo
  filterTags:
    pattern: '^staging-[a-fA-F0-9]+-(?P<ts>.*)'
    extract: '$ts'
  policy:
    numerical:
      order: asc
Docker container tags:
staging-b7a90d9-552613297731
staging-34769fa-551067109282849
I would expect staging-b7a90d9-552613297731 to be considered newer, and therefore the container tag reference to be updated; however, staging-34769fa-551067109282849 is being selected.
Changing from asc to desc picks up the most recent image, but that shouldn't be the case, right?
The same policy with a different environment in the pattern section does not work as expected either.
Could you please clarify whether this is the right way of doing it? Thanks.
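Assuming the numerical policy simply compares the extracted values as numbers, the observed selection is consistent: the two trailing values have different lengths, and 551067109282849 (15 digits) is numerically larger than 552613297731 (12 digits). A sketch of that comparison:

```python
import re

pattern = re.compile(r"^staging-[a-fA-F0-9]+-(?P<ts>.*)")
tags = ["staging-b7a90d9-552613297731", "staging-34769fa-551067109282849"]

# Extract the trailing value and compare numerically, as a `numerical`
# policy with `order: asc` plausibly does (select the largest value).
values = {t: int(pattern.match(t).group("ts")) for t in tags}
latest = max(values, key=values.get)

assert latest == "staging-34769fa-551067109282849"  # 15 digits beats 12
```

If those values are timestamps of mixed precision (seconds vs. milliseconds, say), numeric ordering will not reflect recency; emitting fixed-width timestamps would make `asc` behave as expected.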
This has the very-slightly less strict parsing (allows a leading 'v').
I have an error in regex for Image Policy and this is the log:
{"level":"error","ts":"2021-03-01T12:31:33.411Z","logger":"controller-runtime.manager.controller.imagepolicy","msg":"Reconciler error","reconciler group":"image.toolkit.fluxcd.io","reconciler kind":"ImagePolicy","name":"myapp","namespace":"flux-system","error":"invalid regular expression pattern '^[a-fA-F0-9]+-(?<build_number>[0-9]*)': error parsing regexp: invalid or unsupported Perl syntax: (?<"}
flux get image policy shows "waiting to be reconciled".
The error is not visible as an event in k8s either.
An ImageRepository error (at least a Docker auth failure) has this desired behaviour.
flux reconcile image policy
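For what it's worth, the "invalid or unsupported Perl syntax: (?<" in the log comes from Go's RE2 regexp engine, which only accepts (?P<name>...) for named capture groups, not the Perl/PCRE form (?<name>...). Python's `re` module rejects the Perl form too, which makes a quick local check easy (hedged sketch, not the controller's code):

```python
import re

# RE2 (used by Go, hence by the controller) and Python's `re` both reject
# the Perl-style named group `(?<name>...)`; use `(?P<name>...)` instead.
try:
    re.compile(r"^[a-fA-F0-9]+-(?<build_number>[0-9]*)")
    ok = False
except re.error:
    ok = True
assert ok

# The RE2-compatible form compiles fine:
fixed = re.compile(r"^[a-fA-F0-9]+-(?P<build_number>[0-9]*)")
assert fixed.match("abc123-42").group("build_number") == "42"
```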
Below please provide the output of the following commands:
flux version 0.9.0
► checking prerequisites
✔ kubectl 1.20.0 >=1.18.0-0
✔ Kubernetes 1.20.2 >=1.16.0-0
► checking controllers
✔ helm-controller: healthy
► ghcr.io/fluxcd/helm-controller:v0.8.0
✔ image-automation-controller: healthy
► ghcr.io/fluxcd/image-automation-controller:v0.6.1
✔ image-reflector-controller: healthy
► ghcr.io/fluxcd/image-reflector-controller:v0.7.0
✔ kustomize-controller: healthy
► ghcr.io/fluxcd/kustomize-controller:v0.9.1
✔ notification-controller: healthy
► ghcr.io/fluxcd/notification-controller:v0.9.0
✔ source-controller: healthy
► ghcr.io/fluxcd/source-controller:v0.9.0
✔ all checks passed
I have an image repository with one tag
SHA-123
And I use the regex ^[a-fA-F0-9]+-(?P<build_number>[0-9]*) to find the "123" that is later compared numerically.
I now add this tag
SHA-v1.2.3-prod
Now image policy starts failing with the following message:
no image found for policy
It seems that the real problem is that my regex does not guarantee only numbers (there is no $ at the end), so it still matches the new tag, and the numerical policy apparently cannot be applied to it.
My suggestion is to make the error message more informative, since there really are images found for the policy (from step 1); it is some specific images that are failing. Ideally the message should also include the tag that does not work, e.g.
"found image SHA-v1.2.3-prod whose extracted value cannot be used in the specified policy"
That would help narrow down the problem, as the original message is quite confusing: there are in fact images that satisfy the policy.
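A sketch of the failure mode, with Python's `re` standing in for the controller's engine; the hex-SHA tags like a1b2c3-... are stand-ins for the real ones:

```python
import re

# The tags are hex SHAs; "a1b2c3-..." stands in for them here.
pattern = re.compile(r"^[a-fA-F0-9]+-(?P<build_number>[0-9]*)")  # no $ anchor

ok = pattern.match("a1b2c3-123")
assert ok and ok.group("build_number") == "123"

# The new-style tag still *matches* (so it passes the filter), but the
# numeric capture is empty, and an empty string has no numeric value:
bad = pattern.match("a1b2c3-v1.2.3-prod")
assert bad is not None and bad.group("build_number") == ""
try:
    int(bad.group("build_number"))
    assert False
except ValueError:
    pass  # roughly where a numerical policy has nothing to compare
```

Anchoring the pattern (e.g. ending it with [0-9]+$) excludes such tags at the filter stage instead of letting them break the policy.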
We have Docker tags of the form {version}-alpha.{build_number}.{revision_number}. If the revision number starts with 0 and consists only of digits, the image-reflector-controller will not fetch it as a new image.
Example:
0.0.1-alpha.12.b4djgk3 // recognizes as a new image
new image gets pushed
0.0.1-alpha.13.0545458 // does not recognize as a new image
0.0.1-alpha.12.b4djgk3 // fetches fine
We tried retagging the same image with an "r" in front of the revision number, and that worked around the problem:
0.0.1-alpha.13.r0545458 // recognizes as a new image
0.0.1-alpha.12.b4djgk3
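Assuming the controller parses these tags as strict SemVer, the likely culprit is the spec rule that a numeric pre-release identifier must not have a leading zero, which makes 0.0.1-alpha.13.0545458 an invalid version; prefixing "r" turns the identifier alphanumeric, where leading zeros are allowed. A sketch of that rule:

```python
import re

# SemVer 2.0.0: a pre-release identifier that is all digits is "numeric"
# and MUST NOT have a leading zero; alphanumeric identifiers may.
def valid_prerelease_id(ident):
    if re.fullmatch(r"[0-9]+", ident):
        return not (len(ident) > 1 and ident[0] == "0")
    return re.fullmatch(r"[0-9A-Za-z-]+", ident) is not None

def valid_prerelease(pre):
    return all(valid_prerelease_id(p) for p in pre.split("."))

assert valid_prerelease("alpha.12.b4djgk3")      # alphanumeric id: fine
assert not valid_prerelease("alpha.13.0545458")  # numeric id with leading zero
assert valid_prerelease("alpha.13.r0545458")     # the "r" workaround makes it valid
```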
Both the GOTK conventional metrics, where they make sense, and any metrics particular to image-reflector.
It has been noticed that while using fossa-contrib/fossa-action@v1, your fossa-api-key 5ee8bf422db1471e0bcf2bcb289185de is present in plaintext. Please ensure that secrets are encrypted and not passed as plain text in GitHub workflows.
I had inadvertently created an ImagePolicy with an invalid pattern, pattern: '^develop-continuous-(?<buildnumber>\d+)$' (omitting the p between the ? and the <). When trying to reconcile the policy, it was stuck in the following state:
NAME READY MESSAGE LATEST IMAGE
<name> False waiting to be reconciled
Given the message, I tried flux reconcile image repository and flux reconcile kustomization several times to no avail. I looked in the image-reflector-controller logs, and sure enough there was an error informing me that my regex was invalid.
It would be nice to bubble that information up as an explicit error message in flux get image policy or as an event on the ImagePolicy API object, so that users would have a better idea of what went wrong.
As an aside, perhaps it's possible to validate the spec.filterTags.pattern
field at creation time with a ValidatingAdmissionWebhook
instead of at runtime?
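As a rough illustration of what creation-time validation could catch, simply compiling the pattern is enough to surface the error. Go's regexp package uses the same (?P<name>…) named-group syntax, so this Python sketch (variable names are mine) behaves the same way as the report above:

```python
import re

patterns = [
    r'^develop-continuous-(?<buildnumber>\d+)$',   # missing the P: invalid
    r'^develop-continuous-(?P<buildnumber>\d+)$',  # valid named group
]

for p in patterns:
    try:
        re.compile(p)
        print(f"OK:      {p}")
    except re.error as e:
        # an admission webhook could reject the object with this message
        print(f"invalid: {p} ({e})")
```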
flux get image policy
I would have liked a more precise message in flux get image policy
for ImagePolicies
with invalid regexes. While it's technically true that the ImagePolicy
was waiting to be reconciled, the message makes it seem like running flux reconcile
is the obvious (but incorrect) action to take.
N/A
flux version 0.13.1
► checking prerequisites
✔ kubectl 1.21.0 >=1.18.0-0
✔ Kubernetes 1.20.4 >=1.16.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.10.0
✔ image-automation-controller: deployment ready
► ghcr.io/fluxcd/image-automation-controller:v0.9.0
✔ image-reflector-controller: deployment ready
► ghcr.io/fluxcd/image-reflector-controller:v0.9.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.11.1
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.13.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.12.1
✔ all checks passed
For the internal Docker registry we use a TLS certificate to authenticate, instead of username/password.
The image update automation in Flux v2 lacks support for using a custom TLS certificate to authenticate to the Docker registry.
In FluxV1, we used the registry.insecureHosts
helm property as a work-around.
flux get image repository
NAME READY MESSAGE LAST SCAN SUSPENDED
<REDACTED> False Get "<docker-registry-url>": x509: certificate signed by unknown authority False
flux --version
flux version 0.5.4
The Masterminds semver library is seriously flawed; see fluxcd/flux#2729.
In the source controller we decided to use blang/semver for Git tag semver ranges.
While writing the CLI subcommands for image policy, I realised that there's no reliable indication of readiness for an image policy. Although they are just a calculation based on the image data, there are still things that can go wrong, e.g., the referenced image repository does not exist, or is not ready itself.
Therefore,
(any other GOTK controller has examples of how to do these)
Example: https://github.com/fluxcd/kustomize-controller/blob/main/docs/api/kustomize.md
as generated in the Makefile: https://github.com/fluxcd/kustomize-controller/blob/main/Makefile#L66
Creating an Image repository with this YAML
apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImageRepository
metadata:
  name: inspector
  namespace: flux-system
spec:
  image: https://quay.io/repository/adoreme/inspector
  interval: 1m0s
  secretRef:
    name: quay-secret
and this docker-registry secret
{"auths":{"https://quay.io":{"username":"****","password":"****","auth":"****"}}}
results in this error
auth for "https:" not found in secret flux-system/quay-secret
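A sketch of why the lookup fails, assuming the controller resolves auth by the bare registry host taken from the image reference (the helper name is mine): the secret's key https://quay.io never matches the host quay.io.

```python
import json

secret = '{"auths":{"https://quay.io":{"username":"u","password":"p","auth":"x"}}}'

def find_auth(dockerconfigjson: str, host: str):
    # Look up credentials keyed by bare host, which is how image
    # references name their registry (e.g. quay.io/adoreme/inspector).
    auths = json.loads(dockerconfigjson)["auths"]
    return auths.get(host)

print(find_auth(secret, "quay.io"))  # None -> "auth for quay.io not found"
# With the scheme removed from the key, the lookup succeeds:
print(find_auth(secret.replace("https://", ""), "quay.io") is not None)  # True
```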
This issue comes from this message on Slack.
When using the 1.0 installation of flux I have noticed that when the flux controller updates the image version of a formatted manifest (the manifest in our case was formatted with prettier), flux doesn't change just the image tag: it enforces its own formatting, probably based on how the go-yaml package marshals YAML.
What I would like is the ability to have a gitops flux repo where I can enforce formatting using a tool like prettier. I see two options where I can get this.
Add hooks in flux that allow us to run arbitrary commands on the modified yaml when an image patch is committed. This is a bit of a complex problem because the flux docker image wouldn't necessarily have the ability to run prettier, so people who want to leverage this would have to run a custom build of flux where they also install their formatter.
Get a better understanding on how flux is formatting the yaml files today and a means of replicating that with a formatter that I can invoke via bash against a yaml file. This way I could just make my repo formatter consistent with the formatting that flux will use when it generates yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.bar: glob:master-*
  labels:
    k8s-app: foo
  name: foo
  namespace: bar
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: foo
  template:
    metadata:
      labels:
        k8s-app: foo
      name: foo
    spec:
      containers:
        - env:
            - name: PORT
              value: "3000"
            - name: NODE_APP_INSTANCE
              value: "qa"
          image: foo:master-fdb57d8
          name: foo
The above was styled by prettier by running prettier ./file-name -w.
Below is what flux generates after bumping the image version. Notice how the section starting with - env: is unindented.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.bar: glob:master-*
  labels:
    k8s-app: foo
  name: foo
  namespace: bar
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: foo
  template:
    metadata:
      labels:
        k8s-app: foo
      name: foo
    spec:
      containers:
      - env:
        - name: PORT
          value: "3000"
        - name: NODE_APP_INSTANCE
          value: "qa"
        image: foo:master-elka323d
        name: foo
I cannot configure prettier to match the current formatting that flux uses according to these issues prettier/prettier#4723 prettier/prettier#9355
Similar to kustomize-controller e2e workflow, we should create e2e tests for image-reflector-controller. We can use GHCR to avoid Docker Hub rate limits for public image: https://github.com/orgs/fluxcd/packages
Sometimes people want to partition their container image repo and use only a subset of the images. For example, by giving all the images destined for a dev environment a prefix dev-
.
To give those people a way to filter for the desired subset of images, we can let them supply a regular expression in their ImagePolicy
spec, and consider only the tags that match the regular expression.
They may also want to use only a portion of the tag for sorting, e.g., if the sort order is CalVer but there is a prefix, like dev-2021-01-11.build-1
. Therefore it's useful to be able to sort using a particular capture group from the regular expression, or more generally, a replacement expression.
Why not globbing?
Globs are simpler to understand -- dev-*
-- but suffer from a couple of problems:
If we implemented globs, we would very likely be asked to also implement regexp -- this is what happened for Flux v1. In the context of image tags, which don't have path separators (so no **
glob pattern), it is not difficult to construct a regular expression that's equivalent to a glob: dev-.*
for dev-*
and so on.
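To illustrate the glob-to-regexp point, Python's standard library can mechanically translate any glob into an equivalent regular expression:

```python
import fnmatch
import re

# fnmatch.translate turns a shell-style glob into an equivalent regexp,
# so supporting only regexps does not lose any expressiveness.
rx = re.compile(fnmatch.translate("dev-*"))
print(rx.pattern)  # e.g. a pattern equivalent to 'dev-.*' anchored at the end
print(bool(rx.match("dev-2021-01-11.build-1")))  # True
print(bool(rx.match("prod-2021-01-11")))         # False
```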
Place in the API
#58 (comment) and the comment following it #58 (comment) suggest that filtering is independent of the ordering. It can either appear in each policy:
spec:
  policy:
    semver:
      range: 1.x
      filterRegex: 'dev-v(.*)'
      sortOn: '\1'
or it can appear alongside the policy:
spec:
  filterRegex: 'dev-v(.*)'
  sortOn: '\1'
  policy:
    semver:
      range: 1.x
There's a sense in which the filtering takes place before the policy (e.g., extract the version portion of the tag before checking the semver range), so the latter feels closer to how people might think about it.
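A minimal sketch of the filter-then-sort pipeline described above (function and parameter names are illustrative, not the proposed API):

```python
import re

def select_latest(tags, filter_regex, extract, key=str):
    """Filter tags by regex, extract the sort key from a capture
    group, then pick the tag whose extracted key sorts last."""
    rx = re.compile(filter_regex)
    candidates = [(m.expand(extract), t)
                  for t in tags if (m := rx.match(t))]
    return max(candidates, key=lambda c: key(c[0]))[1] if candidates else None

tags = ["dev-v1.0.0", "dev-v1.2.0", "prod-v9.9.9"]
# prod-v9.9.9 is filtered out before the policy ever sees it
print(select_latest(tags, r"dev-v(.*)", r"\1"))  # dev-v1.2.0
```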
The additionalPrinterColumn for LAST SCAN is not populating. I believe it was #56 that introduced this. I have confirmed that updating the CRD as follows populates the field:
should instead read
.status.lastScanResult.scanTime
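For reference, a sketch of what the corrected printer column could look like in the CRD; the column name and type are illustrative, only the jsonPath comes from the report above:

```yaml
additionalPrinterColumns:
  - name: Last scan
    type: date
    jsonPath: .status.lastScanResult.scanTime
```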
Must take into account these scenarios:
In the case of ECR and GCR, the permissions are connected to the account running the controller, so it is not possible to have multi-tenancy (ImageRepository objects that have different access to ECR, say) without some extra machinery. Supporting only single tenancy is OK for now.
There are some tag conventions that lend themselves naturally to lexicographic sorting, for example, nightly builds with a timestamp style date, or CalVer style naming.
It'd be nice to support this too: essentially, sort the list of tag names lexicographically and take the last one.
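A one-line illustration of why lexicographic order suffices for timestamp-style tags:

```python
# Timestamp-style and CalVer-style tags sort correctly as plain strings,
# so "latest" is just the lexicographic maximum of the tag list.
tags = ["nightly-2021-01-09", "nightly-2021-01-11", "nightly-2020-12-31"]
print(max(tags))  # nightly-2021-01-11
```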
It's sometimes necessary to pin an automation to a particular tag of an image, say if
You can use an exact semver (if using semver for your images), or a literal string as a regular expression, to restrict the image policy to exactly one tag. You can also just hard-wire the desired tag into the resource in question and remove the update marker.
These overwrite the original policy or automation, so you can't see what it would normally be. Mechanically this isn't a problem, since you can use the git history, or etcd history, to examine or restore the previous state. However, making the pinning explicit means the operational state ("this is being held back") is clear in the API.
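As a sketch of the exact-semver workaround mentioned above (field values here are illustrative), a fully specified version already holds a policy to one tag:

```yaml
spec:
  policy:
    semver:
      # an exact version acts as a pin: only this tag can be selected
      range: 2.29.3
```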
We should create an admission webhook for validating the policies. This makes sense in the context of ImagePolicyChoice
which should configure one and only one policy type. An image policy that lacks a valid ImagePolicyChoice
should be considered invalid and fail to create in this situation.
Need to look for examples, first. Will likely need some design.
I created an image repository with a spec.image
of ghcr.io/<org>/<image-name>
. I added a secret ref of spec.secretRef.name
with the value of github
there is a github secret in the flux-system namespace that has auth that is identical to the one kubernetes is using to download container images but I am getting an error of
"auth for \"ghcr.io\" not found in secret flux-system/github"
Any ideas what could be causing this? I am using the new github container image repositories to host my container images.
While alphabetical ordering works great with image tags that contain a RFC3339 timestamp, it's not suitable for CI build IDs, as these are numbers with no padding.
I propose we introduce a new ordering option called numerical to be able to correctly detect the latest build for tags in the format <PREFIX>-<BUILD_ID>.
Example:
kind: ImagePolicy
spec:
  filterTags:
    pattern: '^main-[a-fA-F0-9]+-(?P<id>.*)'
    extract: '$id'
  policy:
    numerical:
      order: asc
Given the tags main-845d3a80-100 and main-3e32dc8a-2, the numerical policy, unlike alphabetical, will choose main-845d3a80-100 as the latest build.
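A small sketch of the difference, using Python to stand in for the controller's logic:

```python
import re

tags = ["main-845d3a80-100", "main-3e32dc8a-2"]
rx = re.compile(r"^main-[a-fA-F0-9]+-(?P<id>.*)")

# Extract the build ID from each tag via the named capture group.
ids = {t: rx.match(t).group("id") for t in tags}

# Alphabetical: "2" > "100" as strings, so the older build wins.
print(max(tags, key=lambda t: ids[t]))       # main-3e32dc8a-2
# Numerical: 100 > 2, which is the intended result.
print(max(tags, key=lambda t: int(ids[t])))  # main-845d3a80-100
```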
So far there's a semver policy for selecting images, which selects the highest version within a given range.
Another variety of policy that we use everywhere in the Weave Cloud dev environment is "run the most recently built image", which is calculated by looking at the build timestamp. This is easily tripped up, though, in a few ways:
What's actually required is the image built from the most recent commit (.. on a particular branch). The git revision policy would calculate that by looking at the git repository itself, to determine which among the available images should be selected.
kind: ImagePolicy
spec:
  imageRepository:
    name: flux-image
  policy:
    gitRevision:
      gitRepositoryRef:
        name: flux-repo
      branch: main
NB this will probably need a bit of extra config to tell it how to get the revision of a given image -- either by parsing its tag, or by looking at a label in the metadata -- tbd.
After a hard restart of a k8s node on my Raspberry Pi cluster the image-reflector-controller went into a crashloop and was only able to recover after the pod was deleted and recreated. The logs point to a problem with a truncate operation. I'm running v0.5.0.
flux-system image-reflector-controller-774686df7b-9swbh 0/1 CrashLoopBackOff 106 23h
badger 2021/02/11 10:23:21 INFO: All 1 tables opened in 0s
badger 2021/02/11 10:23:21 INFO: Discard stats nextEmptySlot: 0
badger 2021/02/11 10:23:21 INFO: Set nextTxnTs to 872
badger 2021/02/11 10:23:21 INFO: Deleting empty file: /data/000026.vlog
badger 2021/02/11 10:23:21 ERROR: Received err: while truncating last value log file: /data/000027.vlog error: mremap size mismatch: requested: 20 got: 536870912. Cleaning up...
{"level":"error","ts":"2021-02-11T10:23:21.766Z","logger":"setup","msg":"unable to open the Badger database","error":"During db.vlog.open error: while truncating last value log file: /data/000027.vlog error: mremap size mismatch: requested: 20 got: 536870912","stacktrace":"runtime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
Use the logger setup from https://github.com/fluxcd/pkg/tree/main/runtime/logger and review use of logging vs dev guide / examples elsewhere.
You sometimes want to use the digest of an image, rather than the tag; e.g., if you are interested in exactly reproducible builds.
For that reason, it'd be useful to supply the digest of an image selected by a policy object, as well as its tag, in the status. The digest appears to be available via https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote#Head (or, failing that, Get in the same place). This has to be done per tag, so while we don't need this metadata for sorting/selecting, the policy controller can just fetch it for the images it selects.
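For illustration, the registry HTTP API exposes the digest of a tagged manifest in the Docker-Content-Digest header of a HEAD request to the manifest endpoint. The helpers below only build the URL and read the header (names are mine; no network call is made):

```python
def manifest_url(registry: str, repository: str, tag: str) -> str:
    # Manifest endpoint of the registry HTTP API (v2).
    return f"https://{registry}/v2/{repository}/manifests/{tag}"

def digest_from_headers(headers: dict) -> str:
    # A HEAD request to the manifest endpoint returns the digest here.
    return headers["Docker-Content-Digest"]

print(manifest_url("ghcr.io", "fluxcd/image-reflector-controller", "v0.9.0"))
print(digest_from_headers({"Docker-Content-Digest": "sha256:abc123"}))
```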
The image reflector controller log is showing this:
{"level":"error","ts":"2021-04-27T14:55:40.005Z","logger":"controller-runtime.manager.controller.imagerepository","msg":"Reconciler error","reconciler group":"image.toolkit.fluxcd.io","reconciler kind":"ImageRepository","name":"giftcard","namespace":"flux-system","error":"GET https://XXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/v2/redacted/redacted/tags/list?n=1000: unexpected status code 401 Unauthorized: Not Authorized\n"}
Which is bizarre because the node in EKS has this policy attached to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage",
        "ecr:GetLifecyclePolicy",
        "ecr:GetLifecyclePolicyPreview",
        "ecr:ListTagsForResource",
        "ecr:DescribeImageScanFindings"
      ],
      "Resource": "*"
    }
  ]
}
...and if I specify an image tag for the same repository, Kubernetes has no problem at all pulling the image.
I understand there is a mechanism you recommend to set up a secret + DaemonSet to get the credentials, but I'm puzzled as to why it can't just pull the image when the node itself has permissions to do so, and Kubernetes pulls the images just fine.
Hey all--just deployed Flux v2 to our dev cluster and am SUPER excited about it! BIG improvement; the last version served us so well, we can't wait to see the things about the new version we haven't even unlocked yet.
I was playing around with the ImagePolicy, and wondered how you would do rollbacks in the event of a big issue if the policy only works on incrementing semver
versions.
For example, I have this policy:
apiVersion: image.toolkit.fluxcd.io/v1alpha1
kind: ImagePolicy
metadata:
  name: my-image
  namespace: flux-system
spec:
  filterTags:
    extract: $1
    pattern: ^dev-v(.*)
  imageRepositoryRef:
    name: my-image
  policy:
    semver:
      range: 2.x.x-0
The image tag we are using is dev-v2.x.x-<build-id>
. Doing it this way allows us to simply re-tag an image with stg
and the staging cluster picks it up and deploys automatically (same for prod when we promote). This currently works with our expected workflow, but what happens when we need to roll back?
Say we have just deployed v2.31.0-34
and now we need a way to re-deploy v2.29.3-33
. Is there any way to do this or is it on the roadmap? Or is the expected path that we would just create a patch and create v2.31.1-35
? That would be particularly difficult in the case of a rollback, as we would need to revert the commit in our git repo versus just trigger a CI job that rebuilds a particular tag (or better yet, find a way to tell flux to grab an old image temporarily).
Any thoughts around this?
Thanks so much in advance!
The image metadata (tags, for now) used by the image reflector are just kept in a hash, to start with. But given the expense (and latency) of scanning for more image metadata, this will soon be a pain point, because any restart will lose everything.
A couple of design ideas:
The one thing memcached was quite useful for, in theory, was expiring entries -- any solution here will probably need a garbage collection process.
As mentioned in fluxcd/flux2#107 (reply in thread): instead of giving a range of versions, give the part of the version that's allowed to be updated, e.g., patch
or minor
.
policy:
  semver:
    level: patch
.. except this doesn't work without a base version, and supplying a base version would in effect be giving a version range. So maybe not that useful.
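A tiny sketch of the translation the last paragraph implies (purely illustrative), which makes the problem visible: every level needs a base version to expand into a range:

```python
def level_to_range(base: str, level: str) -> str:
    """Translate (base version, allowed update level) into an
    equivalent semver range, showing why a base is required."""
    major, minor, _ = base.split(".")
    if level == "patch":
        return f"{major}.{minor}.x"
    if level == "minor":
        return f"{major}.x"
    return ">=" + base  # major: anything newer than the base

print(level_to_range("1.2.3", "patch"))  # 1.2.x
print(level_to_range("1.2.3", "minor"))  # 1.x
```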