kyverno / policy-reporter
Monitoring and Observability Tool for the PolicyReport CRD with an optional UI.
Home Page: https://kyverno.github.io/policy-reporter/
License: MIT License
I was trying to get Prometheus metrics (as described in the docs) on /metrics, but somehow it returns a 404.
$ kubectl port-forward service/policy-reporter-ui 8082:8080 -n kyverno
$ helm get values policy-reporter
USER-SUPPLIED VALUES:
kyvernoPlugin:
  enabled: true
metrics:
  enabled: true
ui:
  enabled: true
  plugins:
    kyverno: true
$ curl http://localhost:8082/metrics
Not Found
How I installed it:
$ helm install policy-reporter policy-reporter/policy-reporter --set kyvernoPlugin.enabled=true --set ui.enabled=true --set ui.plugins.kyverno=true --set metrics.enabled=true -n kyverno --create-namespace
Anything I missed here? 🤔
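One thing worth checking (a hedged guess; the service name below assumes the chart's default naming): the curl above targets the UI service, while the /metrics endpoint is served by the core policy-reporter service, so port-forwarding that service instead may be what's missing:
$ kubectl port-forward service/policy-reporter 8080:8080 -n kyverno
$ curl http://localhost:8080/metrics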
On both of the "Details" dashboards included in the Helm chart, there are drop-down filters at the top for Policy, Category, Severity, Namespace (PolicyReport Details only) and Kind. I'd like to see these same filters on the PolicyReports dashboard.
Hi,
Is there any option to host the policy reporter UI from a sub path?
Regards,
Shilpa
New vulnerabilities have been found in Golang 1.17.2; we need to bump the Golang version to 1.17.6 for all policy-reporter images:
New vulnerabilities have been found in Golang 1.17.6. We need to bump the Golang version to 1.17.8 (or higher) for all policy-reporter images:
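A minimal sketch of the kind of change involved, assuming the images are built from the official golang builder image (the exact Dockerfile layout here is hypothetical):
# Dockerfile (hypothetical excerpt)
FROM golang:1.17.8 AS builder
# build steps unchanged; only the base image tag is bumped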
Hi,
I'm trying to deploy the policy-reporter with Helm and I'm running into an issue when I try to apply our own values.yaml file. Not sure if it's a bug or I'm missing something here.
policy-reporter: v2.2.0
kubernetes: 1.24.1
helm: v3.8.0
helm upgrade -i policy-reporter policy-reporter/policy-reporter -n policy-reporter --namespace policy-reporter --version v2.2.2
Release "policy-reporter" does not exist. Installing it now.
NAME: policy-reporter
LAST DEPLOYED: Fri Jan 28 19:14:22 2022
NAMESPACE: policy-reporter
STATUS: deployed
REVISION: 1
TEST SUITE: None
Export the values used by the deployment and store them in a file:
helm show values policy-reporter/policy-reporter > default_values.yaml
Run helm upgrade using the newly created file:
helm upgrade -i policy-reporter policy-reporter/policy-reporter -n policy-reporter --namespace policy-reporter --version v2.2.2 -f default_values.yaml
Expected behavior:
Successful upgrade showing REVISION: 2
Error: UPGRADE FAILED: template: policy-reporter/templates/deployment.yaml:31:28: executing "policy-reporter/templates/deployment.yaml" at <include (print .Template.BasePath "/config-secret.yaml") .>: error calling include: template: policy-reporter/templates/config-secret.yaml:10:18: executing "policy-reporter/templates/config-secret.yaml" at <tpl (.Files.Get "config.yaml") .>: error calling tpl: error during tpl function execution for "loki:\n host: {{ .Values.target.loki.host | quote }}\n minimumPriority: {{ .Values.target.loki.minimumPriority | quote }}\n skipExistingOnStartup: {{ .Values.target.loki.skipExistingOnStartup }}\n {{- with .Values.target.loki.sources }}\n sources:\n {{- toYaml . | nindent 4 }}\n {{- end }}\n\nelasticsearch:\n host: {{ .Values.target.elasticsearch.host | quote }}\n index: {{ .Values.target.elasticsearch.index | default \"policy-reporter\" | quote }}\n rotation: {{ .Values.target.elasticsearch.rotation | default \"dayli\" | quote }}\n minimumPriority: {{ .Values.target.elasticsearch.minimumPriority | quote }}\n skipExistingOnStartup: {{ .Values.target.elasticsearch.skipExistingOnStartup }}\n {{- with .Values.target.elasticsearch.sources }}\n sources:\n {{- toYaml . | nindent 4 }}\n {{- end }}\n\nslack:\n webhook: {{ .Values.target.slack.webhook | quote }}\n minimumPriority: {{ .Values.target.slack.minimumPriority | quote }}\n skipExistingOnStartup: {{ .Values.target.slack.skipExistingOnStartup }}\n {{- with .Values.target.slack.sources }}\n sources:\n {{- toYaml . | nindent 4 }}\n {{- end }}\n\ndiscord:\n webhook: {{ .Values.target.discord.webhook | quote }}\n minimumPriority: {{ .Values.target.discord.minimumPriority | quote }}\n skipExistingOnStartup: {{ .Values.target.discord.skipExistingOnStartup }}\n {{- with .Values.target.discord.sources }}\n sources:\n {{- toYaml . | nindent 4 }}\n {{- end }}\n\nteams:\n webhook: {{ .Values.target.teams.webhook | quote }}\n minimumPriority: {{ .Values.target.teams.minimumPriority | quote }}\n skipExistingOnStartup: {{ .Values.target.teams.skipExistingOnStartup }}\n {{- with .Values.target.teams.sources }}\n sources:\n {{- toYaml . | nindent 4 }}\n {{- end }}\n\nui:\n host: {{ include \"policyreporter.uihost\" . }}\n minimumPriority: {{ .Values.target.ui.minimumPriority | quote }}\n skipExistingOnStartup: {{ .Values.target.ui.skipExistingOnStartup }}\n {{- with .Values.target.ui.sources }}\n sources:\n {{- toYaml . | nindent 4 }}\n {{- end }}\n\ns3:\n accessKeyID: {{ .Values.target.s3.accessKeyID }}\n secretAccessKey: {{ .Values.target.s3.secretAccessKey }}\n region: {{ .Values.target.s3.region }}\n endpoint: {{ .Values.target.s3.endpoint }}\n bucket: {{ .Values.target.s3.bucket }}\n prefix: {{ .Values.target.s3.prefix }}\n minimumPriority: {{ .Values.target.s3.minimumPriority | quote }}\n skipExistingOnStartup: {{ .Values.target.s3.skipExistingOnStartup }}\n {{- with .Values.target.s3.sources }}\n sources:\n {{- toYaml . | nindent 4 }}\n {{- end }}\n\n{{- with .Values.policyPriorities }}\npriorityMap:\n {{- toYaml . | nindent 2 }}\n{{- end }}": template: policy-reporter/templates/deployment.yaml:49:11: executing "policy-reporter/templates/deployment.yaml" at <include "policyreporter.uihost" .>: error calling include: template: policy-reporter/templates/_helpers.tpl:68:47: executing "policyreporter.uihost" at <.Values.ui.views.logs>: nil pointer evaluating interface {}.logs
Digging through the error message I managed to see something about the UI. So I tried enabling the UI, and if I do so, the deployment works.
helm upgrade -i policy-reporter policy-reporter/policy-reporter -n policy-reporter --namespace policy-reporter --version v2.2.2 -f default_values.yaml --set ui.enabled=true
Release "policy-reporter" has been upgraded. Happy Helming!
There is no need for us to have the UI enabled for the policy reporter, so I'd rather have it disabled. A possible workaround is sketched below.
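A minimal sketch of a workaround, assuming the nil pointer comes from feeding the chart the full exported defaults file (which lacks the ui.views block the template expects): pass only the keys you actually override instead of the complete default_values.yaml.
# my-values.yaml (hypothetical minimal override file)
ui:
  enabled: false
helm upgrade -i policy-reporter policy-reporter/policy-reporter -n policy-reporter --version v2.2.2 -f my-values.yaml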
Thanks!
faeac45 broke
Our Policy Reporter UI regularly responds very slowly, and sometimes an error message appears: "Unable to retrieve all Data from the Server".
In the logs:
2022/06/02 16:20:38 http: proxy error: context canceled
2022/06/02 16:20:44 http: proxy error: context canceled
2022/06/02 16:20:44 http: proxy error: context canceled
2022/06/02 16:20:44 http: proxy error: context canceled
When we update a Kyverno policy, the information takes a long time to appear in the UI, whereas the report CRD has already been updated.
Any idea about configuration we can tune to improve this, please? (A sketch follows the notes below.)
Additional notes:
It would be really good to have an arm64-compatible Docker image to deploy the application on a Kubernetes Raspberry Pi cluster.
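Not a confirmed fix, just a generic first experiment: if the chart exposes the usual resources block in its values.yaml (an assumption worth verifying), giving the pod more headroom can help on low-powered arm64 nodes:
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi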
Trying to use the amazing feature from #167, I noticed that external clusters behind a reverse proxy can't be accessed. E.g. my external policy-reporter is accessed through an nginx ingress.
The reason is:
NewSingleHostReverseProxy does not rewrite the Host header. To rewrite Host headers, use ReverseProxy directly with a custom Director policy.
Requests to the ingress controller are submitted with the wrong Host header, so the root policy-reporter can't gather the data. Only non-proxied setups work for now.
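A minimal sketch (in Go; this is not policy-reporter's actual code) of the pattern the Go documentation points at: a ReverseProxy with a custom Director that also rewrites the Host header, which NewSingleHostReverseProxy does not do.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newHostRewritingProxy forwards requests to target and, unlike
// httputil.NewSingleHostReverseProxy, also rewrites the Host header so
// ingress controllers in front of the external cluster can route the request.
func newHostRewritingProxy(target *url.URL) *httputil.ReverseProxy {
	return &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
			req.Host = target.Host // the missing piece in NewSingleHostReverseProxy
		},
	}
}

func main() {
	// hypothetical external policy-reporter endpoint behind an nginx ingress
	target, err := url.Parse("https://policy-reporter.example.com")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.ListenAndServe(":8080", newHostRewritingProxy(target)))
}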
This behavior was seen on a cluster running Kyverno 1.7.1. Deleting the pod so that it gets recreated resulted in everything working as expected.
I deployed a set of Kyverno with policies, Policy Reporter and Policy Reporter UI on cluster A, and I am able to see the policy reports in the UI.
I configured one more setup of Kyverno with policies and Policy Reporter on cluster B, but this time without the Policy Reporter UI. In the Helm chart values.yaml of Policy Reporter, for the UI URL field, I gave the FQDN of cluster A's Policy Reporter UI URL.
After installing the setup, I see the reports are pushed. I am able to see report errors in the Policy Reporter log, but unable to see them in the dashboard by filtering on cluster or namespace etc. How can we do this multi-cluster UI setup?
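For reference, a hypothetical sketch of what pushing results from cluster B to the UI running on cluster A might look like in the values (the target.ui.host key follows the shape of the chart's target section; the hostname is made up):
target:
  ui:
    host: https://policy-reporter-ui.cluster-a.example.com   # placeholder FQDN of cluster A's UI
    skipExistingOnStartup: true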
Awesome project. Just wondering about options for how multiple teams in a cluster can have different access levels to the UI and get separate notification back-ends. Is there any roadmap in this direction?
Hi!
So I have the latest policy-reporter Helm chart (2.10.0) and I have monitoring enabled.
monitoring:
  enabled: true
  serviceMonitor:
    labels:
      release: kube-prometheus-stack
  plugins:
    kyverno: true
  grafana:
    # required: namespace of your Grafana installation
    namespace: monitoring-system
    dashboards:
      # Enable the deployment of grafana dashboards
      enabled: true
      # Label to find dashboards using the k8s sidecar
      label: grafana_dashboard
    folder:
      # Annotation to enable folder storage using the k8s sidecar
      annotation: grafana_folder
      # Grafana folder in which to store the dashboards
      name: Big Brother
What is interesting is that these 3 dashboards worked on Monday and I made no changes to policy-reporter itself.
But what I did do is a couple of kube-prometheus-stack Helm chart upgrades. I didn't see anything dangerous there, but my suspicion is that something changed that policy-reporter is expecting.
On Monday I did upgrade from 36.2.1 -> 36.6.1 and 36.6.1 -> 36.6.2 (nothing special in values file)
On Wednesday 36.6.2 -> 37.0.0 (here they changed metricRelabelings and cAdvisorMetricRelabelings)
On Thursday 37.0.0 -> 37.2.0 (nothing special in values file)
Not sure when the dashboards stopped working, but they worked on Monday, and yesterday after the upgrade I got this:
Or maybe I am on the wrong track here?
Thanks!
Hello!
I am from Yandex Cloud. Recently we contributed Yandex Cloud S3 support to Policy Reporter.
We want to add policy-reporter to our Yandex Cloud Kubernetes Marketplace, and we need the policy-reporter logo in SVG format.
I tried to convert your image from the docs to SVG, but unfortunately I couldn't get good quality and size. Could you please send me your logo in SVG format in proper quality and size?
Thank you very much
Hi Kyverno Team,
Looking at https://raw.githubusercontent.com/kyverno/policy-reporter/main/charts/policy-reporter/values.yaml, would it be possible to add skipTLS for MS Teams notifications? (A hypothetical sketch is below.)
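Purely illustrative sketch of the requested option (skipTLS is the proposed key from this request, not an existing chart value; the webhook URL is a placeholder):
target:
  teams:
    webhook: https://example.webhook.office.com/webhookb2/placeholder
    skipTLS: true   # proposed: skip TLS verification for the webhook call
    minimumPriority: warning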
Thank you
The Kyverno chart already has PDBs implemented, but the policy-reporter charts do not.
https://github.com/kyverno/policy-reporter/pull/58/files#diff-892d0c98e0e009d28a0ff1ea944c97db2fbbb4d71b53f92b3736c8780e824f25R9 updated the policy-reporter container version to 1.8.5, but it isn't available on Docker Hub: https://hub.docker.com/r/fjogeleit/policy-reporter/tags
Also, I don't like the secret file: you don't encode it to base64, and some CI systems can block when checking manifests.
I'd recommend replacing https://github.com/fjogeleit/policy-reporter/blob/main/charts/policy-reporter/templates/targetssecret.yaml#L8 to use | b64enc, but it's a breaking change in the Helm chart. A sketch of the change is below.
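A minimal sketch of the suggested template change, assuming the secret keeps rendering the same config.yaml (only the encoding changes from stringData to base64-encoded data):
data:
  config.yaml: {{ tpl (.Files.Get "config.yaml") . | b64enc }}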
And one more thing: what about moving the loki: and elasticsearch: vars inside an additional config object, i.e. config.loki and config.elasticsearch?
https://github.com/fjogeleit/policy-reporter/blob/b71128448dcbfde8bd2937d4d60661103d9c52c3/charts/policy-reporter/values.yaml#L30
helm template ./
---
# Source: policy-reporter/templates/targetssecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: policy-reporter-targets
  labels:
    helm.sh/chart: policy-reporter-0.16.2
    app.kubernetes.io/name: policy-reporter
    app.kubernetes.io/instance: policy-reporter
    app.kubernetes.io/version: "0.12.0"
    app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
  config.yaml: |-
    loki:
      host: ""
      minimumPriority: ""
      skipExistingOnStartup: true
    elasticsearch:
      host: ""
      index: "policy-reporter"
      rotation: "dayli"
      minimumPriority: ""
      skipExistingOnStartup: true
    slack:
      webhook: ""
      minimumPriority: ""
      skipExistingOnStartup: true
    discord:
      webhook: ""
      minimumPriority: ""
      skipExistingOnStartup: true
When creating an image policy and then creating a resource which triggers that policy (e.g. an unsigned image on a pod), it appears to crash policy-reporter.
helm repo add policy-reporter https://kyverno.github.io/policy-reporter
helm repo update
helm upgrade --install policy-reporter policy-reporter/policy-reporter --create-namespace -n policy-reporter --set metrics.enabled=true --set api.enabled=true
test-image-policy.txt
attached; convert it to YAML and apply it: kubectl apply -f test-image-policy.yaml
kubectl run unsigned --image=ghcr.io/kyverno/test-verify-image:unsigned
kubectl run signed --image=ghcr.io/kyverno/test-verify-image:signed
Notice how policy-reporter will produce an error in the logs and constantly restart.
The expected result is that policy-reporter does not crash.
The errors found:
2022/02/15 18:38:44 [WARNING] - Healthz Check: No policyreport.wgpolicyk8s.io and clusterpolicyreport.wgpolicyk8s.io crds are found
2022/02/15 18:38:46 [WARNING] - Healthz Check: No policyreport.wgpolicyk8s.io and clusterpolicyreport.wgpolicyk8s.io crds are found
2022/02/15 18:38:49 [WARNING] - Healthz Check: No policyreport.wgpolicyk8s.io and clusterpolicyreport.wgpolicyk8s.io crds are found
2022/02/15 18:38:49 [INFO] Resource registered: wgpolicyk8s.io/v1alpha2, Resource=clusterpolicyreports
2022/02/15 18:38:49 [INFO] Resource registered: wgpolicyk8s.io/v1alpha2, Resource=policyreports
E0215 18:38:49.225534 1 runtime.go:78] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x1741cc0), concrete:(*runtime._type)(nil), asserted:(*runtime._type)(0x16fbc20), missingMethod:""} (interface conversion: interface {} is nil, not string)
goroutine 52 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x177cde0, 0xc000211d10})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40ed74})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x75
panic({0x177cde0, 0xc000211d10})
/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).mapResult(0xc0000109f8, 0xc0003cdf80)
/app/pkg/kubernetes/mapper.go:90 +0x734
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).MapPolicyReport(0xc000121cc0, 0xc0005cc230)
/app/pkg/kubernetes/mapper.go:55 +0x485
github.com/kyverno/policy-reporter/pkg/kubernetes.(*k8sPolicyReportClient).watchCRD.func2({0x191f320, 0xc00037a7e8})
/app/pkg/kubernetes/policy_report_client.go:108 +0x44
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0x9f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fcf7c455e60)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000056f38, {0x1c78f20, 0xc0005d6000}, 0x1, 0xc0005d4000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0xc000056f88)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc00010df00)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
panic: interface conversion: interface {} is nil, not string [recovered]
panic: interface conversion: interface {} is nil, not string
goroutine 52 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40ed74})
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x177cde0, 0xc000211d10})
/usr/local/go/src/runtime/panic.go:1038 +0x215
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).mapResult(0xc0000109f8, 0xc0003cdf80)
/app/pkg/kubernetes/mapper.go:90 +0x734
github.com/kyverno/policy-reporter/pkg/kubernetes.(*mapper).MapPolicyReport(0xc000121cc0, 0xc0005cc230)
/app/pkg/kubernetes/mapper.go:55 +0x485
github.com/kyverno/policy-reporter/pkg/kubernetes.(*k8sPolicyReportClient).watchCRD.func2({0x191f320, 0xc00037a7e8})
/app/pkg/kubernetes/policy_report_client.go:108 +0x44
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:777 +0x9f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fcf7c455e60)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000056f38, {0x1c78f20, 0xc0005d6000}, 0x1, 0xc0005d4000)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0xc000056f88)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc00010df00)
/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go:771 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
Possibly the same issue exists in other paths in the repo. (A sketch of the defensive pattern is below.)
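For context, a minimal sketch of the comma-ok assertion that avoids this class of panic; the map shape and key here are hypothetical, the real mapping code lives in pkg/kubernetes/mapper.go:
package main

import "fmt"

// safeString returns "" instead of panicking when the value is nil or not a
// string, which is exactly the failure mode in the trace above
// ("interface conversion: interface {} is nil, not string").
func safeString(m map[string]interface{}, key string) string {
	v, ok := m[key].(string) // ok is false for nil or non-string values
	if !ok {
		return ""
	}
	return v
}

func main() {
	result := map[string]interface{}{"message": nil} // hypothetical raw result field
	fmt.Printf("%q\n", safeString(result, "message"))
}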
I'm using the Grafana dashboard named PolicyReports provided in the Helm chart. To be consistent with the other dashboards, the panel columns should be updated as follows:
Failing PolicyRules: move the category and severity columns in front of the namespace column.
Failing ClusterPolicyRules: remove the Pass and Skip statuses, since this panel is supposed to be for failing policies only; order the columns as (category, severity, kind, name, policy, rule, status); remove the container column.
Hey there,
first off, thank you for this helpful tool. It makes adopting Kyverno even easier.
I was "quick starting" kyverno and stumbled upon this issue, that the existing (strict) kyverno policy disallow-capabilities-strict
will trigger on the policy-reporter
deployment.
The problem:
The policy tests for the upper-case "ALL" as the required drop capability:
https://github.com/kyverno/policies/blob/b3d81ea30e8751a503abe9dd888cc6cc3d4ebd72/pod-security/restricted/disallow-capabilities-strict/disallow-capabilities-strict.yaml#L39-L41
All deployment templates in this repo use a lower-case "all" and therefore trigger the policy.
I first tried adding a to_upper function on that validate condition, but that just got messy, because we are already in a foreach loop for the containers.
Although I was unable to find a definition from the Kubernetes side on whether a lower-case "all" is allowed, all references I could find use the upper-case spelling.
So I am going the easy way first and will add a PR here that fixes it for policy-reporter.
I happen to be one of the folks who don't use Helm, Kustomize, etc., so trying to get this stood up has been mostly reverse engineering the Helm chart. There are some places where it isn't quite clear to me how to adapt, so even a globbed YAML manifest of the associated resources to install via kubectl [apply|create] -f <some path to manifest>.yaml would be very welcome.
Kyverno's own Quick Start page has this, for example:
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
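In the meantime, a sketch of one way to generate such a manifest yourself with helm template (this renders the chart locally without installing anything; the output file name is arbitrary):
helm repo add policy-reporter https://kyverno.github.io/policy-reporter
helm repo update
helm template policy-reporter policy-reporter/policy-reporter -n policy-reporter > policy-reporter.yaml
kubectl create namespace policy-reporter
kubectl apply -n policy-reporter -f policy-reporter.yaml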
Hi,
I'm trying to configure the Trivy scan and I ran into an issue:
2022/08/19 06:26:25 [ERROR] Failed to update Policy Report pod-vault-0-vault (UNIQUE constraint failed: policy_report_result.id)
policy-report vault trivy
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  creationTimestamp: "2022-08-18T16:10:25Z"
  generation: 1
  labels:
    pod-spec-hash: 764f764bb7
    trivy-adapter.container.name: vault
    trivy-adapter.resource.kind: Pod
    trivy-adapter.resource.name: vault-0
    trivy-adapter.resource.namespace: infra
  name: pod-vault-0-vault
  namespace: infra
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    controller: true
    kind: Pod
    name: vault-0
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  resourceVersion: "425016218"
  uid: 3ea09e50-8b3e-43df-b875-2499eaea033c
results:
- category: libcrypto1.1
  message: 'openssl: AES OCB fails to encrypt some bytes'
  policy: CVE-2022-2097
  properties:
    FixedVersion: 1.1.1q-r0
    InstalledVersion: 1.1.1n-r0
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-2097
  resources:
  - apiVersion: v1
    kind: Pod
    name: vault-0
    namespace: infra
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  result: error
  scored: true
  severity: high
  source: Trivy
  timestamp:
    nanos: -110037824
    seconds: 1660839024
- category: libssl1.1
  message: 'openssl: AES OCB fails to encrypt some bytes'
  policy: CVE-2022-2097
  properties:
    FixedVersion: 1.1.1q-r0
    InstalledVersion: 1.1.1n-r0
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-2097
  resources:
  - apiVersion: v1
    kind: Pod
    name: vault-0
    namespace: infra
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  result: error
  scored: true
  severity: high
  source: Trivy
  timestamp:
    nanos: -110016824
    seconds: 1660839024
- category: zlib
  message: 'zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c
    via a large gzip header extra field'
  policy: CVE-2022-37434
  properties:
    FixedVersion: 1.2.12-r2
    InstalledVersion: 1.2.12-r0
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-37434
  resources:
  - apiVersion: v1
    kind: Pod
    name: vault-0
    namespace: infra
    uid: d4451f8a-11c5-4a2b-94f7-a7e0fc0d4019
  result: skip
  scored: true
  source: Trivy
  timestamp:
    nanos: -110011824
    seconds: 1660839024
summary:
  error: 2
  fail: 0
  pass: 0
  skip: 1
  warn: 0
Some other scans and PolicyReports work, for example the one below. (Possibly relevant: the failing report above contains two results that are identical except for their category, while this one does not.)
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  creationTimestamp: "2022-08-19T03:09:52Z"
  generation: 1
  labels:
    pod-spec-hash: 7db8786b54
    trivy-adapter.container.name: victoria-metrics-agent
    trivy-adapter.resource.kind: Pod
    trivy-adapter.resource.name: vm-agent-victoria-metrics-agent-0
    trivy-adapter.resource.namespace: monitoring
  name: pod-vm-agent-victoria-metrics-agent-0-victoria-metrics-agent
  namespace: monitoring
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    controller: true
    kind: Pod
    name: vm-agent-victoria-metrics-agent-0
    uid: dd2448f0-ef36-40e1-8ee0-4da4dbaad378
  resourceVersion: "426019219"
  uid: f5e531a1-3d61-48fd-8515-f1fcc5d87dcc
results:
- category: zlib
  message: 'zlib: a heap-based buffer over-read or buffer overflow in inflate in inflate.c
    via a large gzip header extra field'
  policy: CVE-2022-37434
  properties:
    FixedVersion: 1.2.12-r2
    InstalledVersion: 1.2.12-r1
    PrimaryURL: https://avd.aquasec.com/nvd/cve-2022-37434
  resources:
  - apiVersion: v1
    kind: Pod
    name: vm-agent-victoria-metrics-agent-0
    namespace: monitoring
    uid: dd2448f0-ef36-40e1-8ee0-4da4dbaad378
  result: skip
  scored: true
  source: Trivy
  timestamp:
    nanos: -1659053872
    seconds: 1660878592
summary:
  error: 0
  fail: 0
  pass: 0
  skip: 1
  warn: 0
Thanks
Just wondering what I can configure with this option, or what the intention was of configuring a namespace in two places:
and then the only place it is referenced is here:
So is this redundant to the grafana.namespace setting? At the moment I don't understand its purpose, so maybe I am using it wrong.
Hi,
Prometheus metrics do not contain a validation mode label (audit or enforce), so we are not able to filter by this setting.
Potential solutions:
- Add a validation mode label so it is available on the policy_report_result metric.
- Allow relabelings in the ServiceMonitor so we can conform Policy Reporter label names to Kyverno naming, and then use Prometheus metric joining based on the same label name to append the validation mode data (a sketch follows this list).
Using only Kyverno or Policy Reporter metrics is not a silver bullet. They are different: Kyverno shows the validation mode when Policy Reporter does not, Policy Reporter shows the name of the affected resource when Kyverno does not, etc.
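A hypothetical sketch of what the relabelings idea could look like in the chart values (the monitoring.serviceMonitor.metricRelabelings key is an assumption; policy_name is the label Kyverno's own metrics use, which is what we would align to):
monitoring:
  serviceMonitor:
    metricRelabelings:
    - sourceLabels: [policy]
      targetLabel: policy_name   # align with Kyverno's metric label naming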
What do you think?
Motivation
Hi, I'm from the Yandex.Cloud solution architect team. I believe that Kyverno is the best way to manage Kubernetes policies, but Yandex.Cloud users demand export to Yandex.Cloud storages.
Feature
Yandex Cloud has multiple services that can be targets, but the most demanded output right now is Yandex.Storage, which has an S3 API.
Additional context
I could implement this feature on my own; recently I added Yandex.Storage as part of falcosidekick falcosecurity/falcosidekick#261
In a multi-cluster environment, we have one Policy Reporter (PR) running in each cluster. Each PR server sends its events to a central Loki server. It would be useful to specify additional Loki labels in the PR config so we can query and create alerts on events from different PR servers separately. (A hypothetical sketch is below.)
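Purely illustrative sketch of the requested option (the customLabels key is a made-up name for this request; host and label values are placeholders):
target:
  loki:
    host: http://loki.monitoring:3100
    customLabels:            # proposed: extra labels attached to every pushed event
      cluster: cluster-a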
Hello! I see this line repeated in the logs of Policy Reporter every five seconds or so:
1 shared_informer.go:401] The sharedIndexInformer has started, run more than once is not allowed
I have the metrics server enabled with "--metrics-enabled"; it seems to be somehow related. I can troubleshoot more if need be. Thank you!
Maxim
Seems this would make sense.
I've tested both the summary and violations report features of the Helm chart, but neither produces logs.
In my case, I only got the summary email but no violation email, although both jobs are marked as received.
It would be nice if the jobs could output what they are working on.
Right now the policy-reporter-ui image tag in the kyverno-policy-reporter-ui manifest is "4" (link). I guess it should be 1.4.0, based on the default-policy-reporter-ui manifest?
When going to a non-existent page in the UI, e.g. /does-not-exist, we get a UI glitch instead of an error page:
I'm using the latest release, 2.9.0
Suppose one is restricted to a namespace in their company by a SaaS platform / Kubernetes team.
Could this be applied at the scope of a namespace, with no ability to use any type of CRD or cluster-level role access?
If so, I could focus on applying policies for network policies, labels and so on...
It would be great if the policy-reporter configuration allowed multiple Slack targets.
Currently it's only possible to set up one Slack target.
Desired behaviour
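A hypothetical sketch of what multiple Slack targets could look like in the config (the channels key is made up for illustration; only a single target.slack.webhook exists today, and the webhook URLs are placeholders):
target:
  slack:
    webhook: https://hooks.slack.com/services/T0000/B0000/XXXX
    channels:                # proposed: additional webhooks with their own filters
    - webhook: https://hooks.slack.com/services/T0000/B0001/YYYY
      minimumPriority: critical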
Inspecting logs for the policy-reporter pod, I'm seeing log entries like so:
2021/05/24 20:54:14 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha2, Resource=clusterpolicyreports
2021/05/24 20:54:14 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha2, Resource=policyreports
2021/05/24 21:50:29 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha1, Resource=policyreports
2021/05/24 21:52:36 [INFO] Resource not Found: wgpolicyk8s.io/v1alpha1, Resource=clusterpolicyreports
Comparing to the CRDs in my k8s cluster, the policyreports and clusterpolicyreports CRDs are defined like so (truncated for brevity; these are Kyverno's CRDs and very lengthy):
### policyreports CRD ###
Name: policyreports.wgpolicyk8s.io
Namespace:
Labels: <none>
Annotations: controller-gen.kubebuilder.io/version: v0.4.0
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
# ...and so on
### clusterpolicyreports CRD ###
Name: clusterpolicyreports.wgpolicyk8s.io
Namespace:
Labels: <none>
Annotations: controller-gen.kubebuilder.io/version: v0.4.0
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
# ...and so on
I'm running Kyverno v1.3.6 for reference, which I installed from their globbed manifest with minimal changes (I only modified the namespace label in each resource). The only reference to wgpolicyk8s.io/v1alpha* in their manifest is for the ClusterRole. So the CRDs exist, but aren't under the API policy-reporter seems to expect, as defined in pkg/kubernetes/report_adapter.go.
Please let me know if there's any additional info I can provide!
Hi,
great product in combination with Kyverno!
Just noticed an issue of Policy Reporter showing old Pod names as failing after changes that make those resources pass validation.
Restarting Policy Reporter is a solution (for now).
Not sure, but it seems some occasional garbage collection/removal of old records for resources that no longer exist in the cluster might be a solution?
Thank you,
Alen
Hello,
I have been running the policy-reporter Helm chart 2.9.1 for a couple of days now.
Everything is working fine. Also I use it together with Flux.
Yesterday policy-reporter 2.9.2 Helm chart was released and I went for upgrade.
I checked the commits and values file and the upgrade seemed quite straightforward, just increment the version to 2.9.2.
But the upgrade failed and I keep getting:
Helm upgrade failed: unable to decode "": json: cannot unmarshal number into Go struct field ObjectMeta.metadata.labels of type string
I couldn't get any more details out.
Since these last commits were related to the Grafana monitoring part, I also added:
monitoring:
  ...
  grafana:
    # required: namespace of your Grafana installation
    namespace: monitoring-system
    dashboards:
      # Enable the deployment of grafana dashboards
      enabled: true
      # Label to find dashboards using the k8s sidecar
      label: grafana_dashboard
      value: "1"
to the values.yaml file, but that also didn't work.
Regarding other options in values.yaml I don't have anything fancy. I have metrics, monitoring, kyverno plugin and Slack webhook. All other stuff is set to default.
We would like to use policy-reporter as an overview of our policy violations, but the clusters policy-reporter should run in do not have the ability to provide persistent volumes.
Is there a possibility to implement the usage of another database backend, such as PostgreSQL? (A hypothetical sketch is below.)
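Purely illustrative: what an external database option could look like in the values (the database block and all of its keys are made up for this request):
database:
  type: postgres                  # hypothetical: external PostgreSQL instead of embedded SQLite
  host: postgres.databases.svc:5432
  database: policy_reporter
  username: policy-reporter
  secretRef: policy-reporter-db   # credentials read from an existing secret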
I don't generate any cpolr resources and am sure my users will end up clicking on ClusterPolicy Reports anyway :)
Seeing 2 things after upgrading to 2.6.1 along with Kyverno 1.7.0.
2022-06-21T15:57:50.076802254-04:00 W0621 19:57:50.076713 1 shared_informer.go:401] The sharedIndexInformer has started, run more than once is not allowed
In the UI, I only see data related to a single namespace.
Happy to provide additional information as needed.
It would be great to have a config option in the Helm chart to filter which namespaces' policy reports should be shown.
We have a multi-tenant cluster with a single kyverno instance. The policies are all the same for everyone. Now it would be very cool if I could give different teams on the cluster access to different deployments of policy-reporter, where they can only see their individual reports, and not everything from all other teams, too.
Ideally this config option would allow configuring wildcard patterns (a hypothetical sketch follows): teamA-* would mean that all policy reports from teamA-namespace1, teamA-namespace2, and so on are shown.
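Purely illustrative values sketch for this request (the reportFilter block is a made-up name):
reportFilter:
  namespaces:
    include: ["teamA-*"]   # only show reports from matching namespaces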
I am running 1.8.9 and see the following log entries when starting my policy-reporter pod. Is the ERROR legit?
2021/09/07 16:00:39 [INFO] UI configured
2021/09/07 16:00:52 [ERROR] No PolicyReport CRDs found
2021/09/07 16:01:09 [INFO] Resource Found: wgpolicyk8s.io/v1alpha1, Resource=clusterpolicyreports
2021/09/07 16:01:09 [INFO] Resource Found: wgpolicyk8s.io/v1alpha2, Resource=policyreports
The following CRDs exist on the system, since this cluster is running Kyverno 1.4.2:
clusterpolicies.kyverno.io 2021-09-02T15:13:05Z
clusterreportchangerequests.kyverno.io 2021-09-02T15:13:05Z
generaterequests.kyverno.io 2021-09-02T15:13:05Z
policies.kyverno.io 2021-09-02T15:13:05Z
reportchangerequests.kyverno.io 2021-09-02T15:13:05Z
networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
https://github.com/kyverno/policy-reporter/blob/policy-reporter-1.8.6/charts/policy-reporter/charts/ui/templates/ingress.yaml#L4-L8 uses outdated logic for the Ingress apiVersion.
Helm recently updated its default ingress template to support the new apiVersion and new spec fields. You can use it as a reference.
It would be great if it were possible to define custom webhook endpoints as targets. The existing targets are great, but a custom HTTP endpoint would allow simple implementations to respond differently. (A hypothetical sketch is below.)
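Purely illustrative sketch of what a generic webhook target could look like (the webhook block is made up for this request, mirroring the shape of the existing targets; the endpoint is a placeholder):
target:
  webhook:
    host: https://example.internal/policy-hook
    minimumPriority: warning
    skipExistingOnStartup: true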
Shouldn't we set automountServiceAccountToken: "false" in the deployment manifest? Any ideas why we set it to true instead?
$ helm install policy-reporter policy-reporter/policy-reporter --set kyvernoPlugin.enabled=true --set ui.enabled=true --set ui.plugins.kyverno=true -n policy-reporter --create-namespace
Error: INSTALLATION FAILED: admission webhook "validate.kyverno.svc-fail" denied the request:
resource Deployment/policy-reporter/policy-reporter-kyverno-plugin was blocked due to the following policies
restrict-automount-sa-token:
autogen-validate-automountServiceAccountToken: 'validation error: Auto-mounting
of Service Account tokens is not allowed. Rule autogen-validate-automountServiceAccountToken
failed at path /spec/template/spec/automountServiceAccountToken/'
Rule:
spec:
  background: true
  rules:
  - match:
      any:
      - resources:
          kinds:
          - Pod
    name: validate-automountServiceAccountToken
    validate:
      message: Auto-mounting of Service Account tokens is not allowed.
      pattern:
        spec:
          automountServiceAccountToken: "false"
  validationFailureAction: enforce
Is there a way to reach the UI internally via Rancher without an ingress? I can access the web interface, but I don't get any data displayed.
The problem is the following:
Example URL in Rancher:
https://rancher.com/k8s/clusters/xyz/api/v1/namespaces/policy-reporter/services/http:policy-reporter-ui:8080/proxy/
Request of the UI:
https://rancher.com/api/
instead of
https://rancher.com/k8s/clusters/xyz/api/v1/namespaces/policy-reporter/services/http:policy-reporter-ui:8080/proxy/api
Or is there a solution for the user login? Then the policy-reporter could also be accessible from outside.
Add the policy-reporter Helm chart to Artifact Hub, same as currently exists for the kyverno and kyverno-policies charts.