kubescape / helm-charts

Kubescape can run as a set of microservices inside a Kubernetes cluster. This allows you to continually monitor the status of a cluster, including for compliance and vulnerability management

License: Apache License 2.0


helm-charts's Introduction


Kubescape

Kubescape logo

Comprehensive Kubernetes Security from Development to Runtime

Kubescape is an open-source Kubernetes security platform that provides comprehensive security coverage from left to right across the entire development and deployment lifecycle. It offers hardening, posture management, and runtime security capabilities to ensure robust protection for Kubernetes environments.

Key features of Kubescape include:

  • Shift-left security: Kubescape enables developers to scan for misconfigurations as early as the manifest file submission stage, promoting a proactive approach to security.
  • IDE and CI/CD integration: The tool integrates seamlessly with popular IDEs like VSCode and Lens, as well as CI/CD platforms such as GitHub and GitLab, allowing for security checks throughout the development process.
  • Cluster scanning: Kubescape can scan active Kubernetes clusters for vulnerabilities, misconfigurations, and security issues.
  • Multiple framework support: Kubescape can test against various security frameworks, including NSA, MITRE, SOC2, and more.
  • YAML and Helm chart validation: The tool checks YAML files and Helm charts for correct configuration according to the frameworks above, without requiring an active cluster.
  • Kubernetes hardening: Kubescape ensures proactive identification and rapid remediation of misconfigurations and vulnerabilities through manual, recurring, or event-triggered scans.
  • Runtime security: Kubescape extends its protection to the runtime environment, providing continuous monitoring and threat detection for deployed applications.
  • Compliance management: The tool aids in maintaining compliance with recognized frameworks and standards, simplifying the process of meeting regulatory requirements.
  • Multi-cloud support: Kubescape offers frictionless security across various cloud providers and Kubernetes distributions.

By providing this comprehensive security coverage from development to production, Kubescape enables organizations to implement a robust security posture throughout their Kubernetes deployment, addressing potential vulnerabilities and threats at every stage of the application lifecycle.

Kubescape was created by ARMO and is a Cloud Native Computing Foundation (CNCF) sandbox project.

Demo

Please star ⭐ the repo if you want us to continue developing and improving Kubescape! πŸ˜€

Getting started

Experimenting with Kubescape is as easy as:

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

Learn more about:

Did you know you can use Kubescape in all these places?

Places you can use Kubescape: in your IDE, CI, CD, or against a running cluster.

Kubescape-operator Helm-Chart

Besides the CLI, the Kubescape operator can also be installed via a Helm chart. Installing the Helm chart is an excellent way to begin using Kubescape, as it provides extensive features such as continuous scanning, image vulnerability scanning, runtime analysis, network policy generation, and more. You can find the Helm chart in the Kubescape-operator documentation.
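A minimal installation sketch (the chart and repository names match those used in the issues below; account, clusterName, and similar flags are optional and depend on your setup):

  helm repo add kubescape https://kubescape.github.io/helm-charts/
  helm repo update
  helm upgrade --install kubescape kubescape/kubescape-operator -n kubescape --create-namespace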

Kubescape GitHub Action

Kubescape can be used as a GitHub Action. This is a great way to integrate Kubescape into your CI/CD pipeline. You can find the Kubescape GitHub Action in the GitHub Action marketplace.
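A minimal workflow sketch, assuming the action is published as kubescape/github-action (verify the exact action name and inputs against the marketplace listing before use):

  name: kubescape-scan
  on: push
  jobs:
    scan:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - uses: kubescape/github-action@main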

Under the hood

Kubescape uses Open Policy Agent to verify Kubernetes objects against a library of posture controls. For image scanning, it uses Grype. For image patching, it uses Copacetic.

By default, the results are printed in a console-friendly manner, but (as shown in the example after this list) they can be:

  • exported to JSON, junit XML or SARIF
  • rendered to HTML or PDF
  • submitted to a cloud service
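For example, a hedged sketch using the CLI's output flags (verify the exact flag names with kubescape scan --help):

  kubescape scan --format json --output results.json
  kubescape scan --format sarif --output results.sarif
  kubescape scan --format html --output results.html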

It retrieves Kubernetes objects from the API server and runs a set of Rego snippets developed by ARMO.

Architecture

(Kubescape architecture diagram)

The OpenTelemetry (OTel) collector is not built in; an OTel endpoint needs to be specified at setup time (see the documentation on setting up OTel).
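As an illustration only, the endpoint is usually passed as a Helm value at install time; the value key below is hypothetical, so check the chart's values.yaml for the real one:

  helm upgrade --install kubescape kubescape/kubescape-operator -n kubescape \
    --set configurations.otelUrl=otel-collector.monitoring.svc:4317   # hypothetical value key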

Community

Kubescape is an open-source project; we welcome your feedback and ideas for improvement. We are part of the cloud-native community and are enhancing the project as the ecosystem develops.

We hold community meetings on Zoom every other week at 15:00 CET (see that in your local time zone).

The Kubescape project follows the CNCF Code of Conduct.

Adopters

See the list of adopters here.

Contributions

Thanks to all our contributors! Check out our CONTRIBUTING file to learn how to join them.


Changelog

Kubescape changes are tracked on the release page.

License

Copyright 2021-2024, the Kubescape Authors. All rights reserved. Kubescape is released under the Apache 2.0 license. See the LICENSE file for details.

Kubescape is a Cloud Native Computing Foundation (CNCF) sandbox project and was contributed by ARMO.

CNCF Sandbox Project

helm-charts's People

Contributors

adamstirk-ct, alainknaebel, alegrey91, amirmalka, amitschendel, batazor, bezbran, daniel-grunbergerca, dwertent, gaardsholt, idohuber, johnmanjiro13, kooomix, kubescapebot, lioralafiarmo, matanshk, matthyx, mgi166, mkilchhofer, moshe-rappaport-ca, nc-srheinnecker, niniigit, oleksiihahren, rcohencyberarmor, rotemamsa, slashben, vladklokun, yiscahlevysilas1, yonatanamz, yuleib


helm-charts's Issues

Storage apiservice attempts to call APIs that were removed in Kubernetes 1.27

I'm getting an alert in a GKE 1.28 cluster that ksserver attempts to call this API, which has been removed: /apis/storage.k8s.io/v1beta1/csistoragecapacities

After removing the csistoragecapacities resource from the ClusterRole, the alert seems to be gone:

resources: ["csistoragecapacities", "storageclasses"]

Since storage.k8s.io now has a non-beta v1 API, I suspect the Storage API usage has to be updated:

Specs ref:
https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/

Rootless Podman / Kind - no ephemeral-storage

Description

When running locally using rootless Podman and a Kind cluster, the kubevuln pod doesn't run because of the ephemeral storage request.

As a workaround, adding --set kubevuln.resources.requests.ephemeral-storage=null to the deployment seemed to work, as shown below.
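For example, the full workaround command would look roughly like this (chart name taken from the reported version; other flags unchanged from a normal install):

  helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace \
    --set kubevuln.resources.requests.ephemeral-storage=null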

Environment

OS: NixOS
Version: helm.sh/chart=kubescape-cloud-operator-1.9.10

ServiceMonitor.monitoring.coreos.com "prometheus-exporter" is invalid: spec.endpoints: Required value

Description

Helm installation fails to deploy the prometheus-exporter ServiceMonitor.

Environment

OS: Ubuntu 22.04 LTS
Version: 3.0.3

Steps To Reproduce

  1. --set "capabilities.prometheusExporter=enable" or set value in values.yaml
  2. helm install ...
  3. Everything installs except the prometheus-exporter
  4. error: Error: INSTALL FAILED: failed to create resource: ServiceMonitor.monitoring.coreos.com "prometheus-exporter" is invalid: spec.endpoints: Required value

Expected behavior

ServiceMonitor is created

Actual Behavior

ServiceMonitor is not created

Additional context

The template is missing the required "endpoints" specification.

Non-obvious error when trying to install Helm chart into old Kubernetes cluster.

In #111, a user attempted to install the Kubescape Helm chart into a Kubernetes 1.20 cluster.

Our chart tries to create batch/v1/CronJob objects, which only exist in Kubernetes 1.21 and up.

We should (a) keep track of the earliest supported Kubernetes version, and (b) provide an obvious error when someone tries to install the chart into a cluster that is older than that version.
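One common way to implement (b) in a Helm chart is a version guard at template time; a minimal sketch, not necessarily how this chart ends up doing it:

  {{- if semverCompare "<1.21.0" .Capabilities.KubeVersion.Version }}
  {{- fail "This chart requires Kubernetes v1.21 or later (it creates batch/v1 CronJob objects)." }}
  {{- end }}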

Cannot Pull Helm Template

Description

I can't pull a helm template. We have strict requirements for labels that your chart does not conform to. We just want to pull the manifests so we can make the adjustments ourselves.

Environment

OS: Ubuntu 22.04 LTS
Version: 1.17.0

Steps To Reproduce

Steps to reproduce the behavior:

  1. Run
    helm template kubescape kubescape/kubescape-operator -n kubescape --create-namespace --set clusterName='kubectl config current-context' --set account=<removed> --set server=api.armosec.io > armo-helm.yaml --debug
  2. Observe Error

Expected behavior

A manifest generated.

Actual Behavior

No manifest and this error:

install.go:214: [debug] Original chart version: ""
install.go:231: [debug] CHART PATH: /Users/ashkaan/Library/Caches/helm/repository/kubescape-operator-1.17.0.tgz

Error: execution error at (kubescape-operator/templates/operator/registry-scan-recurring-cronjob-configmap.yaml:4:6): `batch/v1/CronJob not supported`
helm.go:84: [debug] execution error at (kubescape-operator/templates/operator/registry-scan-recurring-cronjob-configmap.yaml:4:6): `batch/v1/CronJob not supported`

Thanks

batch/v1beta1 CronJob is deprecated

Description

batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
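A common Helm pattern for this kind of deprecation is to pick the apiVersion from cluster capabilities; a hedged sketch, not necessarily what the chart actually does:

  {{- if .Capabilities.APIVersions.Has "batch/v1/CronJob" }}
  apiVersion: batch/v1
  {{- else }}
  apiVersion: batch/v1beta1
  {{- end }}
  kind: CronJob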

For some reason, the UI is showing that I have an older chart version

Description

I'm using ArgoCD for updates and have updated the chart many times, but at some point the chart version in the UI stopped updating, and I always see version v1.17.3.

I tried restarting the services, but apparently the result is cached somewhere in the configs?

Environment

k8s 1.29

Steps To Reproduce

  1. Install an old chart version
  2. Upgrade to a new version

Expected behavior

correct version in UI


Missing tolerations field in kubescape deployment

Description

The tolerations field is missing from the kubescape deployment template.

Environment

OS: ubuntu-focal-20.04-arm64-server-20230112
Version: 2.0.181

Steps To Reproduce

helm upgrade --install kubescape kubescape/kubescape-cloud-operator --set account=<account_id> --values /path/to/values.yaml

Expected behavior

The kubescape deployment should have a tolerations field like every other deployment, when tolerations are defined in values.yaml.

Actual Behavior

The kubescape deployment does not have a tolerations field.

Additional context

I'm willing to make a PR for this issue.
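For reference, a minimal sketch of the kind of template change being requested, assuming a kubescape.tolerations value (the value key is hypothetical):

  {{- with .Values.kubescape.tolerations }}
  tolerations:
    {{- toYaml . | nindent 8 }}
  {{- end }}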

Control C-0004 fails on all kubescape resources

Description

Control C-0004 (Resources memory limit and request) is failing for all resources in the kubescape namespace.

Remediation: Set the memory limit or use the exception mechanism to avoid unnecessary notifications.

Use external otelCollector

Overview

I already have an opentelemetry-operator running; perhaps we should add the ability to use an external OpenTelemetry collector?

Problem

Service duplication.

Solution

Add the ability to use an external OpenTelemetry collector.


Add managing ServiceMonitor parameters to kubescape-prometheus-integrator

Manage ServiceMonitor parameter
We have a large Kubernetes cluster with more than 5,000 resources. The current ServiceMonitor parameters don't suit our infrastructure.

Implement manageable ServiceMonitor parameters in values, like this:

serviceMonitor:
    enabled: true
    interval: 20m
    scrapeTimeout: 100s

be able to set labels for ServiceMonitor

Overview

In my prometheus-operator configuration I rely on the label values on the ServiceMonitor, so I need to be able to add my own values to its labels.

I think this is a popular practice and may be useful to others as well.

Solution

Add a labels field to the template, as sketched below.
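A sketch of what that could look like, with a hypothetical additionalLabels value rendered into the ServiceMonitor metadata:

  # values.yaml (hypothetical key)
  serviceMonitor:
    additionalLabels:
      release: prometheus

  # servicemonitor template (sketch)
  metadata:
    labels:
      {{- with .Values.serviceMonitor.additionalLabels }}
      {{- toYaml . | nindent 6 }}
      {{- end }}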

unable to build kubernetes objects from release manifest: [resource mapping not found for name: "kubescape-scheduler" namespace: "kubescape" from "": no matches for kind "CronJob" in version "batch/v1"

Successfully ran the following two commands:
helm repo add kubescape https://kubescape.github.io/helm-charts/
helm repo update

However, I'm receiving the following error while installing Kubescape. The Kubernetes version I'm using is 1.20.7.

Command I'm executing via PowerShell against an Azure cluster:
helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace --set account=myaccountid --set clusterName=`kubectl config current-context`

Release "kubescape" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "kubescape-scheduler" namespace: "kubescape" from "": no matches for kind "CronJob" in version "batch/v1"
ensure CRDs are installed first, resource mapping not found for name: "kubevuln-scheduler" namespace: "kubescape" from "": no matches for kind "CronJob" in version "batch/v1"
ensure CRDs are installed first]

Storage retention

Overview

I'm using kubescape-operator, chart version 1.16.0, on the free plan. In the web UI, data retention for the free plan is 1 month according to the documentation, but what about storage in Kubernetes?

Is there any way other than frequently deleting old data from the storage?

Problem

Storage usage always increases and there is no way to automatically delete old records.

Solution

Create a retention policy or time-to-live for storage and add this configuration to the Helm chart.

Alternatives

None

Additional context

(Screenshot: storage PVC usage percentage.)

Kubescape helm install fails | kollector-0 pod crashes

Description

Installation of the latest Helm charts on a Kind-based cluster fails because kollector-0 is not in a running state.

Environment

OS: Ubuntu 20.04 LTS
Kubescape v3.0.0

Steps To Reproduce

  1. Spin up a single-node Kind cluster on an Ubuntu 20.x VM
  2. Install the helm charts using the following commands
helm repo add kubescape https://kubescape.github.io/helm-charts/ ;
helm repo update kubescape
helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace --set account=<my_account> --set clusterName=`kubectl config current-context`
  3. Check the status of the running pods
  4. See the error below.
root:~# kubectl get pods -n kubescape
NAME                              READY   STATUS             RESTARTS      AGE
gateway-7c59c9d56f-7cpzt          1/1     Running            3 (29s ago)   3m29s
kollector-0                       0/1     CrashLoopBackOff   2 (61s ago)   3m29s
kubescape-75999c869f-z8hk7        1/1     Running            0             3m29s
kubevuln-5996b6f647-l95sq         1/1     Running            0             3m29s
operator-6dc7cb68f7-tscdr         1/1     Running            0             3m29s
otel-collector-76bb986488-mlc9c   1/1     Running            0             3m29s
storage-798b568957-nd8cd          1/1     Running            0             3m29s

root:~# kubectl describe pods/kollector-0 -n kubescape 
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  19m                   default-scheduler  Successfully assigned kubescape/kollector-0 to kubescape-control-plane
  Normal   Pulling    19m                   kubelet            Pulling image "quay.io/kubescape/kollector:v0.1.24"
  Normal   Pulled     18m                   kubelet            Successfully pulled image "quay.io/kubescape/kollector:v0.1.24" in 10.743587845s (1m26.097982687s including waiting)
  Normal   Created    17m (x2 over 18m)     kubelet            Created container kollector
  Normal   Pulled     17m                   kubelet            Container image "quay.io/kubescape/kollector:v0.1.24" already present on machine
  Normal   Started    17m (x2 over 18m)     kubelet            Started container kollector
  Warning  BackOff    9m41s (x17 over 15m)  kubelet            Back-off restarting failed container kollector in pod kollector-0_kubescape(a1c2cc25-5833-4bf3-b70b-983acde32765)
  Warning  Unhealthy  4m45s (x81 over 18m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503
root:~# kubectl logs pods/kollector-0 -n kubescape
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Image version: "}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"connecting websocket","URL":"wss://report.armo.cloud/k8s/cluster-reports?clusterName=kind-kubescape&customerGUID=xxx"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"skipping cluster trigger notification"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over pods starting"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over services starting"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over secrets starting"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over services started"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over secrets started"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over cronjobs starting"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"K8s API version","version":"v1.27.3"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over namespaces starting"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over namespaces started"}
{"level":"info","ts":"2023-10-30T10:10:42Z","msg":"Watching over cronjobs started"}
{"level":"info","ts":"2023-10-30T10:10:57Z","msg":"K8s Cloud Vendor","cloudVendor":""}
{"level":"info","ts":"2023-10-30T10:10:57Z","msg":"Watching over nodes starting"}
{"level":"fatal","ts":"2023-10-30T10:11:49Z","msg":"cant connect to websocket after 60 tries"}
/usr/bin/kollector -alsologtostderr -v=4 2>&1  exited with code 1

Expected behavior

All the pods post the helm installation of the kubescape operator should be in a running state

Actual Behavior

The kollector-0 pod is never in a running state. It seems the readiness probe for the kollector-0 pod continues to fail, and the websocket connection also continues to fail.

Additional context

kubernetes version: v1.27.3
kubescape version: v3.0.0

Supply exceptions json in helm chart

Overview

Currently it does not seem possible to easily supply an exceptions JSON file to Kubescape when using the helm chart.

Problem

We would like to run Kubescape scans using the Operator while taking into account the exceptions. Currently there is no way to supply the json to the Operator.

Solution

I would like to see this possibility, or pointers on how we can already do it. In that case, I will create a PR against the Helm chart to make it easier for others.

Alternatives

Entering the exceptions in the ARMO portal is not desired. Running Kubescape manually with the --exceptions flag is not desired.

Additional context

N/A

kubevuln PVC should not be created if persistence is disabled

Description

The kubevuln PVC should not be created if persistence is disabled, because when the storage class is configured with dynamic volume provisioning (https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) the PVC will stay in "Pending" status, which breaks the ArgoCD sync process.

{{- if and .Values.kubevuln.enabled .Values.kubescape.submit .Values.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.kubevuln.name }}
  namespace: {{ .Values.ksNamespace }}
  labels:
    app: {{ .Values.kubevuln.name }}
spec:
  accessModes:
    - {{ .Values.persistence.accessMode }}
  resources:
    requests:
      storage: {{ .Values.persistence.size.kubevuln }}
  {{- if ne .Values.persistence.storageClass "-" }}
  storageClassName: {{ .Values.persistence.storageClass | quote }}
  {{- end }}
{{- end }}

Environment

Version: v1.26

Expected behavior

PVC is not created if persistence is disabled, ArgoCD is able to be in sync

Actual Behavior

PVC is created even if persistence is disabled, breaking ArgoCD sync

In release 1.18.17 during fresh install getting this error " failed to create CustomResourceDefinition serviceauthentication.kubescape.io"

Description

I was doing a fresh install with version 1.18.17 and I got the error below:
failed to create CustomResourceDefinition serviceauthentication.kubescape.io: 1 error occurred:

  • CustomResourceDefinition.apiextensions.k8s.io "serviceauthentication.kubescape.io" is invalid: metadata.name: Invalid value: "serviceauthentication.kubescape.io": must be spec.names.plural+"."+spec.group

Environment

OS: Linux/kubernetes
Version: 3.0.11

Steps To Reproduce

Just do a fresh install where you reinstall the CRDs.

Expected behavior

The installation should work fine.

Actual Behavior

Installation fails.

Invalid warnings and IRSA not working with AWS EKS

Description

Invalid warnings and IRSA not working

Environment

OS: Kubescape helm operator
Version: 1.18.13
Running kubescape operator
Version: 1.18.13

Steps To Reproduce

Check kubescape pod logs

Expected behavior

No Invalid warnings

Actual Behavior

🚨 * failed to get EKS descriptive information. Read more: https://hub.armosec.io/docs/kubescape-integration-with-cloud-providers
Even though the IRSA IAM role has permission, I still get this warning and the controls don't work.

🚨 ** This control is scanned exclusively by the Kubescape operator, not the Kubescape CLI. Install the Kubescape operator:
https://kubescape.io/docs/install-operator/.
Why is the kubescape pod throwing the above warning? I am not using the CLI, I am using the operator. The operator is asking me to install the operator? Please help me identify what I am missing here.

🚨 *** Control configurations are empty (docs: https://kubescape.io/docs/frameworks-and-controls/configuring-controls)

It tries to download from ARMO and fails. How do I fix this?

Use PriorityClass

Overview

Kubernetes offers a feature called PriorityClass which empowers cluster operators to determine the relative priority of pods. For our setup, pods such as the node-agent daemonset and kube-vuln should inherently possess a higher priority compared to other application pods.

Issue

Currently, the absence of a defined PriorityClass for our pods results in certain pods (like node-agent, kube-vuln, etc.) lingering in the "Pending" state. This persists until a cluster operator intervenes by manually deleting pods from nodes.

Proposed Solution

We should leverage PriorityClasses to address this (a template sketch follows the list):

  • system-node-critical: This should be assigned to node-agent pods, ensuring they run on every node without exception.

  • system-cluster-critical: This is apt for the remaining pods (operator, kubescape, kube-vuln, etc.) as they are essential for maintaining the cluster's health.
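A minimal sketch of the proposed change for the node-agent DaemonSet pod spec, assuming a configurable value (the value key is hypothetical):

  spec:
    priorityClassName: {{ .Values.nodeAgent.priorityClassName | default "system-node-critical" }}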

The `node-agent` is repeatedly crashing with a `CrashLoopBackOff` error : cannot create bpf perf link

Description

I'm encountering an issue with the node-agent in my Kubernetes cluster running Kubescape. The node-agent is repeatedly crashing with a CrashLoopBackOff error, and I'm seeing the following error messages in the logs:

{"level":"error","ts":"2024-01-29T12:13:19Z","msg":"error starting exec tracing","error":"creating tracer: attaching exit tracepoint: cannot create bpf perf link: permission denied"}
{"level":"fatal","ts":"2024-01-29T12:13:19Z","msg":"error starting the container watcher","error":"starting app behavior tracing: creating tracer: attaching exit tracepoint: cannot create bpf perf link: permission denied"}

Environment

OS: Linux (kernel version: 5.14.0-362.13.1.el9_3.x86_64)
Kubescape : v3.0.3
Helm chart : v1.8.1
Kubernetes Server : v1.27.6
Kubernetes Client : v1.26.3

Expected behavior

The node-agent pods should start successfully and not crash with the CrashLoopBackOff error.

Actual Behavior

The node-agent pods are repeatedly crashing with a CrashLoopBackOff error, and the logs show errors related to creating a BPF perf link due to permission denied.

Additional context

The node-agent container is using the quay.io/kubescape/node-agent:v0.1.114 image, and the pod has resource limits and requests defined.

I've checked the AppArmor profile for the node-agent container, and it's set to "unconfined," indicating that it's not confined by AppArmor security policies.

I also reviewed the RBAC permissions and ClusterRoleBinding for the node-agent service account, and it appears to have the necessary permissions. I have also checked the kernel capabilities, and they seem to be right.

I use cilium CNI on the cluster.

Problem with upgrading to v1.16.1(7)

I had a problem upgrading to version 1.16.1(2):

{"level":"info","ts":"2023-11-02T13:30:39Z","msg":"credentials loaded","accessKeyLength":0,"accountLength":36}
{"level":"info","ts":"2023-11-02T13:30:39Z","msg":"keepLocal config is true, statuses and scan reports won't be sent to Armo cloud"}
{"level":"info","ts":"2023-11-02T13:30:39Z","msg":"starting server"}
{"level":"info","ts":"2023-11-02T13:30:40Z","msg":"updating grype DB","listingURL":"https://toolbox-data.anchore.io/grype/databases/listing.json"}
{"level":"error","ts":"2023-11-02T13:31:40Z","msg":"failed to update grype DB","error":"vulnerability database is invalid (run db update to correct): database metadata not found: /home/nonroot/.cache/grype/db/5"}
{"level":"info","ts":"2023-11-02T13:31:40Z","msg":"restarting to release previous grype DB"}

kubevuln can't start correctly. I tried deleting it and installing from scratch, but that doesn't help.

Cannot change the default "kubescape" namespace name

Description

If you try to change the default name of the kubescape namespace, some features such as relevancy don't work.

Environment

OS: N/A
Version: 1.16.1 (Helm chart)

Steps To Reproduce

  1. helm repo add kubescape https://kubescape.github.io/helm-charts/
  2. helm repo update
  3. helm upgrade --install kubescape kubescape/kubescape-operator -n not-kubescape --create-namespace --set clusterName=`kubectl config current-context` --set account=xxxxx --set server=api.armosec.io --set ksNamespace=not-kubescape
  4. see kubevuln complaining in the logs

Expected behavior

Everything should work.

Actual Behavior

Relevancy doesn't work:

{"level":"warn","ts":"2023-10-24T07:13:16Z","msg":"failed to store SBOM into apiserver","error":"namespaces \"kubescape\" not found","name":"quay.io-kubescape-storage-v0.0.22-2c4018"}

Additional context

kubescape-operator 1.15.x fails when deployed with ArgoCD

Description

The kubescape-operator 1.15.x Helm chart fails when deployed with ArgoCD.
Argo manages chart resources a bit differently, but in most cases it works as expected (https://argo-cd.readthedocs.io/en/stable/user-guide/helm/).
Since 1.15.x, the ks-cloud-config ConfigMap is dynamically updated and marked as a "hook" resource. Most likely that makes ArgoCD think it is not needed after a successful installation.

Environment

OS: alpine 3.18.3
Version: kubescape-operator 1.15.5

Steps To Reproduce

Install kubescape-operator helm chart with ArgoCD

Expected behavior

kubescape-cloud-operator 1.14.5 has no issues.
The issues can be solved in a variety of ways:

Actual Behavior

ArgoCD application definition for kubescape in my deployment

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kubescape
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://kubescape.github.io/helm-charts/
    targetRevision: 1.15.5
    chart: kubescape-operator
    helm:
      values: |
        clusterName: <cut>
        account: <cut>
        server: api.armosec.io
  destination:
    server: {{ .Values.spec.destination.server }}
    namespace: kubescape
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

When automated prune is disabled, the following can be seen (otherwise the ConfigMap is removed). (Screenshots omitted.)

Additional context

Allow hostNetwork for storage deployment

Overview

Currently the helm chart does not provide the ability to set hostNetwork to true in the storage deployment. Also the port defaults to 443, which might already be in use if hostNetwork is set to true.

Problem

When using a virtual network with custom CNIs like Cilium, the storage API endpoint is not reachable by the kube-apiserver.

Solution

Template values to set hostNetwork and the port in the Helm chart, as sketched below.
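A sketch of the requested values (hypothetical keys) that the storage Deployment template would then consume:

  storage:
    hostNetwork: true
    port: 8443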

Alternatives

None

Additional context

None

Rendering error in components diagram

Description

The Operator Component Diagram and several other component diagrams at https://github.com/kubescape/helm-charts/blob/master/charts/kubescape-cloud-operator/README.md are not rendering.

Expected behavior

It should be rendered properly like the others. The issue appears to be with the rendering of diagrams within the <details> tag. Moving the declaration of the diagrams outside the <details> tag seems to resolve the rendering issue.

Support ability to provide history limit in schedulers

Overview

We currently run Kubescape on Fargate nodes. Due to a quirk surrounding CronJob job cleanup, when a job completes its associated node sticks around. This means that we end up paying for lingering nodes (screenshot omitted).

Problem

We need to be able to provide:

kind: CronJob
spec:
 successfulJobsHistoryLimit: 0
 failedJobsHistoryLimit: 1

Solution

By being able to provide these to the Helm chart:

kind: CronJob
spec:
 successfulJobsHistoryLimit: {{ .Values.kubevulnScheduler.successfulJobsHistoryLimit }}
 failedJobsHistoryLimit: {{ .Values.kubevulnScheduler.failedJobsHistoryLimit }}

and default to

successfulJobsHistoryLimit=3
failedJobsHistoryLimit=1

Alternatives

Simply running an edit after install will "patch" the problem.

Additional context

Happy to submit a PR to have the Helm chart do this; just let me know, unless there's another way to work around this.

namespace consolidation is required for kubescape-cloud-operator

Description

kubescape-host-scanner is deployed into a different namespace, i.e. kubescape-host-scanner. Kubescape should deploy all Kubernetes objects into a single namespace. Also, the namespace is defined (hard-coded) in the manifest only for the kubescape-host-scanner DaemonSet, while for other Kubernetes objects the namespace is inherited from values.yaml.

Environment

Kubernetes Version: v1.23.13-eks-fb459a0
Kubescape Version: kubescape-cloud-operator(1.9.3)

Steps To Reproduce

Deploy the kubescape-cloud-operator Helm Chart. kubescape-host-scanner will be deployed into a different namespace i.e. kubescape-host-scanner.

Expected behavior

The Kubescape Cloud Operator should deploy all Kubernetes objects into a single namespace (kubescape by default).

Actual Behavior

kubescape-host-scanner is deployed into a different namespace i.e. kubescape-host-scanner.

Additional context

storage pod failing with error

Description

The storage pod is failing with the error below:

panic: runtime error: index out of range [0] with length 0

goroutine 261 [running]:
github.com/olvrng/ujson.Walk({0xc00ab87600, 0x44bb880?, 0x200}, 0xc005db98d8)
/go/pkg/mod/github.com/olvrng/[email protected]/µjson.go:87 +0x990
github.com/kubescape/storage/pkg/cleanup.loadMetadataFromPath({0x2db8240?, 0x44bb880?}, {0xc000a6a1e0, 0x96})
/work/pkg/cleanup/cleanup.go:144 +0x1fc
github.com/kubescape/storage/pkg/cleanup.(*ResourcesCleanupHandler).StartCleanupTask.func1({0xc000a6a1e0, 0x96}, {0x2da8a68?, 0xc002a6fee0?}, {0x0?, 0x0?})
/work/pkg/cleanup/cleanup.go:96 +0xfa
github.com/spf13/afero.walk({0x2db8240, 0x44bb880}, {0xc000a6a1e0, 0x96}, {0x2da8a68, 0xc002a6fee0}, 0xc005db9ed0)
/go/pkg/mod/github.com/spf13/[email protected]/path.go:44 +0x6c
github.com/spf13/afero.walk({0x2db8240, 0x44bb880}, {0xc00c6dca40, 0x3e}, {0x2da8a68, 0xc00509a4e0}, 0xc005db9ed0)
/go/pkg/mod/github.com/spf13/[email protected]/path.go:69 +0x2f4
github.com/spf13/afero.walk({0x2db8240, 0x44bb880}, {0xc00c6dc8c0, 0x34}, {0x2da8a68, 0xc00509a410}, 0xc005db9ed0)
/go/pkg/mod/github.com/spf13/[email protected]/path.go:69 +0x2f4
github.com/spf13/afero.Walk({0x2db8240, 0x44bb880}, {0xc00c6dc8c0, 0x34}, 0xc005db9ed0)
/go/pkg/mod/github.com/spf13/[email protected]/path.go:105 +0x7a
github.com/kubescape/storage/pkg/cleanup.(*ResourcesCleanupHandler).StartCleanupTask(0xc000b321b0)
/work/pkg/cleanup/cleanup.go:86 +0x58f
created by main.main in goroutine 1
/work/main.go:84 +0x6ed

Environment

OS: Amazon Linux 2
Version: 1.18.1

Additional context

It is deployed in an EKS cluster.

I am also seeing these errors in the storage pod, but they do not fail the container:

{"level":"error","ts":"2024-02-01T03:20:25Z","msg":"unmarshal file failed","error":"EOF","path":"/data/spdx.softwarecomposition.kubescape.io/vulnerabilitymanifestsummaries/dummy-namespace/deployment-appt-appt.m"}

Update README

Description

Update the repo's README:

  1. Remove the Prometheus operator integration link
  2. Move the CICD documentation to the .github/workflows directory
  3. Move the kubescape-operator README to the base
