
kubewarden / docs


Kubewarden's documentation

Home Page: https://docs.kubewarden.io

License: Creative Commons Attribution 4.0 International

JavaScript 41.12% CSS 15.23% MDX 43.65%
kubernetes policy-as-code webassembly hacktoberfest kubernetes-security

docs's People

Contributors

90er, agilgur5, avestuk, bisht-richa, btat, charlieegan3, dependabot[bot], dgiebert, divya-mohan0209, durlabhcodes, ereslibre, fabriziosestito, flavio, jhkrug, jordojordo, jvanz, khaledemaradev, kkaempf, kravciak, nnelas, nunix, olblak, phntxx, raulcabello, renovate-bot, robertsirc, viccuad, y-taka-23


docs's Issues

Observability: explain how to use our Grafana dashboard

Update the docs so that the following acceptance criteria are met.

Acceptance criteria

  • User knows where to find our Grafana dashboard: link to the marketplace
  • Explain how to access the Grafana instance created by the Prometheus operator (see @ereslibre's docs)
  • Provide a screenshot showing what the dashboard looks like
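As a sketch of the second bullet: accessing a Grafana instance deployed by the Prometheus operator usually comes down to a port-forward. The namespace, release, and service names below are assumptions based on a default kube-prometheus-stack install and must be adjusted to the reader's setup:

```shell
# Hypothetical names: a kube-prometheus-stack release called "prometheus"
# in the "monitoring" namespace exposes Grafana on service port 80.
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Grafana is then reachable at http://localhost:3000
```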

Update the "architecture" section

The section describing the architecture of Kubewarden must be updated to reflect the changes introduced by the new architecture.

Acceptance criteria

  • The charts are updated to also show the new elements. We can use the big diagram @raulcabello used during our demo as a foundation
  • We expand the text to explain how the new PolicyServer CRD works and what its purpose is
  • We mention the scalability and reliability advantages achievable with this architecture. We can "copy & paste" what we wrote in the RFC of the new architecture.

Add a user story explaining how to replace PSPs with Kubewarden policies

It would be great to extend our current documentation to include some user stories about PSP replacement.

PSPs are currently deprecated and are going to be dropped from k8s pretty soon. Users relying on PSPs can use Kubewarden to replace them.

Acceptance criteria

  • Explain how an operator can replace already deployed PSPs with Kubewarden policies
  • Provide some examples; we can't cover all the possible PSPs with in-depth docs
  • Provide a table that can be used as a "Rosetta Stone" between PSPs and Kubewarden policies
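One such "Rosetta Stone" entry could pair the PSP `privileged: false` control with the pod-privileged policy. A minimal sketch, reusing the policy module and API version that appear in the quickstart reports below; field names should be double-checked against the released CRDs:

```shell
# Sketch: a ClusterAdmissionPolicy covering the same ground as a PSP
# that forbids privileged containers. The policy name is a placeholder.
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: psp-privileged-replacement
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE", "UPDATE"]
  mutating: false
EOF
```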

Feature Request: Docs for Sigstore verification

Is your feature request related to a problem?

No response

Solution you'd like

Add a new Sigstore section (for example, under 6. Distributing policies).

This new section will explain https://github.com/kubewarden/kubewarden-controller/blob/main/rfc/0003-policy-signing.md in a user-facing way.

  • Brief explanation of PubKey signing.
    Sign a policy locally with cosign sign and push it to a registry.
    Verify with kwctl verify -k.
  • Brief explanation of keyless signing.
    Point to one of our policies signed with keyless verification in GHA.
    Verify with kwctl verify --verification-config-path, using subjectUrl or subjectUrlPrefix.
  • Deploy the default policy-server with verification. Explain verification on PolicyServer: it's the same as kwctl, just configured via a ConfigMap containing the verification-config.yml
  • Reference for V1 of the verification config, taken from the RFC.
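The PubKey bullet could be illustrated along these lines. This is a sketch: the registry path and file names are placeholders, and the exact flags should be verified against the kwctl and cosign releases being documented:

```shell
# Hypothetical PubKey signing flow (registry path is a placeholder):
cosign generate-key-pair                 # produces cosign.key / cosign.pub
cosign sign --key cosign.key ghcr.io/example/policies/my-policy:v0.1.0
# Verification with the public key, as mentioned in the first bullet:
kwctl verify -k cosign.pub registry://ghcr.io/example/policies/my-policy:v0.1.0
```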

Alternatives you've considered

No response

Anything else?

No response

Initial example fails

> helm repo add kubewarden https://charts.kubewarden.io
"kubewarden" has been added to your repositories
> helm install --namespace kubewarden --create-namespace kubewarden-controller kubewarden/kubewarden-controller
Installing chart "kubewarden-controller" in namespace "kubewarden"…
Done! 👏
> kubectl apply -f - <<EOF
apiVersion: kubewarden.io/v1alpha1
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.1
  resources:
  - pods
  operations:
  - CREATE
  - UPDATE
EOF
error: unable to recognize "STDIN": no matches for kind "ClusterAdmissionPolicy" in version "kubewarden.io/v1alpha1"

Document how to use OPA policies

Explain the workflow needed to run a vanilla OPA policy on top of Kubewarden.

This should cover topics such as:

  • Clarification of the difference between vanilla OPA and Gatekeeper
  • How to compile a Rego file to Wasm
  • Rego builtins:
    • We don't support all of them yet
    • What happens when a policy uses a builtin we don't support yet
  • kwctl developer flow:
    • Annotate: what to put into the metadata
    • Push workflow: nothing changes, let's say that out loud
  • kwctl operator flow: pull, run, scaffold: nothing changes, let's talk about that!

Also, we should provide some practical examples.
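The developer flow described above could be sketched as follows. Policy and file names are placeholders, and the kwctl flags should be verified against the release being documented; the opa invocation follows OPA's documented Wasm compilation workflow:

```shell
# Compile a vanilla Rego policy to Wasm (entrypoint is a placeholder):
opa build -t wasm -e policy/main policy.rego   # produces bundle.tar.gz
tar -xzf bundle.tar.gz /policy.wasm            # extract the Wasm module

# kwctl developer flow sketch: annotate with metadata, then push.
# The push workflow is unchanged compared to other policy languages.
kwctl annotate -m metadata.yml -o annotated-policy.wasm policy.wasm
kwctl push annotated-policy.wasm registry://ghcr.io/example/policies/my-opa-policy:v0.1.0
```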

Document how to enable tracing

The Policy Server can export trace events to a Jaeger endpoint. This provides useful information to debug a running cluster.

The new architecture will enable us to deploy PolicyServer with tracing enabled. We should document that in our docs.

Rewrite "writing policies" introduction

Acceptance criteria

The "Writing policies" introduction should provide the following information:

  • High-level overview of what Kubewarden policies are
  • Programming language requirements
  • What the communication protocol (functions + JSON objects) between the policy evaluator and the policy itself looks like

Document policy logging

Explain how to log messages from a policy.

  • Provide information to Go developers
  • Provide information to Rust developers

Docusaurus nits

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Issue opened to track nits mentioned by @ereslibre in this comment on PR #104

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Architecture:

Anything else?

No response

Simple developer workflow

While policy generation is well documented, getting a policy running and tested is spread over multiple documents.

I'd like to see a simple developer workflow, ideally documented on a single page:

  • create policy
  • build policy
  • install policy
  • call policy

It's not about writing/testing a correct policy; it's about getting this workflow documented and giving a developer an initial sense of accomplishment.

Bonus points if this can all be done on a local workstation (e.g. with k3s) and doesn't need a registry.
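The four steps above could be condensed into a sketch like this. File names are placeholders, and the kwctl flags should be checked against the release being documented; notably, evaluating a local .wasm file needs neither a cluster nor a registry:

```shell
# 1. create + build: scaffold a policy from one of the language templates,
#    then compile it to policy.wasm with that language's toolchain.

# 2. call: evaluate the policy locally against a recorded admission
#    request (no cluster or registry required).
kwctl run -r admission-request.json policy.wasm

# 3. install: once it behaves as expected, push it to a registry and
#    reference it from a ClusterAdmissionPolicy.
kwctl push policy.wasm registry://ghcr.io/example/policies/my-policy:v0.1.0
```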

Add documentation in the policy author section warning about the security threat of the same policy performing mutation and validation

As described in threat 10 of the threat model, there is a possibility of exploitation if a Kubewarden policy performs mutation and validation in the same policy. If policies are not properly written, bad actors can exploit this to get privileged containers created. Therefore, it is necessary to update the Kubewarden documentation to warn policy authors about this threat in their policies.

We could suggest that policy authors split policies which perform validation and mutation into two separate policies: one for mutation and another for validation.

NOTE: This issue was created from the RFC discussing the admission control threat model. It was created to allow the Kubewarden team to discuss the proposed mitigation further and select each individual item when necessary.

Documentation structure

Creating this issue to discuss the big picture structure of the docs:

  1. Introduction
  2. Quick start
  3. Writing policies
    1. Introduction: here we could talk a bit about what a language needs to have to write policies with it, the waPC contract (functions, arguments and return structure)
    2. Supported languages
      3.2.1. Rust
      3.2.2. TinyGo: characteristics, limitations as opposed to Go, and why Go is not an option right now
      3.2.3. AssemblyScript: characteristics, limitations
  4. Distributing policies
  5. Testing policies
    1. While developing a policy
      1. Use unit tests, integration tests... whatever you are used to
      2. Use the policy-testdrive project for offline tests: how you can run tests that take different settings, and requests checking the response of your policy on a Wasm build of the code you are developing
    2. Before deploying a policy
      1. Use the policy-testdrive project for offline tests: how you can run tests to understand how it would impact your cluster before the real deployment of your policy coming from a registry/HTTP server

Placeholder for AssemblyScript and Swift policies

We already have policies written in both languages, but we haven't yet had time to write detailed instructions like the ones for Rust and Go.

Nevertheless, we should mention them.

Acceptance criteria

  • Create one page about AssemblyScript:
    • The page has a link to the policy we created with this language
  • Create one page about Swift:
    • The page has a link to the policy we created with this language
    • The page has a link to the Swift SDK we created

issues following quickstart

Hey.
I'm trying to follow the quickstart guide but hitting issues; presumably I'm doing something daft. I hope you can tell me where I'm going wrong.

minikube start
helm repo add kubewarden https://charts.kubewarden.io
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
kubectl wait --for=condition=Available deployment --timeout=2m -n cert-manager --all
helm install --wait -n kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds
helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller

... wait for everything to come up

kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: PolicyServer
metadata:
  name: reserved-instance-for-tenant-a
spec:
  image: ghcr.io/kubewarden/policy-server:latest
  serviceAccountName: policy-server
  replicas: 1
  env:
  - name: KUBEWARDEN_LOG_LEVEL
    value: debug
EOF

kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  policyServer: reserved-instance-for-tenant-a
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false
EOF

everything looks fine

$ kubectl -n kubewarden get pods,ClusterAdmissionPolicy
NAME                                                                READY   STATUS    RESTARTS   AGE
pod/kubewarden-controller-5cc6b54d-fbp5b                            1/1     Running   0          2m22s
pod/policy-server-default-5bb75cbf9b-jddkh                          1/1     Running   0          2m
pod/policy-server-reserved-instance-for-tenant-a-7b88657b77-dw794   1/1     Running   0          27s

NAME                                                            POLICY SERVER                    MUTATING   STATUS
clusteradmissionpolicy.policies.kubewarden.io/privileged-pods   reserved-instance-for-tenant-a   false      active

but then doing:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
EOF

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "privileged-pods.kubewarden.admission": Post "https://policy-server-reserved-instance-for-tenant-a.kubewarden.svc:8443/validate/privileged-pods?timeout=10s": read tcp 192.168.49.2:37254->10.105.28.154:8443: read: connection reset by peer

This causes the policy server to crash:

$ kubectl -n kubewarden get pods,ClusterAdmissionPolicy
NAME                                                                READY   STATUS    RESTARTS      AGE
pod/kubewarden-controller-5cc6b54d-fbp5b                            1/1     Running   0             4m17s
pod/policy-server-default-5bb75cbf9b-jddkh                          1/1     Running   0             3m55s
pod/policy-server-reserved-instance-for-tenant-a-7b88657b77-dw794   1/1     Running   1 (55s ago)   2m22s

NAME                                                            POLICY SERVER                    MUTATING   STATUS
clusteradmissionpolicy.policies.kubewarden.io/privileged-pods   reserved-instance-for-tenant-a   false      active

and the logs, despite being at DEBUG level, don't seem to give any clues:

$ kubectl logs -n kubewarden --previous policy-server-reserved-instance-for-tenant-a-7b88657b77-dw794
Nov 26 10:43:55.550  INFO policy_server: policies download download_dir="/tmp/" policies_count=1 status="init"
Nov 26 10:43:55.550 DEBUG policy_server: download policy="privileged-pods"
Nov 26 10:43:55.582 DEBUG policy_fetcher: pulling policy url=Url { scheme: "registry", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("ghcr.io")), port: None, path: "/kubewarden/policies/pod-privileged:v0.1.9", query: None, fragment: None }
Nov 26 10:43:55.587 DEBUG oci_distribution::client: Pulling image: ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
Nov 26 10:43:55.587 DEBUG oci_distribution::client: Authorizing for image: ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
Nov 26 10:43:55.587 DEBUG reqwest::connect: starting new connection: https://ghcr.io/
Nov 26 10:43:55.837 DEBUG reqwest::async_impl::client: response '401 Unauthorized' for https://ghcr.io/v2/
Nov 26 10:43:55.837 DEBUG oci_distribution::client: Making authentication call to https://ghcr.io/token
Nov 26 10:43:55.974 DEBUG reqwest::async_impl::client: response '200 OK' for https://ghcr.io/token?scope=repository%3Akubewarden%2Fpolicies%2Fpod-privileged%3Apull&service=ghcr.io
Nov 26 10:43:55.974 DEBUG oci_distribution::client: Received response from auth request: {"token":"djE6a3ViZXdhcmRlbi9wb2xpY2llcy9wb2QtcHJpdmlsZWdlZDoxNjM3OTIzNDM2MDE1OTgzMzky"}
Nov 26 10:43:55.974 DEBUG oci_distribution::client: Succesfully authorized for image 'ghcr.io/kubewarden/policies/pod-privileged:v0.1.9'
Nov 26 10:43:55.974 DEBUG oci_distribution::client: Pulling image manifest from https://ghcr.io/v2/kubewarden/policies/pod-privileged/manifests/v0.1.9
Nov 26 10:43:56.193 DEBUG reqwest::async_impl::client: response '200 OK' for https://ghcr.io/v2/kubewarden/policies/pod-privileged/manifests/v0.1.9
Nov 26 10:43:56.193 DEBUG oci_distribution::client: validating manifest: {"schemaVersion":2,"mediaType":null,"config":{"mediaType":"application/vnd.wasm.config.v1+json","digest":"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a","size":2,"urls":null,"annotations":null},"layers":[{"mediaType":"application/vnd.wasm.content.layer.v1+wasm","digest":"sha256:59e34f482b40cc39e408c6eef76ed48a25560ee54579839eda0817cb0ada31c2","size":21858,"urls":null,"annotations":{"org.opencontainers.image.title":"sha256:59e34f482b40cc39e408c6eef76ed48a25560ee54579839eda0817cb0ada31c2"}}],"annotations":null}
Nov 26 10:43:56.193 DEBUG oci_distribution::client: Parsing response as OciManifest: (same JSON as above)
Nov 26 10:43:56.193 DEBUG oci_distribution::client: Pulling image layer
Nov 26 10:43:56.390 DEBUG reqwest::async_impl::client: redirecting 'https://ghcr.io/v2/kubewarden/policies/pod-privileged/blobs/sha256:59e34f482b40cc39e408c6eef76ed48a25560ee54579839eda0817cb0ada31c2' to 'https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:59e34f482b40cc39e408c6eef76ed48a25560ee54579839eda0817cb0ada31c2?se=2021-11-26T10%3A50%3A00Z&sig=GJJkXqHYIkoNkX93V%2FeB1eVRsa5QUzWThtiQOaROacM%3D&sp=r&spr=https&sr=b&sv=2019-12-12'
Nov 26 10:43:56.390 DEBUG reqwest::connect: starting new connection: https://pkg-containers.githubusercontent.com/
Nov 26 10:43:56.598 DEBUG reqwest::async_impl::client: response '200 OK' for https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:59e34f482b40cc39e408c6eef76ed48a25560ee54579839eda0817cb0ada31c2?se=2021-11-26T10%3A50%3A00Z&sig=GJJkXqHYIkoNkX93V%2FeB1eVRsa5QUzWThtiQOaROacM%3D&sp=r&spr=https&sr=b&sv=2019-12-12
[10:43:56.604–10:43:57.450: the same pull sequence (authorization, manifest, layer) is repeated a second time]
Nov 26 10:43:57.461  INFO policy_server: policy download name="privileged-pods" path="/tmp/registry/ghcr.io/kubewarden/policies/pod-privileged:v0.1.9" sha256sum="59e34f482b40cc39e408c6eef76ed48a25560ee54579839eda0817cb0ada31c2" mutating=false
Nov 26 10:43:57.461  INFO policy_server: policies download status="done"
Nov 26 10:43:57.461  INFO policy_server: kubernetes poller bootstrap status="init"
Nov 26 10:43:57.461  INFO policy_server: kubernetes poller bootstrap status="done"
Nov 26 10:43:57.461  INFO policy_server: worker pool bootstrap status="init"
Nov 26 10:43:57.461  INFO policy_server::kube_poller: spawning cluster context refresh loop
Nov 26 10:43:57.461  INFO policy_server::worker_pool: spawning worker spawned=2 total=5
Nov 26 10:43:57.462  INFO policy_server::worker_pool: spawning worker spawned=1 total=5
Nov 26 10:43:57.462  INFO policy_server::worker_pool: spawning worker spawned=3 total=5
Nov 26 10:43:57.462  INFO policy_server::worker_pool: spawning worker spawned=4 total=5
Nov 26 10:43:57.471 DEBUG HTTP{http.method=GET http.url=https://10.96.0.1:443/api/v1/namespaces? otel.name="HTTP" otel.kind="client"}: kube_client::client: requesting
Nov 26 10:43:57.482  INFO policy_server::worker_pool: spawning worker spawned=5 total=5
Nov 26 10:43:57.554 DEBUG HTTP{http.method=GET http.url=https://10.96.0.1:443/api/v1/services? otel.name="HTTP" otel.kind="client"}: kube_client::client: requesting
Nov 26 10:43:57.556 DEBUG HTTP{http.method=GET http.url=https://10.96.0.1:443/apis/networking.k8s.io/v1/ingresses? otel.name="HTTP" otel.kind="client"}: kube_client::client: requesting
Nov 26 10:43:57.567 DEBUG policy_server::worker_pool: worker loop start id=3
Nov 26 10:43:57.567 DEBUG policy_server::worker_pool: worker loop start id=1
Nov 26 10:43:57.567  INFO policy_server: worker pool bootstrap status="done"
Nov 26 10:43:57.567 DEBUG policy_server::worker_pool: worker loop start id=4
Nov 26 10:43:57.567 DEBUG policy_server::worker_pool: worker loop start id=2
Nov 26 10:43:57.567 DEBUG policy_server::worker_pool: worker loop start id=5
Nov 26 10:43:57.570  INFO policy_server::server: started HTTPS server address="0.0.0.0:8443"
Nov 26 10:44:02.558 DEBUG HTTP{http.method=GET http.url=https://10.96.0.1:443/api/v1/namespaces? otel.name="HTTP" otel.kind="client"}: kube_client::client: requesting
Nov 26 10:44:02.589 DEBUG HTTP{http.method=GET http.url=https://10.96.0.1:443/api/v1/services? otel.name="HTTP" otel.kind="client"}: kube_client::client: requesting
Nov 26 10:44:02.591 DEBUG HTTP{http.method=GET http.url=https://10.96.0.1:443/apis/networking.k8s.io/v1/ingresses? otel.name="HTTP" otel.kind="client"}: kube_client::client: requesting
[10:44:07–10:45:12: the three cluster-context requests above repeat every five seconds; the log ends here]

Every time I try to submit the pod that should be allowed, the policy server crashes again 😢

Consequently I'm pretty stuck!

I've tried with both kind and minikube, with the same end result. I'm trying to build https://github.com/appvia/psp-migration, which might be a useful resource for the Kubewarden folks. It'd be great to get some collaboration on it if you have time and interest (after I've got Kubewarden to bootstrap!). Thanks!
Chris

Describe the interface for policy logging through waPC

We have defined in our docs:

waPC contracts up to this point. This is the interface that we should support going forward so policies built today can be run on future Kubewarden versions without any glitch.

This issue is to describe the logging contract: https://github.com/kubewarden/policy-evaluator/blob/01548909206cbbc6ce8cf7321ca3596aa72cfb9b/src/runtimes/wapc.rs#L31-L46

Update documentation about the `sources.yml` file format

We've recently changed the format of the sources.yaml file. The documentation should be updated to reflect that.

I also think we should update the page to explain to users how to get this information into the Policy Server when the Kubewarden operator is used.

Acceptance Criteria

  • The contents of this page are updated so that:
    • The right sources.yaml format is documented
    • We provide a link to the operator manual page that contains the instructions about how to tune the PolicyServer CR
  • A new page is added under the "Operator Manual" explaining how to specify the sources.yaml configuration when using our controller
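Until that page is updated, a minimal sketch of the file could serve as a starting point. This is an illustrative assumption of the current policy-fetcher format (the registry names and certificate path below are made up, not taken from the docs):

```yaml
# sources.yaml — sketch of the new format (illustrative values)
insecure_sources:
  - "registry-dev.example.com:5000"

source_authorities:
  "registry-pre.example.com":
    - type: Path
      path: /opt/example/ca.crt
```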

Specify supported OCI registries

On the Distributing Policies section of the book, provide a list of the OCI registries that have been tested. If some registries are known to cause trouble, give an explicit list of those as well, so users can report back if they start working at some point.

Tracking issue for Kubewarden docs' Docusaurus transition

Is your feature request related to a problem?

Following tasks to be accomplished for Docusaurus migration

  • Set up contact with UI/UX team for understanding template colours
  • Create a dev-docusaurus branch for staging the content in the local GitHub repo
  • Seek assistance for CSS & fonts
  • Demonstration of revised content
  • Creation of staging/preview branch in the docs repo
  • Final demo before go-live

Solution you'd like

This is an issue to track the progress of the Docusaurus migration for Kubewarden.

Alternatives you've considered

No response

Anything else?

No response

Feat: Reorg docs following Diataxis doc system

Reorganize docs following https://diataxis.fr. As an example, see https://docs.epinio.io.

This allows the docs to cater to personas. Note that thanks to the Diataxis system, those personas don't necessarily get specific doc sections for them; they just get an easier time navigating the docs and finding the info relevant to them.

Docusaurus lints on build for broken links, so we can iterate on this process.

Acceptance criteria

  • Reorg is performed, work prioritized by the following personas (list by @mattfarina), in descending order of importance:
    1. Policy consumer/user – someone who takes a policy and uses it in a cluster. They don’t develop policies and may not even know how to write software. They may be someone in the CISO office. Their work is around running a policy in a cluster and seeing the results of that.
    2. Kubewarden/Kubernetes operator – someone who operates KW in a cluster. The person responsible for installing KW and keeping it up to date. This is an ops/devops role.
    3. Policy developer – someone who writes a policy of their own.
    4. Policy distributor – this is someone who has written a policy and wants to share it with others. They want the policy to be easily consumed by the “policy consumer” role.
    5. Kubewarden integrator – those who want to build on top of KW. This could be a custom UI, a helper tool for generating policies, or something else
    6. Kubewarden developer – those who build Kubewarden itself
  • No broken links are left in https://docs.kubewarden.io
  • Links to docs are updated in https://kubewarden.io/blog

Suggestions

As said, work should be prioritized by personas, starting from the first, to maximize adoption.

A low hanging fruit may be to create a "Reference documentation" section, and start moving relevant parts there (e.g: config files, CRDs, etc).

Document architecture

Acceptance criteria

  • Explain the components of the Kubewarden stack
  • Explain the relation between all the components
  • Describe the flow of an incoming request: what are the components it goes through

Air-gapped installation guide

Some users in the Kubewarden communication channels have requested an air-gapped installation guide. This issue keeps track of that request. Furthermore, the team should discuss whether this should be supported by Kubewarden.

Document policy Metadata

Acceptance criteria

  • Explain the purpose of policy metadata
  • Document the format of metadata.yml file
  • Explain how to annotate policies
  • Explain how to retrieve policy metadata
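To make the task concrete, a sketch of a metadata.yml file could be included in the docs. The field values below are illustrative assumptions; only the overall shape (rules, mutating flag, io.kubewarden.policy.* annotations) is meant as a reference:

```yaml
# metadata.yml — illustrative sketch
rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE"]
mutating: false
annotations:
  io.kubewarden.policy.title: example-policy
  io.kubewarden.policy.description: Short description of what the policy does
  io.kubewarden.policy.author: Jane Doe
  io.kubewarden.policy.url: https://github.com/example/example-policy
```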

Document how to write a Go policy

Acceptance criteria

Explain how to create a Go-based policy

  • Current limitations: why do we have to use TinyGo, what are the limitations of TinyGo
  • Go SDK: what are the limitations
  • How to accept incoming request
  • How to reject incoming request
  • How to mutate and accept incoming request; if that is not easily doable, just note it down under the limitations section
  • How to distribute policy
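The accept/reject part of the task boils down to serializing a validation response for the host. As a hedged, plain-Go sketch (the struct below follows the accepted/message/code convention of Kubewarden's validation protocol, but treat the exact field set as an assumption rather than the SDK's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ValidationResponse mirrors the JSON payload a policy hands back
// to the host after evaluating an admission request.
type ValidationResponse struct {
	Accepted bool    `json:"accepted"`
	Message  *string `json:"message,omitempty"`
	Code     *uint16 `json:"code,omitempty"`
}

// accept builds the payload for an allowed request.
func accept() []byte {
	out, _ := json.Marshal(ValidationResponse{Accepted: true})
	return out
}

// reject builds the payload for a denied request, with a reason.
func reject(msg string) []byte {
	out, _ := json.Marshal(ValidationResponse{Accepted: false, Message: &msg})
	return out
}

func main() {
	fmt.Println(string(accept()))
	fmt.Println(string(reject("privileged containers are not allowed")))
}
```

In a real policy this serialization would be done through the Go SDK and compiled with TinyGo, which is exactly what the limitations section above should cover.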

Document the importance of properly RBAC configured privileges.

It's necessary to document the importance of properly configured RBAC privileges to keep the admission control, and consequently Kubewarden, secure and working as expected. Therefore, only the right users should be allowed to manipulate webhook objects and CRD objects. This is the mitigation action proposed in threats #4 and #11.

NOTE: This is an issue created from RFC discussing the admission control threat model. It's created to allow the Kubewarden team discuss the proposed mitigation further and select each individual item when necessary.

Fix TLS termination on the domain

TLS termination against docs.kubewarden.io has been broken since we moved to the custom domain. This should be fixed to ensure the website can be accessed over HTTPS.

Document AdmissionPolicy

Add documentation about how to use the new AdmissionPolicy resource and how it differs from the ClusterAdmissionPolicy.
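The key difference to document is that AdmissionPolicy is namespaced, while ClusterAdmissionPolicy applies cluster-wide. A hedged sketch of such a resource (the apiVersion, module reference, and rules below are illustrative assumptions, to be checked against the CRD):

```yaml
# Illustrative AdmissionPolicy sketch — verify fields against the CRD
apiVersion: policies.kubewarden.io/v1alpha2
kind: AdmissionPolicy
metadata:
  name: privileged-pods
  namespace: default
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE"]
  mutating: false
```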

Document how to write an AssemblyScript policy

Acceptance criteria

Explain how to create an AssemblyScript-based policy

  • Current limitations: lack of scaffold tool, lack of kubewarden-sdk
  • How to accept incoming request
  • How to reject incoming request
  • How to mutate and accept incoming request
  • How to distribute policy

Dependency Dashboard

This issue provides visibility into Renovate updates and their statuses. Learn more

This repository currently has no open or pending branches.


  • Check this box to trigger a request for Renovate to run again on this repository
