
kiosk 🏢 Multi-Tenancy Extension For Kubernetes - Secure Cluster Sharing & Self-Service Namespace Provisioning

Home Page: https://kubernetes.slack.com/?redir=%2Fmessages%2Fkiosk#/

License: Apache License 2.0

Dockerfile 0.19% Go 98.82% Shell 0.67% Mustache 0.32%
kubernetes multi-tenancy devops kiosk

kiosk's Introduction

kiosk


Multi-Tenancy Extension For Kubernetes

  • Accounts & Account Users to separate tenants in a shared Kubernetes cluster
  • Self-Service Namespace Provisioning for account users
  • Account Limits to ensure quality of service and fairness when sharing a cluster
  • Namespace Templates for secure tenant isolation and self-service namespace initialization
  • Multi-Cluster Tenant Management for sharing a pool of clusters (coming soon)

kiosk Demo Video




Why kiosk?

Kubernetes is designed as a single-tenant platform, which makes it hard for cluster admins to host multiple tenants in a single Kubernetes cluster. However, sharing a cluster has many advantages, e.g. more efficient resource utilization, less admin/configuration effort or easier sharing of cluster-internal resources among different tenants.

While there are hundreds of ways of setting up multi-tenant Kubernetes clusters and many Kubernetes distributions provide their own tenancy logic, there is no lightweight, pluggable and customizable solution that allows admins to easily add multi-tenancy capabilities to any standard Kubernetes cluster.

The Missing Multi-Tenancy Extension for Kubernetes

kiosk is designed to be:

  • 100% Open-Source: CNCF compatible Apache 2.0 license
  • Pluggable: easy to install into any existing cluster and suitable for different use cases
  • Fast: emphasizing automation and self-service for tenants
  • Secure: offering default configurations for different levels of tenant isolation
  • Extensible: providing building blocks for higher-level Kubernetes platforms

Architecture

The core idea of kiosk is to use Kubernetes namespaces as isolated workspaces where tenant applications can run isolated from each other. To minimize admin overhead, cluster admins configure kiosk, which then becomes a self-service system for provisioning Kubernetes namespaces for tenants.


Workflow & Interactions

The following diagram shows the main actors (Cluster Admins and Account Users) as well as the most relevant Kubernetes resources and their relationships.

kiosk Workflow

The following sections describe each of the actors and kiosk components:

Cluster Admin

Cluster Admins have the permission to perform CRUD operations for cluster-wide / non-namespaced resources (especially RBAC related resources as well as the custom resources Account, AccountQuota, AccountQuotaSet, and Template). Cluster Admins configure kiosk by creating and managing Accounts, AccountQuotas, AccountQuotaSets, and Templates. They can also see and configure all Spaces owned by all Accounts.


Account

Every tenant is represented by an Account. Cluster Admins define and manage Accounts and assign Account Users (Users, Groups, ServiceAccounts) to Accounts - similar to assigning RBAC Roles to subjects as part of a RoleBinding configuration.


Account User

Account Users perform actions within the Kubernetes cluster via API server requests while using a certain Account. Cluster Admins can assign the same Account User to multiple Accounts. Account Users have access to Spaces that belong to the Accounts they are using. If assigned the default kiosk ClusterRole, every Account User has the permission to list/get/create/delete Spaces for the respective Account, however, this can be changed via RBAC RoleBindings.


Space

A Space is a non-persistent, virtual resource that represents exactly one Kubernetes namespace. Spaces have the following characteristics:

  • Every Space can belong to at most one Account, which is the owner of this Space. Ownerless Spaces are possible.
  • If a user has the right to access the underlying Namespace, the user can access the Space in the same way. Hence, besides Account Users, other actors (User, Group, ServiceAccount) can also access the Space if someone grants this access via additional Kubernetes RBAC.
  • Every User only sees the Spaces the User has access to. This is in contrast to regular namespaces, where Users can either list all namespaces or none.
  • Space ownership can be changed by changing the ownership annotation on the namespace.
  • During Space creation (or Space ownership changes), a RoleBinding for the owning Account is created in the corresponding Space namespace. The referenced RBAC ClusterRole can be configured in the Account.
  • A Space can be prepopulated during creation with a predefined set of resources by configuring default Templates in the Account. kiosk makes sure that these resources are correctly deployed before the user gets access to the namespace.

Namespace

A Namespace is a regular Kubernetes Namespace that can be accessed by anyone who has the appropriate RBAC rules to do so. Namespaces are provisioned and managed by kiosk and have a 1-to-1 relationship to the resource Space which is a custom resource of kiosk. By default, Account Users have the permission to operate within all Namespaces that are represented by Spaces which belong to one of their Accounts.


Template

Templates are defined and managed by Cluster Admins. Templates are used to initialize Spaces/Namespaces with a set of Kubernetes resources (defined as plain manifests or as part of a Helm chart). Because Templates can be applied using a different ClusterRole than the Account User's, they can create resources that actors of the Space/Namespace are not allowed to create themselves, e.g. isolation resources such as Network Policies or Pod Security Policies. Cluster Admins can define default Templates within the Account configuration, which automatically applies these Templates to each Space that is created using the respective Account. Additionally, Account Users can specify other, non-mandatory Templates that should also be applied when creating a Space.


TemplateInstance

When a Template is applied to a Space, kiosk creates a TemplateInstance to keep track of which Templates have been applied to the Space. A TemplateInstance contains information about the Template as well as about the parameters used to instantiate it. Additionally, TemplateInstances can be configured to sync with their Templates, i.e. the TemplateInstance will update the resources whenever the Template that was used to create them changes.


AccountQuota

AccountQuotas are defined and managed by Cluster Admins. AccountQuotas define cluster-wide aggregated limits for Accounts. The resources of all Spaces/Namespaces that belong to an Account count towards the aggregated limits defined in the AccountQuota. Similar to Namespaces which can be limited by multiple ResourceQuotas, an Account can be limited by multiple AccountQuotas. If the same limit (e.g. total CPU per Account) is defined by multiple AccountQuotas, the Account will be limited according to the lowest value.



Custom Resources & Resource Groups

When installing kiosk in a Kubernetes cluster, these components will be added to the cluster:

  • CRDs for Account, AccountQuota, AccountQuotaSet, Template, TemplateInstance
  • Controller for kiosk Custom Resources (runs inside the cluster)
  • API Server Extension (runs inside the cluster similar to the Controller)

kiosk Data Structure

kiosk adds two groups of resources to extend the Standard API Groups of Kubernetes:

  1. Custom Resources: config.kiosk.sh
    Custom Resource Definitions (CRDs) for configuring kiosk. These resources are persisted in etcd just like any other Kubernetes resources and are managed by an operator which runs inside the cluster.

    • config.kiosk.sh/Account
    • config.kiosk.sh/AccountQuota
    • config.kiosk.sh/AccountQuotaSet (soon)
    • config.kiosk.sh/Template
    • config.kiosk.sh/TemplateInstance

  2. API Extension: tenancy.kiosk.sh
    Virtual resources which are accessible via an API Server Extension and are not persisted in etcd. These resources are similar to views in a relational database. The benefit of providing these resources instead of only using CRDs is that access permissions can be calculated dynamically for every request. This means kiosk not only lets users list, edit, and manage Spaces (which map 1-to-1 to Namespaces), it can also show a different set of Spaces to different Account Users depending on the Accounts they are associated with. In other words, it circumvents the current Kubernetes limitation that cluster-scoped resources cannot be listed in a filtered way based on access rights.

    • tenancy.kiosk.sh/Account
    • tenancy.kiosk.sh/AccountQuota
    • tenancy.kiosk.sh/Space
    • tenancy.kiosk.sh/TemplateInstance


Getting Started

0. Requirements

0.1. CLI Tools

This guide uses kubectl and helm v3. Make sure both CLI tools are installed.
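You can verify this, for example, with:

kubectl version --client
helm version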

0.2. Kubernetes Cluster

kiosk supports Kubernetes v1.14 and higher. Use kubectl version to determine the Server Version of your cluster. While this getting started guide should work out-of-the-box with most Kubernetes clusters, there are certain things to consider for the following types of clusters:

Docker Desktop Kubernetes

All ServiceAccounts have the cluster-admin role by default, which means that emulating users with ServiceAccounts is not a good idea. Use impersonation instead.


Digital Ocean Kubernetes (DOKS)

All users in DOKS have the cluster-admin role by default, which means that when using impersonation, every user will have admin access. To see kiosk-based multi-tenancy in action, create ServiceAccounts to emulate different users.


Google Kubernetes Engine (GKE)

Your kube-context will not have the cluster-admin role by default. Run the following commands to get your Google email address and make your user cluster admin:

# GKE: make yourself admin
GKE_USER=$(gcloud config get-value account)
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $GKE_USER


0.3. Admin Context

You need a kube-context with admin rights.

If all of the following commands return yes, you are most likely an admin:

kubectl auth can-i "*" "*" --all-namespaces
kubectl auth can-i "*" namespace
kubectl auth can-i "*" clusterrole
kubectl auth can-i "*" crd

1. Install kiosk

# Install kiosk with helm v3
kubectl create namespace kiosk
helm install kiosk --repo https://charts.devspace.sh/ kiosk --namespace kiosk --atomic

To verify the installation, make sure the kiosk pod is running:

$ kubectl get pod -n kiosk

NAME                     READY   STATUS    RESTARTS   AGE
kiosk-58887d6cf6-nm4qc   2/2     Running   0          1h

2. Configure Accounts

In the following steps, we will use Kubernetes user impersonation to let you quickly switch between the cluster admin role and a simple account user role. If you are a cluster admin and want to run a kubectl command as a different user, you can impersonate this user by adding the kubectl flags --as=[USER] and/or --as-group=[GROUP].
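For example, you can check a permission once as your admin user and once while impersonating john (the group name engineers is just a hypothetical illustration):

# As cluster admin (current kube-context)
kubectl auth can-i create namespaces

# Impersonating user john
kubectl auth can-i create namespaces --as=john

# Impersonating john as a member of a group
kubectl auth can-i create namespaces --as=john --as-group=engineers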

In this getting started guide, we assume two user roles:

  • Cluster Admin: use your admin-context as current context (kubectl commands without --as flag)
  • Account User john: use your admin-context to impersonate a user (kubectl commands with --as=john)

If you are using Digital Ocean Kubernetes (DOKS), follow this guide to simulate a user using a Service Account.


2.1. Create Account

To allow a user to create and manage namespaces, they need a kiosk account. Run the following command to create such an account for our example user john:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account.yaml

# Alternative: ServiceAccount as Account User (see explanation for account-sa.yaml below)
# kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-sa.yaml
View: account.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account
spec:
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io

As you can see in this example, every Account defines subjects which are able to use this Account. In this example, there is only one subject: a User named john. However, Accounts can also have multiple subjects.

Subjects for kiosk Accounts are defined in the exact same way as subjects in RoleBindings. Subjects can be a combination of:

  • Users
  • Groups
  • ServiceAccounts (see example below: account-sa.yaml)
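For illustration, an Account whose subjects combine a User and a Group might look like this (a sketch; the Account name and group name are hypothetical):

apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: team-account # hypothetical name
spec:
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: engineering # hypothetical group
    apiGroup: rbac.authorization.k8s.io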

View: account-sa.yaml (alternative for ServiceAccounts, e.g. Digital Ocean Kubernetes)

If you want to assign an Account to a ServiceAccount (e.g. when using Digital Ocean Kubernetes / DOKS), please use the following alternative:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-sa.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account
spec:
  subjects:
  - kind: ServiceAccount
    name: john
    namespace: kiosk

Learn more about User Management and Accounts in kiosk.


2.2. View Accounts

All Account Users are able to view their Account through their generated ClusterRole. Let's try this by impersonating john:

# View your own accounts as regular account user
kubectl get accounts --as=john

# View the details of one of your accounts as regular account user
kubectl get account johns-account -o yaml --as=john

3. Working with Spaces

Spaces are the virtual representation of namespaces. Each Space represents exactly one namespace. The reason why we use Spaces is that by introducing this virtual resource, we can allow users to only operate on a subset of namespaces they have access to and hide other namespaces they shouldn't see.


3.1. Allow Users To Create Spaces

By default, Account Users cannot create Spaces themselves. They can only use the Spaces/Namespaces that belong to their Accounts. That means a cluster admin would need to create the Spaces for an Account and then the Account Users could work with these Spaces/Namespaces.

To allow all Account Users to create Spaces for their own Accounts, create the following RBAC ClusterRoleBinding:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/rbac-creator.yaml
View: rbac-creator.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kiosk-creator
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kiosk-edit
  apiGroup: rbac.authorization.k8s.io

Of course, you can also adjust this ClusterRoleBinding so that only certain subjects/users can create Spaces for their Accounts. Just modify the subjects section.
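For example, a hypothetical variant that grants Space creation only to the user john (instead of all authenticated users) might look like this, reusing the kiosk-edit ClusterRole from above:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kiosk-creator-john # hypothetical name
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kiosk-edit
  apiGroup: rbac.authorization.k8s.io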



3.2. Create Spaces

After granting Account Users the right to create Spaces for their Accounts (see ClusterRoleBinding in 3.1.), all Account Users are able to create Spaces. Let's try this by impersonating john:

kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space.yaml --as=john
View: space.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: johns-space
spec:
  # spec.account can be omitted if the current user only belongs to a single account
  account: johns-account

As you can see in this example, every Space belongs to exactly one Account which is referenced by spec.account.



3.3. View Spaces

Let's take a look at the Spaces of the Accounts that User john owns by impersonating this user:

# List all Spaces as john:
kubectl get spaces --as=john

# Get the details of one of john's Spaces:
kubectl get space johns-space -o yaml --as=john

3.4. Use Spaces

Every Space is the virtual representation of a regular Kubernetes Namespace. That means we can use the associated Namespace of our Spaces just like any other Namespace.

Let's impersonate john again and create an nginx deployment inside johns-space:

kubectl apply -n johns-space --as=john -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/deployment.yaml

That's great, right? A user who previously had no access to the Kubernetes cluster is now able to create Namespaces on-demand and automatically gets restricted access to these Namespaces.
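To see these restrictions in action, you can, for example, compare john's access inside and outside of his Space (the second command should fail with a Forbidden error):

# Works: john has admin access within his own Space
kubectl get pods -n johns-space --as=john

# Should fail: john has no access to namespaces outside his Spaces
kubectl get pods -n default --as=john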


3.5. Create Deletable Spaces

To allow Account Users to delete the Spaces/Namespaces that they create, you need to set the spec.space.clusterRole field in the Account to kiosk-space-admin.

When creating a Space, kiosk creates the corresponding Namespace for the Space and then creates a RoleBinding within this Namespace which binds the standard Kubernetes ClusterRole admin to every Account User (i.e. all subjects listed in the Account). While this ClusterRole allows full access to the Namespace, it does not allow deleting the Space/Namespace (the verb delete for namespaces is missing in the default admin ClusterRole).
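You can verify this, for example, with kubectl auth can-i, which should return no for the Space created earlier:

# The default admin ClusterRole does not allow deleting the namespace
kubectl auth can-i delete namespaces -n johns-space --as=john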

As john can be a User of multiple Accounts, let's create a second Account which allows john to delete Spaces/Namespaces that belong to this Account:

# Run this as cluster admin:
# Create Account johns-account-deletable-spaces
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-deletable-spaces.yaml
View: account-deletable-spaces.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account-deletable-spaces
spec:
  space: 
    clusterRole: kiosk-space-admin
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io


If you are using ServiceAccounts instead of impersonation, adjust the subjects section of this Account similar to account-sa.yaml in 2.1.

Now, let's create a Space for this Account:

# Run this as john:
# Create Space johns-space-deletable
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-deletable.yaml --as=john
View: space-deletable.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: johns-space-deletable
spec:
  account: johns-account-deletable-spaces


3.6. Delete Spaces

If a Space belongs to an Account that allows Account Users to delete such Spaces, an Account User can simply delete the Space using kubectl:

kubectl get spaces --as=john
kubectl delete space johns-space-deletable --as=john
kubectl get spaces --as=john

Deleting a Space also deletes the underlying Namespace.
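For example, after deleting the Space above, looking up the underlying Namespace as cluster admin should return a NotFound error:

# Run this as cluster admin:
kubectl get namespace johns-space-deletable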

3.7. Defaults for Spaces

kiosk provides the spec.space.spaceTemplate option for Accounts which lets admins define defaults for new Spaces of an Account. The following example creates the Account johns-account-default-space-metadata which defines default labels and annotations for all Spaces created with this Account:

# Run this as cluster admin:
# Create Account johns-account-default-space-metadata
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-default-space-metadata.yaml
View: account-default-space-metadata.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account-default-space-metadata
spec:
  space: 
    clusterRole: kiosk-space-admin
    spaceTemplate:
      metadata:
        labels:
          some-label: "label-value"
          some-other-label: "other-label-value"
        annotations:
          "space-annotation-1": "annotation-value-1"
          "space-annotation-2": "annotation-value-2"
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io


4. Setting Account Limits

With kiosk, you have two options to limit Accounts: limiting the number of Spaces (4.1.) and AccountQuotas (4.2.).


4.1. Limit Number of Spaces

By setting the spec.space.limit in an Account, Cluster Admins can limit the number of Spaces that Account Users can create for a certain Account.

Let's run the following command to update the existing Account johns-account and specify spec.space.limit: 2:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-space-limit.yaml
View: account-space-limit.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account
spec:
  space:
    limit: 2
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io


Now, let's try to create more than 2 Spaces (note that you may have already created a Space for this Account during earlier steps of this getting started guide):

# List existing spaces:
kubectl get spaces --as=john

# Create space-2 => should work if you had only one Space for this Account so far
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-2.yaml --as=john

# Create space-3 => should result in an error
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-3.yaml --as=john

4.2. AccountQuotas

AccountQuotas allow you to define limits for an Account which are aggregated across all Spaces of this Account.

Let's create an AccountQuota for johns-account which limits the aggregated number of Pods across all Spaces to 2 and the aggregated sum of limits.cpu across all Pods in all Spaces to 4 CPU cores (see Kubernetes resource limits):

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/accountquota.yaml
View: accountquota.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: default-user-limits
spec:
  account: johns-account
  quota:
    hard:
      pods: "2"
      limits.cpu: "4"

AccountQuotas allow you to restrict the same resources as Kubernetes ResourceQuotas, but unlike ResourceQuotas, AccountQuotas are not restricted to a single Namespace. Instead, AccountQuotas add up all used resources across all Spaces of an Account to compute an aggregated value which is then compared to the maximum value defined in the AccountQuota.

If there are multiple AccountQuotas referencing the same Account via spec.account, kiosk merges the Quotas. In case multiple AccountQuotas define different limits for the same resource type, kiosk uses the lowest value.
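For illustration, if the following two AccountQuotas (hypothetical names) both reference johns-account, the effective limits would be 2 pods (the lower of the two values) plus 10Gi of limits.memory:

apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: quota-a # hypothetical
spec:
  account: johns-account
  quota:
    hard:
      pods: "2"
---
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: quota-b # hypothetical
spec:
  account: johns-account
  quota:
    hard:
      pods: "5"
      limits.memory: "10Gi"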




5. Working with Templates

Templates in kiosk are used to initialize Namespaces and share common resources across namespaces (e.g. secrets). When creating a Space, kiosk uses these Templates to populate the newly created Namespace for this Space. Templates can define resources either as plain Kubernetes manifests (5.1.) or as a Helm chart (5.2.).


5.1. Manifest Templates

The easiest option to define a Template is to specify an array of Kubernetes manifests which should be applied when the Template is instantiated.

The following command creates a Template called space-restrictions which defines 2 manifests: a NetworkPolicy which denies traffic from other namespaces and a LimitRange which sets default CPU limits for containers in this Namespace:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-manifests.yaml
View: template-manifests.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: space-restrictions
# This section defines parameters that can be used for this template
# Can be used in resources.manifests and resources.helm.values
parameters:
# Name of the parameter
- name: DEFAULT_CPU_LIMIT
  # The default value of the parameter
  value: "1"
- name: DEFAULT_CPU_REQUESTS
  value: "0.5"
  # If a parameter is required the template instance will need to set it
  # required: true
  # Make sure only numeric values are entered for this parameter
  validation: "^[0-9]*\\.?[0-9]+$"
resources:
  manifests:
  - kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: deny-cross-ns-traffic
    spec:
      podSelector:
        matchLabels:
      ingress:
      - from:
        - podSelector: {}
  - apiVersion: v1
    kind: LimitRange
    metadata:
      name: space-limit-range
    spec:
      limits:
      - default:
          # Use the DEFAULT_CPU_LIMIT parameter here and
          # parse it as json, which renders the "1" as 1. 
          cpu: "${{DEFAULT_CPU_LIMIT}}"
        defaultRequest:
          cpu: "${{DEFAULT_CPU_REQUESTS}}"
        type: Container


5.2. Helm Chart Templates

Instead of manifests, a Template can specify a Helm chart that will be installed (using helm template) when the Template is being instantiated. Let's create a Template called redis which installs the stable/redis Helm chart:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-helm.yaml
View: template-helm.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: redis
resources:
  helm:
    releaseName: redis
    chart:
      repository:
        name: redis
        repoUrl: https://kubernetes-charts.storage.googleapis.com
    values: |
      redisPort: 6379      
      # Use a predefined parameter here
      myOtherValue: ${NAMESPACE}


5.3. Using Templates

By default, only admins can list Templates. To allow users to view Templates, you need to set up RBAC accordingly. Run the following command to allow every cluster user to list and view all Templates:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/rbac-template-viewer.yaml
View: rbac-template-viewer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kiosk-template-view
rules:
- apiGroups:
  - config.kiosk.sh
  resources:
  - templates
  verbs:
  - get
  - list
  - watch
  - create
  - delete
  - deletecollection
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kiosk-template-viewer
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kiosk-template-view
  apiGroup: rbac.authorization.k8s.io

To view a list of available Templates, run the following command:

kubectl get templates --as=john

To instantiate a Template, users need to have permission to create TemplateInstances within their Namespaces. You can grant this permission by running this command:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/rbac-template-instance-admin.yaml

Note: Creating a TemplateInstance in a Space is only possible if a RoleBinding exists that binds the ClusterRole kiosk-template-admin to the user. Because kiosk-template-admin has the label rbac.kiosk.sh/aggregate-to-space-admin: "true" (see rbac-instance-admin.yaml below), it is also possible to create a RoleBinding for the ClusterRole kiosk-space-admin (which automatically includes kiosk-template-admin).

View: rbac-instance-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kiosk-template-admin
  labels:
    rbac.kiosk.sh/aggregate-to-space-admin: "true"
rules:
- apiGroups:
  - config.kiosk.sh
  resources:
  - templateinstances
  verbs:
  - get
  - list
  - watch
  - create
  - delete
  - deletecollection
  - patch
  - update

After creating the ClusterRole kiosk-template-admin as shown above, users can instantiate Templates inside their Namespaces by creating so-called TemplateInstances. The following example creates an instance of the Helm chart Template redis which was created above:

kubectl apply --as=john -n space-2 -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-instance.yaml

Note: In the above example, we are using space-2 which belongs to the Account johns-account-deletable-spaces. This Account sets space.clusterRole: kiosk-space-admin, which automatically creates a RoleBinding for the ClusterRole kiosk-space-admin when a new Space is created for this Account.

View: template-instance.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: TemplateInstance
metadata:
  name: redis-instance
spec:
  template: redis
  # You can also specify a parameter value here
  # parameters:
  # - name: DEFAULT_CPU_REQUESTS
  #   value: "1"


5.4. Mandatory vs. Optional Templates

Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.space.templateInstances array within the Account configuration. All Templates listed in spec.space.templateInstances will always be instantiated within every Space/Namespace that is created for the respective Account.

Let's see this in action by updating the Account johns-account-deletable-spaces and referencing our space-restrictions Template from 5.1. in spec.space.templateInstances:

# Run this as cluster admin:
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/account-default-template.yaml
View: account-default-template.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: johns-account-deletable-spaces
spec:
  space:
    clusterRole: kiosk-space-admin
    templateInstances:
    - spec:
        template: space-restrictions
        # Specifying parameter values here is also possible
        parameters:
          - name: DEFAULT_CPU_REQUESTS
            value: "2"
  subjects:
  - kind: User
    name: john
    apiGroup: rbac.authorization.k8s.io

If you are using ServiceAccounts instead of impersonation, adjust the subjects section of this Account similar to account-sa.yaml in 2.1.


Now, let's create a Space without specifying any templates and see how this Template will automatically be instantiated:

kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-template-mandatory.yaml --as=john
View: space-template-mandatory.yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: johns-space-template-mandatory
spec:
  account: johns-account # can be omitted if the user only has 1 account

Now, we can run the following command to see that the two resources (NetworkPolicy and LimitRange) defined in our Template space-restrictions have been created inside the Space/Namespace:

# Run this as cluster admin:
kubectl get networkpolicy,limitrange -n johns-space-template-mandatory

Mandatory Templates are generally used to enforce security restrictions and isolate namespaces from each other, while Optional Templates often provide a set of default applications that a user might want to choose from when creating a Space/Namespace (see example in 5.2).


5.5. TemplateInstances

To keep track of resources created from Templates, kiosk creates a so-called TemplateInstance for each Template that is being instantiated inside a Space/Namespace.

To view the TemplateInstances of the namespace johns-space-template-mandatory, run the following command:

# Run this as cluster admin:
kubectl get templateinstances -n johns-space-template-mandatory

TemplateInstances allow admins and users to see which Templates are being used within a Space/Namespace, and they make it possible to upgrade the resources created by a Template if there is a newer version of the Template (coming soon).


5.6. Template Sync

Generally, a TemplateInstance is created from a Template once and is not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true in a TemplateInstance. Setting this option tells kiosk to keep the TemplateInstance in sync with the underlying Template using a 3-way merge (similar to helm upgrade).

The following example creates an instance of the Helm chart Template redis which was created above and specifies that this TemplateInstance should be kept in sync with the underlying Template:

kubectl apply --as=john -n space-2 -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-instance-sync.yaml
View: template-instance-sync.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: TemplateInstance
metadata:
  name: redis-instance-sync
spec:
  template: redis
  sync: true
  # You can specify parameters here as well
  # parameters:
  # - name: DEFAULT_CPU_REQUESTS
  #   value: "1"


Upgrade kiosk

helm upgrade kiosk --repo https://charts.devspace.sh/ kiosk -n kiosk --atomic --reuse-values

Check the release notes for details on how to upgrade to a specific release.
Do not skip releases with release notes containing upgrade instructions!
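To check which chart version is currently installed, you can, for example, run:

helm list -n kiosk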


Uninstall kiosk

helm delete kiosk -n kiosk
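Note: helm delete does not remove CustomResourceDefinitions that were installed via a chart's crds/ folder (standard Helm v3 behavior). If you also want to remove kiosk's CRDs, you can delete them manually; a sketch, assuming the CRD names follow the config.kiosk.sh resources listed above:

# Warning: this also deletes all Accounts, AccountQuotas, Templates and TemplateInstances
kubectl delete crd accounts.config.kiosk.sh accountquotas.config.kiosk.sh templates.config.kiosk.sh templateinstances.config.kiosk.sh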

Extra: User Management & Authentication

kiosk does not provide a built-in user management system.

To manage users in your cluster, you can either use vendor-neutral solutions such as dex or DevSpace Cloud or, if you are on a public cloud, provider-specific solutions such as AWS IAM for EKS or GCP IAM for GKE.

Using ServiceAccounts For Authentication

If you would like to use ServiceAccounts for lightweight, easy-to-set-up authentication and user management, you can use the following instructions to create new users / kube-configs.

Use bash to run the following commands.

1. Create a ServiceAccount

USER_NAME="john"
kubectl -n kiosk create serviceaccount $USER_NAME

2. Create Kube-Config For ServiceAccount

# If not already set, then:
USER_NAME="john"

KUBECONFIG_PATH="$HOME/.kube/config-kiosk"

kubectl config view --minify --raw >$KUBECONFIG_PATH
export KUBECONFIG=$KUBECONFIG_PATH

CURRENT_CONTEXT=$(kubectl config current-context)
kubectl config rename-context $CURRENT_CONTEXT kiosk-admin

CLUSTER_NAME=$(kubectl config view -o jsonpath="{.clusters[].name}")
ADMIN_USER=$(kubectl config view -o jsonpath="{.users[].name}")

SA_NAME=$(kubectl -n kiosk get serviceaccount $USER_NAME -o jsonpath="{.secrets[0].name}")
SA_TOKEN=$(kubectl -n kiosk get secret $SA_NAME -o jsonpath="{.data.token}" | base64 -d)

kubectl config set-credentials $USER_NAME --token=$SA_TOKEN
kubectl config set-context kiosk-user --cluster=$CLUSTER_NAME --user=$USER_NAME
kubectl config use-context kiosk-user

# Optional: delete admin context and user
kubectl config unset contexts.kiosk-admin
kubectl config unset users.$ADMIN_USER

export KUBECONFIG=""
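Note: On Kubernetes v1.24 and newer, ServiceAccounts no longer get a long-lived token Secret created automatically, so the .secrets[0].name lookup above may come back empty. In that case, you can request a token explicitly instead (the duration is just an example value):

# Alternative for Kubernetes v1.24+:
SA_TOKEN=$(kubectl -n kiosk create token $USER_NAME --duration=24h)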

3. Use ServiceAccount Kube-Config

# If not already set, then:
KUBECONFIG_PATH="$HOME/.kube/config-kiosk"

export KUBECONFIG=$KUBECONFIG_PATH

kubectl ...

4. Reset Kube-Config

export KUBECONFIG=""

kubectl ...

Contributing

There are many ways to get involved:

  • Open an issue for questions, to report bugs or to suggest new features
  • Open a pull request to contribute improvements to the code base or documentation
  • Email one of the maintainers (Fabian, Lukas) to find out more about the project and how to get involved

For more detailed information, see our Contributing Guide.

This is a very new project, so we are actively looking for contributors and maintainers. Reach out if you are interested.


About kiosk

kiosk is an open-source project licensed under the Apache-2.0 license. The project will be contributed to the CNCF once it reaches the required level of popularity and maturity. The first version of kiosk was developed by DevSpace Technologies as a core component of their DevSpace Cloud on-premises edition.

kiosk's People

Contributors

czhujer, danielthiry, fabiankramm, floriankutz, jerlam06, xiaods


kiosk's Issues

for README, create a Space johns-space-deletable with cluster-admin not working

In the README's tutorial, I got confused when I came across this step:

Now, let's create a Space for this Account:

# Run this as cluster-admin:
# Create Space johns-space-deletable
kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space-deletable.yaml 

Actually, with the new account created for user john, the cluster admin can't create the space. How can I do this?

Proposal: Sync Template & TemplateInstance

Problem

Templates are a great tool to define resources that should be created on space creation. After the resources are created by the TemplateInstance, changes to the corresponding Template do not change any deployed resources created by the TemplateInstance anymore (which is initially good and should stay the default behavior).

However, I argue that there are cases where a user wants to sync a TemplateInstance with a Template: imagine an organization that has a common helm chart that takes care of isolation practices the organization wants to enforce for (certain) spaces (e.g. LimitRanges, NetworkPolicies etc.). With the current features of kiosk, the organization has to handle the synchronization of the common helm chart and the actually deployed instances of this chart itself, which is not an easy task, especially if tenants should still be able to create namespaces on demand.

Solution

We allow and take care of synchronization between TemplateInstances and Templates. I propose a new property in the TemplateInstanceSpec called sync that, when enabled, ensures changes to the Template are reflected by the corresponding TemplateInstance. This makes it easy to enable synchronization on a case to case basis and plays nicely with the proposed changes in #32.

Potential Implementation

Currently the TemplateInstance already records several metadata fields of the deployed resources, which can be used to uniquely identify them. On Template change, we can compute a delta between the previously deployed resources and the new resources and apply the changes. However, there are still several challenges on the implementation side:

  • What happens to changes that were made by the user / admin to resources deployed by the TemplateInstance? Should we use the three-way merge helm uses?
  • Should we convert the to-be-deployed resources on the fly into a helm chart and let helm do the delta / merging part on a parent Template update? (Especially since helm 3 does not require a tiller anymore and we have helm in the container image anyway.) One potential problem I see is that namespace users could delete the created helm secret and make updating the chart harder.
  • If we fail once, should we retry or stay in a failed state? If we retry, can we ensure that the sync is an atomic operation and is reverted on failure? What if this cannot be ensured?

Having a placeholder for the targeted namespace in a Template

Hi,

It would be nice if we could define some kind of placeholder for the Namespace in a Template definition. For example, the namespace is mandatory if I want to create a ClusterRoleBinding or RoleBinding in the Template, e.g.:

cat << EOF > template-manifests.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: space-template
resources:
  manifests:
  - kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: deny-cross-ns-traffic
    spec:
      podSelector:
        matchLabels:
      ingress:
      - from:
        - podSelector: {}
  - apiVersion: v1
    kind: LimitRange
    metadata:
      name: space-limit-range
    spec:
      limits:
      - default:
          cpu: 1
        defaultRequest:
          cpu: 0.5
        type: Container
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: utils-admin
  - apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: utils-admin-dev-space3
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: utils-admin
      namespace: {{namespace}} #<-- HERE
EOF

Do you think this kind of feature could be added?

thanks

About network isolation

Hi,
I wonder if there is any mechanism to isolate network traffic between different tenants in this project?
I see that you apply a NetworkPolicy in template-manifests, but this policy seems to be applied only in the "default" namespace, not in all other namespaces.
Thank you.

Going through the README's tutorial, the john account can't view a Space as in the example

Environment:
mac
docker desktop
kubernetes 1.16.5

I followed the README's tutorial; when I got to the create-space steps, I came across this bug:

~ took 15s 
❯ kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space.yaml --as=john
space.tenancy.kiosk.sh/johns-space created

~ took 35s 
❯ kubectl get spaces --as=john
No resources found.

~ took 12s 
❯ kubectl get spaces --as=john
No resources found.

~ took 3s 
❯ kubectl get space johns-space -o yaml --as=john
Error from server (NotFound): space.tenancy.kiosk.sh "johns-space" not found

How can I debug this situation?

feat: leader election & high availability

Currently kiosk can only be run as a single pod. Scaling up the replicas leads to errors since the controllers are started multiple times. We should add an option with which kiosk can be run in a leader election mode to make kiosk highly available.

Question: How to setup in existing cluster - Namespace migration

Hi team,

first of all: this extension is awesome and exactly what I was searching for! Amazing work, including the comprehensive documentation and examples :)

I have a question regarding setting kiosk up in an already existing cluster with several "customer namespaces".
How is it possible to migrate existing namespaces to the kiosk space/account structure?

From the documentation I got that:

  • Space ownership can be changed, by changing the ownership annotation on the namespace
  • During Space creation (or Space ownership changes) a RoleBinding for the owning Account is created in the corresponding Space namespace. The referenced RBAC ClusterRole can be configured in the account.

But apart from the fact that the ownership relation has been changed from annotations to labels(?!), this does not seem to work for namespaces that existed before installing kiosk.

Is there any documentation and/or example on how to make this work?
This might be related to and potentially already is the answer: #140

Many thanks in advance and keep up the great work!

Access problem need help

Currently I am studying custom resources. Basic RBAC is not enough; I couldn't implement a feature like "Every User only sees the resources the User has access to."

For example: a user can only list the resources which they created.

But I noticed that this problem has been solved in this project:

A Space is a non-persistent, virtual resource that represents exactly one Kubernetes namespace. Spaces have the following characteristics:
Every User only sees the Spaces the User has access to. 

So I'm asking you for help: how is this implemented? Thank you for your help.

User can't create TemplateInstance in its own Space

I followed the README steps and everything works fine until the part where the users are given access to view Template and create TemplateInstance resources.

I'm using the current latest release of kiosk.

~/kiosk_test $ kubectl get templates.config.kiosk.sh --as=john
NAME                 AGE
redis                8m57s
space-restrictions   9m2s

~/kiosk_test $ kubectl apply -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/rbac-template-instance-admin.yaml
clusterrole.rbac.authorization.k8s.io/kiosk-template-admin unchanged

~/kiosk_test $ kubectl apply --as=john -n johns-space -f https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-instance.yaml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "config.kiosk.sh/v1alpha1, Resource=templateinstances", GroupVersionKind: "config.kiosk.sh/v1alpha1, Kind=TemplateInstance"
Name: "redis-instance", Namespace: "johns-space"
Object: &{map["apiVersion":"config.kiosk.sh/v1alpha1" "kind":"TemplateInstance" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"redis-instance" "namespace":"johns-space"] "spec":map["template":"redis"]]}
from server for: "https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/template-instance.yaml": templateinstances.config.kiosk.sh "redis-instance" is forbidden: User "john" cannot get resource "templateinstances" in API group "config.kiosk.sh" in the namespace "johns-space"

Permanent template resources

Is it possible to prevent the deletion of some template resources, or redeploy them automatically if they are deleted?

I'm thinking specifically of things like a NetworkPolicy object that prevents communication between Account namespaces. In the getting started documentation, the "John" user can delete the NetworkPolicy object created by the template when creating the "johns-space-template-mandatory" Space.

apiserver: add meaningful columns for spaces & accounts

With the new Kubernetes version it is possible to tell kubectl what a table should look like for a custom virtual resource. We should do that for spaces and accounts to improve the kubectl get spaces and kubectl get accounts output.

Loft

This is less of an issue than a request for comment. I see there's a pseudo-new solution called Loft, https://loft.sh. Does it have any affiliation with kiosk?

ValidatingWebhookConfiguration "kiosk" in namespace "" exists and cannot be imported into the current release

Hello. Nice product you have here with kiosk.
I installed loft a couple of times and removed it a couple of times. One time it got stuck and I could not delete the loft namespace, so I had to forcefully delete that namespace by removing its finalizer.
Now I cannot install kiosk anymore because it complains about ownership of the key "meta.helm.sh/release-namespace", which equals "loft" and not "kiosk".

How can I remove that key manually?

helm install kiosk --repo https://charts.devspace.sh/ kiosk --namespace kiosk --atomic --debug
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/myname/.cache/helm/repository/kiosk-0.1.23.tgz

Error: rendered manifests contain a resource that already exists. Unable to continue with install: ValidatingWebhookConfiguration "kiosk" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "kiosk": current value is "loft"
helm.go:81: [debug] ValidatingWebhookConfiguration "kiosk" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "kiosk": current value is "loft"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).Run
	/home/circleci/helm.sh/helm/pkg/action/install.go:276
main.runInstall
	/home/circleci/helm.sh/helm/cmd/helm/install.go:242
main.newInstallCmd.func2
	/home/circleci/helm.sh/helm/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:958
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
main.main
	/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
	/usr/local/go/src/runtime/proc.go:204
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1374

Debug Mode to see failed rendered chart?

I have a TemplateInstance that's failing with a cryptic Helm-related error, and I was wondering if there's a way to turn on debug mode for the helm template command so I can see where the chart failed to render?

Validating Webhook accountquotas.config.kiosk.sh

AccountQuota needs a validating webhook to check for the following things:

  • Create/Update:
    • Ensure a valid resource list (currently you can specify invalid resources); basically do the same checks as ResourceQuota does

Account spaceTemplate is not reconciled

Hi,

I'm using kiosk 0.2.8 and have an Account with the field spec.space.spaceTemplate to define Network Policies that should be present in each space of that account. Readme: https://github.com/loft-sh/kiosk#37-defaults-for-spaces

I have a Template resource which is used in that Account resource:

apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: testbenchframework-account
  ...
spec:
  space:
    clusterRole: kiosk-space-admin
    limit: 2
    spaceTemplate:
      metadata:
        creationTimestamp: null
        labels:
          some/label: whatever-account
    templateInstances:
    - metadata:
        creationTimestamp: null
      spec:
        template: space-restrictions
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: space-restrictions
  ...
resources:
  manifests:
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-some-communication
    spec:
      ...
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-some-egress-communication
    spec:
      ...

Upon creation of the space, the template was successfully executed and I could see my Network Policy being created in the Space. Then I went ahead, changed the template by adding a second Network Policy, and let ArgoCD upgrade the resource.

Now, 30 minutes after the Template was changed in the cluster, the updated Template was still not executed in the Space.
Is there a way to trigger the reconciliation of the template instances?

kubens support

Hi,

kubens allows fast switching between namespaces by letting you choose a namespace from a list. The drawback is that the current implementation gets the list of namespaces from the ns resource.

kubens is licensed under Apache 2 and modifying the code to support the spaces API is trivial. It could benefit the kiosk community to provide such a tool to space admin users.

Kiosk server port is not configurable

Hi,

I have found out that the Kiosk server port is not configurable at the moment:

o.RecommendedOptions.SecureServing.BindPort = 8443

Kiosk listens on the default port 8443, which is currently hardcoded.

When using the Helm chart with hostNetwork: true, this will cause issues when other applications already use that port. Kiosk will then crash on startup with the (expected) error:

panic: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use

Using hostNetwork: true is required when using an overlay network (e.g. with WeaveNet or Cilium) and you still want admission webhooks to work in environments like AWS EKS. Running the pods in the host network is required because otherwise the API server managed by EKS is not able to communicate with the pod.

unable to retrieve the complete list of server APIs: tenancy.kiosk.sh/v1alpha1

hi all,

We are stuck when installing kiosk on new AKS clusters (1.20.06) with the following error: unable to retrieve the complete list of server APIs: tenancy.kiosk.sh/v1alpha1: the server is currently unable to handle the request

The logs are from the kiosk pod, which was installed with Helm. We get the same error with kubectl api-resources.

kiosk version 0.2.0

Important information: the same helm installation works fine on an older Kubernetes version (1.18).

Any help will be appreciated.

Regards.

Field is immutable

When trying to create a space as a user (not cluster-admin) I get the following error:

$ cat << 'EOF' | kubectl apply --kubeconfig joe_bloggs_config -f -
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: joes-space
EOF
The Space "joes-space" is invalid: metadata.annotations: Invalid value: map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"tenancy.kiosk.sh/v1alpha1\",\"kind\":\"Space\",\"metadata\":{\"annotations\":{},\"name\":\"joes-space\"}}\n"}: field is immutable, try updating the namespace
kubectl auth can-i --kubeconfig joe_bloggs_config create Space
Warning: resource 'spaces' is not namespace scoped in group 'tenancy.kiosk.sh'
yes

Account: allow space metadata & template instance metadata

Currently it is not possible to predefine labels and annotations for spaces that will be created on demand by a certain account. The same problem holds true for template instances that are created by default on space creation.

We think this is a very prominent use case (e.g. selecting namespaces for applying certain admission rules) and should be addressed. It is probably best to restructure the account schema for this:

apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
spec:
  space:
    clusterRole: ... (the old Account.spec.spaceClusterRole)
    limit: ... (the old Account.spec.spaceLimit)
    templateInstances: []TemplateInstance (the old Account.spec.spaceDefaultTemplates and Switch type from TemplateInstanceSpec to TemplateInstance (with metadata))
    spaceTemplate: Space (new)

This new schema would allow users to specify metadata for the created spaces and templateInstances. If several sets of annotations or labels should be allowed for a certain account, it is probably best to create two accounts for these users and change the needed set on space creation via the Space.spec.account property.

accountquotas resources cannot be edited after upgrade k8s to 1.22

Hi,

we are happy users of kiosk :)

But after upgrading our k8s cluster (AKS) to version 1.22, we can't edit kiosk accountquotas resources.

For creating resources we are using this helm chart.

$ kc apply -f xxx2.yaml (I want to change the pods quota manually)
Warning: resource accountquotas/patrik.majer is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
Error from server (strict decoder error for {"apiVersion":"config.kiosk.sh/v1alpha1","kind":"AccountQuota","metadata":{[…],"managedFields":[[…],{"apiVersion":"config.kiosk.sh/v1alpha1","fieldsType":"FieldsV1","fieldsV1":{[…]},"manager":"kiosk","operation":"Update","subresource":"status","time":"2022-02-04T11:24:06Z"},[…]],"name":"patrik.majer","resourceVersion":"18730242","uid":"b59fa6a7-eaa3-4b00-8e52-1e7c3e9435c9"},"spec":{"account":"patrik.majer","quota":{"hard":{"limits.cpu":"32","limits.memory":"128G","pods":"165"}}},"status":{[…]}}: v1alpha1.AccountQuota.ObjectMeta: v1.ObjectMeta.ManagedFields: []v1.ManagedFieldsEntry: v1.ManagedFieldsEntry.Time: ReadObject: found unknown field: subresource, error found in #10 byte of ...|bresource":"status",|..., bigger context ...|anager":"kiosk","operation":"Update","subresource":"status","time":"2022-02-04T11:24:06Z"},{"apiVers|...): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"[…]"}},"spec":{"quota":{"hard":{"pods":"165"}}}}
to:
Resource: "config.kiosk.sh/v1alpha1, Resource=accountquotas", GroupVersionKind: "config.kiosk.sh/v1alpha1, Kind=AccountQuota"
Name: "patrik.majer", Namespace: ""
for: "xxx2.yaml": admission webhook "config.kiosk.sh" denied the request: strict decoder error for {[…same AccountQuota object as above…]}: v1alpha1.AccountQuota.ObjectMeta: v1.ObjectMeta.ManagedFields: []v1.ManagedFieldsEntry: v1.ManagedFieldsEntry.Time: ReadObject: found unknown field: subresource, error found in #10 byte of ...|bresource":"status",|..., bigger context ...|anager":"kiosk","operation":"Update","subresource":"status","time":"2022-02-04T11:24:06Z"},{"apiVers|...

Any idea where the problem is?

Can't create a template instance based on Helm

Hi,

thanks for the great work you've done with this repo!

While testing the Helm-based template instance, I end up with this error in the kiosk controller, using the redis example from the Readme:

Template instance dev-space3/redis-instance failed: Error during helm template: Failed to unmarshal manifest: error unmarshaling JSON: Object 'Kind' is missing in '{\"WARNING\":\"This chart is deprecated\"}'"}

Also, is it possible to have a mix of manifests + Helm in the template?
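On that second question, a sketch of what a mixed template might look like, assuming resources accepts both keys at once (resources.manifests and resources.helm each exist in the Template CRD individually, but whether they can be combined is exactly what is being asked, so treat this as untested):

# template-mixed.yaml (hypothetical, untested combination)
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: redis-with-policy
resources:
  # plain manifests deployed alongside the chart
  manifests:
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: deny-ingress
      spec:
        podSelector: {}
        policyTypes:
          - Ingress
  helm:
    releaseName: redis
    chart:
      repository:
        name: redis
        repoUrl: https://charts.bitnami.com/bitnami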

thanks

Unable to create template instance with parameters

When trying to create a TemplateInstance resource for a Template that has parameters, the status of the resource has the message: parameter TEST_NAME does not exist in template vcluster

# template.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: vcluster
  namespace: kiosk
resources:
  helm:
    releaseName: vcluster
    chart:
      repository:
        name: vcluster
        repoUrl: https://charts.loft.sh
        version: "0.11.1"
    values: |
      sync:
        serviceaccounts:
          enabled: true
      syncer:
        kubeConfigContextName: "${NAME}"
      ingress:
        enabled: true
        ingressClassName: nginx-internal
        host: "${NAME}.clusters.${PLATFORM}.my-domain.dev"
# template-instance.yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: TemplateInstance
metadata:
  name: test-vcluster-vcluster
  namespace: test-vcluster-space
spec:
  parameters:
    - name: NAME
      value: test-vcluster-vcluster
    - name: PLATFORM
      value: moshpit
  template: vcluster
status:
  message: parameter NAME does not exist in template vcluster
  reason: ErrorValidatingParameters
  status: Failed

I've tried defining the parameters with ${{..}}, ${..}, and $.., but none of them work.
Kiosk Version: v0.2.11
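
For reference, the error suggests the controller validates the TemplateInstance parameters against a parameter list declared on the Template itself, and the template above declares none. A sketch of declaring them, assuming the v1alpha1 Template supports a top-level parameters list as in kiosk's parameterized-template examples (treat the exact field names as an assumption, not verified against v0.2.11):

# template.yaml (sketch: parameters declared before being referenced)
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: vcluster
  namespace: kiosk
parameters:          # assumed field, per kiosk's parameterized templates
  - name: NAME
    required: true
  - name: PLATFORM
    required: true
resources:
  helm:
    releaseName: vcluster
    chart:
      repository:
        name: vcluster
        repoUrl: https://charts.loft.sh
        version: "0.11.1"
    values: |
      sync:
        serviceaccounts:
          enabled: true
      syncer:
        kubeConfigContextName: "${NAME}"
      ingress:
        enabled: true
        ingressClassName: nginx-internal
        host: "${NAME}.clusters.${PLATFORM}.my-domain.dev"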

Kiosk Prometheus Metrics

Hi there,

Are there any plans for adding Prometheus metrics (accounts, spaces, quota sets)? Or do you have any recommendations on how to monitor tenant resources?

Cheers
Chris

Is it possible to use kiosk with SAML2-based ADFS, and how?

I am currently using OpenUnison as an OIDC provider, which works well with the company's SAML2 IdP. I saw that kiosk works with Dex, and I am wondering whether it also works with OpenUnison, which is a similar thing to Dex. The problem is that a user group in a SAML2 assertion has no pre-defined user group resource in Kubernetes, because Kubernetes does not provide an API for end users to create user groups. So my question is: does kiosk automatically link a user group from the SAML2 assertion to an Account CRD resource on the kiosk level?
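
For context: Kubernetes groups are just strings asserted by the authenticator, so there is no group resource to pre-create, and kiosk Account subjects use the standard RBAC subject format. A group name that your OIDC/SAML layer (OpenUnison, Dex, etc.) injects into the token's groups claim can therefore be bound directly to an Account; a minimal sketch (the group name is hypothetical):

apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: dev-team-account
spec:
  subjects:
    # must match a group name carried in the authenticated user's token
    - kind: Group
      name: dev-team
      apiGroup: rbac.authorization.k8s.io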

Kubernetes server-side apply is unsupported

apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: e-chop-account
spec:
  subjects:
    - kind: User
      name: e-chop
      apiGroup: rbac.authorization.k8s.io

kubectl apply --dry-run=server --server-side -f account.yaml

error message:
UnsupportedMediaType, error: 415: Unsupported Media Type

TemplateInstance: fail if a cluster-scoped resource would be deployed

At the moment it is possible to define cluster-scoped resources in a Template and deploy them through a TemplateInstance in a namespace. This can lead to problematic behavior: two TemplateInstances in two different namespaces that point to the same Template (which is allowed and encouraged) would override each other's cluster resource, so the current result is essentially undefined behavior.

Hence, during deployment of a TemplateInstance we should check whether each given resource is namespaced or cluster-scoped, and let the TemplateInstance fail if it contains a cluster-scoped resource.
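
To make the conflict concrete, a minimal example (all names hypothetical): a Template embedding a cluster-scoped ClusterRole, instantiated from two namespaces, where both TemplateInstances resolve to the same cluster-wide object and override each other:

apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: shared-clusterrole
resources:
  manifests:
    # cluster-scoped: both instances below map to the SAME object
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: tenant-viewer
      rules:
        - apiGroups: [""]
          resources: ["pods"]
          verbs: ["get", "list"]
---
apiVersion: config.kiosk.sh/v1alpha1
kind: TemplateInstance
metadata:
  name: viewer
  namespace: tenant-a   # first owner
spec:
  template: shared-clusterrole
---
apiVersion: config.kiosk.sh/v1alpha1
kind: TemplateInstance
metadata:
  name: viewer
  namespace: tenant-b   # second owner silently takes over the ClusterRole
spec:
  template: shared-clusterrole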

Account: allow applying custom labels/annotations on namespaces

Issue #32 lets an account add predefined metadata to all of its namespaces. The idea is to extend this to more dynamic values.
The use case is for an account to label/annotate its namespaces with custom values so that it can use label selectors.

If security is a concern, the Space could perhaps allow only certain prefixes to be used.

apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
spec:
  space:
    # existing fields are kept
    allowCustomMetadata: true/false
    # or
    allowedCustomMetadataPrefixes:
      - custom.foo.bar/
---
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: johns-space
spec:
  labels:
    env: test
    # or
    custom.foo.bar/env: test
  annotations:
    domain: bar    
    # or
    custom.foo.bar/domain: bar

Another design would be to replicate the labels and annotations present on the Space itself onto the namespace.

apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: johns-space
  labels:
    env: test
    # or
    custom.foo.bar/env: test
  annotations:
    domain: bar    
    # or
    custom.foo.bar/domain: bar
spec: {}

Spaces cannot properly be created by ArgoCD; resources show as OutOfSync

Hello,

I am trying to create accounts and spaces in a GitOps way with ArgoCD (version 2.0.0).
It looks like the Space, and implicitly the namespace for that Space, is created successfully by ArgoCD:

> kubectl get space johns-space
NAME          OWNER           CREATED AT
johns-space   johns-account   2021-06-16T10:25:39Z

> kubectl get namespace johns-space
NAME          STATUS    AGE
johns-space   Active    5m41s

The problem is that ArgoCD cannot recognize the live manifest of the Space resource; it shows the resource as OutOfSync, and the resource occasionally disappears from the ArgoCD web UI:
[screenshot: ArgoCD UI showing the Space resource as OutOfSync]

The same problem occurred when I tried to add an Account from the tenancy.kiosk.sh/Account resource. When using the config.kiosk.sh/Account resource, ArgoCD can sync the resources and ends up in the Synced state.

The documentation outlines that tenancy.kiosk.sh is a virtual API extension and is not persisted to etcd.

Any idea on how to fix this problem?
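
One possible direction, sketched under the assumption that the virtual tenancy.kiosk.sh resources simply cannot be diffed reliably: keep the Git manifests on the persisted config.kiosk.sh group (which, as noted above, syncs fine) and, if the virtual group still shows up, exclude it from ArgoCD tracking via resource.exclusions in the argocd-cm ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # stop ArgoCD from tracking the virtual, non-persisted API group
  resource.exclusions: |
    - apiGroups:
        - tenancy.kiosk.sh
      kinds:
        - "*"
      clusters:
        - "*"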

Add watch endpoint for tenancy.kiosk.sh

Currently the kiosk apiserver does not support Watch operations on spaces and accounts. The problem with this operation is that we would have to create a filtered view based on the requesting user and their group memberships, which the current auth cache implementation does not support.

An obvious workaround for privileged users is to watch the underlying resources (namespaces & accounts.config.kiosk.sh) and do the filtering themselves; for unprivileged users, however, this is currently not possible. While I think the Watch operation is certainly necessary (at least for the sake of completeness), I'm not sure about its priority. Are there any tools that would need / require this?

Delete Account doesn't delete the underlying spaces

Environment:
k8s: 1.19.7
kiosk: 0.2.4

Hello,
I just noticed that if an Account is deleted, the spaces under that account still exist. It would be nice if all existing resources under the Account were deleted as well, just like deleting a namespace deletes all underlying resources.
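
For context on how cascading deletion usually works in Kubernetes: the garbage collector deletes objects that carry an ownerReference to a deleted owner. A sketch of what the namespace backing a space could look like if kiosk set such a reference (the uid is hypothetical; this is not what kiosk currently does, which is what this issue is asking for):

apiVersion: v1
kind: Namespace
metadata:
  name: johns-space
  ownerReferences:
    # with this reference, deleting the Account lets the garbage
    # collector delete the namespace and everything inside it
    - apiVersion: config.kiosk.sh/v1alpha1
      kind: Account
      name: johns-account
      uid: 00000000-0000-0000-0000-000000000000  # hypothetical
      controller: true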

What do you think?

Thanks.

Option to choose namespace destination for helm chart

Hello,

We would like to use the template system of kiosk to let users install some packages without having all the permissions to do it themselves (like an operator would).

For example, we need to install a Helm chart in a specific namespace (created with the AKS cluster) with specific permissions (more restricted than the permissions in the space).
For instance, our monitoring package needs a service account with a ClusterRoleBinding + ClusterRole. This ClusterRole has more permissions than a normal user.
In the dedicated namespace, named monitoring, users cannot access Secrets or ServiceAccounts (the namespace and permissions are created neither by kiosk nor by the user, and are not managed by kiosk).

I've created a template for a Helm chart with a value namespace: monitoring to force the installation into this namespace.
My issue is that if the user creates the TemplateInstance in, let's say, the project-dev namespace, the Helm chart is installed in project-dev and not in the monitoring namespace.

Your helm template command renders the files with namespace: monitoring, but later in your code you replace the namespace value with a different one (the one used to create the TemplateInstance).

Is it possible to have an option in the template to choose the namespace where the Helm chart will be installed?

Users cannot modify templates, so each time they want to install a TemplateInstance, I can (for some of my charts with extended permissions) set the target namespace per template if needed.

Let's say: useTargetNamespace: true/false
If not set, or set to true: same behaviour as before.
If set to false: Helm decides (so either a forced namespace or the target namespace); see the sketch below.
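
A sketch of how that could look on a template (useTargetNamespace is the proposed, hypothetical field; the chart details are placeholders):

apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: monitoring-stack
resources:
  helm:
    releaseName: monitoring
    # proposed field: when false, keep the namespace rendered by Helm
    # instead of overriding it with the TemplateInstance's namespace
    useTargetNamespace: false
    chart:
      repository:
        name: my-monitoring-chart        # placeholder
        repoUrl: https://charts.example.com
    values: |
      namespace: monitoring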

Does it make sense ?

Kiosk restarted several times

kiosk restarted several times with the error below.

Version: kiosksh/kiosk:0.2.4

~ k get po -l app=kiosk
NAME                     READY   STATUS    RESTARTS   AGE
kiosk-6bbff86fd4-gts8n   1/1     Running   9          40d

➜  ~ k logs -p kiosk-6bbff86fd4-gts8n
E0513 03:51:13.342092       1 deleg.go:144] setup: unable to create client Get "https://10.80.0.1:443/api?timeout=32s": dial tcp 10.80.0.1:443: connect: connection refused

Is it possible to limit GPU usage with kiosk?

Hi,

I tried to limit the number of GPUs allowed in my account, but this does not seem to be taken into account.

Here is my AccountQuota:

apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: dev-account-quota
spec:
  account: dev-account
  quota:
    hard:
      pods: "8"
      limits.cpu: "8"
      limits.nvidia.com/gpu: 4

But it does not seem to find and count the GPUs already used by my pods.
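
One detail worth checking: upstream Kubernetes ResourceQuota only supports extended resources such as nvidia.com/gpu under the requests. prefix; limits.nvidia.com/gpu is not a valid quota key. Assuming kiosk's AccountQuota follows ResourceQuota semantics here, a corrected sketch would be:

apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: dev-account-quota
spec:
  account: dev-account
  quota:
    hard:
      pods: "8"
      limits.cpu: "8"
      # extended resources can only be quota'd via the requests. prefix
      requests.nvidia.com/gpu: "4"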

Proposal: AccountQuotaSet to automatically manage AccountQuotas

Problem

The current state of AccountQuota allows specifying exactly one account that the quota applies to. All resources within namespaces the account owns count towards the quota. Hence, a common setup is to have an account for each user, which in turn also has an AccountQuota.

One observation in this setup is that many of the AccountQuotas across accounts do not really differ and need to be kept in sync with some form of meta quota that applies to certain groups of accounts (e.g. developers, admins, etc.). Accomplishing this is not trivial with the current feature set of kiosk, since it requires manual work or scripting to keep those AccountQuotas in sync.

Solution

A solution to this problem could be the AccountQuotaSet. This resource ensures that an AccountQuota with the specified hard limits exists for a certain set of accounts. Changes to the AccountQuotaSet would be applied to the child AccountQuotas. If a new account satisfies the account label selector, an AccountQuota for this account is created. Conversely, if the labels of an account change (or the account is deleted), the corresponding AccountQuota would be deleted.

Potential Implementation

The implementation of the AccountQuotaSet would require 3 parts:

  • CRD: we would need a new CRD that defines the AccountQuotaSet (mostly an AccountQuota template and an account label selector)
  • Controller: we also need a new controller that handles the creation/updating/deletion of the corresponding AccountQuotas. One open question: how can we efficiently cache the accounts by their label selectors? Is it enough to search the informer cache via label selector every time?
  • Admission: should be straightforward and checks basically the same things the AccountQuota admission controller checks.
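
A sketch of what such an AccountQuotaSet could look like (all field names are hypothetical, derived purely from the description above):

apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuotaSet
metadata:
  name: developer-quotas
spec:
  # hypothetical: selects the accounts this set manages quotas for
  accountSelector:
    matchLabels:
      kiosk.sh/group: developers
  # hypothetical: stamped out as one AccountQuota per matching account
  quotaTemplate:
    spec:
      quota:
        hard:
          pods: "10"
          limits.cpu: "8"
          limits.memory: "16Gi"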

Can't create space

Hi, I am testing out kiosk on OpenShift and trying to create a space as shown in the tutorial at

https://github.com/kiosk-sh/kiosk#32-create-spaces

But I am getting the following error:

Error from server (Conflict): error when creating "https://raw.githubusercontent.com/kiosk-sh/kiosk/master/examples/space.yaml": Operation cannot be fulfilled on namespaces "johns-space": the object has been modified; please apply your changes to the latest version and try again
