kcp-dev / kcp
Kubernetes-like control planes for form-factors and use-cases beyond Kubernetes and container workloads.
Home Page: https://kcp.io
License: Apache License 2.0
For now, core KCP resources, such as ConfigMap, Secret, RoleBinding, ServiceAccount, etc., are not synced by the syncer, even if they are added to the list of resources to sync when starting KCP.
They should be taken into account by the syncer (see the sketch below).
This is a pre-requisite for issue #159
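A minimal sketch, assuming the fix is to merge the user-requested resources with a built-in core set before the syncer starts its informers. The names here are illustrative, not kcp's actual code:

```go
package main

import "fmt"

// coreResources would always be synced, even if not listed on the command line.
// (Hypothetical set; the real list would be decided in the syncer.)
var coreResources = []string{"configmaps", "secrets", "serviceaccounts", "rolebindings"}

// resourcesToSync merges the user-requested resources with the core set,
// de-duplicating along the way.
func resourcesToSync(requested []string) []string {
	seen := map[string]bool{}
	var out []string
	for _, r := range append(coreResources, requested...) {
		if !seen[r] {
			seen[r] = true
			out = append(out, r)
		}
	}
	return out
}

func main() {
	// Starting kcp with only "deployments" requested would still sync core resources.
	fmt.Println(resourcesToSync([]string{"deployments"}))
}
```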
Tuesday June 8, at noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
Tuesday May 25, at noon Eastern, 9am Pacific, 4pm UTC; find your time
Recording: https://www.youtube.com/watch?v=XMdXJqAV8oE
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below.
kcp currently hard-codes 6443 as its port. It could stay 6443 by default, with a --port flag to override it (see the sketch below).
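A minimal sketch of that proposal (the --port flag is hypothetical; kcp does not define it today):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Defaults to the current hard-coded value, overridable at start-up.
	port := flag.Int("port", 6443, "port for the kcp server to listen on")
	flag.Parse()
	fmt.Printf("kcp would listen on :%d\n", *port)
}
```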
Roughly make https://github.com/kcp-dev/kcp#what-does-it-do-for-me real
Demo
Meta-goals:
User goals:
Needs to cover the features a user might want, like:
And discuss the various efforts in the space like SPIFFE, ory, the cloud identity solutions, how someone could accumulate their own and expose via a mesh/egress proxy, etc.
When a namespace is created in a logical cluster but not in the KCP admin cluster, no resource can be created in that namespace.
Steps to reproduce:
rm -rf .kcp
make build
./bin/kcp start
export KUBECONFIG=.kcp/data/admin.kubeconfig
kubectl config use-context user
kubectl apply -f ./contrib/crds/apps/apps_deployments.yaml
kubectl apply -f ./contrib/examples/deployment.yaml
kubectl create namespace default
Error from server (NotFound): error when creating "./contrib/examples/deployment.yaml": namespaces "yoloswag" not found
The above scenario was also performed with a non-default namespace ("yoloswag"), with the same result.
Further experimentation shows that after a namespace is created on the admin cluster, the namespace can then be used by other logical clusters.
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
Follow the guidance at https://github.com/kcp-dev/kcp/blob/main/DEVELOPMENT.md, and I can add an OCP cluster to kcp as follows.
Guangyas-MacBook-Pro:kcp guangyaliu$ kubectl get cluster -oyaml
apiVersion: v1
items:
- apiVersion: cluster.example.dev/v1alpha1
  kind: Cluster
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"cluster.example.dev/v1alpha1","kind":"Cluster","metadata":{"annotations":{},"name":"local"},"spec":{"kubeconfig":"apiVersion: v1\nclusters:\n- cluster:\n insecure-skip-tls-verify: true\n server: https://api.crupper.cp.fyre.ibm.com:6443\n name: api-crupper-cp-fyre-ibm-com:6443\ncontexts:\n- context:\n cluster: api-crupper-cp-fyre-ibm-com:6443\n namespace: default\n user: kube:admin/api-crupper-cp-fyre-ibm-com:6443\n name: default/api-crupper-cp-fyre-ibm-com:6443/kube:admin\ncurrent-context: default/api-crupper-cp-fyre-ibm-com:6443/kube:admin\nkind: Config\npreferences: {}\nusers:\n- name: kube:admin/api-crupper-cp-fyre-ibm-com:6443\n user:\n token: xxxxx\n"}}
    clusterName: admin
    creationTimestamp: "2021-05-11T03:22:24Z"
    generation: 1
    managedFields:
    - apiVersion: cluster.example.dev/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:kubeconfig: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: "2021-05-11T03:22:24Z"
    name: local
    resourceVersion: "124"
    selfLink: /apis/cluster.example.dev/v1alpha1/clusters/local
    uid: 2a34a49d-eb13-4aa7-8831-314111f1575a
  spec:
    kubeconfig: |
      apiVersion: v1
      clusters:
      - cluster:
          insecure-skip-tls-verify: true
          server: https://api.crupper.cp.fyre.ibm.com:6443
        name: api-crupper-cp-fyre-ibm-com:6443
      contexts:
      - context:
          cluster: api-crupper-cp-fyre-ibm-com:6443
          namespace: default
          user: kube:admin/api-crupper-cp-fyre-ibm-com:6443
        name: default/api-crupper-cp-fyre-ibm-com:6443/kube:admin
      current-context: default/api-crupper-cp-fyre-ibm-com:6443/kube:admin
      kind: Config
      preferences: {}
      users:
      - name: kube:admin/api-crupper-cp-fyre-ibm-com:6443
        user:
          token: xxxxx
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
But when I create a deployment, the syncer-from-admin-xxx pod crashes in the downstream cluster.
Guangyas-MacBook-Pro:~ guangyaliu$ oc get pods -n syncer-system
NAME READY STATUS RESTARTS AGE
syncer-from-admin-54fdf49475-8hpj5 0/1 CrashLoopBackOff 1 22s
Guangyas-MacBook-Pro:~ guangyaliu$ oc logs -f syncer-from-admin-54fdf49475-8hpj5 -n syncer-system
F0511 03:23:32.232588 1 main.go:102] Get "https://[::1]:6443/api?timeout=32s": dial tcp [::1]:6443: connect: connection refused
Guangyas-MacBook-Pro:~ guangyaliu$ oc version
Client Version: 4.4.9
Server Version: 4.6.16
Kubernetes Version: v1.19.0+e49167a
Using a kind cluster also has the same issue.
I was following the guidance here https://github.com/kcp-dev/kcp/blob/main/DEVELOPMENT.md to add a KIND cluster to kcp, and it works well.
Guangyas-MacBook-Pro:kcp guangyaliu$ kubectl get cluster
NAME AGE
local 32m
Guangyas-MacBook-Pro:kcp guangyaliu$ kubectl get cluster -oyaml
apiVersion: v1
items:
- apiVersion: cluster.example.dev/v1alpha1
  kind: Cluster
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cluster.example.dev/v1alpha1","kind":"Cluster","metadata":{"annotations":{},"name":"local"},"spec":{"kubeconfig":"apiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdPVEV6TURReE5sb1hEVE14TURVd056RXpNRFF4Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFRkCkttQXc4cmJjTVV5OE92dzZiVk5RdTdhLy84Kys1aDBtUmFQNU91K2trMW5hSVZEbWhDZ1FjZm9jTlZjQzZTbWEKY3NHbENrdTdlR3lhUzBiakJHYlgwWkFkbXBFQ2hSLzdpODRMcThiL2tTVDIwKzE3bnBZYzJKRHlPVnJwcENDbAo5WG05MXF6SkVDakM4N1dQclNML2s3MUQ0bmYyTjRReUZQcVc1T1c1WWtZVE9aTVBNQlZyVTlSdkV4NFI5TkFTCnB3WmxyTFZnMmFlMlpOaTVQNDJVTVo3ZDZyRzlnZWxpelJyZk1wU0YrZFY3ZlZ1T3pqMjdNZGp5RytpamxIbXQKVFpRQ2tjTS9FRGVYSUpiaW1NNEZKSDd5YmRCd1Ywc0RQSmtBMW9BeEErMmVOb3doaGpJUmNOY2ZqNEZKUk1ZdQpWTG51WUtObGZDWnQ1b2ZPQUFVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKKzlmbWkxN3VRQ3FtVkg2UlBrS2VVWWVRWDBNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCMDczNzI5UmdUSzkzWllXcUhhNTQ0SVRzdTlHTFBCNkhXWVdSd3NFUEdtTW1qRnVDeQowZzFSMVZSWWlrUnhiNFFRbjJBbGEvMzZoNy91Wm1lWkNxQ01ON1FJdnZueHdoQjBoWXNuYVUrQ1diZm9qd3BYCnowQWFtdXdjZmVWTWlvTFg1M3JXam12bmJZZWJVYWdwNVNRdHp1dmkyVEtmU1k0QW9XMktjUHZZd0EwVjl1NVQKRUJEYjYvU2JQaGwvNXFPelhpeEUrOEFaYmlHWW5zdndnVDVrWks3RmgvRzBqdzZMeUxFL2F6VDVMUWRweWhDOQp4eHB6K0R6QTVHOWFRMlVhSHExaWJ1eXVFTjFKOUxScERjcHZLSGYxN2taTzRMUzVuZ1JIQjErN1BvNEVQa29iCkcrUmsxR0xjcGRBeDRmQXFJQjZremZ5dFNYRk5JS09xdHZCdwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==\n server: https://127.0.0.1:59510\n name: kind-kcp\ncontexts:\n- context:\n cluster: kind-kcp\n user: kind-kcp\n name: kind-kcp\ncurrent-context: kind-kcp\nkind: Config\npreferences: {}\nusers:\n- name: kind-kcp\n user:\n client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJWEI4WWV0NGV3MU13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1Ea3hNekEwTVRaYUZ3MHlNakExTURreE16QTBNVGhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBSOW55NlQ5YkVENytsVVEKNTI1SzJ4eGh0bW1CQ1hOVitvQ1lFaDdtOC9tbGo3WUt0ZFBwSEE3NmRMeUZJaG5zSEcrbjNveWh5NGpMLzZkTAptVWRwMTdURG5MVERFak1jNTFKTWhGQWxJTWNIbnRFckFVWk1JSDcrS3NQZnpJaHJJbVlKN2x5Nkl4TmVpVlhQCkIzQkV6bjhBekF1dEJBR2lsY0VKbVljNHBoMjBOMmlocURUUmdRSHByYWd0WTdFMUFiNnIwclIzL1M5cVg5RmUKOXh1YkUzSlZNalpYMjF5dFRmUTV0VnlFdkhOMnlpdzlLU2xFdlhTckVoRmlnYTVrM291Z2lxb1FtNkFNR09EQgpFVFNselk1ZVFIRkVQbHM1eWZvei9IY3BGVTU1MWpHaGFxVTdLYUxycng4a2lQWmxQZm5HOFZzdnQ0Z3hHSnV3CjlnYXNsUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVuNzErYUxYdTVBS3FaVWZwRStRcDVSaDVCZlF3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFLT3VLWHdTUGdoZEZOdThhM0NyQjZ2RGlLT3NxS3dKRGpOTVF6UXVhclU1OHhDNEFjWWt2TWRHCmJhT3Bjc0sxNm94UjdDMlNrR0pjcEZ4M0NMMFJBR1EvSE9qWmE3aVlxT3FIa010UHRhNkZFcVlRejUyTkJwdWMKeFNBK2p5ZWhtbi9Ec2xxUjIzbGI4SVRONjZMY3RPZnI2RDhyRlRKaW9XQnZsa3JSbG9MemlCRTRWa2s1eHZQVAozdUNFYjNNNWV0bVgwQnpLak9MWXU3SXlPaVVNTlJxdUVUazFGVXhMSlczU3JWRWYxRkdpZEtmQXJJTmF4UDFwCkxEdGc5OHpOTE5FSGlUMmFRdzh4N0huZ3I3bmlPZ250eUFiamhXaDhhMWt0ZlVuekI1TVc2RHllMi9LRmFLMlkKWUFZRG9XWjhGby9UWStFZnZtWkRFdEhFbnU2NEtEQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=\n client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMFI5bnk2VDliRUQ3K2xVUTUyNUsyeHhodG1tQkNYTlYrb0NZRWg3bTgvbWxqN1lLCnRkUHBIQTc2ZEx5RklobnNIRytuM295aHk0akwvNmRMbVVkcDE3VERuTFRERWpNYzUxSk1oRkFsSU1jSG50RXIKQVVaTUlINytLc1BmeklockltWUo3bHk2SXhOZWlWWFBCM0JFem44QXpBdXRCQUdpbGNFSm1ZYzRwaDIwTjJpaApxRFRSZ1FIcHJhZ3RZN0UxQWI2cjByUjMvUzlxWDlGZTl4dWJFM0pWTWpaWDIxeXRUZlE1dFZ5RXZITjJ5aXc5CktTbEV2WFNyRWhGaWdhNWszb3VnaXFvUW02QU1HT0RCRVRTbHpZNWVRSEZFUGxzNXlmb3ovSGNwRlU1NTFqR2gKYXFVN0thTHJyeDhraVBabFBmbkc4VnN2dDRneEdKdXc5Z2FzbFFJREFRQUJBb0lCQUJ1OTVORVpKQjFXU1pjZAo3YzRselh4ZnVYNnZaRlRTbmhkTE0rRkRoRFBkYkY4eU1SMko3U1N4di92NGZsalZ3NExLTlNUSzF6UGZBK0Q0Cldva005amVOWFYzT2hRcEhDWkNPVkdSQkZZdlJtMnN3S0ZwVTd3aG9rR012OU9KM1BtOGp4akYrejVxVG1UankKenpJWGJLbFFJOVR0djVnR2Q0RVNTS203VElLTXRNeVd6QWdUUG1rbDU1V3V1d01mODlNYVFXNU9zdGd3TndYUwplazJwR1pLZTBWUi8ybkhHWDIxbTRXWWdlV01qZlB1RjBFMFdSM1VsZ3U1ell0LzdoSStXck9GSUFXZUViQ0dQClkyWkhQc3A0U2xkRmw5NUpVaEdRQVl5UDJRb1ptVEFlbU0xQjVjQ3EyLzUwelQ0Mk1kMWVNZVRPQkVOSTA0dU8KWVBmQ3lRa0NnWUVBNzRxK1NSNndlQnYyUm9DUGw2RmFzMERGOHdDNjRNUG41aSswcG9TMU5TRldNQlQ1ckR2aApVeXNyREtVR2hia1VhUW9DWmVRb1Y0ZGFUVFU1Yjk0eVpzT0xPL1pvQjkvUG03UEttdjV3YjlZaUtkY2p3YSsxCnhXVkpYMGhPa2F1WS9CV1FqNUF2bXprMXRsZ3pYL2NzZDErSHJDdWdXUEFtUWd1ZzhSQk5IcGNDZ1lFQTMzMmYKak95MkNmOXk2VFcrWnZIa3dvL2FuT09qZFRlb2VYZjVZZEJaV05JOUFmbnRJN2FMQ21kaWtHSXl5eVEwOFdUcQo1REwzUUo1QXd1dks1TkZaV1R5SWVnUk5ReHZlNzhiclAyZ0ZtNERYeG1wZFlJV0t4RGxKSms5cDJzazJNSDdhCm8yQzE5bzB0U3J5NGJIQTh3OFpNYjVycktENzFYM1owc3FTS0g3TUNnWUJPcjZ6Q0tDcWZ5YytrYVNiQ0VHYlMKNnp3YkR1cFVXd1lhUHlHQWNhZDB4SGFqWk1CL0sweGhIWlVPbWtjQ05rSFdIMHVhWE5CRHNGcWhjaEprQlFGWgpjSEtVUitUMGNUaXBWTzRBN0FQVE9Pd1FBblBrYyt1cDVCT3VFUHArTDNnWmxwdmVET2NXZmp4K29ZcCt3NXIvCnU0bTlyTGNIZ0J1UkpuQy92ek1XRVFLQmdRQ3dObU9MZ01RVFkvZGRpNE9CdGEzeC9leVhrU0M3ZGxQKzJpcW0KQmRtOG41OThwR3Rtb2pKRTFMa3hNRXZ1UWJFQXQ5cEFiVExvSHg3ZTBYMWJKdmwwMDdhanhpcUpCRHVtQU1oUgphUm9xdnM3aTRkQ1lIeE1IbmtkZnpuT2ZEdEVNTWFqLzhtdm1adS9VSXJLaXhXZ1QwSkZKMmZNWiszSUtmK0tKClRCU2Y0d0tCZ0hpM00zcjFLU25SSUR3emhVQzk3emlmZ2Y4UlV6STVUU0VJWWRzSUtHTUNmcVNWNmJuanIzN3YKWnRoOHRTYkhwTnA2cHp2YXN0VWt3L05kK1A1dEI0bGdTMjhCaGRTSkhKWjFYNHVPK1lsMmtWdnQ4WGQ5ZzlFTApPVmhjQkhtVWhsKzVBMVdTOEdzZEljWVJhSVN2S0xQZXk3ZGNCWlc2VVU4S1pyUEJPbXdWCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==\n"}}
    clusterName: admin
    creationTimestamp: "2021-05-09T13:05:55Z"
    generation: 1
    managedFields:
    - apiVersion: cluster.example.dev/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:kubeconfig: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: "2021-05-09T13:05:55Z"
    name: local
    resourceVersion: "126"
    selfLink: /apis/cluster.example.dev/v1alpha1/clusters/local
    uid: ddd022a4-7df5-4628-8337-bbbd5d7cc95a
  spec:
    kubeconfig: |
      apiVersion: v1
      clusters:
      - cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdPVEV6TURReE5sb1hEVE14TURVd056RXpNRFF4Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFRkCkttQXc4cmJjTVV5OE92dzZiVk5RdTdhLy84Kys1aDBtUmFQNU91K2trMW5hSVZEbWhDZ1FjZm9jTlZjQzZTbWEKY3NHbENrdTdlR3lhUzBiakJHYlgwWkFkbXBFQ2hSLzdpODRMcThiL2tTVDIwKzE3bnBZYzJKRHlPVnJwcENDbAo5WG05MXF6SkVDakM4N1dQclNML2s3MUQ0bmYyTjRReUZQcVc1T1c1WWtZVE9aTVBNQlZyVTlSdkV4NFI5TkFTCnB3WmxyTFZnMmFlMlpOaTVQNDJVTVo3ZDZyRzlnZWxpelJyZk1wU0YrZFY3ZlZ1T3pqMjdNZGp5RytpamxIbXQKVFpRQ2tjTS9FRGVYSUpiaW1NNEZKSDd5YmRCd1Ywc0RQSmtBMW9BeEErMmVOb3doaGpJUmNOY2ZqNEZKUk1ZdQpWTG51WUtObGZDWnQ1b2ZPQUFVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKKzlmbWkxN3VRQ3FtVkg2UlBrS2VVWWVRWDBNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCMDczNzI5UmdUSzkzWllXcUhhNTQ0SVRzdTlHTFBCNkhXWVdSd3NFUEdtTW1qRnVDeQowZzFSMVZSWWlrUnhiNFFRbjJBbGEvMzZoNy91Wm1lWkNxQ01ON1FJdnZueHdoQjBoWXNuYVUrQ1diZm9qd3BYCnowQWFtdXdjZmVWTWlvTFg1M3JXam12bmJZZWJVYWdwNVNRdHp1dmkyVEtmU1k0QW9XMktjUHZZd0EwVjl1NVQKRUJEYjYvU2JQaGwvNXFPelhpeEUrOEFaYmlHWW5zdndnVDVrWks3RmgvRzBqdzZMeUxFL2F6VDVMUWRweWhDOQp4eHB6K0R6QTVHOWFRMlVhSHExaWJ1eXVFTjFKOUxScERjcHZLSGYxN2taTzRMUzVuZ1JIQjErN1BvNEVQa29iCkcrUmsxR0xjcGRBeDRmQXFJQjZremZ5dFNYRk5JS09xdHZCdwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
          server: https://127.0.0.1:59510
        name: kind-kcp
      contexts:
      - context:
          cluster: kind-kcp
          user: kind-kcp
        name: kind-kcp
      current-context: kind-kcp
      kind: Config
      preferences: {}
      users:
      - name: kind-kcp
        user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJWEI4WWV0NGV3MU13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1Ea3hNekEwTVRaYUZ3MHlNakExTURreE16QTBNVGhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBSOW55NlQ5YkVENytsVVEKNTI1SzJ4eGh0bW1CQ1hOVitvQ1lFaDdtOC9tbGo3WUt0ZFBwSEE3NmRMeUZJaG5zSEcrbjNveWh5NGpMLzZkTAptVWRwMTdURG5MVERFak1jNTFKTWhGQWxJTWNIbnRFckFVWk1JSDcrS3NQZnpJaHJJbVlKN2x5Nkl4TmVpVlhQCkIzQkV6bjhBekF1dEJBR2lsY0VKbVljNHBoMjBOMmlocURUUmdRSHByYWd0WTdFMUFiNnIwclIzL1M5cVg5RmUKOXh1YkUzSlZNalpYMjF5dFRmUTV0VnlFdkhOMnlpdzlLU2xFdlhTckVoRmlnYTVrM291Z2lxb1FtNkFNR09EQgpFVFNselk1ZVFIRkVQbHM1eWZvei9IY3BGVTU1MWpHaGFxVTdLYUxycng4a2lQWmxQZm5HOFZzdnQ0Z3hHSnV3CjlnYXNsUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVuNzErYUxYdTVBS3FaVWZwRStRcDVSaDVCZlF3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFLT3VLWHdTUGdoZEZOdThhM0NyQjZ2RGlLT3NxS3dKRGpOTVF6UXVhclU1OHhDNEFjWWt2TWRHCmJhT3Bjc0sxNm94UjdDMlNrR0pjcEZ4M0NMMFJBR1EvSE9qWmE3aVlxT3FIa010UHRhNkZFcVlRejUyTkJwdWMKeFNBK2p5ZWhtbi9Ec2xxUjIzbGI4SVRONjZMY3RPZnI2RDhyRlRKaW9XQnZsa3JSbG9MemlCRTRWa2s1eHZQVAozdUNFYjNNNWV0bVgwQnpLak9MWXU3SXlPaVVNTlJxdUVUazFGVXhMSlczU3JWRWYxRkdpZEtmQXJJTmF4UDFwCkxEdGc5OHpOTE5FSGlUMmFRdzh4N0huZ3I3bmlPZ250eUFiamhXaDhhMWt0ZlVuekI1TVc2RHllMi9LRmFLMlkKWUFZRG9XWjhGby9UWStFZnZtWkRFdEhFbnU2NEtEQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMFI5bnk2VDliRUQ3K2xVUTUyNUsyeHhodG1tQkNYTlYrb0NZRWg3bTgvbWxqN1lLCnRkUHBIQTc2ZEx5RklobnNIRytuM295aHk0akwvNmRMbVVkcDE3VERuTFRERWpNYzUxSk1oRkFsSU1jSG50RXIKQVVaTUlINytLc1BmeklockltWUo3bHk2SXhOZWlWWFBCM0JFem44QXpBdXRCQUdpbGNFSm1ZYzRwaDIwTjJpaApxRFRSZ1FIcHJhZ3RZN0UxQWI2cjByUjMvUzlxWDlGZTl4dWJFM0pWTWpaWDIxeXRUZlE1dFZ5RXZITjJ5aXc5CktTbEV2WFNyRWhGaWdhNWszb3VnaXFvUW02QU1HT0RCRVRTbHpZNWVRSEZFUGxzNXlmb3ovSGNwRlU1NTFqR2gKYXFVN0thTHJyeDhraVBabFBmbkc4VnN2dDRneEdKdXc5Z2FzbFFJREFRQUJBb0lCQUJ1OTVORVpKQjFXU1pjZAo3YzRselh4ZnVYNnZaRlRTbmhkTE0rRkRoRFBkYkY4eU1SMko3U1N4di92NGZsalZ3NExLTlNUSzF6UGZBK0Q0Cldva005amVOWFYzT2hRcEhDWkNPVkdSQkZZdlJtMnN3S0ZwVTd3aG9rR012OU9KM1BtOGp4akYrejVxVG1UankKenpJWGJLbFFJOVR0djVnR2Q0RVNTS203VElLTXRNeVd6QWdUUG1rbDU1V3V1d01mODlNYVFXNU9zdGd3TndYUwplazJwR1pLZTBWUi8ybkhHWDIxbTRXWWdlV01qZlB1RjBFMFdSM1VsZ3U1ell0LzdoSStXck9GSUFXZUViQ0dQClkyWkhQc3A0U2xkRmw5NUpVaEdRQVl5UDJRb1ptVEFlbU0xQjVjQ3EyLzUwelQ0Mk1kMWVNZVRPQkVOSTA0dU8KWVBmQ3lRa0NnWUVBNzRxK1NSNndlQnYyUm9DUGw2RmFzMERGOHdDNjRNUG41aSswcG9TMU5TRldNQlQ1ckR2aApVeXNyREtVR2hia1VhUW9DWmVRb1Y0ZGFUVFU1Yjk0eVpzT0xPL1pvQjkvUG03UEttdjV3YjlZaUtkY2p3YSsxCnhXVkpYMGhPa2F1WS9CV1FqNUF2bXprMXRsZ3pYL2NzZDErSHJDdWdXUEFtUWd1ZzhSQk5IcGNDZ1lFQTMzMmYKak95MkNmOXk2VFcrWnZIa3dvL2FuT09qZFRlb2VYZjVZZEJaV05JOUFmbnRJN2FMQ21kaWtHSXl5eVEwOFdUcQo1REwzUUo1QXd1dks1TkZaV1R5SWVnUk5ReHZlNzhiclAyZ0ZtNERYeG1wZFlJV0t4RGxKSms5cDJzazJNSDdhCm8yQzE5bzB0U3J5NGJIQTh3OFpNYjVycktENzFYM1owc3FTS0g3TUNnWUJPcjZ6Q0tDcWZ5YytrYVNiQ0VHYlMKNnp3YkR1cFVXd1lhUHlHQWNhZDB4SGFqWk1CL0sweGhIWlVPbWtjQ05rSFdIMHVhWE5CRHNGcWhjaEprQlFGWgpjSEtVUitUMGNUaXBWTzRBN0FQVE9Pd1FBblBrYyt1cDVCT3VFUHArTDNnWmxwdmVET2NXZmp4K29ZcCt3NXIvCnU0bTlyTGNIZ0J1UkpuQy92ek1XRVFLQmdRQ3dObU9MZ01RVFkvZGRpNE9CdGEzeC9leVhrU0M3ZGxQKzJpcW0KQmRtOG41OThwR3Rtb2pKRTFMa3hNRXZ1UWJFQXQ5cEFiVExvSHg3ZTBYMWJKdmwwMDdhanhpcUpCRHVtQU1oUgphUm9xdnM3aTRkQ1lIeE1IbmtkZnpuT2ZEdEVNTWFqLzhtdm1adS9VSXJLaXhXZ1QwSkZKMmZNWiszSUtmK0tKClRCU2Y0d0tCZ0hpM00zcjFLU25SSUR3emhVQzk3emlmZ2Y4UlV6STVUU0VJWWRzSUtHTUNmcVNWNmJuanIzN3YKWnRoOHRTYkhwTnA2cHp2YXN0VWt3L05kK1A1dEI0bGdTMjhCaGRTSkhKWjFYNHVPK1lsMmtWdnQ4WGQ5ZzlFTApPVmhjQkhtVWhsKzVBMVdTOEdzZEljWVJhSVN2S0xQZXk3ZGNCWlc2VVU4S1pyUEJPbXdWCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Then I try to create a deployment on this cluster via the file at https://github.com/kcp-dev/kcp/blob/main/contrib/demo/deployment.yaml, but the deployment does not sync to the cluster.
Guangyas-MacBook-Pro:kcp guangyaliu$ kubectl get deploy
NAME AGE
my-deployment 28m
KCP log
I0509 21:34:31.349351 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:34:31.443680 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:34:31.444215 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/deployments.apps
I0509 21:34:31.450223 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:34:43.299999 72552 store.go:959] Setting ClusterName admin in appendListItem
E0509 21:34:47.552727 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: the server could not find the requested resource
E0509 21:35:06.291678 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: the server could not find the requested resource
I0509 21:35:31.707097 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:35:31.707674 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/pods.core
I0509 21:35:31.714302 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:35:31.794398 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:35:31.794985 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/deployments.apps
I0509 21:35:31.801134 72552 store.go:922] Setting ClusterName admin in appendListItem
E0509 21:35:36.010887 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: the server could not find the requested resource
I0509 21:35:43.285823 72552 store.go:788] DEBUG: key=/registry/core/configmaps/admin/kube-system willExtractCluster=false
E0509 21:36:01.107079 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: the server could not find the requested resource
E0509 21:36:16.784841 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: the server could not find the requested resource
I0509 21:36:16.857157 72552 store.go:959] Setting ClusterName admin in appendListItem
I0509 21:36:32.065317 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:36:32.065984 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/pods.core
I0509 21:36:32.072383 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:36:32.154347 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:36:32.155253 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/deployments.apps
I0509 21:36:32.162371 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:36:33.830929 72552 store.go:788] DEBUG: key=/registry/core/namespaces/admin willExtractCluster=false
W0509 21:36:33.874206 72552 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
I0509 21:36:35.395479 72552 store.go:956] Setting ClusterName admin in appendListItem
I0509 21:36:35.395540 72552 store.go:956] Setting ClusterName admin in appendListItem
I0509 21:36:35.395583 72552 store.go:956] Setting ClusterName admin in appendListItem
I0509 21:36:35.396221 72552 store.go:788] DEBUG: key=/registry/core/namespaces/admin willExtractCluster=false
E0509 21:36:46.590196 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: the server could not find the requested resource
E0509 21:37:06.906926 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: the server could not find the requested resource
I0509 21:37:32.435733 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:37:32.436276 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/pods.core
I0509 21:37:32.442313 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:37:32.519805 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:37:32.520412 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/deployments.apps
I0509 21:37:32.526447 72552 store.go:922] Setting ClusterName admin in appendListItem
E0509 21:37:38.476219 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: the server could not find the requested resource
E0509 21:37:44.432423 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: the server could not find the requested resource
I0509 21:37:57.943093 72552 store.go:959] Setting ClusterName admin in appendListItem
I0509 21:37:59.434644 72552 store.go:788] DEBUG: key=/registry/cluster.example.dev/clusters willExtractCluster=true
I0509 21:38:02.367513 72552 store.go:959] Setting ClusterName admin in appendListItem
I0509 21:38:32.799359 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:38:32.799898 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/pods.core
I0509 21:38:32.805933 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:38:32.882014 72552 store.go:922] Setting ClusterName admin in appendListItem
I0509 21:38:32.883069 72552 store.go:618] DEBUG: GET key func returned: /apiextensions.k8s.io/customresourcedefinitions/admin/deployments.apps
I0509 21:38:32.889971 72552 store.go:922] Setting ClusterName admin in appendListItem
E0509 21:38:35.031209 72552 reflector.go:178] k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: the server could not find the requested resource
syncer log
Guangyas-MacBook-Pro:kcp guangyaliu$ go run ./cmd/cluster-controller \
> --syncer_image=$(ko publish ./cmd/syncer) \
> --kubeconfig=.kcp/data/admin.kubeconfig
2021/05/09 20:58:37 Using base gcr.io/distroless/static:nonroot for github.com/kcp-dev/kcp/cmd/syncer
2021/05/09 20:58:37 No matching credentials were found for "gcr.io/distroless/static", falling back on anonymous
2021/05/09 20:58:44 Building github.com/kcp-dev/kcp/cmd/syncer for linux/amd64
2021/05/09 20:58:48 Publishing docker.io/gyliu/syncer-c2e3073d5026a8f7f2c47a50c16bdbec:latest
2021/05/09 20:59:00 existing blob: sha256:72164b581b02b1eb297b403bcc8fc1bfa245cb52e103a3a525a0835a58ff58e2
2021/05/09 20:59:00 existing blob: sha256:96061d0346c5121ef1e42a1142295d9a8cbd985e4177d2d7e276f9623434b6d5
2021/05/09 20:59:00 existing blob: sha256:5dea5ec2316d4a067b946b15c3c4f140b4f2ad607e73e9bc41b673ee5ebb99a3
2021/05/09 20:59:00 existing blob: sha256:e466343204d2e2456849b1ef0303a961737f2b1ef978b559ff38abd23c9189a8
2021/05/09 20:59:01 docker.io/gyliu/syncer-c2e3073d5026a8f7f2c47a50c16bdbec:latest: digest: sha256:24bf63f061227bc069e0ec0866294a50bef1c8c735b0405b12c869263855ca81 size: 751
2021/05/09 20:59:01 Published docker.io/gyliu/syncer-c2e3073d5026a8f7f2c47a50c16bdbec@sha256:24bf63f061227bc069e0ec0866294a50bef1c8c735b0405b12c869263855ca81
2021-05-09 20:59:08.539318 I | Starting workers
2021-05-09 21:05:55.446864 I | reconciling cluster local
2021-05-09 21:05:55.920282 I | syncer installing...
2021-05-09 21:05:55.920313 I | no update
2021-05-09 21:05:55.920319 I | Successfully reconciled admin#$#local
2021-05-09 21:06:55.924270 I | reconciling cluster local
2021-05-09 21:06:56.285600 I | syncer installing...
2021-05-09 21:06:56.285624 I | no update
2021-05-09 21:06:56.285629 I | Successfully reconciled admin#$#local
2021-05-09 21:07:56.285758 I | reconciling cluster local
2021-05-09 21:07:56.660182 I | syncer installing...
2021-05-09 21:07:56.660208 I | no update
2021-05-09 21:07:56.660214 I | Successfully reconciled admin#$#local
2021-05-09 21:08:56.663536 I | reconciling cluster local
2021-05-09 21:08:57.034414 I | syncer installing...
2021-05-09 21:08:57.034440 I | no update
2021-05-09 21:08:57.034446 I | Successfully reconciled admin#$#local
2021-05-09 21:09:57.036355 I | reconciling cluster local
2021-05-09 21:09:57.398941 I | syncer installing...
2021-05-09 21:09:57.398968 I | no update
2021-05-09 21:09:57.398972 I | Successfully reconciled admin#$#local
2021-05-09 21:10:57.401106 I | reconciling cluster local
2021-05-09 21:10:57.772594 I | syncer installing...
2021-05-09 21:10:57.772620 I | no update
2021-05-09 21:10:57.772624 I | Successfully reconciled admin#$#local
2021-05-09 21:11:57.776206 I | reconciling cluster local
2021-05-09 21:11:58.120752 I | syncer installing...
2021-05-09 21:11:58.120782 I | no update
2021-05-09 21:11:58.120789 I | Successfully reconciled admin#$#local
2021-05-09 21:12:58.125169 I | reconciling cluster local
2021-05-09 21:12:58.462011 I | syncer installing...
2021-05-09 21:12:58.462041 I | no update
2021-05-09 21:12:58.462045 I | Successfully reconciled admin#$#local
2021-05-09 21:13:58.464764 I | reconciling cluster local
2021-05-09 21:13:58.816481 I | syncer installing...
2021-05-09 21:13:58.816505 I | no update
2021-05-09 21:13:58.816512 I | Successfully reconciled admin#$#local
2021-05-09 21:14:58.803503 I | reconciling cluster local
2021-05-09 21:14:59.167277 I | syncer installing...
2021-05-09 21:14:59.167302 I | no update
2021-05-09 21:14:59.167307 I | Successfully reconciled admin#$#local
2021-05-09 21:15:59.167426 I | reconciling cluster local
2021-05-09 21:15:59.515296 I | syncer installing...
2021-05-09 21:15:59.515323 I | no update
2021-05-09 21:15:59.515328 I | Successfully reconciled admin#$#local
It would be very useful to write tests for the API Negotiation controller.
The idea is to write declarative tests that clearly and formally capture the intent of the API negotiation.
The preferred framework would be KUTTL.
Tuesday June 8, at noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
I have tested kcp with two Kubernetes clusters, syncing the default CRDs (deployment and pod).
I expected the CRDs to be exactly the same, but actually they are not: the deployment imported from each cluster has a different .spec (one cluster is v1.19.x, the other is v1.20.x), so they are treated as different things and keep replacing each other (since they have the same name).
So I would like to know: by design, what should the behavior be in such a scenario?
As described in this investigation doc, API resources are imported inside a KCP logical cluster from external clusters through CRDs that can be built from the discovery and OpenAPI v2 information published by the external clusters.
When importing resources as CRDs, if the API resource has already been added as a CRD in the logical cluster, we should perform a diff between the OpenAPI v3 schema of the imported API resource and the schema of the already-installed CRD (a sketch of that step follows below).
We should also provide utilities to:
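For the diff step above, a minimal sketch assuming we compare the imported OpenAPI v3 schema against the installed CRD's schema using the apiextensions types; kcp's actual negotiation logic (pkg/schemacompat) is more involved than this equality check:

```go
package main

import (
	"fmt"
	"reflect"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// schemasDiffer reports whether the imported API resource's schema deviates
// from the schema of the CRD already installed in the logical cluster.
func schemasDiffer(installed, imported *apiextensionsv1.JSONSchemaProps) bool {
	return !reflect.DeepEqual(installed, imported)
}

func main() {
	installed := &apiextensionsv1.JSONSchemaProps{Type: "object"}
	imported := &apiextensionsv1.JSONSchemaProps{Type: "object"}
	fmt.Println("needs negotiation:", schemasDiffer(installed, imported))
}
```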
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
Demo 2 should build on the first demo and show concrete progress towards the three exploration goals of KCP:
Demo 2 would be a Kubecon or post-Kubecon update of the current demo and validate progress towards these goals.
- a go build-able main function that starts a new kube-apiserver and defines hooks
- Workspace kinds, with a Workspace gating the ability to access a workspace
- a Shard type: add assignment of a shard to a Workspace object, then show LIST/WATCH across two independent KCP instances
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
I tried the latest version, but the Pod CRD cannot be learned from the local cluster, and I get this error message in the logs:
E0822 18:16:46.481733 1458886 apiimporter.go:178] error creating APIResourceImport pods.local.v1.core: APIResourceImport.apiresource.kcp.dev "pods.local.v1.core" is invalid: spec.groupVersion.group: Invalid value: "null": spec.groupVersion.group in body must be of type string: "null"
It can be fixed as described in kubernetes/kubernetes#58311 (comment).
Our first ever community meeting is Tuesday May 11, at noon EST (9am Pacific, 4pm UTC, etc) 🎉
Meet link: https://meet.google.com/uet-cjpd-qof ** old link, don't join this **
This issue will collect prospective agenda topics. Add topics you'd like to discuss below.
If you can't make it to the meeting, that's okay. We'll record the meeting, and please reach out with your questions.
Join the #kcp-prototype channel in the K8s Slack!
After the cluster was deleted, checking the ConfigMaps under syncer-system showed that kube-root-ca.crt was not deleted.
$ kubectl get cm -n syncer-system
NAME DATA AGE
kube-root-ca.crt 1 14m
But if we delete the namespace syncer-system itself, then all resources under this namespace will be deleted (see the sketch below).
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
It should be possible to call kubectl get pods / kubectl logs and have the apiserver pass the call through directly to 0..N underlying clusters, which would open the door to keeping pods on the underlying cluster. Then the next step is multiple-source unification: a kubectl get pods could perform a merge between the underlying physical cluster and a pod stored directly in etcd at the kcp level (see the sketch below).
This "blurring of the lines" would allow important behavior to be delegated to sub-clusters without having to tell an end user "go get credentials to the underlying cluster" (note that is probably still valuable for other reasons, especially with unified identity).
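A minimal sketch of that unification step - merging pods stored at the kcp level with pods listed from 0..N underlying clusters; illustrative only, not a proposed implementation:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mergePods unifies pods stored directly in etcd at the kcp level with pods
// listed from each underlying physical cluster.
func mergePods(kcpPods []corev1.Pod, clusterPods ...[]corev1.Pod) []corev1.Pod {
	out := append([]corev1.Pod{}, kcpPods...)
	for _, pods := range clusterPods {
		out = append(out, pods...)
	}
	return out
}

func main() {
	fmt.Println("merged pods:", len(mergePods(nil, nil)))
}
```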
The CRD virtualization, inheritance, and normalization part of the logical clusters investigation doc contains the following statement:
Inheritance allows an admin to control which resources a client might use - this would be particularly useful in more opinionated platform flows for organizations that wish to offer only a subset of APIs. The simplest approach here is that all logical clusters inherit the admin virtual cluster (the default)
This is currently not implemented.
Maybe this should already be done even more generically, by using the parents of the given cluster. Of course this would require defining more precisely how and when the parents would be set.
One area where kcp could also be useful is for testing controllers and operators, and having a container image could simplify the test set-up. As an example, I'm currently working on an operator written using the java-operator-sdk where I don't need all the features and controllers provided by a standard Kubernetes distro, but only a control plane. Having kcp available as a container image would be very useful, as I could leverage testcontainers to spin up a kcp instance as part of the test lifecycle (see the sketch below).
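A minimal sketch in Go, using testcontainers-go (the Go flavor of the same library; the commenter's operator is Java), of what such a test set-up could look like. The image name is hypothetical, since no official kcp image exists yet:

```go
package main

import (
	"context"
	"fmt"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func main() {
	ctx := context.Background()
	req := testcontainers.ContainerRequest{
		Image:        "quay.io/kcp-dev/kcp:latest", // hypothetical image; none is published yet
		ExposedPorts: []string{"6443/tcp"},
		WaitingFor:   wait.ForListeningPort("6443/tcp"),
	}
	kcp, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		panic(err)
	}
	defer kcp.Terminate(ctx) // tear kcp down with the test lifecycle

	endpoint, err := kcp.Endpoint(ctx, "")
	if err != nil {
		panic(err)
	}
	fmt.Println("kcp control plane reachable at", endpoint)
}
```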
With the power of CustomResourceDefinitions, Kubernetes provides a flexible platform for declarative APIs of all types, and the reconciliation pattern common to Kubernetes controllers is a powerful tool in building robust, expressive systems. (README)
While etcd is great for distributed systems, it's less suitable for embedded use-cases where a single-binary deployment is ideal. Do you aim to provide support for something like BoltDB or Badger?
Tuesday June 1, at noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below.
Programming note: Monday May 31 is a US holiday, and Friday May 28 is a day off for Red Hat folks, so this will only have been 3 working days since the last meeting for lots of us
A key component of large Kubernetes clusters is shared use, where the usage pattern might vary from externally controlled (via gitops / existing operational tools) to a permissive self-service model. The most common partitioning model in Kubernetes is namespace, and the second most common model is cluster. Self-service is currently limited by the set of resources that are namespace scoped for the former, and by the need to parameterize and configure multiple clusters consistently for the latter. Cluster partitioning can uniquely offer distinct sets of APIs to consumers. Namespace partitioning is cheap up until the scale limits of the cluster (~10k namespaces), while cluster partitioning usually has a fixed cost per cluster in operational and resource usage, as well as lower total utilization. Once a deployment reaches the scale limit of a single cluster, operators often need to redefine their policies and tools to work in a multi-cluster environment. Many large deployers create their own systems for managing self-service policy above their clusters and leverage individual subsystems within Kubernetes to accomplish those goals.
Explore the goal
logical clusters + api composition + policy should allow organizations to provide teams "chunks of capacity" that they can use for workloads and services in a more comprehensive way than a single Kubernetes cluster.
Output
an investigation doc
The in-cluster syncer runs in syncer-system, and today if you create something in that namespace in kcp, the syncer will apply it to its own local cluster, possibly even replacing/breaking its own deployment.
Let's just keep the syncer from messing with its own namespace entirely, to avoid a whole bunch of problems (see the sketch below).
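A minimal sketch of that guard (illustrative names, not kcp's actual syncer code):

```go
package main

import "fmt"

const syncerNamespace = "syncer-system"

// shouldSync reports whether an object should be applied to the local cluster.
// The syncer never touches its own namespace, so it cannot replace or break
// its own deployment.
func shouldSync(objNamespace string) bool {
	return objNamespace != syncerNamespace
}

func main() {
	fmt.Println(shouldSync("default"))       // true
	fmt.Println(shouldSync("syncer-system")) // false
}
```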
In a multi-cluster scenario, it would be really helpful to have a load balancer at the top level that takes care of sending "end-user" traffic to a deployed application running in the k8s clusters.
Example:
Diagram:
┌───────────────┐
│ │
│ ┌─┴───────────────────┐
┌────────▶│ Cluster 1 │myapp.cluster1.domain│◀──────┐
│ │ └─┬───────────────────┘ │
│ │ │ │
│ └───────────────┘ │
│ │
│ ┌───────────────┐ │
│ │ │ │
│ │ ┌─┴───────────────────┐ │
├────────▶│ Cluster 2 │myapp.cluster2.domain│◀──────┤
│ │ └─┬───────────────────┘ │
│ │ │ │
Sync │ └───────────────┘ │
│ │
│ ┌───────────────┐ │
│ │ │ │
│ │ ┌─┴───────────────────┐ │
├────────▶│ Cluster 3 │myapp.cluster3.domain│◀──────┤
│ │ └─┬───────────────────┘ │
│ │ │ │
│ └───────────────┘ │
│ │
│ │
│ │
┌───────────────┐ ┌───────────────┐
│ │ │ │
│ │ │ Global KCP │
│ KCP │──────────Discover Ingresses────────▶│ Load Balancer │
│ │ │ │
│ │ │ ┌───────┴─────────────┐
└───────────────┘ └───────┤ myapp.domain │◀─────── Client
▲ └─────────────────────┘
│
│
│
│
│
│
kubectl apply -f MyApplication
Usually, this part (networking) is left to the reader as an exercise. Perhaps KCP can provide a reference implementation of a global load balancer to allow us to explore some of the complexities of multicluster setups.
Is this a topic that the KCP project wants to explore?
-- UPDATE
A more in-depth diagram, following the same pattern as the Deployment Splitter:
┌─────────────────────────────────────────────────────────┐
│ │
│ KCP ┌────────────────────────┐ │
│ │ │ │ ┌───────────────────────────────┐
│ ┌───────│ KCP-Ingress Controller │─────Creates─┐ │ │ │
│ │ │ │ │ │ │ ┌────────────────┐ │
│ │ └────────────────────────┘ │ │ │ ┌▶│ Leaf Ingress │ │
│ │ │ ▼ │ Sync Object and status │ └────────────────┘ │
│ │ │ ┌───────────────────┴────┐ │ │ ┌──┴───────┐
│ │ ▼ │ │ ┌─────┴────────┐ │ │ │
┌────────────────┐ │ │ ┌────────────────────────┐│ Leaf Ingress │◀─────────▶│ Syncer │─┘8s cluster┌─────▶│ Gateway │◀──┐
│Ingress │ │ │ │ ││ │ └─────┬────────┘ │ │ │ │
│HTTPRoute │──┼───┼──────▶│ Root Ingress │├────────────────────────┤ │ │ └──┬───────┘ │
│Route │ │ │ │ ││ │ │ │ │ │
└────────────────┘ │ │ └────────────────────────┘│ Leaf Ingress │◀───────────┐ │ ┌───────────────────────┴──┐ │
│ │ ▲ │ │ │ │ │ │ │
│ │ │ ├────────────────────────┤ │ │ │ gateway-api controller │ │
│On Ready │ │ │ │ └───────┤ │ │
│Creates │ │ Leaf Ingress │◀───────┐ │ └──────────────────────────┘ │
│ │ │ │ │ │ │ ┌────────────────────────────────┐ │
│ │ │ └───────────────────┬────┘ │ │ │ ┌────────────────┐│ │
│ │ │ │ │ │ │ │ ┌▶│ Leaf Ingress ││ │ ┌─────────────────┐
│ │ │ │ │ │ │ │ │ └────────────────┘│ │ │ │
│ │ └─────────────────────────┘ │ │ │┌───┴──────────┐ │ │ │ │ │
│ │ Merge Status │ │ ▼│ Syncer │─┘ ┌───┴──────┐ │ │ Global load │
│ │ │ │ └───┬──────────┘ │ │ │ │ balancer │
│ │ │ │ │ k8s cluster ┌────▶│ Gateway │◀──┼───│ │
│ │ ┌────────────────────────┐ │ │ │ │ │ │ │ │ ALB/NLB... │
│ │ │ │ │ │ │ │ └───┬──────┘ │ │ │
│ └─▶│ Global Ingress Object │◀──┐ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ │ │ └─────────────────┘
│ └────────────────────────┘ │ │ │ │ ┌───────────────────────┴──┐ │ ▲
│ │ │ │ │ │ │ │ │
│ │ │ │ └────────┤ gateway-api controller │ │ │
│ ┌──────────────────────────┐ │ │ │ │ │ │
│ │ Global Load Balancer │ │ │ └──────────────────────────┘ │ │
└─────────────────────┤ Controller ├────────┘ │ ┌───────────────────────────────┐ │ │
└──────────────────────────┘ │ │ ┌────────────────┐│ │ │
│ │ │ │ ┌▶│ Leaf Ingress ││ │ │
│ │ │ ┌────┴─────────┐ │ └────────────────┘│ │ │
│ │ └───┤ Syncer │─┘ │ │ │
│ │ └────┬─────────┘ ┌──┴───────┐ │ │
│ │ │ │ │ │ │
│ │ │ k8s cluster ┌───▶│ Gateway │◀──┘ │
│ │ │ │ │ │ │
│ │ │ │ └──┬───────┘ │
┌──────────────────┐ │ │ │ │ │ │
│ │ │ │ │ │ │ │
│ │ │ │ │ ┌─────────────────────┴────┐ │
│ DNS │◀────┘ │ │ │ │ │
│ │ │ └─────────┤ gateway-api controller │ │
│ │ │ │ │ │
└──────────────────┘ │ └──────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────────────────────────────┘
For the initial PoC we will use Ingress v1beta1, which is covered by the KCP forked libraries. (A sketch of the status-merge step follows below.)
Created based on the conversation in #67.
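A minimal sketch of the "Merge Status" step from the diagram above - flattening the load-balancer endpoints reported by each leaf Ingress into the root Ingress status; illustrative, not the PoC code:

```go
package main

import (
	"fmt"

	networkingv1beta1 "k8s.io/api/networking/v1beta1"
)

// mergeStatuses flattens the LB endpoints of all leaf ingresses into the
// status of the root ingress object.
func mergeStatuses(leaves []*networkingv1beta1.Ingress) networkingv1beta1.IngressStatus {
	var merged networkingv1beta1.IngressStatus
	for _, leaf := range leaves {
		merged.LoadBalancer.Ingress = append(merged.LoadBalancer.Ingress, leaf.Status.LoadBalancer.Ingress...)
	}
	return merged
}

func main() {
	fmt.Println(mergeStatuses(nil))
}
```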
${KCP_ROOT}/bin/cluster-controller -push_mode=true -pull_mode=false -kubeconfig=${KUBECONFIG} services.serving.knative.dev &> cluster-controller.log &
CC_PID=$!
I0819 16:36:36.719458 254618 discovery.go:157] processing discovery for resource services (services.serving.knative.dev)
I0819 16:36:36.726736 254618 discovery.go:157] processing discovery for resource services (services.serving.knative.dev)
E0819 16:36:36.740409 254618 apiimporter.go:178] error creating APIResourceImport services.us-west1.v1.serving.knative.dev: APIResourceImport.apiresource.kcp.dev "services.us-west1.v1.serving.knative.dev" is invalid: [spec.columnDefinitions.format: Invalid value: "null": spec.columnDefinitions.format in body must be of type string: "null", spec.columnDefinitions.description: Invalid value: "null": spec.columnDefinitions.description in body must be of type string: "null"]
E0819 16:36:36.746479 254618 apiimporter.go:178] error creating APIResourceImport services.us-east1.v1.serving.knative.dev: APIResourceImport.apiresource.kcp.dev "services.us-east1.v1.serving.knative.dev" is invalid: [spec.columnDefinitions.description: Invalid value: "null": spec.columnDefinitions.description in body must be of type string: "null", spec.columnDefinitions.format: Invalid value: "null": spec.columnDefinitions.format in body must be of type string: "null"]
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
Tuesday May 18, at noon Eastern, 9am Pacific, 4pm UTC; find your time
Meet link: https://meet.google.com/squ-dtxk-xdi
This issue will collect prospective agenda topics. Add topics you'd like to discuss below.
Add more topics below!
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
@smarterclayton I love this project! Long been thinking this is really at the core of all of it.
Re. https://github.com/kcp-dev/kcp#this-sounds-cool-and-i-want-to-help "feedback can have a big impact", here's a thought:
What if one wanted to build an EVEN MORE lightweight "API server" than the resource types that are currently "built in" here? It's what got me interested in playing with and poking around in @detiber's https://github.com/thetirefire (see his KubeCon presentation) - before hearing about this today. That currently has ONLY customresourcedefinitions and apiservices built in - for some scenarios, that may be all one wants.
For example, for something like what @andrewrynhard is building in https://github.com/cosi-project/runtime (see his KubeCon presentation), or, say, in a similar hypothetical agent for a basic KRM-inspired Machine Management CRD with a purely localhost controller running e.g. in a Static Pod, having the types currently baked in to kcp may be too much already?
On the fine day when (eventually...) Kubernetes can be configured to run as kcp runs today, perhaps the "minimal required base resources" could be configurable? Or (better..) there simply could be separate binaries with "nothing at all" baked in (~badidea), "some base types" (~kcp), "original fat one" ( ~apiserver)?
Full disclosure: I'm only halfway through reading up on this project, and just starting to learn more about what's what in this space. Perhaps this is already possible using the "base apiserver library" (?) - but even just better illustrating and documenting how to really do that IMHO could have value for the community (if that turns out to be all that's "technically" required).
Right now if someone wants to run kcp they have to build and run the code themselves. This isn't too bad for cmd/kcp and cmd/cluster-controller, but for users that want to run the in-cluster ("pull") syncer, it requires them to build that image themselves.
It'd be better if that image was automatically built and available for them somewhere, so they could just run the Cluster Controller directly. We should run some CI to build and publish quay.io/kcp-dev/kcp-syncer:latest on every merged PR.
This would also be good practice for an eventual future where we have actual releases.
$ go test -cover ./pkg/schemacompat
ok github.com/kcp-dev/kcp/pkg/schemacompat 0.469s coverage: 34.0% of statements
To see which lines aren't covered (GitHub won't let me attach it):
go test -coverprofile=coverage.out ./pkg/schemacompat && go tool cover -html=coverage.out
More coverage of this code should give us more confidence in the implementation, and it should be fairly easy to cover. We can maintain high coverage using codecov, to note coverage deltas in PR comments.
(As always, the goal isn't 100% coverage, it's confidence in the code, whatever coverage percentage is reported.)
Once coverage is higher we can also consider adding mutation tests, using e.g., https://github.com/zimmski/go-mutesting, which can help us find even more behavior that isn't tested, for example cases where the input to a method doesn't affect test failure.
I have the kcp server, cluster controller and syncer all running separately locally with a cluster resource created with the kubeconfig of a physical cluster.
Cluster controller command
go run ./cmd/cluster-controller --kubeconfig=.kcp/data/admin.kubeconfig --pull_mode=false mycustomresource
Syncer command
go run ./cmd/syncer/ --cluster=local --from_kubeconfig=.kcp/data/admin.kubeconfig --from_context=admin --to_kubeconfig=pathtomykubeconfig mycustomresource
The mycustomresource CRD from my physical cluster is pulled successfully with David's new branch. However, when I create an instance of mycustomresource in kcp, it is not synced downstream to the physical cluster.
I0510 19:20:45.173659 32948 main.go:117] Set up informer for managedkafka.bf2.org/v1alpha1, Resource=managedkafkas
I0510 19:20:45.173881 32948 main.go:126] Starting workers
There are no logs to indicate any reconciliation in the syncer.
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
Currently, the prototype cluster-controller connects to the target cluster to install the syncer as a Pod (soon a Deployment) in the cluster, where it connects to the local cluster's API server using rest.InClusterConfig.
This requires the user to build and push an image for the syncer, which isn't ideal, and isn't really strictly necessary.
Instead, syncer code should be agnostic to where it's run and not depend on InClusterConfig, just taking targetKubeconfig and kcpKubeconfig and constructing clients from those (see the sketch below).
That would make it portable to run outside the cluster, either as another loop in cluster-controller, or as a separate process. If it turns out we want the syncer to run in the cluster, we can still do that too.
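A minimal sketch of that refactor, assuming hypothetical flag names, building both clients from explicit kubeconfigs with clientcmd instead of rest.InClusterConfig:

```go
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical flag names for the two kubeconfigs.
	kcpKubeconfig := flag.String("kcp_kubeconfig", "", "kubeconfig for the kcp server")
	targetKubeconfig := flag.String("target_kubeconfig", "", "kubeconfig for the target physical cluster")
	flag.Parse()

	fromCfg, err := clientcmd.BuildConfigFromFlags("", *kcpKubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	toCfg, err := clientcmd.BuildConfigFromFlags("", *targetKubeconfig)
	if err != nil {
		log.Fatal(err)
	}

	// The syncer would list/watch via fromClient and apply via toClient,
	// regardless of where the process itself runs.
	fromClient := kubernetes.NewForConfigOrDie(fromCfg)
	toClient := kubernetes.NewForConfigOrDie(toCfg)
	_, _ = fromClient, toClient // placeholder; real code wires up informers here
}
```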
Wondering if things like the validating and mutating webhooks have a place in KCP? Maybe they are already present?
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
When creating a deployment in a KCP cluster that imported the deployments resource from a physical cluster, the deployment is created, but the content of the spec.template.metadata deployment field is pruned.
This is probably related to a problem in the way the Deployment schema is managed, which triggers the pruning when it is not expected.
This prevents the syncing of Deployments, since the deployment that is being synced is therefore invalid (a sketch of one possible mitigation follows below).
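One possible mitigation, assuming the pruning is schema-driven: when generating the CRD, explicitly declare the spec.template.metadata fields (e.g. labels and annotations) so that structural-schema pruning keeps them. A hedged sketch using the apiextensions types, not kcp's actual crdpuller code:

```go
package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// stringMapSchema describes a map[string]string, as used for labels/annotations.
func stringMapSchema() apiextensionsv1.JSONSchemaProps {
	return apiextensionsv1.JSONSchemaProps{
		Type: "object",
		AdditionalProperties: &apiextensionsv1.JSONSchemaPropsOrBool{
			Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
		},
	}
}

// declareTemplateMetadata fills in spec.template.metadata so that structural
// schema pruning does not drop labels/annotations from pod templates.
// It assumes the schema already has spec.template defined.
func declareTemplateMetadata(schema *apiextensionsv1.JSONSchemaProps) {
	tmpl := schema.Properties["spec"].Properties["template"]
	if tmpl.Properties == nil {
		tmpl.Properties = map[string]apiextensionsv1.JSONSchemaProps{}
	}
	tmpl.Properties["metadata"] = apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"labels":      stringMapSchema(),
			"annotations": stringMapSchema(),
		},
	}
	schema.Properties["spec"].Properties["template"] = tmpl
}

func main() {}
```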
At noon Eastern, 9am Pacific, 4pm UTC; find your time
Video call link: https://meet.google.com/ohf-kwvd-mrp
Or dial: (US) +1 617-675-4444 PIN: 936 182 087 7398#
More phone numbers: https://tel.meet/ohf-kwvd-mrp?pin=9361820877398
Or join via SIP: sip:[email protected]
This issue will collect prospective agenda topics. Add topics you'd like to discuss below!
While trying to run kcp against a physical OpenShift Dedicated cluster, the cluster controller panics while building the CRDs. Part of the stack trace:
goroutine 102 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x22d6400, 0x30b21b0)
/Users/dffrench/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/Users/dffrench/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86
panic(0x22d6400, 0x30b21b0)
/usr/local/Cellar/go/1.16.3/libexec/src/runtime/panic.go:965 +0x1b9
github.com/kcp-dev/kcp/pkg/crdpuller.(*schemaPuller).PullCRDs(0xc0037729c0, 0x26bf430, 0xc0000420a8, 0xc0003bb060, 0x2, 0x2, 0x26bf4a0, 0xc00062a3f0, 0x0)
/Users/dffrench/go/src/github.com/kcp-dev/kcp/pkg/crdpuller/discovery.go:151 +0xbbe
github.com/kcp-dev/kcp/pkg/reconciler/cluster.(*Controller).reconcile(0xc0001702a0, 0x26bf430, 0xc0000420a8, 0xc0000e8280, 0xc0000e8280, 0x100fb01)
/Users/dffrench/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cluster/cluster.go:66 +0x874
From debugging, this looks to be from the pods.metrics.k8s.io resource.