
konk's Issues

Add a test to verify proxy-client-certificate is valid

The example-apiserver should have been verifying client certificates, which would have caught this issue. Below is an openssl command to verify the certificates.

# Verify the proxy client cert against its CA, both extracted from the same secret
openssl verify -CAfile <(k -n default get secret -o yaml drew-apiserver-konk-proxy-client | \
   awk '/^  ca.crt/ {print $2}' | base64 -d) <(k -n default get secret -o yaml drew-apiserver-konk-proxy-client | \
   awk '/^  tls.crt/ {print $2}' | base64 -d)

Document how API service pods can use config/secret reloaders to track cert rotation

Konk uses cert-manager to sign certificates with relatively short lifespans. After a certificate is automatically rotated, the API service using it needs to pick up the new cert value. For cases where the service cannot do this itself, there is a common Kubernetes pattern that uses a config-reloader sidecar. This is a common use case and should be described in the konk documentation.
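
A minimal sketch of the pattern, assuming the stakater Reloader operator is installed in the cluster (the deployment, image, and secret names here are hypothetical); Reloader rolls the pods whenever the watched secret changes, so the apiservice picks up the rotated cert:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-apiserver
  annotations:
    # stakater Reloader restarts this deployment when the named secret changes
    secret.reloader.stakater.com/reload: "example-apiserver-tls"
spec:
  selector:
    matchLabels: {app: example-apiserver}
  template:
    metadata:
      labels: {app: example-apiserver}
    spec:
      containers:
      - name: apiserver
        image: example-apiserver:latest        # hypothetical image
        volumeMounts:
        - {name: tls, mountPath: /certs, readOnly: true}
      volumes:
      - name: tls
        secret: {secretName: example-apiserver-tls}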

konk-service resources should be deleted upon KonkService deletion

The resources created by konk-service are not managed by the helm-operator, nor do they have ownerReferences, so nothing cleans them up after a deletion. After deleting a KonkService, these resources remain:

apiVersion: v1
kind: Service
metadata:
  name: {{` {{ SERVICENAME }} `}}
  namespace: {{` {{ NAMESPACE }} `}}
spec:
  type: ExternalName
  externalName: {{` {{ SERVICENAME }} `}}.{{` {{ NAMESPACE }} `}}
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: {{ .Values.version }}.{{ .Values.group.name }}
spec:
{{- if .Values.insecureSkipTLSVerify }}
  insecureSkipTLSVerify: {{ .Values.insecureSkipTLSVerify }}
{{- else }}
  caBundle: {{` {{ CERT }} `}}
{{- end }}
  group: {{ .Values.group.name }}
  groupPriorityMinimum: 1000
  service:
    name: {{` {{ SERVICENAME }} `}}
    namespace: {{` {{ NAMESPACE }} `}}
  version: {{ .Values.version }}
  versionPriority: 100
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.group.name }}-edit
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups:
  - {{ .Values.group.name }}
  resources:
{{- range .Values.group.kinds }}
  - {{ . }}
{{- end }}
  verbs:
{{- range .Values.group.verbs }}
  - {{ . }}
{{- end }}

Some mechanism should be used to clean up these resources, such as refactoring so the helm-operator manages them, or adding ownerReferences so they are garbage-collected by kube-controller-manager; a sketch of the ownerReferences approach follows.
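
A sketch of the ownerReferences approach, assuming a KonkService CR in group konk.infoblox.com/v1alpha1 (the group/version and names are assumptions, and the uid must come from the live object). Note that cluster-scoped resources such as APIService and ClusterRole cannot reference a namespaced owner, so those would still need another cleanup mechanism:

metadata:
  ownerReferences:
  - apiVersion: konk.infoblox.com/v1alpha1   # assumed group/version
    kind: KonkService
    name: my-konkservice                     # hypothetical CR name
    uid: 0e3f0f0e-...                        # must be the live KonkService's UID
    controller: true
    blockOwnerDeletion: true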

konk-service-operator should check that apiservice is ready before marking itself ready

The konk-service operator deploys apiregistration objects pointing to the apiservice. kubectl get apiservice tends not to be a good check of actual readiness: it will report ready even when there are no endpoints matching the service. Requesting the kind directly does verify availability, e.g.:

until kubectl get contact.example.infoblox.com; do sleep 5s; done

The helm-operator should then mark the CR status ready, which lets users verify their resources are deployed correctly to KONK. CR status does not currently reflect pod availability; see
operator-framework/operator-sdk#4102

Ingress smoke test

We need a way to verify ingress functionality running in a cluster. The approach needs to be independent of DNS records, as devops does not support a declarative DNS model.

The simplest way would be to start a pod in-cluster and issue calls directly to the ingress controller with a Host header matching our ingress route; the controller routes calls for this hostname to the correct Konk instance. The Ingress object would be created with some Konk URL, fake or otherwise, e.g. konk.example.infoblox.com.

curl -H 'Host: konk.example.infoblox.com' \
  -H 'Authorization: {identity trusted creds}' \
  'http://localhost:80/apis/example.infoblox.com/v1alpha1/namespaces/default/contacts?limit=500'

A/C

  • ingress setup works in-cluster
  • smoke test works with kubectl access or in-cluster (see the sketch below)
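
A sketch of the in-cluster variant, assuming an ingress-nginx controller reachable at ingress-nginx-controller.ingress-nginx (the service name, namespace, and hostname are assumptions):

kubectl run ingress-smoke --rm -i --image=curlimages/curl --restart=Never --command -- \
  curl -s -H 'Host: konk.example.infoblox.com' \
  'http://ingress-nginx-controller.ingress-nginx/apis/example.infoblox.com/v1alpha1/namespaces/default/contacts?limit=500'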

Konk CA should be managed by CertManager

The provision script creates a brand-new k8s CA and etcd CA on every start. It then walks through the k8s secrets to see if any are missing and replaces them. If it happens to replace a secret, it is likely to break client-server communication, because the CA is regenerated on every start.

We should manage the CAs as separate cert-manager-owned certificates. They can share an issuer, but etcd and k8s should use different CAs. The init pod will mount these CAs where kubeadm can read them to create client/server certificates, as sketched below.
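
A minimal sketch of one such CA certificate, assuming a self-signed Issuer named konk-selfsigned already exists (all names here are assumptions; etcd would get a second, analogous Certificate):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: konk-k8s-ca
spec:
  isCA: true
  commonName: konk-k8s-ca
  secretName: konk-k8s-ca        # mounted by the init pod for kubeadm to find
  duration: 87600h               # long-lived CA; leaf certs stay short-lived
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: konk-selfsigned        # shared issuer for both CAs
    kind: Issuer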

Kubeadm may not have flags for this, but it has predefined file locations where the CA must be found (under /etc/kubernetes/pki, per the error below). Mount the secrets at those locations.

kubeadm init phase certs etcd-server
W1111 16:24:06.679820 2604580 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase certs/etcd-server: couldn't load CA certificate etcd-ca: couldn't load etcd/ca certificate authority from /etc/kubernetes/pki
To see the stack trace of this error execute with --v=5 or higher
A/C

  • k8s CA survives pod restarts via k8s secret storage
  • etcd CA survives pod restarts via k8s secret storage

support cross-namespace konk usage

Currently, to use konk, a service must mount the konk-kubeconfig secret, and secrets cannot be mounted across namespaces, so the service must be in the same namespace as konk.

Find some way to eliminate this requirement.
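
One possible stopgap (not a real fix) is to copy the secret into the consuming namespace, at the cost of another thing to keep in sync. A sketch, assuming namespaces konk and my-app:

kubectl -n konk get secret konk-kubeconfig -o yaml \
  | sed -E '/^  (namespace|resourceVersion|uid|creationTimestamp):/d' \
  | kubectl -n my-app apply -f -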

Kubernetes v1.22 compatibility

A few API versions need to be updated for Kubernetes v1.22 compatibility.

  • networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress #233
  • no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"; use apiextensions.k8s.io/v1
  • #248
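
A quick way to audit which API versions the chart still renders, assuming the chart lives at ./helm-charts/konk:

helm template konk ./helm-charts/konk | grep '^apiVersion:' | sort | uniq -c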

KonkService and Konk have mismatched scope

If you install Konk with scope=cluster and KonkService with konk.scope=namespace, the cluster fails to start in obscure ways. We need to surface an error when the scopes are mismatched.

One idea: instead of setting Konk values like scope in the KonkService, we could read them directly from the named Konk CR, as sketched below.
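
A sketch of what the operator (or a pre-install validation) could do, assuming the scope lives at .spec.scope on the Konk CR (the field path and CR name are assumptions):

kubectl get konk my-konk -o jsonpath='{.spec.scope}'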

Generate new kubeconfig before expiration

The certs used in the kubeconfig in konk-service are maintained/renewed by cert-manager and default to a 90-day expiration. However, after using them to build the kubeconfig once, konk-service never replaces them with the renewed certificate, so after 90 days the apiservice fails with certificate errors. After a cert is renewed, konk-service should rebuild and replace the kubeconfig secret.

if [ "$(kubectl -n $NAMESPACE get secret {{ include "konk-service.fullname" . }}-kubeconfig --ignore-not-found 2>&1 )" = "" ]

https://infoblox.atlassian.net/browse/ATLAS-9340
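
A sketch of a rotation check that could gate a rebuild, assuming the renewed cert lives in a secret named konk-service-cert and the kubeconfig secret stores a kubeconfig key (all names and field paths are assumptions):

# Compare the cert in the source secret against the one embedded in the kubeconfig
current=$(kubectl -n "$NAMESPACE" get secret konk-service-cert -o jsonpath='{.data.tls\.crt}')
embedded=$(kubectl -n "$NAMESPACE" get secret konk-service-kubeconfig -o jsonpath='{.data.kubeconfig}' \
  | base64 -d | awk '/client-certificate-data:/ {print $2}')
if [ "$current" != "$embedded" ]; then
  echo "cert rotated; rebuilding kubeconfig secret"
  # ... regenerate the kubeconfig and kubectl apply it here
fi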

verify x-remote-extra headers are passed through KONK

X-Remote-User/Group headers are being passed through, but X-Remote-Extra-OBO is missing.

The client (kubectl, ingress, bulk) will be passing additional headers X-Remote-[User/Group/Extra-OBO]. These headers are configured in KONK to be passed along to the backend apiservice here: https://github.com/infobloxopen/konk/blob/main/helm-charts/konk/values.yaml#L29-L31

We need to verify these headers are present in the example-apiserver, in a way that tests can check. My suspicion is that an auth plugin is the way to intercept and verify these headers: https://github.com/kubernetes/sample-apiserver#authentication-plugins Anything that can intercept headers at the API server endpoints will do here. An easy way to turn this into a test is to intercept headers and write them to the log; a command-line test could then issue client calls and check the k8s pod logs for specific strings.

kubectl cannot be extended with custom headers. Use cURL to send custom headers and authenticate to k8s with client certificates.

k get secrets runner-konk-ingress-client -o yaml
# nginx certs from the secret above can be used by the cURL command below;
# X-Remote-User carries a unique token that is easy to find in logs
curl -k --cacert ca.crt --cert tls.crt --key tls.key -vvv -XGET \
  -H "User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a" \
  -H "Accept: application/json, */*" \
  -H "X-Remote-User: user-test-passed" \
  'https://localhost:6443/api?timeout=32s' 2>&1

Current State
A sample auth plugin has been introduced to example-apiserver, but it does not currently receive any traffic when example-apiserver is contacted: https://github.com/infobloxopen/konk/blob/main/test/apiserver/pkg/auth/remoteheader/remoteheader.go This is one approach to intercepting incoming traffic and looking for headers.

A/C

  • PR test that verifies headers sent by a client appear in the server (sketched below)
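
A sketch of such a PR test, assuming the example-apiserver pods carry an app=example-apiserver label and the auth plugin logs the headers it sees (both assumptions):

# Send a unique token through the proxy, then look for it in the server log
token="obo-test-$RANDOM"
curl -k --cert tls.crt --key tls.key \
  -H "X-Remote-Extra-OBO: $token" \
  'https://localhost:6443/apis/example.infoblox.com/v1alpha1' >/dev/null 2>&1
kubectl logs -l app=example-apiserver --tail=200 | grep "$token"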

Add a smoketest to konkservice

Adding a smoketest to konkservice could be helpful. The konkservice has all the information it needs to report whether the apiservice is healthy: the konk name, the apiservice name, and creds. You can run a command manually like this:

% k -n aggregate exec -it tagging-aggregate-api-apiservice-konk-service-kubectl-apis7ms4n -- kubectl get apiservice v1alpha1.tagging.bulk.infoblox.com
NAME                                 SERVICE                                      AVAILABLE   AGE
v1alpha1.tagging.bulk.infoblox.com   aggregate/tagging-aggregate-api-apiservice   True        45h

If something like this were to run regularly and report readiness, it could be used to drive dashboards/alerts; a sketch of a scheduled check follows.
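
A sketch of running the check on a schedule, assuming a service account with permission to get apiservices (the resource names, service account, and image are assumptions):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: konkservice-smoketest
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: konk-service        # needs get on apiservices
          restartPolicy: Never
          containers:
          - name: check
            image: bitnami/kubectl:latest
            # non-zero exit (a failed Job) when the apiservice is missing
            args: ["get", "apiservice", "v1alpha1.tagging.bulk.infoblox.com"]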
