openshift / cluster-kube-apiserver-operator

The kube-apiserver operator installs and maintains the kube-apiserver on a cluster

License: Apache License 2.0


cluster-kube-apiserver-operator's Introduction

Kubernetes API Server Operator

The Kubernetes API Server operator manages and updates the Kubernetes API server deployed on top of OpenShift. The operator is based on the OpenShift library-go framework and is installed via the Cluster Version Operator (CVO).

It contains the following components:

  • Operator
  • Bootstrap manifest renderer
  • Installer based on static pods
  • Configuration observer

By default, the operator exposes Prometheus metrics via a metrics service. Metrics are collected from the following components:

  • Kubernetes API Server Operator

Configuration

The configuration observer component is responsible for reacting to external configuration changes. For example, this allows external components (registry, etcd, etc.) to interact with the Kubernetes API server configuration (the KubeAPIServerConfig custom resource).

Currently, changes in the following external components are observed:

  • host-etcd endpoints in kube-system namespace
    • The observed endpoint addresses are used to configure the storageConfig.urls in Kubernetes API server configuration.
  • cluster image.config.openshift.io custom resource
    • The observed custom resource is used to configure the imagePolicyConfig.internalRegistryHostname in the Kubernetes API server configuration.
  • cluster-config-v1 configmap in kube-system namespace
    • The observed configmap's install-config is decoded; networking.podCIDR and networking.serviceCIDR are extracted and used as input for admissionPluginConfig.openshift.io/RestrictedEndpointsAdmission.configuration.restrictedCIDRs and servicesSubnet.

The configuration for the Kubernetes API server is the result of merging:

  • a default config
  • observed config (compare the observed values above)
  • spec.unsupportedConfigOverrides from the kubeapiserveroperatorconfig

All of these are sparse configurations, i.e., unvalidated JSON snippets that are merged in order to form a valid configuration at the end.
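To illustrate the merge (all values below are hypothetical, not the operator's actual defaults), the sparse snippets are overlaid key by key, with later inputs winning on conflicts:

```yaml
# default config (sparse)
servingInfo:
  bindAddress: "0.0.0.0:6443"
---
# observed config (sparse)
imagePolicyConfig:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
---
# spec.unsupportedConfigOverrides (sparse)
apiServerArguments:
  v: ["4"]
---
# merged result
servingInfo:
  bindAddress: "0.0.0.0:6443"
imagePolicyConfig:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
apiServerArguments:
  v: ["4"]
```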

Debugging

The operator also emits events that can help debug issues. To get operator events, run the following command:

$ oc get events -n openshift-kube-apiserver-operator

This operator is configured via the KubeAPIServer custom resource:

$ oc get kubeapiserver/cluster -o yaml
apiVersion: operator.openshift.io/v1
kind: KubeAPIServer
metadata:
  name: cluster
spec:
  managementState: Managed

The current operator status is reported using the ClusterOperator resource. To get the current status, run the following command:

$ oc get clusteroperator/kube-apiserver

Developing and debugging the operator

In a running cluster, the cluster-version-operator is responsible for keeping the managed elements functioning and unaltered. To be able to use a custom operator image, one has to perform one of these operations:

  1. Set your operator to the unmanaged state, see here for details, in short:
oc patch clusterversion/version --type='merge' -p "$(cat <<- EOF
spec:
  overrides:
  - group: apps
    kind: Deployment
    name: kube-apiserver-operator
    namespace: openshift-kube-apiserver-operator
    unmanaged: true
EOF
)"
  2. Scale down the cluster-version-operator:
oc scale --replicas=0 deploy/cluster-version-operator -n openshift-cluster-version

IMPORTANT: This approach disables the cluster-version-operator completely, whereas the previous patch only tells it not to manage the kube-apiserver-operator!

After doing this, you can change the image of the operator to the desired one:

oc patch pod/kube-apiserver-operator-<rand_digits> -n openshift-kube-apiserver-operator -p '{"spec":{"containers":[{"name":"kube-apiserver-operator","image":"<user>/cluster-kube-apiserver-operator"}]}}'

Developing and debugging the bootkube bootstrap phase

The operator image version used by the bootstrap phase (see https://github.com/openshift/installer/blob/master/pkg/asset/ignition/bootstrap/bootstrap.go#L178) can be overridden by creating a custom origin-release image pointing to the developer's operator :latest image:

$ IMAGE_ORG=sttts make images
$ docker push sttts/origin-cluster-kube-apiserver-operator

$ cd ../cluster-kube-apiserver-operator
$ oc adm release new --from-release=registry.svc.ci.openshift.org/openshift/origin-release:v4.0 cluster-kube-apiserver-operator=docker.io/sttts/origin-cluster-kube-apiserver-operator:latest --to-image=sttts/origin-release:latest

$ cd ../installer
$ OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=docker.io/sttts/origin-release:latest bin/openshift-install cluster ...

cluster-kube-apiserver-operator's People

Contributors

abhinavdahiya, atiratree, benluddy, bertinatto, bparees, damemi, danwinship, deads2k, dgrisonnet, enj, ingvagabund, marun, mfojtik, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, p0lyn0mial, ravisantoshgudimetla, s-urbaniak, sanchezl, sjenning, smarterclayton, soltysh, stlaz, sttts, swghosh, tkashem, tnozicka, vareti, vrutkovs


cluster-kube-apiserver-operator's Issues

Use ServiceAccountToken volumes

I have a deployment of OpenShift 4.3 and I wanted to use Service Account Token Volume Projection. However, to do this I have to pass these flags to the API server:

--service-account-issuer
--service-account-signing-key-file
--service-account-api-audiences

How can this be accomplished? Is it by editing the kubeapiservers.operator.openshift.io/cluster resource? If so, how? Looking at the resource and reading the available documentation, I was not able to figure it out.
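For reference, the upstream feature these flags enable is consumed from a pod via a projected ServiceAccountToken volume; a minimal sketch (pod name, image, and audience are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-client                     # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example/app:latest   # hypothetical image
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          audience: vault                # hypothetical audience
          expirationSeconds: 3600
          path: token
```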

[Audit log policy profiles] Could you add a profile or other mechanism to disable audit logging

Hello,

Audit produces a lot of logs.

[root@control-plane-0 kube-apiserver]# ls -ltrh
total 664M
-rw-r--r--. 1 root root 100M  3 janv. 16:56 audit-2021-01-03T16-56-54.361.log
-rw-r--r--. 1 root root 100M  3 janv. 17:16 audit-2021-01-03T17-16-50.803.log
-rw-r--r--. 1 root root 100M  3 janv. 17:28 audit-2021-01-03T17-28-49.458.log
-rw-r--r--. 1 root root 100M  3 janv. 17:44 audit-2021-01-03T17-44-29.228.log
-rw-r--r--. 1 root root 100M  3 janv. 18:04 audit-2021-01-03T18-04-00.401.log
-rw-r--r--. 1 root root 100M  3 janv. 18:23 audit-2021-01-03T18-23-17.021.log
-rw-r--r--. 1 root root  43M  5 janv. 21:27 audit.log

More than 600 MB in only 1.5 hours is quite large.

Would it be possible to add a profile or another mechanism to disable the auditing feature?

I am running OpenShift version 4.5.0-0.okd-2020-10-15-235428

Looking forward to the profile feature available in 4.6.

Damien
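Editor's note for readers on 4.6 or later: the profile mechanism mentioned above is configured on the cluster-scoped APIServer resource; a sketch (the set of available profile names depends on the OpenShift version, so check your version's documentation):

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: Default   # other profiles exist depending on the version
```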

building images destroys IDE data

[deads@deads-02 cluster-openshift-apiserver-operator]$ make build-images 
hack/build-images.sh
[openshift/origin-cluster-openshift-apiserver-operator] --> FROM openshift/origin-release:golang-1.10 as 0
[openshift/origin-cluster-openshift-apiserver-operator] --> COPY . /go/src/github.com/openshift/cluster-openshift-apiserver-operator
[openshift/origin-cluster-openshift-apiserver-operator] --> RUN cd /go/src/github.com/openshift/cluster-openshift-apiserver-operator && go build ./cmd/cluster-openshift-apiserver-operator
[openshift/origin-cluster-openshift-apiserver-operator] --> FROM centos:7 as 1
[openshift/origin-cluster-openshift-apiserver-operator] --> COPY --from=0 /go/src/github.com/openshift/cluster-openshift-apiserver-operator/cluster-openshift-apiserver-operator /usr/bin/cluster-openshift-apiserver-operator
[openshift/origin-cluster-openshift-apiserver-operator] --> Committing changes to openshift/origin-cluster-openshift-apiserver-operator:d78ed24 ...
[openshift/origin-cluster-openshift-apiserver-operator] --> Tagged as openshift/origin-cluster-openshift-apiserver-operator:latest
[openshift/origin-cluster-openshift-apiserver-operator] --> Done
[openshift/origin-cluster-openshift-apiserver-operator] Removing .idea/
[openshift/origin-cluster-openshift-apiserver-operator] Removing _output/

@mfojtik @sttts any idea what would cause this?

Access to a privileged container allows for breakout to the underlying host

  1. The kube-apiserver has "privileged: true" set:

securityContext:
  privileged: true

  2. Even without "privileged: true", the kube-apiserver can still write audit logs to /var/log/kube-apiserver.

  3. When using standard container runtimes (for example containerd or CRI-O), access to a privileged container allows for easy breakout to the underlying host, which in turn allows access to all other workloads on that host and the credentials for the node agent (kubelet).

Maybe we should remove the "privileged: true" setting.

Allow runtime/default seccomp profile in the built-in SCCs

Currently, all default SCCs (except privileged) block users from setting seccomp to runtime/default. The current behaviour seems to be a disservice, as it blocks workloads from using more restrictive security controls, which may lead to folks simply setting a workload's SCC to privileged in order to "get it to work".
This is becoming a larger problem as folks around the OSS community and the private sector start shipping workloads with seccomp set to runtime/default, which has been the setting recommended by the CIS Benchmark for a few years now. They are now facing a few options.

The suggested change is to allow all default SCCs to support:

  • unconfined (current Kubernetes default for backwards compatibility)
  • runtime/default (future Kubernetes default and safer position)
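For context, the runtime/default setting discussed above is requested per workload roughly like this (a minimal sketch; the pod and image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo                     # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault               # the pod-spec spelling of runtime/default
  containers:
  - name: app
    image: registry.example/app:latest   # hypothetical image
```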

I am not entirely sure of the longevity and future plans of SCC. However, making this change will:

  • Help developers seamlessly support OpenShift (from a seccomp perspective).
  • Help companies to be CIS Benchmark compliant.
  • Support the new defaults from Kubernetes upstream.

Looking forward to hearing some thoughts and understanding how receptive the maintainers would be to the above.

cc: @JAORMX @jhrozek @saschagrunert


Upstream Context:

  • Around 2016, Docker created a default seccomp profile and enabled it by default. The same profile was introduced into Kubernetes as docker/default and was later renamed to runtime/default.
  • Kubernetes 1.19: seccomp went GA, with the unconfined profile as the default.
  • Kubernetes 1.22: SeccompDefault feature gate created, enabling users to switch from unconfined to runtime/default across the entire cluster.
  • Kubernetes 1.25 (planned): SeccompDefault feature gate is enabled by default, meaning that all workloads will have seccomp profile runtime/default unless otherwise set on a per workload (pod or container) basis.

stomping system:openshift:operator:openshift-kube-apiserver-installer cluster role

While working on events, I noticed that this operator stomps on the cluster role binding several times per second. It seems like the cluster role binding does not have the apiGroup set?

I1123 14:17:18.786035       1 rbac.go:67] cluster role binding changed: %!(EXTRA string={"metadata":{"name":"system:openshift:operator:openshift-kube-apiserver-installer","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Aopenshift%3Aoperator%3Aopenshift-kube-apiserver-installer","uid":"7b95c8e9-ef2a-11e8-96b5-126065f7d37a","resourceVersion":"1690","creationTimestamp":"2018-11-23T14:17:17Z"},"subjects":[{"kind":"ServiceAccount","name":"installer-sa","namespace":"openshift-kube-apiserver"}],"roleRef":{"apiGroup":"

A: rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"}}

B: ","kind":"ClusterRole","name":"cluster-admin"}}

)

/cc @deads2k

Skip cert generation when network config status.serviceNetwork is nil

  1. When the network config status.serviceNetwork is nil, the problem is shown as follows:
Sep 20 15:35:41 master1 hyperkube[8624]: E0920 15:35:41.603133    8624 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_karmada-aggregated-apiserver-7f6c7d8dd5-gm7gb_karmada-system_ff9e40b2-245a-4b7c-95d4-917bf94cf3a6_0(b1b5310d3645d82e7c4f468cf48bd512a1f226c6de7d57ee312a8bc9fbec6c01): error adding pod karmada-system_karmada-aggregated-apiserver-7f6c7d8dd5-gm7gb to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): [karmada-system/karmada-aggregated-apiserver-7f6c7d8dd5-gm7gb/ff9e40b2-245a-4b7c-95d4-917bf94cf3a6:k8s-pod-network]: error adding container to network \"k8s-pod-network\": error getting ClusterInformation: Get \"https://21.101.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": x509: cannot validate certificate for 21.101.0.1 because it doesn't contain any IP SANs"

  2. The network CR is shown as follows:
oc get network cluster -oyaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: "2023-09-20T07:10:47Z"
  generation: 1
  name: cluster
  resourceVersion: "715"
  uid: bb3082a0-cebb-4a64-b662-db7ce2e76837
spec:
  apiAddress: 10.255.71.88
  clusterNetwork:
  - cidr: 21.100.0.0/16
    hostPrefix: 24
  externalIP:
    policy: {}
  serviceNetwork:
  - 21.101.0.0/16
status: {}
  3. The service-network-serving-certkey secret is shown as follows:
oc get secret service-network-serving-certkey -n openshift-kube-apiserver -oyaml
apiVersion: v1
data:
  tls.crt: 
  tls.key: 
kind: Secret
metadata:
  annotations:
    auth.openshift.io/certificate-hostnames: openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local
    auth.openshift.io/certificate-issuer: kube-apiserver-service-network-signer
    auth.openshift.io/certificate-not-after: "2023-10-20T04:31:27Z"
    auth.openshift.io/certificate-not-before: "2023-09-20T04:31:26Z"
  creationTimestamp: "2023-09-20T04:31:31Z"
  labels:
    auth.openshift.io/managed-certificate-type: target
  name: service-network-serving-certkey
  namespace: openshift-kube-apiserver
  resourceVersion: "13561"
  uid: 5d551145-1de8-4ff6-a94c-7b511f3ac2bf
  4. The problem is that the network operator has not updated the network CR. The kube-apiserver-operator should wait until it finds the Kubernetes service IP available, and not roll out the kube-apiserver to a new revision; rolling out anyway tears down the temporary control plane and finally leaves the cluster unavailable.

outage calculation in upgrade looks incorrect. See


status:
  conditions:
  - lastTransitionTime: "2020-07-08T15:09:11Z"
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnectSuccess
    status: "True"
    type: Reachable
  failures:
  - latency: 10.005827597s
    message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
      a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
    reason: TCPConnectError
    success: false
    time: "2020-07-08T15:41:50Z"
  - latency: 10.001308826s
    message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
      a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
    reason: TCPConnectError
    success: false
    time: "2020-07-08T15:41:36Z"
  - latency: 10.002833732s
    message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
      a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
    reason: TCPConnectError
    success: false
    time: "2020-07-08T15:40:20Z"
  - latency: 10.000673641s
    message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
      a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
    reason: TCPConnectError
    success: false
    time: "2020-07-08T15:40:17Z"
  successes:
  - latency: 1.281624ms
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:16Z"
  - latency: 1.549382ms
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:15Z"
  - latency: 1.819134ms
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:14Z"
  - latency: 192.237µs
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:13Z"
  - latency: 676.948µs
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:12Z"
  - latency: 256.643µs
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:11Z"
  - latency: 1.815262ms
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:10Z"
  - latency: 435.609µs
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:09Z"
  - latency: 1.409842ms
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:08Z"
  - latency: 2.111243ms
    message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
      succeeded'
    reason: TCPConnect
    success: true
    time: "2020-07-08T15:54:07Z"

Originally posted by @deads2k in #893 (comment)

get restrictedCIDRs from the cluster API

The installer now creates a Cluster.cluster.k8s.io/v1alpha1 object, which is the correct way for operators to determine information about IP space.

Instead of parsing the old tectonic installer configuration, the operator should determine its list of restricted CIDRs from this object instead.

Key rotation breaks cluster if installer fails

We saw this on a cluster built with CI images, which disappear after some hours. openshift-apiserver is dysfunctional with E0126 15:03:51.543533 1 authentication.go:62] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid.

$ kubectl get -n openshift-kube-apiserver pods
NAME                                                   READY     STATUS             RESTARTS   AGE
installer-1-ip-10-0-28-37.ec2.internal                 0/1       Completed          0          25h
installer-1-ip-10-0-34-137.ec2.internal                0/1       Completed          0          25h
installer-2-ip-10-0-28-37.ec2.internal                 0/1       Completed          0          25h
installer-2-ip-10-0-34-137.ec2.internal                0/1       Completed          0          25h
installer-2-ip-10-0-7-66.ec2.internal                  0/1       Completed          0          25h
installer-3-ip-10-0-28-37.ec2.internal                 0/1       Completed          0          25h
installer-3-ip-10-0-34-137.ec2.internal                0/1       Completed          0          25h
installer-4-ip-10-0-28-37.ec2.internal                 0/1       Completed          0          25h
installer-4-ip-10-0-34-137.ec2.internal                0/1       Completed          0          25h
installer-4-ip-10-0-7-66.ec2.internal                  0/1       Completed          0          25h
installer-5-ip-10-0-28-37.ec2.internal                 0/1       Completed          0          23h
installer-5-ip-10-0-34-137.ec2.internal                0/1       Completed          0          23h
installer-5-ip-10-0-7-66.ec2.internal                  0/1       Completed          0          23h
installer-6-ip-10-0-34-137.ec2.internal                0/1       ImagePullBackOff   0          21h
openshift-kube-apiserver-ip-10-0-28-37.ec2.internal    1/1       Running            0          23h
openshift-kube-apiserver-ip-10-0-34-137.ec2.internal   1/1       Running            0          23h
openshift-kube-apiserver-ip-10-0-7-66.ec2.internal     1/1       Running            0          23h
revision-pruner-0-ip-10-0-28-37.ec2.internal           0/1       Completed          0          25h
revision-pruner-0-ip-10-0-34-137.ec2.internal          0/1       Completed          0          25h
revision-pruner-0-ip-10-0-7-66.ec2.internal            0/1       Completed          0          25h
revision-pruner-3-ip-10-0-28-37.ec2.internal           0/1       Completed          0          25h
revision-pruner-3-ip-10-0-34-137.ec2.internal          0/1       Completed          0          25h
revision-pruner-4-ip-10-0-28-37.ec2.internal           0/1       Completed          0          25h
revision-pruner-4-ip-10-0-34-137.ec2.internal          0/1       Completed          0          25h
revision-pruner-4-ip-10-0-7-66.ec2.internal            0/1       Completed          0          25h
revision-pruner-5-ip-10-0-28-37.ec2.internal           0/1       Completed          0          23h
revision-pruner-5-ip-10-0-34-137.ec2.internal          0/1       Completed          0          23h
revision-pruner-5-ip-10-0-7-66.ec2.internal            0/1       Completed          0          23h
revision-pruner-6-ip-10-0-34-137.ec2.internal          0/1       ImagePullBackOff   0          21h

aggregator-client-signer got inconsistent cert validity period

In the following commit, the minimum cert validity periods changed:

  • signer certs got 60 days validity and 30 days refresh
  • target certs got 30 days validity and 15 days refresh

2afd461

However, the aggregator-client-signer got the 30 / 15 day one. This is a signer cert, so I think it should have 60 / 30 days.

kube-apiserver-cert-regeneration-controller in weird state

Hi,

I'm running 4.5.0-0.okd-2020-09-04-180756. During September, the cluster was turned off. After turning it back on, some certificates had expired; they were valid until Sep 29th. For example, kube-apiserver containers on masters are full of:

I1020 11:59:06.252656       1 controller.go:127] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue.
E1020 11:59:06.338161       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:07.540815       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:14.837577       1 reflector.go:178] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)
E1020 11:59:18.497351       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:22.366040       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:28.014356       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:30.063245       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:30.611518       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:32.988534       1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid

Together with @vrutkovs we did some debugging, and the cert-regeneration-controller seems to be stuck in some loop. I'm attaching logs from its container.

At the moment, oc get pods --all-namespaces shows only 12 pods in the Running state: a set of etcd, kube-apiserver, kube-controller-manager, and openshift-kube-scheduler for each of the 3 masters. All other pods are in the Pending state.

kube-apiserver-cert-regeneration-controller-logs.txt

@tnozicka , could you take a look?

Only one openshift-apiserver pod out of three is responding

On a GCP 4.0 cluster on CentOS, the apiserver runs on 3 masters, but only one pod is responding. This breaks openshift-kube-apiserver, which uses the apiserver to communicate; most of the requests are failing.

$ oc get pods -n openshift-apiserver -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP            NODE
apiserver-dz57l   1/1       Running   0          21m       10.131.0.44   vrutkovs-ig-m-4m83
apiserver-nrc9c   1/1       Running   1          3m        10.129.0.23   vrutkovs-ig-m-23z7
apiserver-zsl6n   1/1       Running   1          3m        10.130.0.23   vrutkovs-ig-m-42bb
# curl -kLvs https://10.129.0.23:8443/apis/apps.openshift.io/v1
<hangs>
# curl -kLvs https://10.130.0.23:8443/apis/apps.openshift.io/v1
<hangs>
# curl -kLvs https://10.131.0.44:8443/apis/apps.openshift.io/v1
<works immediately>

This works fine if I adjust ds/apiserver to match only one known-to-work master node. All masters are created from the same instance group and get the same firewall rules applied.

This doesn't seem to affect AWS, but should happen on libvirt with 3 masters

The kube-apiserver pods can't resolve internal cluster DNS names.

In OCP 4.2, the kube-apiserver pods are running with:

  1. hostNetwork: true
  2. dnsPolicy: ClusterFirst

As a result (I assume), the kube-apiserver pods are unable to resolve internal cluster DNS names, i.e. Kubernetes services (e.g. web-app.my-project.svc.cluster.local).

I think the valid dnsPolicy when hostNetwork: true is set should be dnsPolicy: ClusterFirstWithHostNet, which would resolve internal cluster names first and external names afterwards.

It's probably not an OpenShift bug, but more likely a cluster-kube-apiserver-operator bug, since that operator (I assume :) ) is the one deploying the kube-apiserver pods.
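The suggested combination would look like this in the pod spec (a sketch of the proposed change, not the operator's actual manifest):

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # resolve cluster names first, then external names
```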

add TCP connection status label

No status codes as it's simply a TCP connection. Would you like to see the kind of error somehow?

Yes. Not all failure modes are the same: connection refused, no route to host, dial error, timeout; they all mean different things.

Originally posted by @deads2k in #893

regenerate-certificates leaves too many CAs in csr-signer

Running cluster-kube-apiserver-operator regenerate-certificates on a failed master causes the /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-5/secrets/csr-signer/tls.crt file to contain two certificates. The controller-manager CSR signer then fails signing certificates with this error:

E0509 20:04:43.804526       1 controllermanager.go:541] Error starting "csrsigning"
F0509 20:04:43.804566       1 controllermanager.go:240] error starting controllers: failed to start certificate controller: error parsing CA cert file "/etc/kubernetes/static-pod-resources/secrets/cs
r-signer/tls.crt": {"code":1003,"message":"the PEM file should contain only one object"}

Manually removing the offending certificates from tls.crt and rebooting the node fixes the issue allowing the controller-manager to sign certificates.
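The manual fix described above can be sketched with a short shell snippet (file names and the placeholder certificate contents are hypothetical; in the real case the file is the csr-signer secret's tls.crt, and you should verify that the remaining certificate is the intended signer before replacing the original):

```shell
# Example input: a tls.crt that wrongly contains two certificates
# (placeholder contents stand in for the real PEM data).
printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' > tls.crt

# Keep only the first PEM certificate block.
awk '/BEGIN CERTIFICATE/{n++} n<=1' tls.crt > tls-single.crt

grep -c 'BEGIN CERTIFICATE' tls-single.crt   # → 1
```

If the remaining certificate looks correct, move the single-certificate file into place and reboot the node as described above.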

/cc @tnozicka

kube-apiserver has too many errored installer pods

  1. When something goes wrong in the environment, the errored installer pods are listed as follows:
openshift-kube-apiserver                           installer-24-retry-1-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-10-master2                                  0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-100-master2                                 0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-101-master2                                 0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-102-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-103-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-104-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-105-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-106-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-107-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-108-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-109-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-11-master2                                  0/1     Error              0                  2d10h
openshift-kube-apiserver                           installer-24-retry-110-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-111-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-112-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-113-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-114-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-115-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-116-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-117-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-118-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-119-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-12-master2                                  0/1     Error              0                  2d10h
openshift-kube-apiserver                           installer-24-retry-120-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-121-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-122-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-123-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-124-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-125-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-126-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-127-master2                                 0/1     Error              0                  2d
openshift-kube-apiserver                           installer-24-retry-128-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-129-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-13-master2                                  0/1     Error              0                  2d10h
openshift-kube-apiserver                           installer-24-retry-130-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-131-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-132-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-133-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-134-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-135-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-136-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-137-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-138-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-139-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-14-master2                                  0/1     Error              0                  2d10h
openshift-kube-apiserver                           installer-24-retry-140-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-141-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-142-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-143-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-144-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-145-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-146-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-147-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-148-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-149-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-15-master2                                  0/1     Error              0                  2d10h
openshift-kube-apiserver                           installer-24-retry-150-master2                                 0/1     Error              0                  47h
openshift-kube-apiserver                           installer-24-retry-151-master2                                 0/1     Completed          0                  47h
openshift-kube-apiserver                           installer-24-retry-16-master2                                  0/1     Error              0                  2d9h
openshift-kube-apiserver                           installer-24-retry-17-master2                                  0/1     Error              0                  2d9h
openshift-kube-apiserver                           installer-24-retry-18-master2                                  0/1     Error              0                  2d9h
openshift-kube-apiserver                           installer-24-retry-19-master2                                  0/1     Error              0                  2d9h
openshift-kube-apiserver                           installer-24-retry-2-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-20-master2                                  0/1     Error              0                  2d9h
openshift-kube-apiserver                           installer-24-retry-21-master2                                  0/1     Error              0                  2d8h
openshift-kube-apiserver                           installer-24-retry-22-master2                                  0/1     Error              0                  2d8h
openshift-kube-apiserver                           installer-24-retry-23-master2                                  0/1     Error              0                  2d8h
openshift-kube-apiserver                           installer-24-retry-24-master2                                  0/1     Error              0                  2d8h
openshift-kube-apiserver                           installer-24-retry-25-master2                                  0/1     Error              0                  2d8h
openshift-kube-apiserver                           installer-24-retry-26-master2                                  0/1     Error              0                  2d7h
openshift-kube-apiserver                           installer-24-retry-27-master2                                  0/1     Error              0                  2d7h
openshift-kube-apiserver                           installer-24-retry-28-master2                                  0/1     Error              0                  2d7h
openshift-kube-apiserver                           installer-24-retry-29-master2                                  0/1     Error              0                  2d7h
openshift-kube-apiserver                           installer-24-retry-3-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-30-master2                                  0/1     Error              0                  2d7h
openshift-kube-apiserver                           installer-24-retry-31-master2                                  0/1     Error              0                  2d6h
openshift-kube-apiserver                           installer-24-retry-32-master2                                  0/1     Error              0                  2d6h
openshift-kube-apiserver                           installer-24-retry-33-master2                                  0/1     Error              0                  2d6h
openshift-kube-apiserver                           installer-24-retry-34-master2                                  0/1     Error              0                  2d6h
openshift-kube-apiserver                           installer-24-retry-35-master2                                  0/1     Error              0                  2d6h
openshift-kube-apiserver                           installer-24-retry-36-master2                                  0/1     Error              0                  2d5h
openshift-kube-apiserver                           installer-24-retry-37-master2                                  0/1     Error              0                  2d5h
openshift-kube-apiserver                           installer-24-retry-38-master2                                  0/1     Error              0                  2d5h
openshift-kube-apiserver                           installer-24-retry-39-master2                                  0/1     Error              0                  2d5h
openshift-kube-apiserver                           installer-24-retry-4-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-40-master2                                  0/1     Error              0                  2d5h
openshift-kube-apiserver                           installer-24-retry-41-master2                                  0/1     Error              0                  2d4h
openshift-kube-apiserver                           installer-24-retry-42-master2                                  0/1     Error              0                  2d4h
openshift-kube-apiserver                           installer-24-retry-43-master2                                  0/1     Error              0                  2d4h
openshift-kube-apiserver                           installer-24-retry-44-master2                                  0/1     Error              0                  2d4h
openshift-kube-apiserver                           installer-24-retry-45-master2                                  0/1     Error              0                  2d3h
openshift-kube-apiserver                           installer-24-retry-46-master2                                  0/1     Error              0                  2d3h
openshift-kube-apiserver                           installer-24-retry-47-master2                                  0/1     Error              0                  2d3h
openshift-kube-apiserver                           installer-24-retry-48-master2                                  0/1     Error              0                  2d3h
openshift-kube-apiserver                           installer-24-retry-49-master2                                  0/1     Error              0                  2d3h
openshift-kube-apiserver                           installer-24-retry-5-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-50-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-51-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-52-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-53-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-54-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-55-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-56-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-57-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-58-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-59-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-6-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-60-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-61-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-62-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-63-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-64-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-65-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-66-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-67-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-68-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-69-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-7-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-70-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-71-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-72-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-73-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-74-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-75-master2                                  0/1     Error              0                  2d2h
openshift-kube-apiserver                           installer-24-retry-76-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-77-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-78-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-79-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-8-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-80-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-81-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-82-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-83-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-84-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-85-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-86-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-87-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-88-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-89-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-9-master2                                   0/1     Error              0                  2d11h
openshift-kube-apiserver                           installer-24-retry-90-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-91-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-92-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-93-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-94-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-95-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-96-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-97-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-98-master2                                  0/1     Error              0                  2d1h
openshift-kube-apiserver                           installer-24-retry-99-master2                                  0/1     Error              0                  2d1h
  1. I know this is a normal phenomenon, since the installer operator retries the installer pod when it fails, but maybe we should do something to reduce the number of errored pods left behind.

More details are required in the error message.

func (c *webhookSupportabilityController) updateWebhookConfigurationDegraded(ctx context.Context, condition operatorv1.OperatorCondition, webhookInfos []webhookInfo) v1helpers.UpdateStatusFunc {

Present error message: x509: certificate signed by unknown authority
It does not include which host or IP. If these details were provided, it would be easy to narrow the issue down to a particular webhook.

openshift-kube-apiserver logs are full of TLS handshake errors

It looks like it's coming from the pod network, but it is hard to say for sure.

I1116 23:07:53.364992       1 logs.go:49] http: TLS handshake error from 10.0.11.206:22630: EOF
I1116 23:07:53.468272       1 logs.go:49] http: TLS handshake error from 10.0.22.71:5446: EOF
I1116 23:07:53.485773       1 logs.go:49] http: TLS handshake error from 10.0.81.142:42664: EOF
I1116 23:07:53.571327       1 logs.go:49] http: TLS handshake error from 10.0.56.194:57970: EOF
I1116 23:07:53.601100       1 logs.go:49] http: TLS handshake error from 10.0.11.206:7721: EOF
I1116 23:07:53.947873       1 logs.go:49] http: TLS handshake error from 10.0.67.121:4099: EOF
I1116 23:07:54.203697       1 logs.go:49] http: TLS handshake error from 10.0.74.56:45909: EOF
I1116 23:07:54.309702       1 logs.go:49] http: TLS handshake error from 10.0.88.7:11357: EOF

https://storage.googleapis.com/origin-ci-test/pr-logs/pull/21497/pull-ci-openshift-origin-master-e2e-aws/857/artifacts/e2e-aws/pods/openshift-kube-apiserver_openshift-kube-apiserver-ip-10-0-47-95.ec2.internal_apiserver.log.gz

Possibly a misconfigured infrastructure component

SELinux denials with the audit log

I am receiving the following error on the master node (libvirt install, installer master:87ede7c78af) with the openshift-apiserver:

2019-02-20T16:46:10.271098459Z AUDIT: id="ac456318-4539-4261-90b3-4bc5d2fa938d" stage="ResponseComplete" ip="10.128.0.39" method="get" user="system:serviceaccount:openshift-cluster-samples-operator:cluster-samples-operator" groups="\"system:serviceaccounts\",\"system:serviceaccounts:openshift-cluster-samples-operator\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="openshift" uri="/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/fis-karaf-openshift" response="200"
E0220 16:46:10.495607       1 metrics.go:86] Error in audit plugin 'log' affecting 1 audit events: can't open new logfile: open /var/log/openshift-apiserver/audit.log: permission denied
Impacted events:
2019-02-20T16:46:10.474500316Z AUDIT: id="83d3addd-edea-419e-a1d2-c4e2ee8ee5e1" stage="ResponseComplete" ip="10.128.0.39" method="get" user="system:serviceaccount:openshift-cluster-samples-operator:cluster-samples-operator" groups="\"system:serviceaccounts\",\"system:serviceaccounts:openshift-cluster-samples-operator\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="openshift" uri="/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/rhdm72-decisioncentral-indexing-openshift" response="200"
E0220 16:46:10.672637       1 metrics.go:86] Error in audit plugin 'log' affecting 1 audit events: can't open new logfile: open /var/log/openshift-apiserver/audit.log: permission denied

Additional, journalctl contains numerous selinux errors:

Feb 20 16:52:49 test1-master-0 kernel: type=1400 audit(1550681568.989:17345): avc:  denied  { write } for  pid=30102 comm="hypershift" name="openshift-apiserver" dev="vda2" ino=31506996 scontext=system_u:system_r:container_t:s0:c898,c993 tcontext=system_u:object_r:container_log_t:s0 tclass=dir permissive=0
Feb 20 16:52:50 test1-master-0 kernel: type=1400 audit(1550681570.030:17346): avc:  denied  { write } for  pid=30102 comm="hypershift" name="openshift-apiserver" dev="vda2" ino=31506996 scontext=system_u:system_r:container_t:s0:c898,c993 tcontext=system_u:object_r:container_log_t:s0 tclass=dir permissive=0
Feb 20 16:52:50 test1-master-0 kernel: type=1400 audit(1550681570.049:17347): avc:  denied  { write } for  pid=30102 comm="hypershift" name="openshift-apiserver" dev="vda2" ino=31506996 scontext=system_u:system_r:container_t:s0:c898,c993 tcontext=system_u:object_r:container_log_t:s0 tclass=dir permissive=0

kube-apiserver rollout too long

  1. I have installed the OCP 4.12 release and found that the CKA-o, CKCM-o, and KKS-o operators also hit the revision-number-too-large problem, and the cluster was stuck for a long time before completing.

  2. The CKA-o logs show the following:

I0530 11:20:47.319151       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 129 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:21:30.867123       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 130 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:22:00.823064       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 131 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:22:40.096664       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 132 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:23:10.109621       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 132 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:23:16.670830       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 133 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:23:49.060741       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 134 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:24:27.827461       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 135 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:25:01.859358       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 136 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:25:36.061161       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 136 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:25:44.753436       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 137 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:26:30.119970       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 138 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:27:01.026656       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 138 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:28:33.450015       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 139 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:29:00.979907       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 139 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:29:15.181776       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 140 triggered by "secret/localhost-recovery-client-token has changed"
  1. The localhost-recovery-client-token secrets show as follows:
openshift-kube-apiserver                           localhost-recovery-client-token                                 kubernetes.io/service-account-token   4      145m
openshift-kube-apiserver                           localhost-recovery-client-token-123                             Opaque                                4      65m
openshift-kube-apiserver                           localhost-recovery-client-token-124                             Opaque                                4      64m
openshift-kube-apiserver                           localhost-recovery-client-token-125                             Opaque                                4      64m
openshift-kube-apiserver                           localhost-recovery-client-token-126                             Opaque                                4      63m
openshift-kube-apiserver                           localhost-recovery-client-token-127                             Opaque                                4      62m
openshift-kube-apiserver                           localhost-recovery-client-token-136                             Opaque                                4      51m
openshift-kube-apiserver                           localhost-recovery-client-token-137                             Opaque                                4      50m
openshift-kube-apiserver                           localhost-recovery-client-token-138                             Opaque                                4      50m
openshift-kube-apiserver                           localhost-recovery-client-token-139                             Opaque                                4      47m
openshift-kube-apiserver                           localhost-recovery-client-token-140                             Opaque                                4      47m
  1. When I change the operator log level to Debug, the log shows the following:
11:59:26.753590       1 revision_controller.go:178] Secret "localhost-recovery-client-token" changes for revision 244: {"data":{"ca.crt":"TU9ESUZJRUQ="},"metadata":{"annotations":{"kubernetes.io/service-account.name":"localhost-recovery-client","kubernetes.io/service-account.uid":"267c2116-4463-4566-b824-14dde1ea79f6"},"creationTimestamp":"2023-06-07T09:28:15Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/service-account.name":{}}},"f:type":{}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2023-06-07T09:28:15Z"},{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca.crt":{},"f:namespace":{},"f:service-ca.crt":{},"f:token":{}},"f:metadata":{"f:annotations":{"f:kubernetes.io/service-account.uid":{}}}},"manager":"kube-controller-manager","operation":"Update","time":"2023-06-07T11:59:26Z"}],"name":"localhost-recovery-client-token","ownerReferences":null,"resourceVersion":"4280369","uid":"12bebcd8-36cc-4e58-a131-9d8b4c5d11dd"},"type":"kubernetes.io/service-account-token"}
11:59:26.754532       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"27b80946-c3ed-4ab9-b2f4-154a9a74e9ae", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 245 triggered by "secret/localhost-recovery-client-token has changed"
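Incidentally, the `"ca.crt":"TU9ESUZJRUQ="` payload in the diff above appears to be a redaction placeholder rather than real certificate data (the revision controller seems to mask secret values before logging them). Decoding it makes that visible (Python, purely illustrative):

```python
import base64

# The logged value decodes to a placeholder, not certificate bytes.
placeholder = base64.b64decode("TU9ESUZJRUQ=").decode()
print(placeholder)  # MODIFIED
```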

Consume service-account.key instead of service-account.pub

Since openshift/installer@57127ec (coreos/tectonic-installer#3270), the installer sets a kube-apiserver secret in the kube-system namespace with both service-account.key and service-account.pub. We do the same for the openshift-apiserver secret. But the public key can be computed from the private key, so it would be nice not to have to set both in the same secret. It looks like the kube-controller-manager operator uses the private key (although I haven't found the code where it pulls it from the installer-generated secret); can we update this operator to use the private key too? Or should the installer set separate secrets for each operator? Or something else (again, the connections are not very clear to me ;)?
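The "public key can be computed from the private key" point is visible even in a textbook RSA sketch (toy numbers only, not real key handling): the public components (n, e) are simply a subset of the private key material, so a separate service-account.pub carries no extra information.

```python
# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                           # modulus (part of both keys)
e = 17                              # public exponent (also stored with the private key)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent
public_key = (n, e)                 # derivable by just dropping d
msg = 42
assert pow(pow(msg, e, n), d, n) == msg  # encrypt/decrypt round trip
print(public_key)  # (3233, 17)
```

Real implementations do the same derivation: a PEM-encoded RSA or ECDSA private key is enough to reconstruct and serialize the public key.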

How to change event-ttl of API Server

Hi,
I want to edit the event-ttl of the API server.

I found the official OCP 3.11 documentation:
https://docs.openshift.com/container-platform/3.11/install_config/master_node_configuration.html

apiServerArguments:
  event-ttl:
  - "15m"

In OCP 4.3, as in OCP 3.11, I think I can change event-ttl by editing the "kubeapiservers.operator.openshift.io" custom resource.

# oc get kubeapiservers.operator.openshift.io/cluster -o yaml
apiVersion: operator.openshift.io/v1
kind: KubeAPIServer
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2020-03-18T23:46:56Z"
  generation: 3
  name: cluster
  resourceVersion: "428767"
  selfLink: /apis/operator.openshift.io/v1/kubeapiservers/cluster
  uid: 927aefc8-04e6-437b-9965-5404dd5da7e8
spec:
  logLevel: ""
  managementState: Managed
  observedConfig:    
    apiServerArguments:
      feature-gates:
      - RotateKubeletServerCertificate=true
      - SupportPodPidsLimit=true
      - NodeDisruptionExclusion=true
      - ServiceNodeExclusion=true
      - SCTPSupport=true
      - LegacyNodeRoleBehavior=false

Is my understanding right?
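Worth noting: `spec.observedConfig` is written by the operator's configuration observer, so manual edits there tend to be overwritten on the next sync. If an argument really must be forced, the usual (explicitly unsupported) escape hatch on the same resource is `spec.unsupportedConfigOverrides`; a sketch, assuming this operator honors the override the same way other library-go based operators do:

```yaml
# kubeapiservers.operator.openshift.io/cluster — unsupported override,
# merged over the observed config; not covered by support.
spec:
  unsupportedConfigOverrides:
    apiServerArguments:
      event-ttl:
      - 15m
```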

observedconfig of kubeapiserver operator

Hi,

This is more of a query and not an issue. I am not sure if this is the right forum.

Here is the query :

We are building security policies in our application that validate whether the OpenShift configuration in use adheres to the security practices listed under the "CIS Red Hat OpenShift Container Platform Benchmark".
One of the policies validates that the --token-auth-file parameter is not set. As per the CIS benchmark document, here are the steps to verify that --token-auth-file is not set.

Verify that the token-auth-file flag is not present

Step 1: oc get configmap config -n openshift-kube-apiserver -ojson | jq -r '.data["config.yaml"]' | jq '.apiServerArguments'
Step 2: oc get configmap config -n openshift-apiserver -ojson | jq -r '.data["config.yaml"]' | jq '.apiServerArguments'
Step 3: oc get kubeapiservers.operator.openshift.io cluster -o json | jq '.spec.observedConfig.apiServerArguments'

Steps 1 and 2 seem pretty straightforward. My query is about how to validate Step 3 from our policy. Note that, for our application, we only intend to validate the static configuration/deployment files of OpenShift/Kubernetes; the policies do not directly query the running OpenShift cluster.

Query 1
What does Step 3 verify?
Does it look up '.spec.observedConfig.apiServerArguments' in a configuration/deployment file on a running cluster,
(or)
does it inspect the response of an API invoked on the running cluster?

Given our use case, can we just look up the /spec/observedConfig/apiServerArguments field in the KubeAPIServer custom resource?
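If the manifest is available as a file (e.g. the output of the Step 3 `oc get ... -o json` command captured at deploy time), the check can be evaluated offline without touching the cluster. A minimal sketch in Python, using a hypothetical sample manifest:

```python
import json

# Hypothetical exported KubeAPIServer manifest (trimmed for illustration).
manifest = """
{
  "apiVersion": "operator.openshift.io/v1",
  "kind": "KubeAPIServer",
  "spec": {
    "observedConfig": {
      "apiServerArguments": {
        "feature-gates": ["RotateKubeletServerCertificate=true"]
      }
    }
  }
}
"""

# Walk spec.observedConfig.apiServerArguments defensively; missing keys
# should not crash the policy check.
args = (json.loads(manifest)
        .get("spec", {})
        .get("observedConfig", {})
        .get("apiServerArguments", {}))
compliant = "token-auth-file" not in args
print(compliant)  # True for this sample
```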

Query 2
How can we check, from the configuration/deployment file of the kube-apiserver operator, whether it is a cluster operator? Or is it safe to assume that the kubeapiserver resource is always considered a cluster-level operator?
