
Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

Home Page: https://karmada.io

License: Apache License 2.0

Go 96.14% Shell 3.03% Makefile 0.08% Dockerfile 0.03% Mustache 0.31% Smarty 0.41%
cloud-computing cloud-native containers k8s kubernetes multi-cluster multicloud

karmada's Introduction

Karmada

Karmada-logo

LICENSE Releases Slack CII Best Practices OpenSSF Scorecard build Go Report Card codecov FOSSA Status Artifact HUB CLOMonitor

Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.

Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.

cncf_logo

Karmada is an incubation project of the Cloud Native Computing Foundation (CNCF).

Why Karmada:

  • K8s Native API Compatible

    • Zero change upgrade, from single-cluster to multi-cluster
    • Seamless integration of existing K8s tool chain
  • Out of the Box

    • Built-in policy sets for scenarios, including: Active-active, Remote DR, Geo Redundant, etc.
    • Cross-cluster application auto-scaling, failover, and load-balancing across multiple clusters.
  • Avoid Vendor Lock-in

    • Integration with mainstream cloud providers
    • Automatic allocation, migration across clusters
    • Not tied to proprietary vendor orchestration
  • Centralized Management

    • Location agnostic cluster management
    • Support clusters in Public cloud, on-prem or edge
  • Fruitful Multi-Cluster Scheduling Policies

    • Cluster Affinity, Multi Cluster Splitting/Rebalancing
    • Multi-Dimension HA: Region/AZ/Cluster/Provider
  • Open and Neutral

    • Jointly initiated by Internet, finance, manufacturing, telecom, and cloud providers, etc.
    • Target for open governance with CNCF

Notice: this project is developed in continuation of Kubernetes Federation v1 and v2. Some basic concepts are inherited from these two versions.

Architecture

Architecture

The Karmada Control Plane consists of the following components:

  • Karmada API Server
  • Karmada Controller Manager
  • Karmada Scheduler

ETCD stores the Karmada API objects, the API Server is the REST endpoint all other components talk to, and the Karmada Controller Manager performs operations based on the API objects you create through the API server.

The Karmada Controller Manager runs the various controllers; the controllers watch Karmada objects and then talk to the underlying clusters' API servers to create regular Kubernetes resources.

  1. Cluster Controller: attaches Kubernetes clusters to Karmada and manages their lifecycle by creating Cluster objects.
  2. Policy Controller: the controller watches PropagationPolicy objects. When a PropagationPolicy is added, it selects the group of resources matching its resourceSelector and creates a ResourceBinding for each single resource object.
  3. Binding Controller: the controller watches ResourceBinding objects and creates a Work object for each target cluster, containing a single resource manifest.
  4. Execution Controller: the controller watches Work objects. When a Work object is created, it distributes the contained resources to the member cluster (an example Work object is sketched below).
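
For illustration, the snippet below is a hedged sketch of what such a Work object might look like. The spec.workload.manifests layout and the exact object names are assumptions; the execution namespace naming (karmada-es-<cluster>) follows the labels discussed later on this page.

apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  name: default-deployment-nginx        # hypothetical Work name
  namespace: karmada-es-member1         # execution namespace for cluster member1
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx
          namespace: default
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx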

Concepts

Resource template: Karmada uses the Kubernetes native API definition as the federated resource template, making it easy to integrate with existing tools that have already adopted Kubernetes.
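
For example, a resource template is just an ordinary Kubernetes manifest. The Deployment below is a minimal sketch in the spirit of the samples/nginx example shipped with the repo (the actual sample may differ in details) and can be submitted to the Karmada API server unchanged:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx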

Propagation Policy: Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements.

  • Supports a 1:n mapping of policy to workloads, so users don't need to specify scheduling constraints every time they create federated applications (see the example below).
  • With default policies, users can just interact with the K8s API.
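
As an illustration, the hypothetical policy below (not one of the shipped samples) covers every Deployment in a namespace with one PropagationPolicy, so individual workloads need no per-resource placement settings:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: default-deployments
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment       # matches all Deployments in this namespace
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2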

Override Policy: Karmada provides a standalone Override Policy API for specializing cluster-relevant configuration automation, for example (see the example below):

  • Override the image prefix according to the member cluster region
  • Override the StorageClass according to the cloud provider
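
For instance, the hypothetical policy below rewrites the container image of an nginx Deployment for resources propagated to a particular cluster; the field layout follows the OverridePolicy examples shown later on this page:

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-region-override
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  targetCluster:
    clusterNames:
      - member1                          # e.g. the cluster in region A
  overriders:
    plaintext:
      - path: "/spec/template/spec/containers/0/image"
        operator: replace
        value: "registry-a.example.com/nginx:1.19.0"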

The following diagram shows how Karmada resources are involved when propagating resources to member clusters.

karmada-resource-relation

Quick Start

This guide will cover:

  • Installing the Karmada control plane components in a Kubernetes cluster, known as the host cluster.
  • Joining a member cluster to the Karmada control plane.
  • Propagating an application by using Karmada.

Prerequisites

  • Go version v1.22.4+
  • kubectl version v1.19+
  • kind version v0.14.0+

Install the Karmada control plane

1. Clone this repo to your machine:

git clone https://github.com/karmada-io/karmada

2. Change to the karmada directory:

cd karmada

3. Deploy and run Karmada control plane:

Run the following script:

hack/local-up-karmada.sh

This script will do the following tasks for you:

  • Start a Kubernetes cluster to run the Karmada control plane, a.k.a. the host cluster.
  • Build the Karmada control plane components from the current codebase.
  • Deploy the Karmada control plane components on the host cluster.
  • Create member clusters and join them to Karmada.

If everything goes well, at the end of the script output you will see messages similar to the following:

Local Karmada is running.

To start using your Karmada environment, run:
  export KUBECONFIG="$HOME/.kube/karmada.config"
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.

To manage your member clusters, run:
  export KUBECONFIG="$HOME/.kube/members.config"
Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.

There are two contexts in Karmada:

  • karmada-apiserver kubectl config use-context karmada-apiserver
  • karmada-host kubectl config use-context karmada-host

The karmada-apiserver is the main kubeconfig to be used when interacting with the Karmada control plane, while karmada-host is only used for debugging Karmada installation with the host cluster. You can check all clusters at any time by running: kubectl config view. To switch cluster contexts, run kubectl config use-context [CONTEXT_NAME]

Demo

Demo

Propagate application

In the following steps, we are going to propagate a deployment with Karmada.

1. Create nginx deployment in Karmada.

First, create a deployment named nginx:

kubectl create -f samples/nginx/deployment.yaml

2. Create a PropagationPolicy that will propagate nginx to the member clusters

Then, we need to create a policy to propagate the deployment to our member cluster.

kubectl create -f samples/nginx/propagationpolicy.yaml

3. Check the deployment status from Karmada

You can check the deployment status from Karmada; there is no need to access the member clusters:

$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           20s

Kubernetes compatibility

Kubernetes 1.16 Kubernetes 1.17 Kubernetes 1.18 Kubernetes 1.19 Kubernetes 1.20 Kubernetes 1.21 Kubernetes 1.22 Kubernetes 1.23 Kubernetes 1.24 Kubernetes 1.25 Kubernetes 1.26 Kubernetes 1.27 Kubernetes 1.28 Kubernetes 1.29
Karmada v1.7
Karmada v1.8
Karmada v1.9
Karmada HEAD (master)

Key:

  • ✓ Karmada and the Kubernetes version are exactly compatible.
  • + Karmada has features or API objects that may not be present in the Kubernetes version.
  • - The Kubernetes version has features or API objects that Karmada can't use.

Meeting

Regular Community Meeting:

Resources:

Contact

If you have questions, feel free to reach out to us in the following ways:

Talks and References

KubeCon(EU 2021) Beyond federation: automating multi-cloud workloads with K8s native APIs
KubeCon(EU 2022) Sailing Multi Cloud Traffic Management With Karmada
KubeDay(Israel 2023) Simplifying Multi-cluster Kubernetes Management with Karmada
KubeCon(China 2023) Multi-Cloud Multi-Cluster HPA Helps Trip.com Group Deal with Business Downturn and Rapid Recovery
KubeCon(China 2023) Break Through Cluster Boundaries to Autoscale Workloads Across Them on a Large Scale
KubeCon(China 2023) Cross-Cluster Traffic Orchestration with eBPF
KubeCon(China 2023) Non-Intrusively Enable OpenKruise and Argo Workflow in a Multi-Cluster Federation

For blogs, please refer to website.

Contributing

If you're interested in being a contributor and want to get involved in developing the Karmada code, please see CONTRIBUTING for details on submitting patches and the contribution workflow.

License

Karmada is under the Apache 2.0 license. See the LICENSE file for details.

karmada's People

Contributors

a7i, carlory, chaosi-zju, chaunceyjiang, chenxianpao, dddddai, dependabot[bot], fish-pro, garrybest, gy95, huone1, iawia002, ikaven1024, jwcesign, karmada-bot, kevin-wangzefeng, lfbear, liangyuanpeng, lonelycz, mrlihanbo, my-git9, pigletfly, rainbowmango, wawa0210, whitewindmills, wuyingjun-lucky, xishanyongye-chang, yanfeng1992, yike21, zhzhuang-zju


karmada's Issues

Invalid PropagationPolicy could cause infinite reconcile

If the PropagationPolicy is invalid, for example one with an invalid resourceSelector, it will cause infinite reconciliation.

We should do both of the following:

  1. add validation to prevent invalid configs from being created
  2. for fields that are hard to validate, distinguish the different error cases rather than always reconciling

bootstrap script should be able to rebuild clusters in case clusters already exist

What would you like to be added:
The karmada-bootstrap.sh should be able to remove residual clusters before creating them:

#step1. create host cluster and member clusters in parallel
util::create_cluster ${HOST_CLUSTER_NAME} ${HOST_CLUSTER_KUBECONFIG} ${CLUSTER_VERSION} ${KIND_LOG_FILE}
util::create_cluster ${MEMBER_CLUSTER_1_NAME} ${MEMBER_CLUSTER_1_KUBECONFIG} ${CLUSTER_VERSION} ${KIND_LOG_FILE}
util::create_cluster ${MEMBER_CLUSTER_2_NAME} ${MEMBER_CLUSTER_2_KUBECONFIG} ${CLUSTER_VERSION} ${KIND_LOG_FILE}

Maybe we can check for existence with the following commands:

# kind delete cluster --name="abc"
Deleting cluster "abc" ...
# echo $?
0
# kind get clusters 
karmada-host
member1
member2
# kind get nodes --name karmada-host
karmada-host-control-plane

Why is this needed:
The karmada-bootstrap.sh script is currently used by CI; if CI fails while running E2E, the clusters are left behind and block the next run:

ERROR: failed to create cluster: node(s) already exist for a cluster with the name "karmada-host"

Replace hardcoded "applied" condition type with constant values

We have defined Applied condition type as a constant for PropagationWork at:

// WorkApplied represents that the resource defined in PropagationWork is
// successfully applied on the managed cluster.
WorkApplied string = "Applied"

We should use this constant instead of hardcoded strings; two hardcoded occurrences were found:
One at:

propagationWorkApplied := "Applied"

The other at:

if condition.Type == "Applied" {

API group and kind improvement umbrella issue

To simplify the API and improve API self-explanation, we need to change the API groups and kinds as follows:

Policy API Group:

Work API group

  • The group name: work.karmada.io (@RainbowMango #195)
  • Kinds under this group:
    • ResourceBinding, which is the replacement of current PropagationBinding (@RainbowMango #193)
    • ClusterResourceBinding, indicates the binding relationship between ClusterPropagationPolicy and a cluster scoped resource (@RainbowMango #197)
    • Work, which is the new name and place of PropagationWork (@RainbowMango #169)

Cluster API Group

The API group for member cluster management relevant APIs

  • The group name, use cluster.karmada.io instead of current membercluster.karmada.io (@kevin-wangzefeng #139)
  • Kinds under this group:
  • Add integration support of Kubernetes Cluster API, need more idea input.

Network API Group

The API group for network-relevant APIs, including multi-cluster service discovery, multi-cluster load balancing, multi-cluster ingress, etc.

  • The group name: network.karmada.io
  • Kinds under this group:
    • TBD

Scheduling: Automatic migration in case of cluster failure

What would you like to be added:
When one of the member clusters fails, all resources propagated to that cluster should be migrated to other available clusters.

Why is this needed:

Design Proposal
TBD

Q&A

  • Is there a feature gate for this feature?

  • What if there are no other available clusters? How should we report status and retry?

Default scheduler should be declared

What would you like to be added:
Set the default scheduler name default-scheduler on PropagationPolicy and ClusterPropagationPolicy objects:

Why is this needed:
Mandatory if we want to support a third-party scheduler.

PS:
// +kubebuilder:default="default-scheduler"
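
With that default in place, a policy might look like the hypothetical example below; it assumes the field is exposed as spec.schedulerName, which a third-party scheduler could then override:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
  namespace: default
spec:
  schedulerName: default-scheduler   # expected to be defaulted when omitted
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1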

ImageOverrider Implementation

Originally posted by @RainbowMango in #199 (comment)

Given that our original requirement is to override the image registry, I propose another overrider, named ImageOverrider, which is dedicated to handling images. Hopefully this will be easier and more user-friendly.

Note: the ImageOverrider isn't meant to reject the solutions discussed above, such as RegexOverrider or PrefixOverrider; we can continue that discussion when there is a clear requirement in the future, such as overriding arbitrary string-typed fields.

// OverrideSpec defines the desired behavior of OverridePolicy.
type OverrideSpec struct {
	// ResourceSelectors restricts resource types that this override policy applies to.
	ResourceSelectors []ResourceSelector `json:"resourceSelectors,omitempty"`

	// TargetCluster defines restrictions on this override policy
	// that only applies to resources propagated to the matching clusters
	TargetCluster ClusterAffinity `json:"targetCluster,omitempty"`

	// Overriders represents the override rules that would apply on resources
	Overriders Overriders `json:"overriders,omitempty"`
}

// Overriders offers various alternatives to represent the override rules.
//
// If more than one alternatives exist, they will be applied with following order:
// - ImageOverrider
// - Plaintext
type Overriders struct {
	// Plaintext represents override rules defined with plaintext overriders.
	// +optional
	Plaintext []PlaintextOverrider `json:"plaintext,omitempty"`

	// ImageOverrider represents the rules dedicated to handling image overrides.
	// +optional
	ImageOverrider []ImageOverrider `json:"imageOverrider,omitempty"`
}

// ImageOverrider represents the rules dedicated to handling image overrides.
type ImageOverrider struct {
	// Predicate filters images before applying the rule.
	//
	// Defaults to nil, in that case, the system will automatically detect image fields if the resource type is
	// Pod, ReplicaSet, Deployment or StatefulSet by following rule:
	//   - Pod: spec/containers/<N>/image
	//   - ReplicaSet: spec/template/spec/<N>/image
	//   - Deployment: spec/template/spec/<N>/image
	//   - StatefulSet: spec/template/spec/<N>/image
	// In addition, all images will be processed if the resource object has more than one container.
	//
	// If not nil, only images matches the filters will be processed.
	// +optional
	Predicate *ImagePredicate `json:"predicate,omitempty"`

	// Component is part of image name.
	// Basically we presume an image can be made up of '[registry/]repository[:tag]'.
	// The registry could be:
	// - k8s.gcr.io
	// - fictional.registry.example:10443
	// The repository could be:
	// - kube-apiserver
	// - fictional/nginx
	// The tag could be:
	// - latest
	// - v1.19.1
	// - @sha256:dbcc1c35ac38df41fd2f5e4130b32ffdb93ebae8b3dbe638c23575912276fc9c
	//
	// +kubebuilder:validation:Enum=Registry;Repository;Tag
	// +required
	Component ImageComponent `json:"component"`

	// Operator represents the operator which will apply on the image.
	// +kubebuilder:validation:Enum=add;remove;replace
	// +required
	Operator OverriderOperator `json:"operator"`

	// Value to be applied to image.
	// Must not be empty when operator is 'add' or 'replace'.
	// Defaults to empty and ignored when operator is 'remove'.
	// +optional
	Value string `json:"value,omitempty"`
}

// Predicate describes images filter.
type ImagePredicate struct {
	// Path indicates the path of target field
	// +required
	Path string `json:"path"`
}

// ImageComponent indicates the components for image.
type ImageComponent string

const (
	// Registry is the registry component of an image with format '[registry/]repository[:tag]'.
	Registry ImageComponent = "Registry"

	// Repository is the repository component of an image with format '[registry/]repository[:tag]'.
	Repository ImageComponent = "Repository"

	// Tag is the tag component of an image with format '[registry/]repository[:tag]'.
	Tag ImageComponent = "Tag"
)

Cleanup 'karmadaClient' from controllers

What would you like to be added:

Each controller already has a controller-manager Client which could be used to operate both Karmada APIs and Kubernetes APIs.

We should remove the redundant KarmadaClient from controllers:

I'd prefer to do it with separate PRs; people who are interested in this task, feel free to reply to this issue.

Namespace Autoprovision Feature

Background
For a set of related clusters governed by a single authority, all namespaces of a given name are considered to be the same namespace. A single namespace should have a consistent owner across the set of clusters.

What would you like to be added:

  • New namespace created in Karmada should be synced to all member clusters. (#173)

E2E Test:

  • When a new namespace is created in Karmada, the namespace should be synced to all member clusters. (#183)
  • When a namespace is removed from Karmada, the namespace should be removed from all member clusters. (#183)
  • When a new member cluster joins Karmada, the namespaces in Karmada should be synced to the new member cluster. (#183)

Kubernetes and Karmada reserved namespaces should be ignored.

  • kube-public
  • kube-system
  • kube-node-lease
  • default
  • karmada-system
  • karmada-cluster
  • karmada-es-*

Why is this needed:

Override supports sed-syntax value replacement.

What would you like to be added:

We suggest adding an OverriderOperator type:

OverriderOpSed     OverriderOperator = "sed"

With this type, users can replace part of a value differently in different member clusters.

Why is this needed:

Users may set different values for the same field of a resource with an OverridePolicy in different member clusters. For example, the image URL of Deployment resources varies depending on the cluster.

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: example-override
  namespace: default
spec:
  # restrict resource types that this override policy applies to
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment     # dc-1.registry.io/nginx:1.17.0-alpine
  targetCluster:
    clusterNames:           
      - dc-2-cluster-1
  overriders:
    plaintext:
    - path: "/spec/template/spec/containers/0/image"
      operator: sed
      value: "s/.*.registry.io/dc-2.registry.io/g"    #  sed string

One implementation demo:

import "github.com/rwtodd/Go.Sed/sed"

// obj is target resource
func parseJSONPatch(obj *unstructured.Unstructured, overriders []policyv1alpha1.PlaintextOverrider) ([]overrideOption, error) {
	patches := make([]overrideOption, 0, len(overriders))
	for _, overrider := range overriders {
		switch overrider.Operator {
		case policyv1alpha1.OverriderOpSed:
			overrideValue, ok, err := unstructured.NestedString(obj.Object, overrider.Path)
			if err != nil {
				return nil, err
			}
			if !ok {
				return nil, fmt.Errorf("path %s is not found in obj", overrider.Path)
			}

			engine, _ := sed.New(strings.NewReader(overrider.Value.String()))
			replaceValue, _ := engine.RunString(overrideValue)

			patches = append(patches, overrideOption{
				Op:    string(policyv1alpha1.OverriderOpReplace),
				Path:  overrider.Path,
				Value: replaceValue,
			})
		default:
			patches = append(patches, overrideOption{
				Op:    string(overrider.Operator),
				Path:  overrider.Path,
				Value: overrider.Value,
			})
		}
	}

	return patches, nil
}

A regular expression requires two fields to implement the replacement function: one for matching and the other for replacing. However, only one "value" field can be used, so we use sed syntax for the replacement.

Support differences in the same resource between multiple clusters

What would you like to be added:

This is a useful ability that may NOT be necessary for everyone. It could be a piece of configuration that declares the differences in a resource (e.g. a Deployment) between multiple clusters. IMO it could be some annotations attached to the resource's YAML in the Karmada cluster; Karmada would then render the per-cluster differences of the resource.

Why is this needed:

There are many differences in most services between multiple clusters, e.g. different boot parameters, different volume mounts, different CPU or memory quotas...
Though such differences should NOT exist, they do. Not every engineer designs their program for multiple clusters.

Add E2E cases for Replica Scheduling Policy feature

What would you like to be added:
We implemented ReplicaSchedulingPolicy feature by #269, but there are no E2E cases that cover this feature.

Maybe we need two cases:

  • Basic functionality: a Deployment's replicas can be split by the static weight list referenced by a ReplicaSchedulingPolicy.
  • Once the weight list changes, the replicas should be rescheduled.

PS: There is some discussion at #192 that can help to understand this feature.

Need your help:
If you are interested in this task or have any questions, please feel free to reply to this issue.

Cleanup labels from resource template after policy removed

What would you like to be added:
Once a resource template is propagated by a policy, such as a PropagationPolicy, we add labels to the resource template, e.g.:

- labels:
    propagationpolicy.karmada.io/name: nginx-propagation
    propagationpolicy.karmada.io/namespace: default

We should remove the labels after the policy is removed, as tracked here:

// TODO(RainbowMango): cleanup original resource's label.

// TODO(RainbowMango): cleanup original resource's label.

Why is this needed:

  1. Cleanup the relationship between resource template and policy.
  2. Give the resource template a chance to match another policy.

Resource Detector Feature

What would you like to be added:
Request a resource detector feature which keeps monitoring resource changes and syncs them to member clusters according to policies.

Why is this needed:
Resource propagation should be triggered on two factors:

  • Resource change in Karmada apiserver
  • Policy change

Feature Scope

  • As the user, I want to propagate resources immediately once created in Karmada apiserver, so that ...
  • As the user, I want to propagate resources immediately once changed in Karmada apiserver, so that ...

Question

  • When a policy is removed, should we remove the resources it is associated with?

Iteration

  • Make AsyncWorker reusable (#174)
  • Add resource detector which watches all resource changes in Karmada apiserver. (#176 )
  • When a resource is created, find the matching PropagationPolicy (#180)
  • When a resource is updated, update the PropagationBinding (#180)
  • When a resource is removed, remove the binding (#180)
  • Record isolated resources that don't match any policy (#200)
  • When a PropagationPolicy is added, trigger propagation of the isolated resources. (#202)
  • When a PropagationPolicy is deleted, remove the ResourceBinding and isolate the relevant resources. (#203)
  • When a PropagationPolicy is deleted, throw the relevant resources to the queue

Installation using local-up-karmada.sh fails

What happened:
[root@c2 /opt/gopath/src]$git clone https://github.com/karmada-io/karmada
[root@c2 /opt/gopath/src]$cd karmada
[root@c2 /opt/gopath/src/karmada]$hack/local-up-karmada.sh
Error: unknown flag: --kubeconfig

What you expected to happen:
install success

How to reproduce it (as minimally and precisely as possible):
just exec “hack/local-up-karmada.sh”
Anything else we need to know?:

Environment:

  • Karmada version: master
  • Others:

Karmada should support using a proxy to access the apiserver

What would you like to be added:
Karmada should support using a proxy to access the apiserver, for example via the "proxy-url" parameter in kubeconfig.
Why is this needed:
In edge-computing scenarios, Karmada runs in the cloud while the apiserver is at the edge, so Karmada needs a proxy to access the apiserver.
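
A minimal sketch of the kind of kubeconfig this would rely on, assuming the standard proxy-url field in the cluster entry is honored when Karmada dials the member apiserver (names and addresses are placeholders):

apiVersion: v1
kind: Config
clusters:
  - name: member1
    cluster:
      server: https://edge-apiserver.example.com:6443
      proxy-url: http://proxy.example.com:8080
users:
  - name: member1-admin
    user: {}
contexts:
  - name: member1
    context:
      cluster: member1
      user: member1-admin
current-context: member1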

reflecting object status to Binding object

What would you like to be added:
After reflecting object status to Work, the status should be shown in ResourceBinding as well so that the user can get a full view of object status across all member clusters.

Proposal:

  • API change
// ResourceBindingStatus represents the overall status of the strategy as well as the referenced resources.
type ResourceBindingStatus struct {
	// Conditions contain the different condition statuses.
	// +optional
	Conditions []metav1.Condition `json:"conditions,omitempty"`
	// AggregatedStatus represents status list of the resource running in each member cluster.
	// +optional
	AggregatedStatus []AggregatedStatusItem `json:"aggregatedStatus,omitempty"`
}

// AggregatedStatusItem represents status of the resource running in a member cluster.
type AggregatedStatusItem struct {
	// ClusterName represents the member cluster name which the resource deployed on.
	ClusterName string `json:"clusterName"`

	// Status reflects running status of current manifest.
	// +kubebuilder:pruning:PreserveUnknownFields
	Status runtime.RawExtension `json:",inline"`
}

AggregatedStatus used to store status across all member clusters.

  • Alternative 1 (go with this option):
    Watch Work objects, collect their status, and update the Binding objects.

  • Alternative 2:
    Have a goroutine periodically collect status from Work objects and update the Binding objects.

Questions

  • TBD

[GoodFirst] fix hack/tools/tools.go gofmt issue

What would you like to be added:
hack/tools/tools.go is not gofmt-ed (not sure why golangci-lint doesn't catch it).

We usually group import paths in the order of std packages --> third-party packages --> karmada packages.
So the fix would be:

 import (
-       _ "k8s.io/code-generator"
        _ "github.com/onsi/ginkgo/ginkgo"
+       _ "k8s.io/code-generator"
 )

Please reply to this issue directly if you want to take this task. :)

Reschedule when 'PropagationPolicies' changed

What would you like to be added:

Reschedule when 'PropagationPolicies' change; e.g. when the 'clusterAffinity' in the 'propagationpolicies' CRD changes, the expected change should happen in the target clusters.

Why is this needed:

Sometimes the cluster affinity of a resource needs to be adjusted, and it should be handled in a way that keeps the "declarative" rule.

😄 This issue may bring lots of commit changes

Add E2E cases for PropagationPolicy/ClusterPropagationPolicy matching with nil resourceSelectors

What would you like to be added:

We implemented PropagationPolicy/ClusterPropagationPolicy matching for the case where more than one policy matches; in #306 we added:

		if policy.Spec.ResourceSelectors == nil {
			matchedPolicies = append(matchedPolicies, policy)
			continue
		}

but we don't have an E2E case for this.

Maybe we need two cases (a hypothetical policy is sketched below for reference):

  • PropagationPolicy with nil resourceSelectors: how it works on resources.
  • ClusterPropagationPolicy with nil resourceSelectors: how it works on resources.
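
For reference, such a policy simply omits resourceSelectors; the hypothetical example below assumes the API allows the field to be left unset, as the matching code above implies:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: match-everything
  namespace: default
spec:
  # no resourceSelectors: per the matching logic above, this policy becomes a
  # candidate for any resource in its namespace
  placement:
    clusterAffinity:
      clusterNames:
        - member1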

Why is this needed:

If you are interested in this task or have any questions, please feel free to reply to this issue.

Cannot see the failing pod when talking to the Karmada control plane

What would you like to be added:
I'm new to Karmada and tried the quick start.
Everything works well, but the final deployment does not work as expected; I mean the relevant pod does not start.

I had no idea what I was missing until I talked to the member1 control plane directly and saw the pod in ImagePullBackOff because of Docker Hub's rate limiting.

Why is this needed:
I don't know, maybe it's by design, but I guess it would be better if users could see what happened from the Karmada control plane.

[GoodFirst] Help fix a typo

There is a typo here.

karmadda --> karmada

- You can check deployment status from karmadda, don't need to access member cluster:
+ You can check deployment status from karmada, don't need to access member cluster:

ReplicaSchedulingPolicy Feature

User case:

  • As a user, I have 2 clusters (cluster1, cluster2) and want to run 100 replicas for my deployment; each cluster has a static weight preference.
  • As a user, I have 2 clusters (cluster1, cluster2) and want to run 100 replicas for my deployment; each cluster has a dynamic weight preference, and the system manages the weights according to cluster capacity and load.

Proposal:

  • API:
// ReplicaSchedulingPolicy represents the policy that propagates total number of replicas for deployment.
type ReplicaSchedulingPolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Spec represents the desired behavior of ReplicaSchedulingPolicy.
	Spec ReplicaSchedulingSpec `json:"spec"`
}

// ReplicaSchedulingSpec represents the desired behavior of ReplicaSchedulingPolicy.
type ReplicaSchedulingSpec struct {
	// ResourceSelectors used to select resources.
	// +required
	ResourceSelectors []ResourceSelector `json:"resourceSelectors"`

	// TotalReplicas represents the total number of replicas across member clusters.
	// The replicas(spec.replicas) specified for deployment template will be discarded.
	// +required
	TotalReplicas int32 `json:"totalReplicas"`

	// Preferences describes weight for each cluster or for each group of cluster.
	// +required
	Preferences ClusterPreferences `json:"preferences"`
}

// ClusterPreferences describes weight for each cluster or for each group of cluster.
type ClusterPreferences struct {
	// StaticWeightList defines the static cluster weight.
	// +optional
	StaticWeightList []StaticClusterWeight `json:"staticWeightList,omitempty"`

	// DynamicWeightList defines the dynamic cluster weight which is maintained by the system according to cluster capacity
	// or load, etc.
	// +optional
	DynamicWeightList []DynamicClusterWeight `json:"dynamicWeightList,omitempty"`
}

// StaticClusterWeight defines the static cluster weight.
type StaticClusterWeight struct {
	// TargetCluster describes the filter to select clusters.
	// +required
	TargetCluster ClusterAffinity `json:"targetCluster,omitempty"`

	// Weight expressing the preference to the cluster(s) specified by 'TargetCluster'.
	// Defaults to 0.
	// +optional
	Weight int64 `json:"weight,omitempty"`
}

// DynamicClusterWeight defines the dynamic cluster weight which is maintained by the system according to cluster capacity
// or load, etc.
type DynamicClusterWeight struct {
	// TODO
}
  • Example: specify static weight by cluster names:
apiVersion: policy.karmada.io/v1alpha1
kind: ReplicaSchedulingPolicy
metadata:
  name: foo
  namespace: foons
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      namespace: foons
      name: deployment-1
  totalReplicas: 100
  preferences:
    staticWeightList:
      - targetCluster:
          clusterNames: [cluster1]
        weight: 1
      - targetCluster:
          clusterNames: [cluster2]
        weight: 3

Clusters in [cluster1] get 25% of the replicas, that is 25.
Clusters in [cluster2] get 75% of the replicas, that is 75.

    • Example: specify static weight by cluster labels:
apiVersion: policy.karmada.io/v1alpha1
kind: ReplicaSchedulingPolicy
metadata:
  name: foo
  namespace: foons
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      namespace: foons
      name: deployment-1
  totalReplicas: 100
  preferences:
    staticWeightList:
      - targetCluster:
          labelSelector:
            matchLabels:
              location: us
        weight: 1
      - targetCluster:
          labelSelector:
            matchLabels:
              location: cn
        weight: 3

Clusters with the label location=us get 25% of the replicas, that is 25.
Clusters with the label location=cn get 75% of the replicas, that is 75.

Iteration Items
TBD

Request for makefile endpoint: 'make update' and 'make verify'

What would you like to be added:
We have more than one hack/update-xxx.sh script and there will be more in the future, so we should add an update endpoint to the Makefile, which will make development more convenient.

In the same way, we should add a verify endpoint for the hack/verify-xxx.sh scripts.

E2E test for cluster proxy

What would you like to be added:
We need E2E tests covering access to member clusters through a proxy.

Please refer to #307

Why is this needed:

Karmada API server unexpectedly assigned cluster IP for service

What happened:
When creating a Service in the Karmada API server, a cluster IP is unexpectedly assigned.

What you expected to happen:
Usually, the user won't assign spec.clusterIP for a Service resource; it will be allocated by the member cluster according to its system configuration.

The Karmada API server should not assign spec.clusterIP for Service resources, otherwise there will be a conflict when propagating to member clusters.

How to reproduce it (as minimally and precisely as possible):

  1. Create a service with the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  2. Get the service with the command kubectl get service nginx-service -o yaml:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-01-07T07:37:05Z"
  name: nginx-service
  namespace: default
  resourceVersion: "1260466"
  selfLink: /api/v1/namespaces/default/services/nginx-service
  uid: 621679d7-1076-4793-9598-9c86e4163868
spec:
  clusterIP: 10.104.183.232
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

The spec.clusterIP has been unexpectedly set.

Anything else we need to know?:

Environment:

  • Karmada version: v0.1.0
  • Others:

do not use Kind to install karmada

What would you like to be added:
Can I install Karmada without kind? I already have a running cluster; can I directly install Karmada in this cluster? I found no documentation on this.
Why is this needed:
When I installed Karmada with kind, I had a lot of network problems and it was very unfriendly.

[Failing] namespace autoprovision case failing

What happened:
CI(namespace autoprovision) failed: https://github.com/karmada-io/karmada/pull/273/checks?check_run_id=2378212107

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-04-19T06:41:29Z"
  deletionTimestamp: "2021-04-19T06:41:44Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: e2e.test
    operation: Update
    time: "2021-04-19T06:41:29Z"
  name: karmada-e2e-ns-vmd
  resourceVersion: "499"
  selfLink: /api/v1/namespaces/karmada-e2e-ns-vmd
  uid: 9e68240b-62b9-4997-91da-8c2d50eb0372
spec:
  finalizers:
  - kubernetes
status:
  phase: Terminating

The namespace from karmada-apiserver is hanging in Terminating status.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Karmada version:
  • Others:

karmadactl should add version command

What would you like to be added:
Currently, we can't check the karmadactl version, which is necessary information for things like debugging.

-bash-4.2# karmadactl version
Error: unknown command "version" for "karmadactl"
Run 'karmadactl --help' for usage.
unknown command "version" for "karmadactl"

Why is this needed:

Request for bootstrap performance optimization

What would you like to be added:
In the v0.2.0 release that just passed, each CI loop takes about 15 minutes, which we need to optimize.

Given that most of the time is taken by the E2E test, to be more precise, we should focus on optimizing the hack/karmada-bootstrap.sh script.

Some ideas addressed here:

  • Start kind clusters in parallel.
  • Image builds could be started before the kind clusters are ready.

[HelpWanted] ClusterResourceBinding should only reference cluster-scoped resources

What would you like to be added:
When applying cluster policy for a resource template, we should build ResourceBinding for namespace-scoped resources, and build ClusterResourceBinding for cluster-scoped resources.

But currently, we always build ClusterResourceBinding regardless of the resource scope. The relevant code is at:

binding := d.BuildClusterResourceBinding(object, objectKey, policy)
bindingCopy := binding.DeepCopy()
operationResult, err := controllerutil.CreateOrUpdate(context.TODO(), d.Client, bindingCopy, func() error {
// Just update necessary fields, especially avoid modifying Spec.Clusters which is scheduling result, if already exists.
bindingCopy.Labels = binding.Labels
bindingCopy.OwnerReferences = binding.OwnerReferences
bindingCopy.Spec.Resource = binding.Spec.Resource
return nil
})

Why is this needed:
ResourceBinding is designed for binding namespace scoped resources and ClusterResourceBinding is designed for binding cluster scoped resources.

Optimize installation flow

What would you like to be added:

  1. Split the installation script for different purposes: a stand-alone cluster (for production) or a kind cluster (for development)
  2. No longer use exporting different KUBECONFIG values to switch between clusters in the install script

Why is this needed:

  1. Like #303, newcomers may not be familiar with kind when they want to make a production installation, so we should make it clear at the installation stage
  2. Use a native way for kubectl to switch between multiple clusters

Resource identifier enhancement

Background:
Each resource propagated to a member cluster gets a label with the format karmada.io/created-by:<work namespace>.<work name> to indicate its owner (Work). E.g.:

karmada.io/created-by: karmada-es-member1.default-deployment-nginx

When reflecting resource status, we can locate the Work by the label. That works fine in normal situations.

Exceptional Case
If a member cluster joins more than one Karmada with different names, a Karmada instance can't tell whether a specific resource is managed by itself. The reproduce steps would be:

  1. The member cluster joins Karmada-A with the name member-a;
    all resources propagated to the member cluster hold the label karmada.io/created-by:karmada-es-member-a.xxx
  2. The same member cluster joins Karmada-B with the name member-b;
    all resources propagated to the member cluster hold the label karmada.io/created-by:karmada-es-member-b.xxx

In that case, a panic would happen as #170 addressed.

What would you like to be added:
The panic could easily be avoided as in #170, but more important, from Karmada's point of view, is how to ignore resources that are not managed by itself.

Each Karmada system should have a unique ID to distinguish it from the others, and all resources propagated by that Karmada system would carry a label with the ID.

karmada.io/id=xxxx

Meanwhile, the Work identifier label could be made more meaningful:

karmada.io/created-by: karmada-es-xxx.xxx
  -->
work.karmada.io/namespace=karmada-es-xxx
work.karmada.io/name=xxx

Why is this needed:

Proposal

  • Deprecate karmada.io/created-by
    • Binding controller
    • HPA controller
    • namespace controller
    • execution controller (#191)

Resync OverridePolicy changes

This issue address resync override policies discussions.

What would you like to be added:

Propose two annotations for workload:

// AppliedOverrides is the annotation which used to record override items an object applied.
// It is intended to set on Work objects to record applied overrides.
// The overrides items should be sorted alphabetically in ascending order by OverridePolicy's name.
AppliedOverrides = "policy.karmada.io/applied-overrides"

// AppliedClusterOverrides is the annotation which used to record override items an object applied.
// It is intended to set on Work objects to record applied overrides.
// The overrides items should be sorted alphabetically in ascending order by ClusterOverridePolicy's name.
AppliedClusterOverrides = "policy.karmada.io/applied-cluster-overrides"

Example:

 -annotations:
    policy.karmada.io/applied-overrides: '[{"policyName":"example-override","overriders":{"plaintext":[{"path":"/metadata/annotations","operator":"add","value":{"foo":"bar"}}]}}]'

Why is this needed:

Cleanup the old configuration file of the cluster when cluster creating is failed

What happened:
When a cluster fails to start and is recreated, the console shows that the configuration file of the cluster already exists.

What you expected to happen:
The script should delete the old configuration file (/root/.kube/.config).

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:
aliyun ecs 2c4g

  • Karmada version:
  • 4.0
  • Others:
  • centos 7/kubelet 1.20.5/kind 0.10.0

Use github package to host container images

What would you like to be added:
GitHub Packages now offers container image hosting; we can upload container images of the Karmada components to GitHub Packages.

Why is this needed:

karmada apiserver not working "Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.31.22.72"

What happened:
karmada apiserver not working:

kubectl get pod -n karmada-system
NAME                                 READY   STATUS    RESTARTS   AGE
etcd-0                               1/1     Running   0          22m
karmada-apiserver-6b584c9c5f-np7x8   0/1     Running   9          22m

error log:

E0430 08:11:36.017203       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.31.22.72, ResourceVersion: 0, AdditionalErrorMsg:

and then:

E0430 08:13:01.377221       1 controller.go:184] Get "https://localhost:5443/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:5443: connect: connection refused

full log:

[root@ip-172-31-26-35 karmada]# kubectl logs karmada-apiserver-6b584c9c5f-np7x8 -n karmada-system
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0430 08:11:32.296285       1 server.go:625] external host was not specified, using 172.31.22.72
I0430 08:11:32.296698       1 server.go:163] Version: v1.19.1
I0430 08:11:32.602652       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0430 08:11:32.602673       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0430 08:11:32.604266       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0430 08:11:32.604286       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0430 08:11:32.606443       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.606483       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.623739       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.623773       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.634991       1 client.go:360] parsed scheme: "passthrough"
I0430 08:11:32.635065       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}] <nil> <nil>}
I0430 08:11:32.635081       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0430 08:11:32.636058       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.636082       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.676577       1 master.go:271] Using reconciler: lease
I0430 08:11:32.677159       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.677191       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.690920       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.690960       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.703373       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.703412       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.717252       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.717286       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.728719       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.728749       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.740588       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.740614       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.752142       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.752169       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.763561       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.763587       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.775192       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.775225       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.786999       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.787030       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.798432       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.798458       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.809798       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.809823       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.822164       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.822195       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.833477       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.833499       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.845716       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.845740       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.856831       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.856857       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.867977       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.868007       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.879287       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.879319       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.983195       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.983242       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.996808       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.996838       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.008392       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.008425       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.020165       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.020191       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.031695       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.031723       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.043045       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.043075       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.054359       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.054387       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.065905       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.065936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.079155       1 client.go:360] parsed scheme: "endpoint"
[root@ip-172-31-26-35 karmada]# kubectl logs karmada-apiserver-6b584c9c5f-np7x8 -n karmada-system -f
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0430 08:11:32.296285       1 server.go:625] external host was not specified, using 172.31.22.72
I0430 08:11:32.296698       1 server.go:163] Version: v1.19.1
I0430 08:11:32.602652       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0430 08:11:32.602673       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0430 08:11:32.604266       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0430 08:11:32.604286       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0430 08:11:32.606443       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.606483       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.623739       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.623773       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.634991       1 client.go:360] parsed scheme: "passthrough"
I0430 08:11:32.635065       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}] <nil> <nil>}
I0430 08:11:32.635081       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0430 08:11:32.636058       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.636082       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:32.676577       1 master.go:271] Using reconciler: lease
I0430 08:11:32.677159       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:32.677191       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
(... the 'parsed scheme: "endpoint"' / 'ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]' pair above repeats dozens of times with successive timestamps; the identical lines are omitted here for brevity ...)
W0430 08:11:33.687084       1 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
W0430 08:11:33.700964       1 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0430 08:11:33.717048       1 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0430 08:11:33.735834       1 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0430 08:11:33.738950       1 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0430 08:11:33.753816       1 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0430 08:11:33.772530       1 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W0430 08:11:33.772553       1 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I0430 08:11:33.784099       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0430 08:11:33.784126       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0430 08:11:33.787476       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.787507       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.808138       1 client.go:360] parsed scheme: "endpoint"
I0430 08:11:33.808185       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}]
I0430 08:11:33.828550       1 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:8080
I0430 08:11:36.002815       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/server-ca.crt
I0430 08:11:36.002867       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/server-ca.crt
I0430 08:11:36.003263       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/karmada.crt::/etc/kubernetes/pki/karmada.key
I0430 08:11:36.003690       1 secure_serving.go:197] Serving securely on [::]:5443
I0430 08:11:36.003863       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0430 08:11:36.003958       1 available_controller.go:404] Starting AvailableConditionController
I0430 08:11:36.003965       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0430 08:11:36.003998       1 autoregister_controller.go:141] Starting autoregister controller
I0430 08:11:36.004004       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0430 08:11:36.004222       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0430 08:11:36.004253       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/etc/kubernetes/pki/karmada.crt::/etc/kubernetes/pki/karmada.key
I0430 08:11:36.004278       1 controller.go:83] Starting OpenAPI AggregationController
I0430 08:11:36.004358       1 controller.go:86] Starting OpenAPI controller
I0430 08:11:36.004379       1 naming_controller.go:291] Starting NamingConditionController
I0430 08:11:36.004398       1 establishing_controller.go:76] Starting EstablishingController
I0430 08:11:36.004416       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0430 08:11:36.004422       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0430 08:11:36.004427       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0430 08:11:36.004441       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0430 08:11:36.004459       1 crd_finalizer.go:266] Starting CRDFinalizer
I0430 08:11:36.004462       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0430 08:11:36.004469       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0430 08:11:36.004513       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0430 08:11:36.004520       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0430 08:11:36.004547       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/server-ca.crt
I0430 08:11:36.004568       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/server-ca.crt
E0430 08:11:36.017203       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.31.22.72, ResourceVersion: 0, AdditionalErrorMsg:
I0430 08:11:36.104040       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0430 08:11:36.104067       1 cache.go:39] Caches are synced for autoregister controller
I0430 08:11:36.104557       1 shared_informer.go:247] Caches are synced for crd-autoregister
I0430 08:11:36.104646       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0430 08:11:36.104681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0430 08:11:37.002894       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0430 08:11:37.003076       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0430 08:11:37.007046       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0430 08:12:06.636055       1 client.go:360] parsed scheme: "passthrough"
I0430 08:12:06.636097       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}] <nil> <nil>}
I0430 08:12:06.636109       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0430 08:12:40.686850       1 client.go:360] parsed scheme: "passthrough"
I0430 08:12:40.686898       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://etcd-client.karmada-system.svc.cluster.local:2379  <nil> 0 <nil>}] <nil> <nil>}
I0430 08:12:40.686909       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0430 08:13:01.374473       1 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0430 08:13:01.374507       1 dynamic_cafile_content.go:182] Shutting down request-header::/etc/kubernetes/pki/server-ca.crt
I0430 08:13:01.374513       1 controller.go:123] Shutting down OpenAPI controller
I0430 08:13:01.374535       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
I0430 08:13:01.374551       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0430 08:13:01.374563       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0430 08:13:01.374575       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0430 08:13:01.374592       1 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0430 08:13:01.374606       1 establishing_controller.go:87] Shutting down EstablishingController
I0430 08:13:01.374608       1 dynamic_cafile_content.go:182] Shutting down request-header::/etc/kubernetes/pki/server-ca.crt
I0430 08:13:01.374617       1 naming_controller.go:302] Shutting down NamingConditionController
I0430 08:13:01.374620       1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/etc/kubernetes/pki/server-ca.crt
I0430 08:13:01.374628       1 crd_finalizer.go:278] Shutting down CRDFinalizer
I0430 08:13:01.374637       1 controller.go:89] Shutting down OpenAPI AggregationController
I0430 08:13:01.374639       1 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0430 08:13:01.374649       1 dynamic_serving_content.go:145] Shutting down aggregator-proxy-cert::/etc/kubernetes/pki/karmada.crt::/etc/kubernetes/pki/karmada.key
I0430 08:13:01.374653       1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/etc/kubernetes/pki/server-ca.crt
I0430 08:13:01.374659       1 available_controller.go:416] Shutting down AvailableConditionController
I0430 08:13:01.374511       1 secure_serving.go:241] Stopped listening on 127.0.0.1:8080
I0430 08:13:01.374712       1 dynamic_serving_content.go:145] Shutting down serving-cert::/etc/kubernetes/pki/karmada.crt::/etc/kubernetes/pki/karmada.key
I0430 08:13:01.374724       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController
I0430 08:13:01.374651       1 autoregister_controller.go:165] Shutting down autoregister controller
I0430 08:13:01.374889       1 secure_serving.go:241] Stopped listening on [::]:5443
E0430 08:13:01.377221       1 controller.go:184] Get "https://localhost:5443/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:5443: connect: connection refused

What you expected to happen:

The karmada-apiserver should be running and ready.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Karmada version:
master branch

member cluster name should be validated

What would you like to be added:
The name of a MemberCluster should be validated before it is created.
The name should follow the DNS label standard as defined in RFC 1123 (a minimal validation sketch follows the list below):

  • contain at most 63 characters
  • contain only lowercase alphanumeric characters or '-'
  • start with an alphanumeric character
  • end with an alphanumeric character
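
A minimal sketch of such a check, assuming the standard apimachinery helper validation.IsDNS1123Label; this is illustrative only, not the actual karmadactl join implementation:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateClusterName rejects any member cluster name that is not a valid
// DNS-1123 label (at most 63 characters, lowercase alphanumerics or '-',
// starting and ending with an alphanumeric character).
func validateClusterName(name string) error {
	if errs := validation.IsDNS1123Label(name); len(errs) > 0 {
		return fmt.Errorf("invalid cluster name %q: %v", name, errs)
	}
	return nil
}

func main() {
	fmt.Println(validateClusterName("1.1"))              // rejected: '.' is not allowed in a DNS-1123 label
	fmt.Println(validateClusterName("member-cluster-1")) // <nil>
}

Running a check like this during the join process would surface the error before the derived namespace creation fails.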

Tasks for this can be split as follows:

Why is this needed:
Currently, the cluster name is not checked during the join process, but it becomes part of a namespace name.

For example, given 1.1 as the cluster name, we cannot create a namespace with it:

The Namespace "1.1" is invalid: metadata.name: Invalid value: "1.1": a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name',  or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')

one question

Hi, I have one question.

  • In the 2021 Q4 roadmap item "Multi-cluster monitoring & Logging", does this mean supporting querying logs from, or exec-ing into, pods in a work (member) cluster from the Karmada control plane?

scale and quota guide

What would you like to be added:
As a user, I want to know how many clusters, nodes, and pods Karmada can support, and what the recommended resource quotas are for Karmada at different scales.
Why is this needed:
User guide for performance.

Name and label validation requirement

What would you like to be added:

Why is this needed:

Proposal

  • The resource binding name should be consistent regardless of whether it is propagated by a PropagationPolicy or a ClusterPropagationPolicy. (@yangcheng-icbc #248)

The following two functions should be combined.

// GenerateBindingName will generate binding name by namespace, kind and name
func GenerateBindingName(namespace, kind, name string) string {
	return strings.ToLower(namespace + "-" + kind + "-" + name)
}

// GenerateClusterResourceBindingName will generate ClusterResourceBinding name by kind and name
func GenerateClusterResourceBindingName(kind, name string) string {
	return strings.ToLower(kind + "-" + name)
}

Meanwhile, the binding name for any resource template should follow a "name-kind" format.
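
A minimal sketch of what the combined helper could look like, assuming the proposed "name-kind" format; the function name and signature are illustrative, not the actual Karmada code:

package names

import "strings"

// generateBindingName builds a binding name in the proposed "name-kind"
// format. The same helper can serve both ResourceBinding and
// ClusterResourceBinding, since a namespaced binding already carries the
// resource's namespace on the binding object itself.
func generateBindingName(kind, name string) string {
	return strings.ToLower(name + "-" + kind)
}

With a single helper, the names produced for ResourceBinding and ClusterResourceBinding stay consistent, and the length constraints listed below apply uniformly.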

  • For resources whose names will appear in labels, their name and namespace should each be no more than 63 characters. (@yangcheng-icbc #249)
    • PropagationPolicy/ClusterPropagationPolicy
    • ResourceBinding/ClusterResourceBinding (not necessary, as it is an internal API)
    • Work (not necessary, as it is an internal API)
  • For any workload whose name is part of a Binding name, the name should be no more than 63 - len(kind) characters.

Support skip member cluster TLS verification

What would you like to be added:

  • Add an optional field to MemberClusterSpec to indicate whether the Karmada control plane should skip verifying the member cluster's TLS certificate; a rough sketch follows this list. (@RainbowMango #156)
  • Update the karmadactl join process to preserve the member cluster's InsecureSkipTLSVerify setting from its kubeconfig file. (@mrlihanbo #159)
  • Update the controller process to apply the new field when building the member cluster client. (@mrlihanbo #159)
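
A rough sketch of the proposed change, with assumed field and helper names (MemberClusterSpec.InsecureSkipTLSVerify, buildMemberClusterConfig); it only illustrates how the flag could be carried through to the client-go rest.Config:

package cluster

import "k8s.io/client-go/rest"

// MemberClusterSpec sketches the proposed optional field.
type MemberClusterSpec struct {
	// APIEndpoint is the member cluster's API server address.
	APIEndpoint string `json:"apiEndpoint,omitempty"`
	// InsecureSkipTLSVerify tells the control plane to skip verifying the
	// member cluster's serving certificate when set to true.
	InsecureSkipTLSVerify bool `json:"insecureSkipTLSVerify,omitempty"`
}

// buildMemberClusterConfig shows how a controller could honor the field when
// constructing the member cluster client configuration.
func buildMemberClusterConfig(spec MemberClusterSpec, bearerToken string) *rest.Config {
	return &rest.Config{
		Host:        spec.APIEndpoint,
		BearerToken: bearerToken,
		TLSClientConfig: rest.TLSClientConfig{
			Insecure: spec.InsecureSkipTLSVerify,
		},
	}
}

If karmadactl join preserves insecure-skip-tls-verify from the kubeconfig into such a field, the health check below would no longer fail on the certificate mismatch.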

Why is this needed:
When joining a member cluster with a kubeconfig file that tells the client to skip TLS verification via insecure-skip-tls-verify, e.g.

{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "internalCluster",
      "cluster": {
        "server": "https://100.94.28.77:5443",
        "insecure-skip-tls-verify": true
      }
    }
  ]
}

The Karmada control plane, acting as the member cluster's client, should preserve this setting. Otherwise, the Karmada control plane may fail to contact the member cluster:

E0111 10:07:43.686938      13 membercluster_status_controller.go:149] Failed to do cluster health check for cluster membercluster1, err is : Get "https://100.94.28.77:5443/readyz?timeout=32s": x509: certificate is valid for 10.0.0.131, 10.0.0.4, 10.0.0.138, 10.0.0.184, 127.0.0.1, 10.247.0.1, 10.247.0.2, not 100.94.28.77 

bootstrap welcome message is wrong

What happened:
After running hack/karmada-bootstrap.sh, I get an unexpected welcome message:

Local Karmada is running.
To start using your karmada, run:
  export KUBECONFIG=/root/.kube/member3.config

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Karmada version: v0.4+
  • Others:

cc @mrlihanbo

Cleanup status from resource template after policy removed

What would you like to be added:
After the binding object has been removed, the status on the resource template should be cleaned up.

// TODO(RainbowMango): cleanup status in resource template that current binding object refers to.

// TODO(RainbowMango): cleanup status in resource template that current binding object refers to.
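
A rough sketch of what this TODO could do, with assumed helper and parameter names rather than Karmada's actual controller code; it fetches the resource template through a dynamic client, strips its .status, and writes it back via the status subresource:

package cleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// cleanupResourceTemplateStatus removes the status that was aggregated onto
// the resource template once its binding object has been deleted.
func cleanupResourceTemplateStatus(ctx context.Context, client dynamic.Interface,
	gvr schema.GroupVersionResource, namespace, name string) error {
	obj, err := client.Resource(gvr).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	unstructured.RemoveNestedField(obj.Object, "status")
	_, err = client.Resource(gvr).Namespace(namespace).UpdateStatus(ctx, obj, metav1.UpdateOptions{})
	return err
}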

Why is this needed:
Cleaning up outdated status avoids ambiguity.
