open-cluster-management-io / cluster-proxy

An OCM addon that automates the installation of Kubernetes' konnectivity servers and agents.

License: Apache License 2.0

Languages: Go 96.64% · Makefile 2.37% · Dockerfile 0.99%
Topic: kubernetes

cluster-proxy's People

Contributors

champly, elgnay, haoqing0110, ivan-cai, mikeshng, qiujian16, rokibulhasan7, skeeey, tamalsaha, xuezhaojun, yue9944882, zhiweiyin318, zhujian7


cluster-proxy's Issues

Support access not only to the kube-apiserver but also to other services in the managed cluster

Is it possible for cluster-proxy to support the above feature?

Currently, the proxy-agent directs all traffic to the kube-apiserver through this ExternalName-type Service:

func newClusterService(namespace, name string) *corev1.Service {
	const nativeKubernetesInClusterService = "kubernetes.default.svc.cluster.local"
	return &corev1.Service{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "v1",
			Kind:       "Service",
		},
		ObjectMeta: metav1.ObjectMeta{
			Namespace: namespace,
			Name:      name,
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: nativeKubernetesInClusterService,
		},
	}
}

But if we do the following:

  1. Add the target service's hostname to the proxy-agent's "agent-identifiers" flag. This ensures requests are routed to the desired managed cluster through the tunnel.
  2. Add another ExternalName Service that points to the target service.

Then the target service on the managed cluster could be accessed by users on the hub as well.
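As a sketch of step 2, the additional Service could look like the following (the service name and namespaces here are illustrative placeholders, not from the codebase):

```yaml
# Hypothetical example: expose a second in-cluster service through the tunnel.
# "my-target-service" and "my-namespace" are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-target-service-proxy
  namespace: open-cluster-management-cluster-proxy
spec:
  type: ExternalName
  externalName: my-target-service.my-namespace.svc.cluster.local
```

For step 1, the corresponding hostname would also be appended to the proxy-agent's "agent-identifiers" flag so the tunnel knows to route requests for it.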

Observed flakiness: FAIL: TestAPIs

curl -sSLo /home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/testbin/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.8.3/hack/setup-envtest.sh
source /home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/testbin/setup-envtest.sh; \
	fetch_envtest_tools /home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/testbin; \
	setup_envtest_env /home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/testbin; \
	go test ./pkg/... -coverprofile cover.out
fetching envtest [email protected] (into '/home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/testbin')
kubebuilder/bin/
kubebuilder/bin/etcd
kubebuilder/bin/kubectl
kubebuilder/bin/kube-apiserver
setting up env vars
?   	open-cluster-management.io/cluster-proxy/pkg/apis	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/apis/proxy	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/apis/proxy/v1alpha1	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/common	[no test files]
ok  	open-cluster-management.io/cluster-proxy/pkg/config	0.003s	coverage: 100.0% of statements
?   	open-cluster-management.io/cluster-proxy/pkg/generated/clientset/versioned	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/clientset/versioned/fake	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/clientset/versioned/scheme	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/clientset/versioned/typed/proxy/v1alpha1	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/clientset/versioned/typed/proxy/v1alpha1/fake	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/informers/externalversions	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/informers/externalversions/internalinterfaces	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/informers/externalversions/proxy	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/informers/externalversions/proxy/v1alpha1	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/generated/listers/proxy/v1alpha1	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/proxyagent/agent	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/proxyagent/health	[no test files]
Running Suite: Controller Suite
===============================
Random Seed: 1649251107
Will run 2 of 2 specs

• Failure [90.156 seconds]
ClusterManagementAddon Controller
/home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/pkg/proxyserver/controllers/clustermanagementaddon_controller_test.go:19
  Deploy proxy server
  /home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/pkg/proxyserver/controllers/clustermanagementaddon_controller_test.go:86
    Should have a proxy server deployed correctly with default config [It]
    /home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/pkg/proxyserver/controllers/clustermanagementaddon_controller_test.go:87

    Timed out after 90.000s.
    Expected success, but got an error:
        <*errors.errorString | 0xc0004d5d00>: {
            s: "managedproxy not ready",
        }
        managedproxy not ready

    /home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/pkg/proxyserver/controllers/clustermanagementaddon_controller_test.go:102
------------------------------
1.6492511135521789e+09	INFO	Starting server	{"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
1.6492511135526588e+09	INFO	controller.clustermanagementaddon	Starting EventSource	{"reconciler group": "addon.open-cluster-management.io", "reconciler kind": "ClusterManagementAddOn", "source": "kind source: *v1alpha1.ClusterManagementAddOn"}
1.6492511135526774e+09	INFO	controller.clustermanagementaddon	Starting EventSource	{"reconciler group": "addon.open-cluster-management.io", "reconciler kind": "ClusterManagementAddOn", "source": "kind source: *v1alpha1.ManagedProxyConfiguration"}
1.649251113552685e+09	INFO	controller.clustermanagementaddon	Starting Controller	{"reconciler group": "addon.open-cluster-management.io", "reconciler kind": "ClusterManagementAddOn"}
1.6492511136533926e+09	INFO	controller.clustermanagementaddon	Starting workers	{"reconciler group": "addon.open-cluster-management.io", "reconciler kind": "ClusterManagementAddOn", "worker count": 1}
1.6492511136534724e+09	INFO	ClusterManagementAddonReconciler	Start reconcile	{"name": "cluster-proxy"}
1.6492511136535134e+09	INFO	ClusterManagementAddonReconciler	Cannot find proxy-configuration	{"name": "cluster-proxy-config"}
1.6492512037021577e+09	INFO	ClusterManagementAddonReconciler	Start reconcile	{"name": "cluster-proxy"}
1.6492512037021875e+09	INFO	ClusterManagementAddonReconciler	Cannot find cluster-addon	{"name": "cluster-proxy"}
•


Summarizing 1 Failure:

[Fail] ClusterManagementAddon Controller Deploy proxy server [It] Should have a proxy server deployed correctly with default config 
/home/runner/work/cluster-proxy/cluster-proxy/go/src/open-cluster-management.io/cluster-proxy/pkg/proxyserver/controllers/clustermanagementaddon_controller_test.go:102

Ran 2 of 2 Specs in 97.461 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 0 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

  You are using a custom reporter.  Support for custom reporters will likely be removed in V2.  Most users were using them to generate junit or teamcity reports and this functionality will be merged into the core reporter.  In addition, Ginkgo 2.0 will support emitting a JSON-formatted report that users can then manipulate to generate custom reports.

  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5

--- FAIL: TestAPIs (97.46s)
FAIL
coverage: 62.6% of statements
FAIL	open-cluster-management.io/cluster-proxy/pkg/proxyserver/controllers	97.493s
?   	open-cluster-management.io/cluster-proxy/pkg/proxyserver/operator/authentication	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/proxyserver/operator/authentication/selfsigned	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/proxyserver/operator/eventhandler	[no test files]
?   	open-cluster-management.io/cluster-proxy/pkg/util	[no test files]
FAIL
make: *** [Makefile:63: test] Error 1

proxy-agent didn't register but the `Available` status shows `True`

Platform: macOS + Kind
Version: the latest code

What I have done:

After cluster-proxy was installed, I found the addon reported as installed successfully on the hub:

➜  cluster-proxy-test-env k get managedclusteraddons.addon.open-cluster-management.io -A                        
NAMESPACE   NAME            AVAILABLE   DEGRADED   PROGRESSING
cluster1    cluster-proxy   True 

But on the managed cluster, the addon-agent logs continuously show the following error:

E0104 11:16:21.900558       1 portforward.go:79] error handling connection: pods is forbidden: User "open-cluster-management:cluster-proxy:addon-agent" cannot list resource "pods" in API group "" in the namespace "open-cluster-management-addon" 

It seems we should grant the proper permissions to the user "open-cluster-management:cluster-proxy:addon-agent", but I checked the code and found no relevant handling yet.
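A minimal sketch of the missing grant, assuming a namespaced Role is enough (all resource names here are illustrative, not from the codebase):

```yaml
# Illustrative only: allow the addon-agent user to get/list/watch pods
# in the addon namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-proxy-addon-agent-pods
  namespace: open-cluster-management-addon
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-proxy-addon-agent-pods
  namespace: open-cluster-management-addon
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-proxy-addon-agent-pods
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: open-cluster-management:cluster-proxy:addon-agent
```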

Would you mind taking a look at this one? @yue9944882

Add unit tests

We need more unit tests to ensure code quality.

Addon-agent container uses image tag `latest` instead of a fixed version

Hi guys!

I have a problem with the addon-agent container: it uses the wrong image, tagged `latest` instead of a fixed version. I checked the code and it seems all right, but unfortunately, when I deploy cluster-proxy on a managed cluster, the deployment's containers have these values:

containers:
      - args:
        - --proxy-server-host=127.0.0.1
        - --proxy-server-port=8091
        - --agent-identifiers=host=dev&host=dev.open-cluster-management-cluster-proxy&host=dev.open-cluster-management-cluster-proxy.svc.cluster.local
        - --ca-cert=/etc/ca/ca.crt
        - --agent-cert=/etc/tls/tls.crt
        - --agent-key=/etc/tls/tls.key
        command:
        - /proxy-agent
        image: quay.io/open-cluster-management/cluster-proxy:v0.2.0
        imagePullPolicy: IfNotPresent
        name: proxy-agent
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/ca
          name: ca
          readOnly: true
        - mountPath: /etc/tls
          name: hub
          readOnly: true
      - args:
        - --v=2
        - --hub-kubeconfig=/etc/kubeconfig/kubeconfig
        - --cluster-name=dev
        - --proxy-server-namespace=open-cluster-management-addon
        - --enable-port-forward-proxy=true
        command:
        - /agent
        image: quay.io/open-cluster-management/cluster-proxy:latest
        imagePullPolicy: IfNotPresent
        name: addon-agent
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/kubeconfig/
          name: hub-kubeconfig
          readOnly: true

As you can see, the proxy-agent container has version v0.2.0 while the addon-agent has `latest`, which is causing CrashLoopBackOff errors.
Why don't they have the same version? I'll also attach the ManagedProxyConfiguration:

apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyConfiguration
metadata:
  name: cluster-proxy
spec:
  authentication:
    dump:
      secrets:
        signingAgentServerSecretName: agent-server
        signingProxyClientSecretName: proxy-client
        signingProxyServerSecretName: proxy-server
    signer:
      type: SelfSigned
  proxyAgent:
    image: quay.io/open-cluster-management/cluster-proxy:v0.2.0
    replicas: 3
  proxyServer:
    entrypoint:
      port: 8091
      type: PortForward
    image: quay.io/open-cluster-management/cluster-proxy:v0.2.0
    inClusterServiceName: proxy-entrypoint
    namespace: open-cluster-management-addon
    replicas: 3

Thanks!

Proxy-agent should support a customizable `proxy-server-port`

I found the proxy-agent can't set a custom port; it uses 8091 by default.

In my case, I'm using `type: Hostname` as the entrypoint type and exposing the proxy-server on port 443.

So could we add a flag that lets the user specify `--proxy-server-port` on the proxy-agent?

PS:
It's a simple issue; I think the only thing we need to discuss is the flag's name.
Should it be as follows?

    proxyServer:
      image: {{ .Values.proxyServerImage }}
      namespace: {{ .Release.Namespace }}
      entrypoint:
        {{- if .Values.proxyServer.entrypointAddress }}
        type: Hostname
        hostname:
          value: {{ .Values.proxyServer.entrypointAddress }}
          port: {{ .Values.proxyServer.entrypointPort }} # Here!! Updated
        {{- else if .Values.entrypointLoadBalancer }}
        type: LoadBalancerService
        loadBalancerService: {}
        {{- else }}
        type: PortForward
        {{- end }}

And in agent.go, add a flag:

serviceEntryPointPort = proxyConfig.Spec.ProxyServer.Entrypoint.Hostname.Port

I also have free time to fix this issue. 😁

Fresh install doesn't scale cluster-proxy replicas

Hi,
With a fresh installation of the cluster-proxy addon, the ManagedProxyConfiguration status reports a successful deployment, but while the spec asks for 3 replicas, 0 is set in the status, so the deployment needs to be scaled manually:

spec:
  authentication:
    dump:
      secrets:
        signingAgentServerSecretName: agent-server
        signingProxyClientSecretName: proxy-client
        signingProxyServerSecretName: proxy-server
    signer:
      type: SelfSigned
  proxyAgent:
    image: 'quay.io/open-cluster-management/cluster-proxy:v0.1.4'
    replicas: 3
  proxyServer:
    entrypoint:
      port: 8091
      type: PortForward
    image: 'quay.io/open-cluster-management/cluster-proxy:v0.1.4'
    inClusterServiceName: proxy-entrypoint
    namespace: open-cluster-management-addon
status:
  conditions:
    - lastTransitionTime: '2022-05-03T19:36:47Z'
      message: 'Replicas: 0'
      reason: SuccessfullyDeployed
      status: 'True'
      type: ProxyServerDeployed

helm install error

When I try to `helm install` cluster-proxy on a hub cluster, I get this error:

Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "ClusterManagementAddOn" in version "addon.open-cluster-management.io/v1alpha1"

How does the proxy server work?

I want to deploy HA proxy servers. I've read the documentation and would like some help figuring out a few questions.
The scenario: the proxy server has three replicas and the proxy agent has two replicas.
Does the picture mean every agent maintains a connection to every server, so each agent holds three connections in total? If it works like this, how do we set up the proxy server's load balancer to make sure the connections are distributed across the three servers?
(image: architecture diagram)

Change the usage of `leaseUpdater`

Right now we are using hubConfigClient to create a leaseUpdater:

leaseUpdater, err := health.NewAddonHealthUpdater(cfg, clusterName)

func NewAddonHealthUpdater(hubClientCfg *rest.Config, clusterName string) (lease.LeaseUpdater, error) {
	hubClient, err := kubernetes.NewForConfig(hubClientCfg)
	if err != nil {
		return nil, err
	}
	return lease.NewLeaseUpdater(
		hubClient,

But a better approach is to use spokeClient in NewLeaseUpdater and hubConfigClient in WithHubLeaseConfig, as the following code shows:

https://github.com/stolostron/multicloud-operators-foundation/blob/b01ead83c195e3d43a8b34f2a1f0510c4698694d/cmd/agent/agent.go#L174

GitHub post-submit action is broken

The GitHub post-submit action is broken; the images cannot be built. See build #157:

# install imagebuilder
Run go install github.com/openshift/imagebuilder/cmd/imagebuilder@v1.2.1
  go install github.com/openshift/imagebuilder/cmd/imagebuilder@v1.2.1
  shell: /usr/bin/bash -e {0}
  env:
    GO_VERSION: 1.19
    GO_REQUIRED_MIN_VERSION: 
    GOROOT: /opt/hostedtoolcache/go/1.19.8/x64
go: downloading github.com/openshift/imagebuilder v1.2.1
go: finding module for package github.com/fsouza/go-dockerclient
go: finding module for package github.com/docker/docker/api/types
go: downloading github.com/docker/docker v23.0.4+incompatible
go: downloading github.com/fsouza/go-dockerclient v1.9.7
go: finding module for package k8s.io/klog
go: downloading k8s.io/klog v1.0.0
go: finding module for package github.com/containerd/containerd/platforms
go: downloading github.com/containerd/containerd v1.7.0
go: finding module for package github.com/containers/storage/pkg/archive
go: downloading github.com/containers/storage v1.46.1
go: finding module for package github.com/containers/storage/pkg/fileutils
go: finding module for package github.com/containers/storage/pkg/idtools
go: finding module for package github.com/docker/docker/pkg/system
go: finding module for package github.com/pkg/errors
go: downloading github.com/pkg/errors v0.9.1
go: found github.com/docker/docker/api/types in github.com/docker/docker v23.0.4+incompatible
go: found github.com/fsouza/go-dockerclient in github.com/fsouza/go-dockerclient v1.9.7
go: found k8s.io/klog in k8s.io/klog v1.0.0
go: found github.com/containerd/containerd/platforms in github.com/containerd/containerd v1.7.0
go: found github.com/containers/storage/pkg/archive in github.com/containers/storage v1.46.1
go: found github.com/containers/storage/pkg/fileutils in github.com/containers/storage v1.46.1
go: found github.com/containers/storage/pkg/idtools in github.com/containers/storage v1.46.1
go: found github.com/docker/docker/pkg/system in github.com/docker/docker v23.0.4+incompatible
go: found github.com/pkg/errors in github.com/pkg/errors v0.9.1
go: downloading github.com/docker/go-connections v0.4.0
go: downloading github.com/docker/go-units v0.5.0
go: downloading github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b
go: downloading github.com/moby/patternmatcher v0.5.0
go: downloading github.com/klauspost/compress v1.16.4
go: downloading github.com/moby/sys/sequential v0.5.0
go: downloading github.com/sirupsen/logrus v1.9.0
go: downloading golang.org/x/sys v0.7.0
go: downloading github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6
go: downloading github.com/morikuni/aec v1.0.0
go: downloading github.com/klauspost/pgzip v1.2.5
go: downloading github.com/ulikunitz/xz v0.5.11
go: downloading github.com/opencontainers/runc v1.1.5
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading github.com/opencontainers/go-digest v1.0.0
go: downloading google.golang.org/grpc v1.53.0
go: downloading github.com/opencontainers/runtime-spec v1.1.0-rc.1
go: downloading github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
go: downloading github.com/moby/sys/mountinfo v0.6.2
go: downloading google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4
go: downloading github.com/golang/protobuf v1.5.2
go: downloading google.golang.org/protobuf v1.28.1
# github.com/openshift/imagebuilder/dockerfile/parser
Error: /home/runner/go/pkg/mod/github.com/openshift/imagebuilder@v1.2.1/dockerfile/parser/parser.go:122:12: undefined: system.LCOWSupported
Error: /home/runner/go/pkg/mod/github.com/openshift/imagebuilder@v1.2.1/dockerfile/parser/parser.go:157:12: undefined: system.LCOWSupported
Error: Process completed with exit code 2.

Also, the image on quay.io has not been updated for a long time: https://quay.io/repository/open-cluster-management/cluster-proxy?tab=tags

Support customizing `keepalive-time` in proxy-server

The default keepalive-time is 1 hour, which is too long for our test environment (OCP automatically closes the connection if no message is transferred):

https://github.com/kubernetes-sigs/apiserver-network-proxy/blob/fc56f4ba47a3da62519f93743830f13abf962492/cmd/server/app/options/options.go#L311

In our practice, we set this flag to 30s to keep the gRPC connection alive (every 30s the proxy-server sends the proxy-agent a message to keep the connection alive).

Could we support this flag as well?
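For reference, on the proxy-server side this would be just one more container argument, using the `--keepalive-time` flag from the apiserver-network-proxy options linked above (illustrative fragment; the surrounding Deployment spec is omitted):

```yaml
# Illustrative fragment: shorten the konnectivity server keepalive to 30s.
args:
  - --keepalive-time=30s
```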

Lack of some ClusterRole permissions

When I tried to install cluster-proxy on an OCP platform, I got the following errors:

E0217 08:54:41.554025       1 base_controller.go:270] "cluster-management-addon-controller" controller failed to sync "local-cluster/cluster-proxy", err: managedclusteraddons.addon.open-cluster-management.io "cluster-proxy" is forbidden: cannot set an ownerRef on a resource you can't delete: , <nil>

E0217 08:54:41.944396       1 base_controller.go:270] "addon-registration-controller" controller failed to sync "local-cluster/cluster-proxy", err: roles.rbac.authorization.k8s.io "cluster-proxy-addon-agent" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , <nil>

I didn't hit this issue in a KinD env, so I suppose OCP enforces tighter permissions on roles.

In my practice, the issue can be solved as follows:

On the hub:
Apply the following changes to clusterrole.yaml:

  - apiGroups:
      - addon.open-cluster-management.io
    resources:
      - clustermanagementaddons
      - managedclusteraddons
      - clustermanagementaddons/status
      - managedclusteraddons/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete # !!! Here! add this
  - apiGroups:
      - proxy.open-cluster-management.io
    resources:
      - managedproxyconfigurations
      - managedproxyconfigurations/status
      - managedproxyconfigurations/finalizers # !!! Here! add this
    verbs:
      - get
      - list
      - watch
      - update
      - patch

On the agent:

I'm not sure how to fix this issue on the agent side, because as the code shows:

OwnerReferences: []metav1.OwnerReference{
	{
		APIVersion:         addonv1alpha1.GroupVersion.String(),
		Kind:               "ManagedClusterAddOn",
		Name:               addon.Name,
		BlockOwnerDeletion: pointer.Bool(true),
	},
},

We can either add the permission to the addon-framework, or delete the ownerReference here (or set BlockOwnerDeletion to false).

So it depends on whether this OwnerReference must be in the agent ManifestWork. @yue9944882

cc @skeeey
