jont828 / cluster-api-visualizer
Multicluster resource visualization tool for Cluster API
License: Apache License 2.0
Currently, hack/observability/cluster-api-visualizer/chart/templates/clusterrole.yaml
gives the visualizer permissions to all API groups and resources. We can revise this to be more specific and only grant access to the resources needed to run clusterctl describe
and to fetch a CRD with the kubectl client.
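A narrowed rule set could look something like the sketch below. The exact API groups and resources are assumptions and would need to be checked against what clusterctl describe actually queries (e.g. secrets for workload cluster kubeconfigs):

```yaml
# Sketch only: a narrower ClusterRole for the visualizer.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: capi-visualizer
rules:
  # CRD discovery for the kubectl client
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list"]
  # CAPI resources walked by clusterctl describe (group list is an assumption)
  - apiGroups:
      - "cluster.x-k8s.io"
      - "controlplane.cluster.x-k8s.io"
      - "bootstrap.cluster.x-k8s.io"
      - "infrastructure.cluster.x-k8s.io"
      - "addons.cluster.x-k8s.io"
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  # Kubeconfig secrets for workload clusters
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
```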
I could not figure out how this happens, since I can't see any direct call to the upgrade functions of cluster-api, nor any implicit one.
But while working on cluster-api-provider-azure I noticed this happening:
Tilt with make tilt-up
from tag v1.6.0 of cluster-api-provider-azure: https://github.com/kubernetes-sigs/cluster-api-provider-azure/tree/v1.6.0
Feature flags are all reverted to false (which is the problem I am facing). I don't think something like this is expected. I can also see that the registry changes.
pre capi-visualizer
$ kubectl get -n capi-system deployment capi-controller-manager -o json | jq '.spec.template.spec.containers[0] | .image,.args'
"k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.4"
[
"--leader-elect",
"--metrics-bind-addr=localhost:8080",
"--feature-gates=MachinePool=true,ClusterResourceSet=true,ClusterTopology=false,RuntimeSDK=false"
]
post capi-visualizer
"registry.k8s.io/cluster-api/cluster-api-controller:v1.2.5"
[
"--leader-elect",
"--metrics-bind-addr=localhost:8080",
"--feature-gates=MachinePool=false,ClusterResourceSet=false,ClusterTopology=false,RuntimeSDK=false"
]
Started capi-visualizer with -v=10, and I can actually see the patch request:
I1118 11:02:05.173549 1 request.go:1073] Request Body: {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"creationTimestamp":null,"labels":{"cluster.x-k8s.io/provider":"cluster-api","clusterctl.cluster.x-k8s.io":"","control-plane":"controller-manager"},"name":"capi-controller-manager","namespace":"capi-system","resourceVersion":"985"},"spec":{"replicas":1,"selector":{"matchLabels":{"cluster.x-k8s.io/provider":"cluster-api","control-plane":"controller-manager"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"cluster.x-k8s.io/provider":"cluster-api","control-plane":"controller-manager"}},"spec":{"containers":[{"args":["--leader-elect","--metrics-bind-addr=localhost:8080","--feature-gates=MachinePool=false,ClusterResourceSet=false,ClusterTopology=false,RuntimeSDK=false"],"command":["/manager"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_UID","valueFrom":{"fieldRef":{"fieldPath":"metadata.uid"}}}],"image":"registry.k8s.io/cluster-api/cluster-api-controller:v1.2.5","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/healthz","port":"healthz"}},"name":"manager","ports":[{"containerPort":9443,"name":"webhook-server","protocol":"TCP"},{"containerPort":9440,"name":"healthz","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/readyz","port":"healthz"}},"resources":{},"volumeMounts":[{"mountPath":"/tmp/k8s-webhook-server/serving-certs","name":"cert","readOnly":true}]}],"serviceAccountName":"capi-manager","terminationGracePeriodSeconds":10,"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}],"volumes":[{"name":"cert","secret":{"secretName":"capi-webhook-service-cert"}}]}}},"status":{}}
I1118 11:02:05.173652 1 round_trippers.go:466] curl -v -XPATCH -H "Accept: application/json, */*" -H "Content-Type: application/merge-patch+json" -H "User-Agent: clusterctl/ (linux/amd64)" -H "Authorization: Bearer <masked>" 'https://10.96.0.1:443/apis/apps/v1/namespaces/capi-system/deployments/capi-controller-manager?timeout=30s'
I1118 11:02:05.209545 1 round_trippers.go:553] PATCH https://10.96.0.1:443/apis/apps/v1/namespaces/capi-system/deployments/capi-controller-manager?timeout=30s 200 OK in 35 milliseconds
Hi,
Currently, the CAPI Visualizer just shows ClusterResourceSet addons.
I think integrating with CAAPH and showing its resources (HelmChartProxy & HelmChartRelease) would be a good addition.
Currently, this app only supports resources on Azure and Docker. I'd like to have support for all the common CAPI providers like AWS, GCP, and more.
The ideal UX is for this project to run out of a Docker container, so that the UI implementation is abstracted away and it can be run in one command. The issues right now are that (1) the clusterctl client from within a container can't find the kubeconfig and kubecontext of the local management cluster, and (2) if the kubeconfig of a kind cluster is copied into the Docker container, the kubeconfig points to a localhost address which the container can't easily access.
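For issue (2), one possible workaround is rewriting the localhost server address in the copied kubeconfig before the container uses it. A minimal sketch, assuming `host.docker.internal` resolves inside the container (it does on Docker Desktop; on Linux it needs `--add-host=host.docker.internal:host-gateway`):

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteServer replaces a localhost API server address in a kubeconfig
// with an address reachable from inside a container.
func rewriteServer(kubeconfig string) string {
	for _, local := range []string{"127.0.0.1", "localhost", "0.0.0.0"} {
		kubeconfig = strings.ReplaceAll(kubeconfig,
			"https://"+local+":", "https://host.docker.internal:")
	}
	return kubeconfig
}

func main() {
	// A kind kubeconfig typically points at a random localhost port.
	fmt.Println(rewriteServer("server: https://127.0.0.1:37331"))
}
```

This doesn't solve issue (1), finding the kubeconfig in the first place, which still needs a volume mount or an explicit flag.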
We tried to use the visualizer without internet for our KubeCon tutorial, and it turns out it loads some CSS files from the Internet (though this works if they are already cached).
To reproduce:
It would be nice if those files could be built in so the visualizer doesn't need internet access.
The background polling interval is set to 1 minute by default and the front-end doesn't reload the view unless the data changed. It would be helpful to add a watch mode which could be implemented by setting the polling interval to 5 seconds or so for when a Cluster has just been created and we expect new changes often.
It would be great if we could make the Machines and control plane collapsible like the workers.
This app would ideally be able to fetch the known CRDs on the management cluster, similar to kubectl get crds, and then get the individual resources based on that result. Currently, it knows which resources to find from a list of CRDs with the kind/plural and group for each.
All the CRD nodes except for Cluster must be categorized under a resource category node (ClusterInfrastructure, ControlPlane, and Workers). This requires every node directly owned by Cluster to be categorized under one of the three by setting the category node as its parent. This way, all the other child nodes are categorized automatically.
Certain resources like Machine or AzureMachineTemplate can be in either the control plane or worker categories and must be resolved dynamically. Most ambiguous resources like this have the label cluster.x-k8s.io/control-plane, but it would be better to use the configRefs, infrastructureRefs, and bootstrapRefs instead. For example, if an AzureMachineTemplate is referenced in a MachineDeployment's infrastructureRef spec, then we know that it should be under the Workers category.
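The ref-based resolution could be sketched as deciding the category from the kind of the object whose ref points at the ambiguous resource, rather than from the label. This is an illustration with hypothetical names, not the visualizer's actual API:

```go
package main

import "fmt"

// Category names matching the visualizer's resource category nodes.
const (
	ControlPlane = "ControlPlane"
	Workers      = "Workers"
	Unknown      = "Unknown"
)

// categoryForRef decides the category of a resource from the kind of the
// object whose infrastructureRef/bootstrapRef/configRef references it.
// The kind list is an assumption about which CAPI objects hold such refs.
func categoryForRef(referencingKind string) string {
	switch referencingKind {
	case "KubeadmControlPlane":
		return ControlPlane
	case "MachineDeployment", "MachineSet", "MachinePool":
		return Workers
	default:
		return Unknown
	}
}

func main() {
	// An AzureMachineTemplate referenced by a MachineDeployment is a worker template.
	fmt.Println(categoryForRef("MachineDeployment"))
	// The same template kind referenced by a KubeadmControlPlane belongs to the control plane.
	fmt.Println(categoryForRef("KubeadmControlPlane"))
}
```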
Currently, when using this from CAPI, we are pinning to a specific image tag, and this will require some maintenance to keep up with new releases. Instead, we would like the Helm chart to use the latest image by default.
The app uses the kind of cluster.spec.infrastructureRef to parse the name of the provider, i.e. Azure, AWS, GCP. Clusters built from ClusterClasses do not have it set initially, so we can't determine the provider until it is set. I'd like to find a consistent/uniform way to determine the infra provider so the icon is always set correctly.
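The current kind-based approach can be sketched as trimming the "Cluster" suffix from the infrastructureRef kind. The function name and fallback are assumptions; the sketch mainly shows why an unset ref leaves the provider undetermined:

```go
package main

import (
	"fmt"
	"strings"
)

// providerFromKind derives a provider name from an infrastructureRef kind,
// e.g. "AzureCluster" -> "azure". For ClusterClass-based clusters the ref
// may be unset at first, so callers still need a fallback for "".
func providerFromKind(kind string) string {
	if kind == "" {
		return "unknown" // infrastructureRef not set yet
	}
	return strings.ToLower(strings.TrimSuffix(kind, "Cluster"))
}

func main() {
	fmt.Println(providerFromKind("AzureCluster"))
	fmt.Println(providerFromKind("AWSCluster"))
	fmt.Println(providerFromKind("")) // ClusterClass cluster before the ref is set
}
```

A more uniform source might be the ClusterClass's own infrastructure ref or the provider labels clusterctl sets, but that is speculation on my part.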
With the CAPI tilt-based dev environment it is possible to deploy CAPI providers, the visualizer and a logging stack (promtail, loki, grafana).
Here is some more information about how to deploy the logging stack: https://cluster-api.sigs.k8s.io/developer/logging.html?highlight=loki#developing-and-testing-logs
Based on our recent work on logging, all controllers (at least in core CAPI) should now include key value pairs for the currently reconciled object and related objects.
This makes it possible to filter e.g. all logs for a specific KubeadmControlPlane across controllers.
I think it would be great if we could pass a URL template as command line flag into the visualizer and the visualizer then adds a "logs" button on every resource.
The URL for the query above looks like this:
I think in general it would be great if there is a way to render this URL based on:
Please note that the k/v pairs will be further improved soon by this PR kubernetes-sigs/cluster-api#7152 and the upcoming bump to controller-runtime v0.13.0.
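Rendering such a URL from a user-supplied template could look like the sketch below. The flag name, the Object fields, and the example URL shape are all hypothetical; the point is only that Go's text/template makes the per-resource substitution trivial:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Object holds the fields a --log-url-template flag (hypothetical) could
// substitute into the logs URL for each resource node.
type Object struct {
	Kind, Namespace, Name string
}

// renderLogURL fills the user-supplied template with the object's fields.
func renderLogURL(tmpl string, obj Object) (string, error) {
	t, err := template.New("logurl").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, obj); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Example template shape only; a real Grafana/Loki URL would embed a LogQL query.
	tmpl := "https://grafana.example/explore?kind={{.Kind}}&namespace={{.Namespace}}&name={{.Name}}"
	u, err := renderLogURL(tmpl, Object{Kind: "KubeadmControlPlane", Namespace: "default", Name: "my-kcp"})
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```

The "logs" button on each node would then just link to the rendered URL for that object's kind/namespace/name.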
Implement a settings menu that persists across pages/refreshes, so users don't have to toggle settings every time. It also leaves room for additional configuration options, since we can't fit much more in the top app bar.
It would be nice to have multi-arch images of the visualizer available.
We noticed that currently the image is only built for amd64.
(Here's how we build & (further down) push multi-arch images in Cluster API: https://github.com/kubernetes-sigs/cluster-api/blob/main/Makefile#L655)
Since CAPZ now uses Azure Service Operator to drive most of its reconciliation, it would be awesome if the visualizer could also show those ASO resources in the resource graph.