acm-workload's Introduction

Usage

Prerequisite (optional)

When the observability addon is enabled or disabled on OpenShift clusters managed by ACM, the local Prometheus will restart and all previous metrics will be lost.

If you want to retain the metrics from before the ACM installation, you need to configure persistent storage for Prometheus to prevent metric loss. Follow the steps in Configuring persistent storage before importing the OpenShift cluster as a managed cluster.

For example, create the following resource on the managed cluster before importing it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: gp3-csi
          resources:
            requests:
              storage: 10Gi
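
For example, assuming the manifest above is saved as cluster-monitoring-config.yaml (the file name is illustrative), you can apply it and then confirm that Prometheus is given a persistent volume claim:

oc apply -f cluster-monitoring-config.yaml
# After the monitoring operator restarts Prometheus, a PVC should appear
oc -n openshift-monitoring get pvc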

Deploy the workload

  1. Import a managed cluster and use the make command to enable all the required addons for it.

Log in to the hub cluster and run the following commands:

export AWS_BUCKET_NAME=<bucket_name> 
export AWS_ACCESS_KEY=<access_key>
export AWS_SECRET_KEY=<secret_key> 
export MANAGED_CLUSTER_NAME=<cluster1>

docker run -it -e AWS_BUCKET_NAME=$AWS_BUCKET_NAME -e AWS_ACCESS_KEY=$AWS_ACCESS_KEY -e AWS_SECRET_KEY=$AWS_SECRET_KEY -e MANAGED_CLUSTER_NAME=$MANAGED_CLUSTER_NAME -v /root/.kube:/root/.kube quay.io/haoqing/acm-workload:latest make enable-all
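
The command mounts the hub kubeconfig from /root/.kube into the container; if your kubeconfig lives elsewhere, adjust the -v mount accordingly.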

Ensure that only the ManagedClusterAddOns below are created on the hub and that their "AVAILABLE" status is True.

$ kubectl get mca -n $MANAGED_CLUSTER_NAME
NAME                          AVAILABLE   DEGRADED   PROGRESSING
application-manager           True
cluster-proxy                 True
config-policy-controller      True
governance-policy-framework   True
observability-controller      True                   False
search-collector              True
work-manager                  True
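
If you prefer to script this check rather than reading the table, a loop like the following should work (a sketch, assuming each addon exposes an Available condition that kubectl wait can poll):

for addon in application-manager cluster-proxy config-policy-controller \
    governance-policy-framework observability-controller search-collector work-manager; do
  # Block until the addon reports Available=True, or fail after 5 minutes
  kubectl wait --for=condition=Available managedclusteraddon/$addon \
    -n $MANAGED_CLUSTER_NAME --timeout=300s
done
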
  2. On the hub cluster, generate and create the cron jobs based on the ManagedCluster .metadata.creationTimestamp.

docker run -it -e MANAGED_CLUSTER_NAME=$MANAGED_CLUSTER_NAME -v /root/.kube:/root/.kube quay.io/haoqing/acm-workload:latest make cronjob

Ensure that the resources below are created on the hub.

$ kubectl get cronjob
NAME                            SCHEDULE        SUSPEND   ACTIVE   LAST SCHEDULE   AGE
app-create-cluster1             40 14 08 01 1   False     0        <none>          65s
app-delete-cluster1             10 16 08 01 1   False     0        <none>          65s
enable-all-cluster1             40 23 08 01 1   False     0        <none>          65s
enable-app-cluster1             10 19 08 01 1   False     0        <none>          65s
enable-policy-proxy-cluster1    40 20 08 01 1   False     0        <none>          65s
enable-policy-search-cluster1   10 22 08 01 1   False     0        <none>          65s
obs-create-cluster1             40 17 08 01 1   False     0        <none>          65s
obs-delete-cluster1             10 19 08 01 1   False     0        <none>          65s
policy-create-cluster1          10 16 08 01 1   False     0        <none>          65s
policy-delete-cluster1          40 17 08 01 1   False     0        <none>          65s

$ kubectl get serviceaccount/my-cronjob
NAME         SECRETS   AGE
my-cronjob   0         2m49s

$ kubectl get clusterrolebinding.rbac.authorization.k8s.io/my-cronjob
NAME         ROLE                        AGE
my-cronjob   ClusterRole/cluster-admin   2m58s
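
Once the cron jobs start firing, you can inspect what each run did with standard kubectl commands, for example:

# Jobs spawned by the cron jobs
kubectl get jobs
# Logs of a single run
kubectl logs job/<job_name>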

Analyze the resource usage

Gather and analyze the metrics after about 16 hours.

Export the managed cluster URL and token to OC_CLUSTER_URL and OC_TOKEN:

export MANAGED_CLUSTER_NAME=<cluster1>
export OC_CLUSTER_URL="https://api.fake.test.red-chesterfield.com:6443"
export OC_TOKEN="sha256~xxx"
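
If you are already logged in to the managed cluster with oc, one way to obtain these values is:

oc whoami --show-server   # value for OC_CLUSTER_URL
oc whoami --show-token    # value for OC_TOKEN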

Log in to the hub cluster, then gather the metrics into the folder $PWD/$MANAGED_CLUSTER_NAME:

export DURATION=$(oc get managedclusters $MANAGED_CLUSTER_NAME --no-headers | awk '{gsub(/h/,"",$6); if ($6 ~ /d/) { split($6, arr, "d"); $6=(arr[1]*24)+arr[2]; } $6+=3; print $6"h"}')
mkdir $MANAGED_CLUSTER_NAME
docker run -e OC_CLUSTER_URL=$OC_CLUSTER_URL -e OC_TOKEN=$OC_TOKEN -e DURATION=$DURATION -e CLUSTER=spoke -v $PWD/$MANAGED_CLUSTER_NAME:/acm-inspector/output quay.io/haoqing/acm-inspector:latest > $PWD/$MANAGED_CLUSTER_NAME/logs
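
The DURATION expression takes the managed cluster AGE (column 6 of oc get managedclusters), converts any day component into hours, and adds a 3-hour margin; for example, an AGE of 1d2h becomes (1*24 + 2) + 3 = 29, so DURATION=29h.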

On the hub cluster, analyze the metrics based on the cron job creation times. The result is written to $PWD/$MANAGED_CLUSTER_NAME/acm_analysis:

docker run -it -e MANAGED_CLUSTER_NAME=$MANAGED_CLUSTER_NAME -v /root/.kube:/root/.kube -v $PWD/$MANAGED_CLUSTER_NAME/:/acm-workload/$MANAGED_CLUSTER_NAME quay.io/haoqing/acm-workload:latest make analysis
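
When the analysis completes, you can list the generated results, for example:

ls $PWD/$MANAGED_CLUSTER_NAME/acm_analysis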
