
ibm / cp4waiops-gitops

Manage Your IBM Cloud Pak for Watson AIOps With GitOps

Home Page: https://ibm.github.io/cp4waiops-gitops/docs/

License: Apache License 2.0

crossplane gitops ibm-cloud-pak ibm-cloud-pak-4-waiops openshift argocd kubernetes

cp4waiops-gitops's Introduction


Deploy Cloud Pak for Watson AIOps using GitOps

This repository is about using OpenShift GitOps to deploy Cloud Pak for Watson AIOps (CP4WAIOps) on a Red Hat OpenShift cluster. Refer to our GitOps webpage for the detailed documentation and to start the tutorial.

Install CP4WAIOps using GitOps

Please refer to the following documents and decide how you want to deploy CP4WAIOps:

More Install Options for CP4WAIOps using GitOps

There are some advanced configurations available for CP4WAIOps to support more install scenarios. Also, as a customer, you may want to fork this repository and customize it to meet your specific needs. For more details, please refer to Customize CP4WAIOps Install.

cp4waiops-gitops's People

Contributors

gyliu513, huang-cn, imgbot[bot], imgbotapp, jianh619, lihongbj, liyanwei93, morningspace, stevemar


cp4waiops-gitops's Issues

Design decision on cp4waiops release chart naming and version number

I think we really need a clear decision on how to name our cp4waiops release chart and version:

  • In 3.2, we put ai-manager and event-manager into a single Helm chart, which is good. The chart name + version can then be something like cp4waiops32-0.1.0. It could be a bit confusing why we use cp4waiops32 as the chart name here. The reason is essentially that we do not maintain one consistent Helm chart for CP4WAIOps across releases; instead, we have several charts that physically host CP4WAIOps, one per release. Also, unlike CP4WAIOps itself, the Helm chart has its own version, so we can track chart changes separately without interfering with the CP4WAIOps release cycle, e.g. cp4waiops32-0.1.0 and cp4waiops32-0.1.1 are both charts for CP4WAIOps 3.2, but chart 0.1.1 adds some enhancements to the Helm templates.

  • In 3.3, we put ai-manager and event-manager into their own separate Helm charts, which is inconsistent with 3.2 and adds confusion: we could use aimanager33-0.1.0 and eventmanager33-0.1.0, but there is no cp4waiops33-0.1.0. It looks as if ai-manager and event-manager are two separate products and CP4WAIOps as a product is "missing". Also, the 33 in eventmanager33 does not mean the event-manager version is 3.3; it is the CP4WAIOps version, and event-manager has its own version, e.g. 1.6.4.0. Finally, it seems we cannot use a name such as cp4waiops33-aimanager-0.1.0, because it would break Helm template rendering when we define another chart that depends on the official CP4WAIOps chart to customize something. (Only one dash is allowed in a dependent Helm chart name.)

With that, to keep it simple and consistent, I think it may be better to follow what we did with 3.2 and use a single Helm chart to cover both ai-manager and event-manager. Then, in 3.3, we can keep using cp4waiops33-x.y.z.
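
As a rough sketch of the proposal (field values are illustrative, not taken from the repo), the consolidated Chart.yaml could look like:

apiVersion: v2
name: cp4waiops33          # chart name tracks the CP4WAIOps release (3.3)
version: 0.1.0             # chart version, bumped independently on template changes
appVersion: "3.3"          # the CP4WAIOps release this chart installs
description: A single chart covering both ai-manager and event-manager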

Install x-small cp4waiops failed from all-in-one in new release-3.4

Installing an x-small CP4WAIOps from the all-in-one configuration fails on the new release-3.4 branch, per the doc https://github.com/IBM/cp4waiops-gitops/blob/release-3.4/docs/how-to-deploy-cp4waiops.md#option-2-experimental-install-using-all-in-one-configuration. The Application spec used:

project: default
source:
  repoURL: 'https://github.com/IBM/cp4waiops-gitops'
  path: config/all-in-one
  targetRevision: release-3.4
  helm:
    parameters:
      - name: cp4waiops.profile
        value: x-small
      - name: cp4waiops.eventManager.enabled
        value: 'false'
destination:
  server: 'https://kubernetes.default.svc'
  namespace: openshift-gitops
syncPolicy:
  automated: {}

Failure detail in the subscription ibm-aiops-orchestrator:

Conditions:
    - lastTransitionTime: '2022-05-29T12:14:18Z'
      message: >-
        targeted catalogsource openshift-marketplace/ibm-operator-catalog
        missing
      reason: UnhealthyCatalogSourceFound
      status: 'True'
      type: CatalogSourcesUnhealthy
    - message: >-
        constraints not satisfiable: no operators found from catalog
        ibm-operator-catalog in namespace openshift-marketplace referenced by
        subscription ibm-aiops-orchestrator, subscription ibm-aiops-orchestrator
        exists
      reason: ConstraintsNotSatisfiable
      status: 'True'
      type: ResolutionFailed
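
The first condition says the ibm-operator-catalog CatalogSource is missing from openshift-marketplace. For reference, a sketch of the CatalogSource the subscription expects; the image is the one mentioned elsewhere in this repo, but verify it against your environment:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: IBM Operator Catalog
  publisher: IBM
  sourceType: grpc
  image: icr.io/cpopen/ibm-operator-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m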

Create a few sample custom sizing configmaps without parameterizing resource settings

We should create a set of sample custom sizing configmaps that do not allow users to modify the detailed resource settings within them. This means all the resource settings in these configmaps are fixed, out-of-the-box values.

The reason is that users will use some tooling to input load factors, such as the number of transactions per second, and that tooling will generate these configmaps. Once the configmaps are submitted to the git repo, the GitOps process will consume them and sync them onto the target system.

As an example, we could have:

.
└── config
    └── cp4waiops                 (cp4waiops gitops configuration)
        └── custom-sizing         (cp4waiops custom sizing sample configuration)
            ├── templates
            │   ├──custom-sizing-configmap.yaml       (configmap for arbitrary user input)
            │   ├──custom-sizing-resource-lockers.yaml(custom sizing not covered by configmap for arbitrary user input)
            │   ├──x-small-configmap.yaml             (configmap for general x-small)
            │   ├──x-small-idle-configmap.yaml        (configmap specific for idle workload using x-small)
            │   ├──x-small-lad-configmap.yaml         (configmap specific for lad use case using x-small)
            │   ├──x-small-mad-configmap.yaml         (configmap specific for mad use case using x-small)
            │   ├──x-small-resource-lockers.yaml      (custom sizing not covered by configmap for general x-small)
            │   ├──x-small-idle-resource-lockers.yaml (custom sizing not covered by configmap for idle workload using x-small)
            │   ├──x-small-lad-resource-lockers.yaml  (custom sizing not covered by configmap for lad using x-small)
            │   ├──x-small-mad-resource-lockers.yaml  (custom sizing not covered by configmap for mad using x-small)
            │   ├──small-lad-configmap.yaml           (configmap specific for lad use case using small)
            │   ├──small-mad-configmap.yaml           (configmap specific for mad use case using small)
            │   ├──small-lad-resource-lockers.yaml    (custom sizing not covered by configmap for lad using small)
            │   └──small-mad-resource-lockers.yaml    (custom sizing not covered by configmap for mad using small)
            └── values.yaml       (values for arbitrary user input, and the profile, e.g.: x-small, x-small-idle, x-small-lad, small-mad, etc.)
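
As a purely hypothetical sketch of what one of these fixed-value configmaps might contain (the component name and all values are invented for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cp4waiops-custom-sizing-x-small   # illustrative name
data:
  sizing: |
    # fixed, OOTB values; nothing here is exposed through values.yaml
    example-component:
      replicas: 1
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: "1"
          memory: 2Gi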

CP4WAIOps Event Manager deployment through ArgoCD got stuck on a couple of pods

Hi all!

After issue #125 my ArgoCD continued. Lots of pods got deployed, but some of them show broken hearts (degraded status) in the ArgoCD UI (screenshot omitted).

When looking in my OCP cluster, I see for example several pods in error states (screenshot omitted).

Some of the pods do have errors, for example evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh:

Using TLS certificate located at /internal-tls-keys/tls.crt

  Using TLS key located at /internal-tls-keys/tls.key

{"name":"logging","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"Logging started","time":"2022-03-31T09:20:09.003Z","v":0}
{"name":"common-express-server","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"\n== Loaded server application ==\n Package name: @hdm/akora-app-noi\n Package version: 12.0.501\n Base URL: https://netcool-evtmanager.cp4waiops-em-6c9cbda044ecd11a71bd72721098e1cf-0000.eu-de.containers.appdomain.cloud/\n HTTP port: 8080\n HTTPS port: 8443\n Development Mode: false\n Require HTTPS: true\n===============================\n ","time":"2022-03-31T09:20:09.005Z","v":0}
{"name":"akora-server.middleware.Session","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"cookie":{"name":"akora.sid","path":"/","maxage":7200,"samesite":"lax"},"provider":"redis","msg":"Sessions are enabled","time":"2022-03-31T09:20:09.017Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_ea_uiapi","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.140Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-ea-ui-api-graphql.noi.svc:8080/graphql","time":"2022-03-31T09:20:09.141Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_ea_uiapi" ~> ""","time":"2022-03-31T09:20:09.141Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_noi_webgui","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.141Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> https://evtmanager-webgui.noi.svc:16311/ibm/console/webtop","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_noi_webgui" ~> ""","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_cemusers","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-cem-users.noi.svc:6002/","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_cemusers" ~> ""","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_cem_uiapi","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-event-analytics-ui.noi.svc:3201/","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_rba_uiapi","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-rba-rbs.noi.svc:3005/","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_rba_uiapi" ~> ""","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_rba_legacyui","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-rba-rbs.noi.svc:3005/","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_asm_ui_api","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> https://evtmanager-topology-ui-api.noi.svc:3080/","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_asm_ui_api" ~> ""","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_noi_dashboarding","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-grafana.noi.svc:3000/","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_noi_dashboarding" ~> ""","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.DashFederationRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"pages":[{"federateType":["page","widget"],"federated":true,"globalized":[{"language":"ar","label":"Alerts"},{"language":"cs","label":"Výstrahy"},{"language":"de","label":"Alerts"},{"language":"en","label":"Alerts"},{"language":"es","label":"Alertas"},{"language":"fr","label":"Alertes"},{"language":"he","label":"Alerts"},{"language":"hu","label":"Riasztások"},{"language":"it","label":"Avvisi"},{"language":"ja","label":"アラート"},{"language":"ko","label":"경보"},{"language":"pl","label":"Alerty"},{"language":"pt-BR","label":"Alertas"},{"language":"ru","label":"Уведомления"},{"language":"th","label":"Alerts"},{"language":"zh-CN","label":"警报"},{"language":"zh-TW","label":"警示"}],"id":"com.ibm.hdm.noi.alerts.alert-viewer","platforms":["DESKTOP"],"roles":["noi_operator","noi_engineer","noi_lead"],"pageurl":"/aiops/cfd95b7e-3bc7-4006-a4a8-a73a79c71255/federated/alerts","type":"visibl...
{"name":"akora-server.middleware.Authentication","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"mode":"openid-cem","msg":"Authentication is enabled","time":"2022-03-31T09:20:09.419Z","v":0}
{"name":"akora-server.dash-federation.ci","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"Check console integration { ci: 'com.ibm.hdm.hdmintegration.ui.ci.cloudanalytics' }","time":"2022-03-31T09:20:09.422Z","v":0}
TypeError: client_id is required
at new BaseClient (/server/node_modules/openid-client/lib/client.js:176:13)
at new Client (/server/node_modules/openid-client/lib/client.js:1821:7)
at module.exports (/server/lib/middleware/Authentication/modes/openid-cem.js:50:18)
at Authentication (/server/lib/middleware/Authentication/index.js:52:14)
at module.exports (/server/lib/routes/getHttpRoutes.js:72:53)
at routes (/server/lib/routes/index.js:21:22)
at Config.applyAppRoutes (/server/lib/index.js:19:32)
at App (/server/node_modules/@hdm/common-express-server/lib/App.js:33:10)
at AsyncFunction.module.exports.createServers (/server/node_modules/@hdm/common-express-server/lib/index.js:43:15)
at module.exports (/server/node_modules/@hdm/common-express-server/lib/index.js:17:40)

As per Slack DM with @morningspace:

Took a quick look, so:

  • CatalogSource refers to icr.io/cpopen/ibm-operator-catalog:latest, which is supposed to be the GA build

  • Subscription uses channel: v1.7; it looks like the 3.2.0 docs on IBM Docs haven't been updated to reflect the latest release for Event Manager, but I'm assuming this is the right channel

  • For those pods w/ broken hearts, I see two of them are caused by ImagePullBackOff, which seems to be the root cause, i.e. the cluster cannot pull images from cp.icr.io. The other two pods are impacted because of this (see the sketch below).
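
For what it's worth, pulls from cp.icr.io normally require the IBM entitlement key to be present as a pull secret in the target namespace; a minimal sketch, assuming the conventional secret name and the noi namespace:

apiVersion: v1
kind: Secret
metadata:
  name: ibm-entitlement-key   # assumption: match whatever name the charts reference
  namespace: noi
type: kubernetes.io/dockerconfigjson
stringData:
  # the password value is a placeholder for your actual entitlement key
  .dockerconfigjson: |
    {"auths":{"cp.icr.io":{"username":"cp","password":"<entitlement-key>"}}}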

Hopefully someone can see a missing link in this?

Thanks!
K.

Missing SCC for Event Manager service account?

The Event Manager installation instructions (https://www.ibm.com/docs/en/noi/1.6.4?topic=openshift-preparing) state that the admin must associate an SCC with the noi-service-account, but I do not see that in the repo, only the service account itself:
https://github.com/IBM/cp4waiops-gitops/blob/main/config/3.3/event-manager/templates/noi-serviceaccount.yaml

Also note that the SA cannot be in the repo, because OCP attaches additional secrets to service accounts and then Argo and OCP start to overwrite each other's changes periodically: Argo shows the resource as out-of-sync because it has an extra secret and deletes it, then OCP adds the secret again.

Unfortunately, the only way to avoid this problem is to use a procedural approach to detect, create, and patch the service account, as needed:

https://github.com/IBM/cloudpak-gitops/blob/018986020947e8e2d09f10b390b570fb1a6b35d8/config/cloudpaks/cp4aiops/install-emgr/templates/subscriptions/030-sync-prereqs.yaml#L54
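
A minimal sketch of such a procedural approach, assuming an Argo CD PreSync hook Job; the SCC name, image, and service account below are illustrative assumptions, not what the linked repo actually does:

apiVersion: batch/v1
kind: Job
metadata:
  name: prereq-noi-serviceaccount
  annotations:
    argocd.argoproj.io/hook: PreSync   # runs before the main sync
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: gitops-job-sa   # assumption: needs RBAC for service accounts and SCCs
      containers:
        - name: patch-sa
          image: quay.io/openshift/origin-cli:latest   # any image with oc
          command:
            - /bin/bash
            - -c
            - |
              # create the SA only if missing, so Argo never owns (and never syncs) it
              oc get sa noi-service-account -n noi || oc create sa noi-service-account -n noi
              # associate the SCC required by the install docs (SCC name is an assumption)
              oc adm policy add-scc-to-user privileged -z noi-service-account -n noi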

Update the patch serviceaccount job to use an initContainer

As a fix to an issue with the 3.2 GA build, this job waits for some resources to become available in order to apply the image pull changes. The current job has several issues:

  • It does the detection logic in a while loop, which seems to take a lot of CPU/memory; I've seen the job OOMKilled quite often when it runs for a long time. This can be replaced by an initContainer (see the sketch below).
  • It uses oc apply to "patch" the resource, which should be replaced by oc patch.
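
A rough sketch of the suggested shape (names, namespace, and image are illustrative assumptions): the initContainer blocks cheaply until the target resource exists, and the main container patches only the relevant field:

apiVersion: batch/v1
kind: Job
metadata:
  name: patch-serviceaccount
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: wait-for-sa
          image: quay.io/openshift/origin-cli:latest
          command:
            - /bin/bash
            - -c
            # sleeping between checks keeps the wait cheap on CPU/memory
            - until oc get sa default -n cp4waiops; do sleep 10; done
      containers:
        - name: patch-sa
          image: quay.io/openshift/origin-cli:latest
          command:
            - /bin/bash
            - -c
            # oc patch touches only the field we care about, unlike oc apply
            - >-
              oc patch sa default -n cp4waiops --type=merge
              -p '{"imagePullSecrets":[{"name":"ibm-entitlement-key"}]}'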

CP4WAIOps CR OutOfSync

The CP4WAIOps CR goes OutOfSync because noi is changed by something while the Installation is being deployed (screenshot omitted).

Workaround:

Add ignoreDifferences to the application as below:

kubectl edit application cp4waiops -n openshift-gitops

(screenshot of the resulting ignoreDifferences entry omitted)
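
A hypothetical sketch of what such an entry in the Application spec could look like (the group, kind, and field path are assumptions; use the actual drifting field shown in the Argo CD diff):

spec:
  ignoreDifferences:
    - group: orchestrator.aiops.ibm.com   # assumption
      kind: Installation                  # assumption
      jsonPointers:
        - /spec/pakModules                # assumption: the field that keeps drifting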

cc @morningspace @jianh619

Does not support remote deployment

The current repo only supports in-cluster deployment; it will fail when deployed on a remote cluster. The major problem is that the configuration assumes there is always an openshift-gitops namespace on the target cluster, and all jobs use the gitops service account created in the openshift-gitops namespace with the cluster-admin role. This is not true on a remote cluster, where neither the GitOps operator nor the namespace exists.

Create GitHub Action to auto publish docs

We have some recent changes in the docs. However, these have not been synced to the online docs website at https://ibm.github.io/cp4waiops-gitops/docs/. Right now, publishing the docs has to be done manually by running the mkdocs CLI locally. We need to create a GitHub Action to automate this work.

In the GitHub Action, we should (see the workflow sketch below):

  • Make sure the mkdocs CLI is in place and executable
  • Make sure the ghp-import CLI is in place and executable
  • Run mkdocs build to generate static web pages from the markdown files
  • Run ghp-import site -p -x docs to publish the generated static web pages to the gh-pages branch under the docs folder

P.S.:
We cannot use https://github.com/marketplace/actions/deploy-mkdocs because it overrides the whole gh-pages branch, which would override the helm repo index file at the root directory of the same branch.
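
A workflow sketch along these lines (file path, trigger, and versions are assumptions):

# .github/workflows/publish-docs.yml (hypothetical)
name: Publish docs
on:
  push:
    branches: [main]
    paths: ['docs/**', 'mkdocs.yml']
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.x'
      - run: pip install mkdocs ghp-import
      - run: mkdocs build   # renders the markdown into ./site
      # -p pushes; -x docs places the site under docs/ on gh-pages,
      # leaving the helm repo index at the branch root untouched
      - run: ghp-import site -p -x docs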

Update AI Manager helm chart version to fix the GitHub Action failure

We recently modified the Helm charts for AI Manager and Event Manager in PR #132, but did not update the AI Manager chart version (the Event Manager chart version was updated, which is good). This breaks the GitHub Action that generates the Helm chart packages. For more details, please check: https://github.com/IBM/cp4waiops-gitops/runs/5797052801.

We should bump the Helm chart version (in this case, AI Manager from 0.0.1 to 0.0.2) anytime we modify the corresponding chart.


Advanced Install Document

We need to provide documentation for customers doing a customized install, which will require them to fork the repo and then track all of their commits in their own repo.

Suggest using the new helm-based install method for ceph storage

A couple of issues with the current ceph YAML manifests stored in this repo:

  • They use an old ceph version.
  • You will very often see OutOfSync when using them to deploy ceph, especially with manual sync, and you have to re-sync to get rid of it. This is because some fields defined in the ceph YAML manifests have a different appearance, but the same meaning, at runtime; e.g. a field may be missing at runtime while the YAML manifest sets its "enabled" field to false. They mean the same thing but are treated as a difference by Argo CD. That's why you see OutOfSync.
  • They do not handle health properly: the app reports healthy while some ceph pods are still being launched.

All these issues are resolved in https://github.com/cloud-pak-gitops/sample-app-gitops, which uses robot-shop as a sample app and also uses ceph as storage.

With that, I suggest following the sample-app-gitops repo to update this repo (see the sketch below). Also, I suggest creating a separate repo to host all Helm charts centrally, including the ceph one, rather than putting them under each gitops repo as sample-app-gitops does.
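
As a sketch of the helm-based direction (chart repo URL and version are illustrative): an Argo CD Application consuming the rook-ceph operator chart from its upstream Helm repository instead of static manifests in this repo:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rook-ceph
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://charts.rook.io/release   # upstream chart repo
    chart: rook-ceph
    targetRevision: v1.9.4                    # illustrative version
  destination:
    server: https://kubernetes.default.svc
    namespace: rook-ceph
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true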

Airgap enhancement

How can we leverage GitOps to simplify the process for an air-gapped install?

  • How to import images automatically (see the sketch after this list)
  • How to work in an air-gapped cluster without internet access? We cannot use github.com, but what about other services, such as Helm repositories?
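
For the image import part, one building block on OpenShift is an ImageContentSourcePolicy that redirects pulls to a local mirror registry; a sketch with an invented mirror hostname:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: cp4waiops-mirror
spec:
  repositoryDigestMirrors:
    - source: cp.icr.io/cp
      mirrors:
        - registry.local:5000/cp       # hypothetical in-cluster mirror registry
    - source: icr.io/cpopen
      mirrors:
        - registry.local:5000/cpopen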

Update `REPLACE_IT` with more detail

We have some parameters whose default value is REPLACE_IT. Can we update them with more detailed info, e.g. what to replace the value with, what kind of info is needed, etc.? (screenshot omitted)
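
A hypothetical illustration of the idea (the parameter name is invented):

# before: gives the user no hint
storageClassLargeBlock: REPLACE_IT
# after: the placeholder itself says what is expected
storageClassLargeBlock: REPLACE_IT_WITH_A_BLOCK_STORAGE_CLASS_NAME   # e.g. from 'oc get storageclass'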

@IBM/cp4waiops-gitops-core
