ibm / cp4waiops-gitops
Manage Your IBM Cloud Pak for Watson AIOps With GitOps
Home Page: https://ibm.github.io/cp4waiops-gitops/docs/
License: Apache License 2.0
Installing an x-small cp4waiops from the all-in-one configuration fails in the new release-3.4, following the doc https://github.com/IBM/cp4waiops-gitops/blob/release-3.4/docs/how-to-deploy-cp4waiops.md#option-2-experimental-install-using-all-in-one-configuration
project: default
source:
  repoURL: 'https://github.com/IBM/cp4waiops-gitops'
  path: config/all-in-one
  targetRevision: release-3.4
  helm:
    parameters:
      - name: cp4waiops.profile
        value: x-small
      - name: cp4waiops.eventManager.enabled
        value: 'false'
destination:
  server: 'https://kubernetes.default.svc'
  namespace: openshift-gitops
syncPolicy:
  automated: {}
Failure details from the subscription ibm-aiops-orchestrator:
Conditions:
  - lastTransitionTime: '2022-05-29T12:14:18Z'
    message: >-
      targeted catalogsource openshift-marketplace/ibm-operator-catalog
      missing
    reason: UnhealthyCatalogSourceFound
    status: 'True'
    type: CatalogSourcesUnhealthy
  - message: >-
      constraints not satisfiable: no operators found from catalog
      ibm-operator-catalog in namespace openshift-marketplace referenced by
      subscription ibm-aiops-orchestrator, subscription ibm-aiops-orchestrator
      exists
    reason: ConstraintsNotSatisfiable
    status: 'True'
    type: ResolutionFailed
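The first condition says the ibm-operator-catalog catalog source is missing from openshift-marketplace. A sketch of the CatalogSource manifest that would create it, using the GA image mentioned later in this thread (the polling interval is an assumption):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: icr.io/cpopen/ibm-operator-catalog:latest
  displayName: IBM Operator Catalog
  publisher: IBM
  updateStrategy:
    registryPoll:
      interval: 45m   # assumed polling interval
```

Once the catalog source reports READY, OLM should be able to resolve the ibm-aiops-orchestrator subscription.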
including Harbor and Docker registry
based on #3
I think we really need a clear decision on how to name our cp4waiops release chart and version:
In 3.2, we put ai-manager and event-manager into a single helm chart, which is good. The helm chart name + version can then be something like cp4waiops32-0.1.0. It may be a bit confusing why we use cp4waiops32 as the chart name here. The reason is essentially that we do not maintain one consistent helm chart for cp4waiops across releases; instead, we have several physical charts hosting cp4waiops, one per release. Also, unlike cp4waiops itself, the helm chart has its own version, so we can track helm chart changes separately without interfering with the cp4waiops release cycle. For example, cp4waiops32-0.1.0 and cp4waiops32-0.1.1 are both helm charts for cp4waiops 3.2, but chart 0.1.1 adds some enhancements to the helm templates.
In 3.3, we put ai-manager and event-manager into their own separate helm charts, which makes the naming inconsistent with 3.2 and adds confusion. For example, we could use aimanager33-0.1.0 and eventmanager33-0.1.0, but there is no cp4waiops33-0.1.0. It looks as if ai-manager and event-manager are two separate products, and cp4waiops as a product is "missing". Also, eventmanager33 does not mean the event-manager version is 3.3; the 33 refers to the cp4waiops version, while event-manager has its own version, e.g. 1.6.4.0. And it looks like we cannot use a name such as cp4waiops33-aimanager-0.1.0, because it breaks helm template rendering when we want to define another helm chart that depends on the official cp4waiops chart to customize something (only one dash is allowed in a dependent helm chart name).
With that, to keep things simple and consistent, I think it is better to follow the 3.2 approach and use a single helm chart to cover both ai-manager and event-manager. Then, in 3.3, we can keep using cp4waiops33-x.y.z.
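A minimal Chart.yaml sketch of the proposed single-chart convention (the field values here are illustrative, not taken from the repo):

```yaml
apiVersion: v2
name: cp4waiops33       # chart name encodes the cp4waiops release (3.3)
version: 0.1.0          # chart version, bumped independently of cp4waiops
appVersion: "3.3"       # the cp4waiops release this chart installs
description: >-
  Helm chart covering both ai-manager and event-manager for CP4WAIOps 3.3
```

The single dash between name and version also keeps the chart usable as a dependency of a customization chart, which the multi-dash names break.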
Use this item to track all the updates needed to align with the reference implementation.
@morningspace can you check what is wrong with webpage here?
The current repo only supports in-cluster deployment. It fails when deploying to a remote cluster. The major problem is that the configuration assumes there is always an openshift-gitops namespace on the target cluster, and all jobs use the gitops service account created in the openshift-gitops namespace with the cluster-admin role. This is not true on a remote cluster, where neither the GitOps operator nor the namespace exists.
Update document based on #3
In 3.2, we have a pre-check script at https://github.com/IBM/cp4waiops-samples/tree/main/prereq-checker/3.2 ; we need to enable end users to run this check through GitOps.
Add EventManagerGateway
We need to add comments to https://github.com/cloud-pak-gitops/cp4waiops-gitops/blob/main/config/3.2/cp4waiops/values.yaml describing the meaning of each parameter.
We need to update the document to add argocd CLI instructions for creating applications, so that CI/CD or test tools can leverage the argocd CLI to do GitOps.
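A hedged sketch of what such an argocd CLI flow might look like (server address, credentials, and parameter values are placeholders; the repo path and helm parameter come from the all-in-one example elsewhere in this repo):

```shell
# Log in to the Argo CD API server (address and credentials are placeholders)
argocd login <argocd-server> --username admin --password <password>

# Create the CP4WAIOps application declaratively from the CLI
argocd app create cp4waiops \
  --project default \
  --repo https://github.com/IBM/cp4waiops-gitops \
  --path config/all-in-one \
  --revision release-3.4 \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace openshift-gitops \
  --helm-set cp4waiops.profile=x-small \
  --sync-policy automated

# Trigger a sync and block until the app reports healthy -- useful in CI
argocd app sync cp4waiops
argocd app wait cp4waiops --health
```

`argocd app wait` gives CI/CD pipelines a clean exit code to gate subsequent test stages on.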
We shall create a set of sample custom sizing configmaps that do not allow users to modify the detailed resource settings within them; all the resource settings in these configmaps are fixed values or OOTB.
The reason for this is that users will use some tooling to input load factors, such as the number of transactions per second, which drives the tooling to generate these configmaps. Once the configmaps are submitted to the git repo, the GitOps process consumes them and syncs them onto the target system.
As example, we could have:
.
└── config
    └── cp4waiops (cp4waiops gitops configuration)
        └── custom-sizing (cp4waiops custom sizing sample configuration)
            ├── templates
            │   ├── custom-sizing-configmap.yaml (configmap for arbitrary user input)
            │   ├── custom-sizing-resource-lockers.yaml (custom sizing not covered by configmap for arbitrary user input)
            │   ├── x-small-configmap.yaml (configmap for general x-small)
            │   ├── x-small-idle-configmap.yaml (configmap specific for idle workload using x-small)
            │   ├── x-small-lad-configmap.yaml (configmap specific for lad use case using x-small)
            │   ├── x-small-mad-configmap.yaml (configmap specific for mad use case using x-small)
            │   ├── x-small-resource-lockers.yaml (custom sizing not covered by configmap for general x-small)
            │   ├── x-small-idle-resource-lockers.yaml (custom sizing not covered by configmap for idle workload using x-small)
            │   ├── x-small-lad-resource-lockers.yaml (custom sizing not covered by configmap for lad using x-small)
            │   ├── x-small-mad-resource-lockers.yaml (custom sizing not covered by configmap for mad using x-small)
            │   ├── small-lad-configmap.yaml (configmap specific for lad use case using small)
            │   ├── small-mad-configmap.yaml (configmap specific for mad use case using small)
            │   ├── small-lad-resource-lockers.yaml (custom sizing not covered by configmap for lad using small)
            │   └── small-mad-resource-lockers.yaml (custom sizing not covered by configmap for mad using small)
            └── values.yaml (values for arbitrary user input, and the profile, e.g.: x-small, x-small-idle, x-small-lad, small-mad, etc.)
cp4waiops
├── foundation
│   └── ibm-operator-catalogsource
├── aimanager
│   ├── operator
│   └── operands
└── eventmanager
    ├── operator
    └── operands
The CP4WAIops CR goes OutOfSync because noi has been changed by something during installation.
Workaround:
Add ignoreDifferences to the application like below:
kubectl edit application cp4waiops -n openshift-gitops
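A sketch of what the ignoreDifferences entry could look like in the Application spec (the group, kind, and JSON pointer below are assumptions; point them at whichever NOI fields actually drift):

```yaml
spec:
  ignoreDifferences:
    - group: noi.ibm.com        # assumed API group of the NOI operand
      kind: NOI                 # assumed kind of the drifting resource
      jsonPointers:
        - /spec                 # ignore spec fields mutated during installation
```

With this in place, Argo CD skips the listed paths when computing the diff, so operator-driven mutations no longer flip the app to OutOfSync.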
We have some recent changes in docs. However, this has not been synced to the online docs website at https://ibm.github.io/cp4waiops-gitops/docs/. Right now, publishing the docs has to be done manually by running the mkdocs CLI locally. We need to create a GitHub Action to automate this work.
In the GitHub Action, we should:
- ensure the mkdocs CLI is in place and executable
- ensure ghp_import is in place and executable
- run mkdocs build to generate static web pages from the markdown files
- run ghp-import site -p -x docs to publish the generated static web pages to the gh-pages branch under the docs folder
P.S.: We cannot use https://github.com/marketplace/actions/deploy-mkdocs because it will override the whole gh-pages branch, which will override the helm repo index file at the root directory on the same branch.
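A minimal workflow sketch satisfying these constraints (the file name, trigger paths, and Python setup are assumptions; ghp-import is invoked directly instead of the marketplace action so the helm repo index at the root of gh-pages survives):

```yaml
# .github/workflows/publish-docs.yml (hypothetical name)
name: Publish docs
on:
  push:
    branches: [main]
    paths: ['docs/**']
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0            # ghp-import needs git history to update gh-pages
      - uses: actions/setup-python@v4
        with:
          python-version: '3.x'
      - run: pip install mkdocs ghp-import
      - run: mkdocs build           # generates the static site into ./site
      - run: ghp-import site -p -x docs   # publish site/ under docs/ on gh-pages
```

Because ghp-import only rewrites the docs prefix (-x docs), the chart index file at the branch root is left untouched.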
We need to provide documentation for customers on doing a customized install, which requires them to fork the repo and then track all of their commits in their own repo.
@morningspace @liyanwei93 Can we disable argocd-custom by default?
Some changes may include:
The Event Manager installation instructions (https://www.ibm.com/docs/en/noi/1.6.4?topic=openshift-preparing) state that the admin must associate an SCC with the noi-service-account, but I do not see that in the repo, only the service account itself:
https://github.com/IBM/cp4waiops-gitops/blob/main/config/3.3/event-manager/templates/noi-serviceaccount.yaml
Also note that the SA cannot be in the repo, because OCP attaches additional secrets to service accounts and then Argo and OCP start to overwrite each other's changes periodically: Argo shows the resource as out-of-sync because it has an extra secret and deletes it, then OCP adds the secret again.
Unfortunately, the only way to avoid this problem is to use a procedural approach to detect, create, and patch the service account, as needed:
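One way to sketch that procedural approach is an Argo CD PreSync hook Job that creates the service account only if it is missing (so Argo never owns it) and binds the SCC. The namespace, SCC name, image, and hook service account below are assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: noi-serviceaccount-setup
  namespace: noi                        # assumed Event Manager namespace
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      serviceAccountName: gitops        # an SA with rights to create SAs and patch SCCs
      restartPolicy: Never
      containers:
        - name: setup
          image: quay.io/openshift/origin-cli:latest   # any image that ships oc
          command:
            - /bin/bash
            - -c
            - |
              # Create the SA only if absent, so Argo never tracks (and fights over) it
              oc get sa noi-service-account -n noi || oc create sa noi-service-account -n noi
              # Associate the SCC as the install docs require (SCC name assumed)
              oc adm policy add-scc-to-user privileged system:serviceaccount:noi:noi-service-account
```

Keeping the SA out of Argo's tracked resources sidesteps the overwrite loop with the OCP-attached secrets.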
CRDs need to be installed before the CRs are installed.
How to leverage GitOps to simplify the process for air-gapped installs.
Hi all!
After issue #125 my ArgoCD continued. Lots of pods got deployed, but some of them have broken hearts:
When looking in my OCP cluster, I see for example these:
Some of the pods do have errors, for example evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh:
Using TLS certificate located at /internal-tls-keys/tls.crt
Using TLS key located at /internal-tls-keys/tls.key
{"name":"logging","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"Logging started","time":"2022-03-31T09:20:09.003Z","v":0}
{"name":"common-express-server","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"\n== Loaded server application ==\n Package name: @hdm/akora-app-noi\n Package version: 12.0.501\n Base URL: https://netcool-evtmanager.cp4waiops-em-6c9cbda044ecd11a71bd72721098e1cf-0000.eu-de.containers.appdomain.cloud/\n HTTP port: 8080\n HTTPS port: 8443\n Development Mode: false\n Require HTTPS: true\n===============================\n ","time":"2022-03-31T09:20:09.005Z","v":0}
{"name":"akora-server.middleware.Session","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"cookie":{"name":"akora.sid","path":"/","maxage":7200,"samesite":"lax"},"provider":"redis","msg":"Sessions are enabled","time":"2022-03-31T09:20:09.017Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_ea_uiapi","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.140Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-ea-ui-api-graphql.noi.svc:8080/graphql","time":"2022-03-31T09:20:09.141Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_ea_uiapi" ~> ""","time":"2022-03-31T09:20:09.141Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_noi_webgui","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.141Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> https://evtmanager-webgui.noi.svc:16311/ibm/console/webtop","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_noi_webgui" ~> ""","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_cemusers","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-cem-users.noi.svc:6002/","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_cemusers" ~> ""","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_cem_uiapi","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-event-analytics-ui.noi.svc:3201/","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_rba_uiapi","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-rba-rbs.noi.svc:3005/","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_rba_uiapi" ~> ""","time":"2022-03-31T09:20:09.142Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_rba_legacyui","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-ibm-cem-rba-rbs.noi.svc:3005/","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_asm_ui_api","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> https://evtmanager-topology-ui-api.noi.svc:3080/","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_asm_ui_api" ~> ""","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"id":"hdm_noi_dashboarding","msg":"Creating scoped proxy","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy created: / -> http://evtmanager-grafana.noi.svc:3000/","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.ApiProxyRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"[HPM] Proxy rewrite rule created: "^/api/p/hdm_noi_dashboarding" ~> ""","time":"2022-03-31T09:20:09.143Z","v":0}
{"name":"akora-server.routes.DashFederationRoute","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"pages":[{"federateType":["page","widget"],"federated":true,"globalized":[{"language":"ar","label":"Alerts"},{"language":"cs","label":"Výstrahy"},{"language":"de","label":"Alerts"},{"language":"en","label":"Alerts"},{"language":"es","label":"Alertas"},{"language":"fr","label":"Alertes"},{"language":"he","label":"Alerts"},{"language":"hu","label":"Riasztások"},{"language":"it","label":"Avvisi"},{"language":"ja","label":"アラート"},{"language":"ko","label":"경보"},{"language":"pl","label":"Alerty"},{"language":"pt-BR","label":"Alertas"},{"language":"ru","label":"Уведомления"},{"language":"th","label":"Alerts"},{"language":"zh-CN","label":"警报"},{"language":"zh-TW","label":"警示"}],"id":"com.ibm.hdm.noi.alerts.alert-viewer","platforms":["DESKTOP"],"roles":["noi_operator","noi_engineer","noi_lead"],"pageurl":"/aiops/cfd95b7e-3bc7-4006-a4a8-a73a79c71255/federated/alerts","type":"visibl...
{"name":"akora-server.middleware.Authentication","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"mode":"openid-cem","msg":"Authentication is enabled","time":"2022-03-31T09:20:09.419Z","v":0}
{"name":"akora-server.dash-federation.ci","hostname":"evtmanager-ibm-hdm-common-ui-uiserver-58fd6bffbd-bclzh","pid":1,"level":30,"msg":"Check console integration { ci: 'com.ibm.hdm.hdmintegration.ui.ci.cloudanalytics' }","time":"2022-03-31T09:20:09.422Z","v":0}
TypeError: client_id is required
at new BaseClient (/server/node_modules/openid-client/lib/client.js:176:13)
at new Client (/server/node_modules/openid-client/lib/client.js:1821:7)
at module.exports (/server/lib/middleware/Authentication/modes/openid-cem.js:50:18)
at Authentication (/server/lib/middleware/Authentication/index.js:52:14)
at module.exports (/server/lib/routes/getHttpRoutes.js:72:53)
at routes (/server/lib/routes/index.js:21:22)
at Config.applyAppRoutes (/server/lib/index.js:19:32)
at App (/server/node_modules/@hdm/common-express-server/lib/App.js:33:10)
at AsyncFunction.module.exports.createServers (/server/node_modules/@hdm/common-express-server/lib/index.js:43:15)
at module.exports (/server/node_modules/@hdm/common-express-server/lib/index.js:17:40)
As per Slack DM with @morningspace:
Did quick look, so:
CatalogSource refers to icr.io/cpopen/ibm-operator-catalog:latest, that is supposed to be the GA build
Subscription uses channel: v1.7; it looks like the 3.2.0 docs on IBM Docs haven't been updated to reflect the latest release for Event Manager, but I'm assuming this is the right channel
For those pods w/ broken hearts, I see two of them are caused by ImagePullBackOff, which seems to be the root cause, i.e. they cannot pull images from cp.icr.io. The other two pods are impacted because of this.
Hopefully someone can see a missing link in this?
Thanks!
K.
We can take the 3.3 folder as a reference and change the folder structure of the other releases. In particular, 3.2 and 3.3 need to be consistent, so that the same all-in-one chart can be applied to both.
@cloud-pak-gitops/cp4wiops-gitops-core
Create a page like this https://cloud-pak-gitops.github.io/website/docs/#/
We recently modified the helm charts for AI Manager and Event Manager in PR #132, but did not update the AI Manager helm chart version (the Event Manager chart version was updated, which is good). This breaks the GitHub Action that generates the helm chart packages. For more details, see: https://github.com/IBM/cp4waiops-gitops/runs/5797052801.
We should bump the helm chart version (in this case, AI Manager from 0.0.1 to 0.0.2) any time we modify the corresponding helm chart.
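As a guard, a CI step could bump the patch version automatically whenever chart files change. A minimal sketch, assuming a single top-level `version:` line in Chart.yaml (the sample Chart.yaml here is a stand-in, not the repo's real chart):

```shell
# Stand-in Chart.yaml for demonstration
cat > Chart.yaml <<'EOF'
apiVersion: v2
name: aimanager
version: 0.0.1
EOF

# Bump the patch component of an x.y.z version, e.g. 0.0.1 -> 0.0.2
bump_patch() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  echo "${major}.${minor}.$((patch + 1))"
}

# Read the current version, bump it, and rewrite the file in place
current=$(sed -n 's/^version: *//p' Chart.yaml)
sed -i "s/^version: .*/version: $(bump_patch "$current")/" Chart.yaml
grep '^version:' Chart.yaml   # prints: version: 0.0.2
```

A pre-commit hook or CI check comparing the chart version against the main branch would catch forgotten bumps before the packaging Action fails.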
@cloud-pak-gitops/cp4wiops-gitops-core
Follow the proposal as https://github.com/IBM/cloudpak-gitops/tree/v0.1.0
A couple of issues with the current Ceph YAML manifests stored in this repo:
All these issues are resolved in https://github.com/cloud-pak-gitops/sample-app-gitops, which uses robot-shop as a sample app and Ceph as storage too.
With that, I suggest following the sample-app-gitops repo to update this repo. I also suggest creating a separate repo to host all helm charts centrally, including the Ceph one, rather than putting them under each gitops repo as sample-app-gitops does.
@cloud-pak-gitops/cp4wiops-gitops-core
AI Manager and Event Manager cannot be deployed in the same namespace, so we need to set up two namespace parameters.
Refer to the supported scenarios under https://www.ibm.com/docs/en/cloud-paks/cloud-pak-watson-aiops/3.2.0?topic=planning-deployment-scenarios#deploy_aimgr_noi
Also refer to the workaround for installing both on the same cluster in different namespaces.
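A hedged sketch of how two namespace parameters could surface in the Application's helm section (these parameter names are illustrative assumptions, not the chart's actual keys):

```yaml
helm:
  parameters:
    - name: cp4waiops.namespace                 # assumed key for the AI Manager namespace
      value: cp4waiops
    - name: cp4waiops.eventManager.namespace    # assumed key for the Event Manager namespace
      value: noi
```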
As a fix to an issue with the 3.2 GA build, this job is used to wait for some resources to become available before applying image pull requests. The current job has several issues: