cloudoperators / greenhouse
Cloud operations platform

License: Apache License 2.0

Dockerfile 0.07% Makefile 0.49% Smarty 1.03% Go 65.38% Shell 0.71% CSS 0.50% HTML 1.78% JavaScript 17.00% TypeScript 12.80% SCSS 0.24%
cloud juno k8s kubebuilder kubernetes monitoring operations prometheus observability opentelemetry security compliance

greenhouse's Introduction

Greenhouse


Greenhouse is a cloud operations platform designed to streamline and simplify the management of a large-scale, distributed infrastructure.

It offers a unified interface for organizations to manage various operational aspects efficiently and transparently, and to operate their cloud infrastructure in compliance with industry standards.
The platform addresses common challenges such as the fragmentation of tools, the visibility of application-specific permission concepts, and the management of organizational groups. It also emphasizes the harmonization and standardization of authorization concepts to enhance security and scalability. With its operator-friendly dashboard and extensive automation capabilities, Greenhouse empowers organizations to optimize their cloud operations, reduce manual effort, and achieve greater operational efficiency.

Value propositions


Community

Greenhouse holds bi-weekly community calls in Microsoft Teams.

Roadmap

The Roadmap Kanban board provides an overview of ongoing and planned efforts.

Architecture & Design

The Greenhouse design and architecture document describes the various use-cases and user stories.

API resources

Greenhouse extends Kubernetes using Custom Resource Definitions (CRDs).
See the API resources documentation for more details.

Code of Conduct

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its Code of Conduct at all times.

Licensing

Copyright 2024 SAP SE or an SAP affiliate company and Greenhouse contributors. Please see our LICENSE for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available via the REUSE tool.

greenhouse's People

Contributors

ajinkyapatil8190, artiereus, auhlig, christianneu, edda, hodanoori, ivogoman, kengou, ospo-bot[bot], renovate[bot], tilmanhaupt, uwe-mayer


greenhouse's Issues

[FEAT] - Removal of all Greenhouse resources during Cluster Offboarding

Priority

(Low) Something is a little off

Description

In order to ensure no objects are left behind, there should be a garbage collection that removes all Greenhouse-managed resources from the managed cluster.
Under normal circumstances the Greenhouse controllers ensure that their managed resources are removed during the lifecycle.

Acceptance criteria:

  • add a `resources.greenhouse.sap/managed-by` label to all resources created in the managed cluster
  • document this behaviour
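A minimal sketch of the proposed labeling, assuming a hypothetical helper in the controller code (the label key is taken from the acceptance criteria above; the helper name is illustrative):

```go
package main

import "fmt"

// managedByLabel is the label key proposed in this issue.
const managedByLabel = "resources.greenhouse.sap/managed-by"

// stampManagedBy adds the managed-by label to an object's label map so a
// later garbage collection can find everything Greenhouse created via a
// single label selector.
func stampManagedBy(labels map[string]string, owner string) map[string]string {
	if labels == nil {
		labels = map[string]string{}
	}
	labels[managedByLabel] = owner
	return labels
}

func main() {
	labels := stampManagedBy(map[string]string{"app": "kube-monitoring"}, "greenhouse")
	// During offboarding, resources would be listed with the selector
	// "resources.greenhouse.sap/managed-by=greenhouse" and deleted.
	fmt.Println(labels[managedByLabel])
}
```

Offboarding could then delete everything matching that selector in one pass instead of tracking individual objects.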

Reference Issues

No response

Ceph integration

Umbrella issue for tracking Ceph integration.
Granular sub-issues can be found via label WP2.3.4.7 Ceph integration.

Greenhouse must support operational tooling required by the upcoming Ceph implementation.
Coordinate efforts with @jknipper, @defo89, @SchwarzM.

Acceptance criteria:

  • Develop a plugin for the Ceph clusters reflecting the general health and status
  • Ceph k8s clusters onboarded to Greenhouse
  • Greenhouse-managed plugins in cluster: Kube-monitoring, ingress, cert-manager, disco, etc.
  • Kubernetes access management based on Greenhouse teams

[FEAT] - Supported Kubernetes Versions

Priority

(Medium) I'm annoyed but I'll live

Description

Helm supports a limited range of Kubernetes versions:

As of Helm 3, Helm is assumed to be compatible with n-3 versions of Kubernetes it was compiled against
[1] Helm Version skew

This matches the Kubernetes versions officially supported [2] Kubernetes Supported Versions

The Greenhouse HelmController should support the Kubernetes versions officially supported by Kubernetes & Helm, but no more.

Acceptance Criteria:

  • Document the decision on which Kubernetes Version to support.
  • Enduser Docs for which Kubernetes Versions are supported for Greenhouse managed Clusters
  • Ensure only Clusters with supported Kubernetes Versions can be onboarded
  • TBD: Stop reconciliation on Clusters running Kubernetes Versions that reached their EOL
  • Make it visible in the UI if a Cluster's Kubernetes version is going EOL in the next half-year.
  • (Optional) Deep Link to deprecation warnings shown in DOOP [3] Constraint Deprecated API Versions
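A rough sketch of how the onboarding check could apply the n-3 skew rule; the compiled-against minor version and function name are hypothetical, not the agreed design:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// helmCompiledMinor is a placeholder for the Kubernetes minor version the
// Helm/client libraries were compiled against (assumed value).
const helmCompiledMinor = 29

// supportedCluster reports whether a cluster's server version (e.g. "1.27")
// falls within Helm's documented n-3 skew of the compiled-against version.
func supportedCluster(serverVersion string) (bool, error) {
	parts := strings.SplitN(strings.TrimPrefix(serverVersion, "v"), ".", 3)
	if len(parts) < 2 {
		return false, fmt.Errorf("unparsable version %q", serverVersion)
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return false, err
	}
	return minor >= helmCompiledMinor-3 && minor <= helmCompiledMinor, nil
}

func main() {
	ok, _ := supportedCluster("1.26")
	fmt.Println(ok) // true with the assumed compiled-against minor of 29
}
```

The admission webhook for Cluster onboarding would call such a check and reject clusters outside the window.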

Reference Issues

No response

[FEAT] - Greenhouse UI container

Priority

(Medium) I'm annoyed but I'll live

Description

Currently the UI container provided for Greenhouse, deployed with the Helm charts, serves an index.html file with a script tag importing the core greenhouse-ui JavaScript bundle from an external source.
We aim to provide a container that serves all bundles for the microfrontends the Greenhouse core UI consists of from localhost, and therefore has no external dependencies.

Necessary steps:

  • Move the greenhouse and the greenhouse-management app from https://github.com/sapcc/juno/tree/main to this repo
  • Introduce gh-actions to build the packages and ship them to the repository's package registry. The goal is to reuse a custom gh-action that will be published and maintained by the juno team for uniformity.
  • Provide a container image hosting the apps locally
  • Introduce a gh-action to build the UI container and ship it to the repository's container registry

Reference Issues

No response

Initial Release Preparation

  • container images pushed to Github registry (via GH Actions)
  • helm-charts pushed to Github registry (via GH Actions)
  • UI container to Github registry (via GH Actions)
  • How-to for installing Greenhouse ( + manual test)
  • add disclaimer that CRDs are v1alpha1 and documentation is in progress to Project readme
  • check links from Project readme and documentation are working after repo move
  • (optional) link Greenhouse Documentation from ORA Website

[FEAT] - optionally run ChartTests for a Plugin

Priority

None

Description

Currently the HelmController only checks whether a Plugin's Release could be installed/upgraded successfully. However, there are no additional checks to verify that the Helm Release is working as expected.
With Chart Tests it is possible to run a set of tests after a Helm action has been performed to validate that the deployed components work as intended.

Example tests:

  • Validate that your configuration from the values.yaml file was properly injected.
  • Make sure your username and password work correctly
  • Make sure an incorrect username and password does not work
  • Assert that your services are up and correctly load balancing
  • etc.

Acceptance Criteria

  • Plugin Developer can provide ChartTests as part of the PluginDefinition's Helm Chart.
  • Results of the ChartTests are reflected in the Plugin's status
  • Failure of these tests should also be reflected in metrics and result in a Rollback
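Helm chart tests are ordinary chart templates under templates/tests/ annotated with the test hook and executed via `helm test <release>`. A minimal connectivity test, adapted from the standard `helm create` scaffold (service name and port are illustrative), looks like:

```yaml
# templates/tests/test-connection.yaml — runs on `helm test <release>`
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ["wget"]
      args: ["{{ .Release.Name }}-service:80"]
  restartPolicy: Never
```

The HelmController could run such tests after install/upgrade and surface the pod's exit status in the Plugin's status conditions.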

Reference Issues

No response

Feat: Make PluginConfig namespace configurable

As noted down in the internal repository by @IvoGoman:

With the current implementation all Plugins will be installed into the Greenhouse managed namespace within the remote clusters.
In order to have a more granular setup and to avoid a "mono-namespace", it should be possible to provide a namespace with the PluginConfig. To ensure that no Helm releases are left behind, this namespace should be immutable and default to the Greenhouse-managed namespace.

[FEAT] - Merging PluginOptionValues (list & map) with defaults

Priority

(Medium) I'm annoyed but I'll live

Description

PluginDefinitions can specify defaults for their PluginOptions. The current implementation overrides these defaults if a Plugin supplies a corresponding PluginOptionValue.

In some cases this behavior is not ideal. For example, a PluginDefinition specifies a PluginOption for ingress annotations with a default map of annotations. If a PluginAdmin wants to add an additional annotation to the map, they must copy the defaults and supply them in the Plugin together with the additional value.

For such cases it would be a convenient feature if the provided defaults for a map or list were merged with the values provided in a Plugin.

Proposal: PluginOptions of type list and map should be merged by default. Unwanted default keys should be removed by setting them to null, as is the known behaviour of Helm: deleting a default key

Acceptance Criteria

  • PluginOptionValues are merged with the defaults of PluginOptions for the types list and map
  • Documentation of this feature
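The proposed merge semantics can be sketched as a small deep-merge helper, with `null` deleting a defaulted key as in Helm. The function and value shapes are illustrative, not Greenhouse's actual types:

```go
package main

import "fmt"

// mergeValues deep-merges override values into defaults, mirroring Helm's
// coalescing rules: maps merge recursively, an explicit nil in overrides
// deletes the defaulted key, and any other override wins.
func mergeValues(defaults, overrides map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for k, v := range defaults {
		out[k] = v
	}
	for k, v := range overrides {
		if v == nil { // `key: null` removes the default, as in Helm
			delete(out, k)
			continue
		}
		if dm, ok := out[k].(map[string]interface{}); ok {
			if om, ok := v.(map[string]interface{}); ok {
				out[k] = mergeValues(dm, om)
				continue
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	defaults := map[string]interface{}{
		"annotations": map[string]interface{}{"a": "1", "b": "2"},
	}
	plugin := map[string]interface{}{
		"annotations": map[string]interface{}{"c": "3", "b": nil},
	}
	fmt.Println(mergeValues(defaults, plugin)["annotations"]) // map[a:1 c:3]
}
```

With this behavior the PluginAdmin from the ingress-annotations example only supplies the extra annotation, and removes an unwanted default by setting it to null.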

Reference Issues

No response

[Epic] Greenhouse installer

Develop a distribution and installer for greenhouse artifacts on a Kubernetes platform.

Tasks:

  • Evaluate OCM to describe the SBOM/SBOD.

[FEAT] - Read-only OptionTypes

Priority

(Medium) I'm annoyed but I'll live

Description

As a Plugin Developer, I want to ensure that some defaults are not overwritten, so that the Plugin behaves as intended.

Some PluginOptions should be read-only to ensure that the Plugin behaves as the PluginDeveloper intended. This could concern security-relevant defaults, or simply making sure that an incompatible or problematic setting cannot be overwritten by a Plugin, e.g. enabling or disabling sub-charts.
Proposal: PluginOptions are extended with a readOnly attribute. If this is set to true, the ValidatingWebhooks for a Plugin ensure that any PluginOptionValue trying to overwrite the default is ignored. The AdmissionWebhook should either:

  • return an error to ensure this behaviour is transparent to the enduser.
  • return a warning and ignore the overwrite

The built-in merge functions in helm.go must ensure that these defaults are not overwritten.

Open Question: Should it be possible to turn an Option into a read-only one later on? This would require some additional effort to ensure that this does not break existing PluginConfigs. Another option would be to ignore any overwrites by a Plugin, or possibly raise a warning.

Acceptance Criteria:

  • It is possible to specify in a PluginDefinition that a PluginOption is read-only
  • It is transparent to the user, what happens if they try to overwrite a read-only PluginOption
  • Documentation of this feature
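The webhook check for the error-returning variant could boil down to something like the following sketch, where the option and value shapes are illustrative stand-ins for the actual Greenhouse types:

```go
package main

import "fmt"

// validateReadOnly rejects any PluginOptionValue that targets an option
// flagged read-only, making the denial transparent to the end user.
func validateReadOnly(readOnlyOptions map[string]bool, values map[string]string) error {
	for name := range values {
		if readOnlyOptions[name] {
			return fmt.Errorf("option %q is read-only and cannot be overwritten", name)
		}
	}
	return nil
}

func main() {
	readOnly := map[string]bool{"subcharts.enabled": true}
	err := validateReadOnly(readOnly, map[string]string{"subcharts.enabled": "false"})
	fmt.Println(err != nil) // true: the webhook would deny this Plugin
}
```

The warn-and-ignore variant would instead drop the offending value and attach an admission warning rather than returning an error.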

Reference Issues

No response

[FEAT] - Instrument controller to expose metrics

Priority

None

Description

At the moment the controllers built with Kubebuilder expose a generic standard set of metrics. With additional instrumentation the controllers should expose actionable metrics.
E.g. the HelmController should expose metrics w.r.t. the reconciliation of the Helm release for a Plugin, such that the metrics show reconciliation failures and their reasons.

Acceptance criteria:

  • Instrumentation of the Controllers
  • Alerting for Plugin, Cluster and Team RBAC metrics
  • Plutono Graphs e.g. Reconciliation errors, reconciliations/minute ...
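As a stand-in for the real instrumentation (which would register a prometheus.CounterVec with controller-runtime's metrics registry), the pattern can be sketched with a plain stdlib counter keyed by controller and result:

```go
package main

import (
	"fmt"
	"sync"
)

// reconcileCounter mimics the shape of a Prometheus counter vector with
// labels (controller, result). This stdlib stand-in only illustrates where
// the instrumentation points sit; it is not the proposed implementation.
type reconcileCounter struct {
	mu     sync.Mutex
	counts map[[2]string]int
}

func (c *reconcileCounter) inc(controller, result string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.counts == nil {
		c.counts = map[[2]string]int{}
	}
	c.counts[[2]string{controller, result}]++
}

func main() {
	var metrics reconcileCounter
	// Inside Reconcile(), the success and failure paths would each bump the
	// counter so alerts can fire on a rising error rate.
	metrics.inc("helmcontroller", "error")
	metrics.inc("helmcontroller", "success")
	fmt.Println(metrics.counts[[2]string{"helmcontroller", "error"}]) // 1
}
```

A reconciliations-per-minute or error-rate graph in Plutono then falls out of the counter via a rate query.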

Reference Issues

No response

[FEAT] - Plugin Integration

Priority

(Medium) I'm annoyed but I'll live

Description

Plugins can have other Plugins as prerequisites or dependencies. It can be the case that one Plugin requires another one to be deployed, and/or that they partially require the same configuration. In these cases the configuration currently has to be maintained for each Plugin.
In order to reduce the configuration effort on the Plugins themselves, it should be possible to integrate Plugins with each other.

E.g. a logshipper Plugin allows collection and forwarding of container logs, while a logs backend Plugin provides an OpenSearch cluster, or maybe just the connection details to an existing cluster. It should be possible to take the required connection configuration from the logs Plugin and make it available to the logshipper Plugin.

Currently we are handling this with a multitude of Helm values files that make it possible to share configuration between Helm charts. In the context of Plugins it would be an option to reference options from other Plugins. For example, PluginDefinitionA declares that it requires OptionB from a PluginDefinitionB. This way, during the configuration of PluginA from the Greenhouse UI, the value may be pre-populated from an existing PluginB that runs in the same target cluster.

Acceptance criteria:

  • work out a concept for sharing configuration between Plugins and document in an ADR
  • implement the solution

Reference Issues

No response

๐Ÿ› [BUG] - `make generate-manifests` removes 2-line license headers

Priority

(Medium) I'm annoyed but I'll live

Description

GitHub Actions add the license headers to every file, including the generated CRDs.
When generating the CRDs locally, these headers get removed. It would be nice if we could run the license header injector locally after generating the manifests etc.

Reproduction steps

1. `make generate-manifests`

Manifests

No response

Screenshots

No response

Transfer 2024 roadmap from internal github

Actively work on open-sourcing Greenhouse, considering codebase dependencies, documentation, removal of all internal references, etc., and migrate to this repository.

[FEAT] - Shared Configuration for Plugins

Priority

(Medium) I'm annoyed but I'll live

Description

As a ClusterAdmin, I want to set common configuration once in a central place, so that all Plugins requiring these configuration values get them automatically.

  1. Example: Having an alerts plugin configured within my organization, all subsequently instantiated kube-monitoring plugins should have the Alertmanager automatically configured.
  2. Example: Having a slack plugin configured, the alertmanager should have the respective integration configured.

In contrast to #84, this is not a 1:1 relationship between Plugins but a central place where configuration can be maintained on an Organization or Cluster level.

Acceptance criteria:

  • work out a concept for sharing configuration between Plugins and document in an ADR

  • implement the solution

Reference Issues

#84

Kubernetes access management

Umbrella issue for tracking K8S Access management.

Granular sub-issues can be found via label WP2.3.4.2 Kubernetes access management.

Acceptance criteria:

  • Manage access to registered k8s clusters based on greenhouse teams
  • Assign cluster-wide and namespace specific roles to gh teams
  • implement the existing cluster-wide roles (admin, viewer, operator)
  • add support for namespace specific roles (admin, viewer, editor)
  • Tightly control access to k8s secrets
  • Access to secrets should be forbidden in general
  • Do not break admin/editor tasks (e.g. using the helm cli)

Cluster registry

Umbrella issue for tracking the cluster registry feature in the roadmap 2023 Kanban board.
Granular sub-issues can be found via label WP2.3.4.4 Cluster registry.

Manage operational tooling in registered Kubernetes cluster via Greenhouse.

Acceptance criteria:

  • Onboard, off-board clusters to Greenhouse with direct access via UI or kubectl
  • Organization administrators can configure Plugins for 1..N clusters
  • Greenhouse dashboard provides a cluster overview and management page
  • Greenhouse dashboard provides a cluster detail view showing at least the Kubernetes version, node statuses and configured plugins
  • Authentication for onboarded clusters is managed via unified registry

End-to-end test scenarios

This issue serves as a collection of e2e test scenarios:

  • Org
    • Service Proxy
    • Mock Auth
  • Teams
    • TeamMemberships with MockIdp (?)
  • Cluster
    • Bootstrap
    • Delete
  • Plugin
    • Version update
      • This is a crucial test case: we need to ensure that a plugin upgrade works without problems and that plugin updates will not break when rolled out
    • Deploy PluginConfig

[FEAT] - StatusConditions for TeamRoleBinding

Priority

(Low) Something is a little off

Description

Add Greenhouse StatusConditions to the TeamRoleBinding and populate them in the controller.

Reference Issues

#238 introduces StatusConditions for TeamRoleBindings

[FEAT] - Build Website via GitHub Action

Priority

(Medium) I'm annoyed but I'll live

Description

Acceptance Criteria:

  • Website is built with Hugo via an Action and published to GitHub Pages #174
  • Changes to the Readme's of PluginDefinitions in cloudoperators/greenhouse-extensions trigger the action cloudoperators/greenhouse-extensions#85
  • (optional) It is possible to choose different versions of the docs (e.g. main, vX.Y)

Reference Issues

No response

Conferences

The open-source contributions within the ORA context should be presented to the top conferences to raise awareness and engage with the community.
Key topics are kubernetes, cloud-native, operations, devops, observability, security, compliance, ui, open-source, etc.
This issue tracks upcoming conferences we should consider showcasing Greenhouse:

  1. Open source summit
    • 16-18 SEPTEMBER 2024, Vienna
    • CFP closes: Tuesday, April 30, 12:00 AM PDT (UTC-7) / 0900 CEST (UTC+2)
  2. PromCon 2024
    • Berlin
    • CFP closes:
  3. KubeCon Europe 2025
    • NA
    • CFP closes:
  4. DevOps days Portugal
    • SEPTEMBER 23 - 24, 2024
    • CFP closes: Sunday, May 5, 2024.
  5. DevOps days London
    • SEPTEMBER 26 - 27, 2024
    • CFP closes: 23:59 BST 2024-05-24
  6. SREday London
    • SEPTEMBER 19-20, 2024
    • CFP closes: 18:35 CEST 2024-06-24
  7. SREday Amsterdam
    • NA
    • CFP closes:
  8. Continuous Lifecycle Mannheim
    • 13.-14. November
    • CFP closes: 2024-05-05

[FEAT] - Unify logging libraries in the GO code

Priority

(Low) Something is a little off

Description

There are currently direct imports and uses of the following logging libraries within Greenhouse.

log
go.uber.org/zap
github.com/sirupsen/logrus
github.com/go-logr/logr
k8s.io/klog/v2
sigs.k8s.io/controller-runtime/pkg/log

Where possible we should aim to unify this list. As part of this refactoring, the logging should also be made contextual, where possible and where it adds additional value.

Context:
K8S SIG Instrumentation Logging Guidelines
K8S Structured Logging Guidelines

Reference Issues

No response

[FEAT] - Greenhouse managed namespace in managed cluster

Priority

(Medium) I'm annoyed but I'll live

Description

Greenhouse creates a namespace in the managed cluster. By default this namespace contains the Helm releases for all Plugins installed to the cluster, as well as the Greenhouse CRs that are being propagated.
Currently this namespace is named after the Organization the cluster was onboarded to. This issue is to change that behaviour and create a namespace called greenhouse instead.

Acceptance criteria:

  • existing clusters with the old behaviour should work as is (some plugins cannot easily be migrated)
  • info/warning from admission, if this namespace already exists
  • enduser documentation

Reference Issues

No response

[FEAT] - Add Team awareness to Plugins

Priority

(Low) Something is a little off

Description

The Team and TeamMemberShip CRDs enable user management within an Organization.
Currently, all Teams are being considered by any Plugin, regardless of the Plugin's relevance for a Team.
Granular control is needed.

Proposal:

  • Label Teams that are relevant for a PluginDefinition
    Accepted label keys are:
    • name of the PluginDefinition, e.g. the supernova plugin adds `greenhouse.sap/supernova` to Teams
    • common labels like greenhouse.sap/support-group
  • UI: Include a step in the Plugin enablement wizard to allow selection of the relevant Teams.
  • If the PluginDefinition is disabled, the label containing the PluginDefinition name is removed from all teams

Reference Issues

No response

Kubernetes monitoring plugins

Context

Umbrella issue for tracking efforts on Kubernetes monitoring plugins. Granular subtopics can be found via the label WP2.3.4.3 Kubernetes Monitoring.

The Greenhouse catalog offers plugins that install a collection of Kubernetes manifests, Plutono dashboards and Prometheus rules. They are combined with documentation and scripts to provide easy-to-use end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

Plugins

kube-monitoring

alerts

Plutono

By architectural decision, the Alertmanager and Plutono are accepted as components in the central Greenhouse cluster. Other components run in remote Kubernetes clusters. Plugins are expected to be highly integrated so that certain configuration options work out of the box.

Acceptance criteria:

  1. Prometheus operator and CRDs
    • Deploy as cluster singleton
  2. Prometheus server
    • kube-monitoring plugin can deploy multiple Prometheus server instances
  3. Prometheus Alertmanager
    • Single Alertmanager instance per organisation, with alerting params configured on all Prometheus instances.
    • Advanced: Evaluate multi-AZ HA scenario
  4. Plutono
    • Holistic instance in central cluster. Additional ones in remote clusters.
  5. Plugin tests
  6. Documentation, lots of documentation

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository.

  • WARN: Package lookup failures

Warning

Renovate failed to look up the following dependencies: Failed to look up npm package juno-ui-components, Failed to look up npm package url-state-provider, Failed to look up npm package messages-provider.

Files affected: ui/cluster-admin/package.json, ui/dashboard/package.json, ui/org-admin/package.json, ui/plugin-admin/package.json, ui/team-admin/package.json



Detected dependencies

docker-compose
ui/cluster-admin/docker-compose.yaml
ui/plugin-admin/docker-compose.yaml
ui/secret-admin/docker-compose.yaml
ui/team-admin/docker-compose.yaml
dockerfile
Dockerfile
  • golang 1.22
Dockerfile.dev-env
  • golang 1.22
  • alpine 3.20.0
Dockerfile.headscalectl
  • golang 1.22
Dockerfile.tailscale
  • golang 1.22
  • ghcr.io/tailscale/tailscale v1.66.3
Dockerfile.tcp-proxy
  • golang 1.22
github-actions
.github/workflows/build-greenhousectl.yaml
  • actions/checkout v4
  • actions/setup-go v5
  • goreleaser/goreleaser-action v5
.github/workflows/codeql.yaml
  • actions/checkout v4
  • actions/setup-go v5
  • github/codeql-action v3
  • github/codeql-action v3
  • github/codeql-action v3
.github/workflows/docker-build.yaml
  • actions/checkout v4
  • sigstore/cosign-installer v3.5.0
  • docker/setup-qemu-action v3
  • docker/setup-buildx-action v3
  • docker/login-action v3
  • docker/metadata-action v5
  • docker/build-push-action v5
  • github/codeql-action v3
.github/workflows/helm-lint.yaml
  • actions/checkout v4
  • azure/setup-helm v4.2.0
  • actions/setup-python v5
  • helm/chart-testing-action v2.6.1
  • actions/github-script v7
.github/workflows/helm-push.yaml
  • actions/checkout v4
  • azure/setup-helm v4.2.0
  • actions/setup-python v5
  • docker/login-action v3
  • tj-actions/changed-files v44
.github/workflows/hugo.yaml
  • actions/checkout v4
  • actions/checkout v4
  • actions/configure-pages v5
  • actions/upload-pages-artifact v3
  • actions/deploy-pages v4
.github/workflows/kustomize-lint.yaml
  • actions/checkout v4
  • actions/setup-go v5
.github/workflows/label.yml
  • actions/labeler v5
.github/workflows/license.yaml
  • actions/checkout v4
  • apache/skywalking-eyes v0.6.0
  • EndBug/add-and-commit v9
.github/workflows/stale.yaml
  • actions/stale v9
.github/workflows/ui-test.yml
  • actions/checkout v4
  • actions/setup-node v4
.github/workflows/unit-tests.yml
  • actions/checkout v4
  • actions/setup-go v5
gomod
go.mod
  • go 1.22
  • github.com/dexidp/dex/api/v2 v2.1.1-0.20240409110506-3705207f0190@3705207f0190
  • k8s.io/api v0.29.5
  • k8s.io/apiextensions-apiserver v0.29.5
  • k8s.io/apimachinery v0.29.5
  • k8s.io/cli-runtime v0.29.5
  • k8s.io/client-go v0.29.5
  • k8s.io/component-base v0.29.5
  • k8s.io/kubectl v0.29.5
  • sigs.k8s.io/controller-runtime v0.17.5
  • github.com/ghodss/yaml v1.0.0
  • github.com/jeremywohl/flatten/v2 v2.0.0-20211013061545-07e4a09fb8e4@07e4a09fb8e4
  • github.com/juanfont/headscale v0.22.3
  • github.com/oklog/run v1.1.1-0.20240127200640-eee6e044b77c@eee6e044b77c
  • github.com/onsi/ginkgo/v2 v2.19.0
  • github.com/onsi/gomega v1.33.1
  • github.com/pkg/errors v0.9.1
  • github.com/prometheus/client_golang v1.19.1
  • github.com/sirupsen/logrus v1.9.3
  • github.com/spf13/cobra v1.8.0
  • github.com/spf13/pflag v1.0.5
  • github.com/wI2L/jsondiff v0.5.2
  • go.uber.org/zap v1.27.0
  • golang.org/x/text v0.15.0
  • gopkg.in/yaml.v3 v3.0.1
  • gotest.tools/v3 v3.5.1
  • helm.sh/helm/v3 v3.14.4
  • k8s.io/api v0.29.5
  • k8s.io/apiextensions-apiserver v0.29.5
  • k8s.io/apimachinery v0.29.5
  • k8s.io/cli-runtime v0.29.5
  • k8s.io/client-go v0.29.5
  • k8s.io/kubectl v0.29.5
  • k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0@fe8a2dddb1d0
  • sigs.k8s.io/controller-runtime v0.15.3
  • sigs.k8s.io/yaml v1.4.0
  • github.com/go-logr/logr v1.4.2
  • github.com/google/uuid v1.6.0
  • github.com/otiai10/copy v1.14.0
  • github.com/prometheus/common v0.53.0
  • golang.org/x/time v0.5.0
  • google.golang.org/grpc v1.64.0
  • google.golang.org/protobuf v1.34.1
  • k8s.io/klog/v2 v2.120.1
website/go.mod
  • go 1.20
helm-values
charts/cors-proxy/values.yaml
charts/headscale/values.yaml
  • ghcr.io/gurucomputing/headscale-ui 2023.01.30-beta-1
  • postgres 16.3
charts/idproxy/values.yaml
charts/manager/values.yaml
  • registry.k8s.io/ingress-nginx/kube-webhook-certgen v20221220-controller-v1.5.1-58-g787ea74b6
charts/tailscale-proxy/values.yaml
charts/team-membership/values.yaml
charts/ui/values.yaml
  • nginx 1.26.0-alpine
charts/website/values.yaml
helmv3
charts/greenhouse/Chart.yaml
npm
ui/cluster-admin/package.json
  • @types/jest ^27.5.2
  • @types/node ^16.18.58
  • @types/react ^18.2.27
  • @types/react-dom ^18.2.12
  • interweave ^13.1.0
  • postcss-url ^10.1.3
  • ts-luxon ^4.4.0
  • typescript ^5.2.2
  • @babel/core ^7.20.2
  • @babel/preset-env ^7.20.2
  • @babel/preset-react ^7.18.6
  • @babel/preset-typescript ^7.23.2
  • @svgr/core ^7.0.0
  • @svgr/plugin-jsx ^7.0.0
  • @tanstack/react-query 4.28.0
  • @testing-library/dom ^8.19.0
  • @testing-library/jest-dom ^5.16.5
  • @testing-library/react ^13.4.0
  • @testing-library/user-event ^14.4.3
  • assert ^2.0.0
  • autoprefixer ^10.4.2
  • babel-jest ^29.3.1
  • babel-plugin-transform-import-meta ^2.2.0
  • esbuild ^0.17.19
  • eslint-plugin-react-hooks ^4.6.0
  • jest ^29.7.0
  • jest-environment-jsdom ^29.3.1
  • postcss ^8.4.21
  • postcss-url ^10.1.3
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • react-test-renderer 18.2.0
  • sapcc-k8sclient ^1.0.2
  • sass ^1.60.0
  • shadow-dom-testing-library ^1.7.1
  • tailwindcss ^3.3.1
  • util ^0.12.4
  • zustand ^4.1.1
  • @tanstack/react-query ^4.28.0
  • juno-ui-components *
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • url-state-provider *
  • zustand ^4.1.1
ui/dashboard/package.json
  • @babel/core ^7.20.2
  • @babel/preset-env ^7.20.2
  • @babel/preset-react ^7.18.6
  • @svgr/core ^7.0.0
  • @svgr/plugin-jsx ^7.0.0
  • @tailwindui/react ^0.1.1
  • @testing-library/dom ^8.19.0
  • @testing-library/jest-dom ^5.16.5
  • @testing-library/react 13.4.0
  • @testing-library/user-event ^14.4.3
  • assert ^2.0.0
  • autoprefixer ^10.4.2
  • babel-jest ^29.3.1
  • babel-plugin-transform-import-meta ^2.2.0
  • esbuild ^0.20.2
  • immer ^9.0.21
  • jest ^29.3.1
  • jest-environment-jsdom ^29.3.1
  • postcss ^8.4.21
  • postcss-url ^10.1.3
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • react-test-renderer 18.2.0
  • sapcc-k8sclient ^1.0.2
  • sass ^1.60.0
  • shadow-dom-testing-library ^1.7.1
  • tailwindcss ^3.3.1
  • util ^0.12.4
  • zustand 4.5.2
  • juno-ui-components *
  • messages-provider *
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom ^18.2.0
  • url-state-provider *
  • utils *
  • zustand 4.3.7
ui/org-admin/package.json
  • @babel/core ^7.20.2
  • @babel/preset-env ^7.20.2
  • @babel/preset-react ^7.18.6
  • @svgr/core ^7.0.0
  • @svgr/plugin-jsx ^7.0.0
  • @tanstack/react-query 4.28.0
  • @testing-library/dom ^8.19.0
  • @testing-library/jest-dom ^5.16.5
  • @testing-library/react ^13.4.0
  • @testing-library/user-event ^14.4.3
  • assert ^2.0.0
  • autoprefixer ^10.4.2
  • babel-jest ^29.3.1
  • babel-plugin-transform-import-meta ^2.2.0
  • jest ^29.3.1
  • jest-environment-jsdom ^29.3.1
  • luxon ^2.3.0
  • postcss ^8.4.21
  • postcss-url ^10.1.3
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • react-test-renderer 18.2.0
  • sapcc-k8sclient ^1.0.2
  • sass ^1.60.0
  • shadow-dom-testing-library ^1.7.1
  • tailwindcss ^3.3.1
  • util ^0.12.4
  • zustand 4.5.2
  • esbuild ^0.19.5
  • @tanstack/react-query 4.28.0
  • juno-ui-components *
  • luxon ^2.3.0
  • messages-provider *
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom ^18.2.0
  • url-state-provider *
  • utils *
  • zustand 4.3.7
ui/plugin-admin/package.json
  • github-markdown-css ^5.5.1
  • react-markdown ^9.0.1
  • rehype-raw ^7.0.0
  • remark-gfm ^4.0.0
  • @babel/core ^7.20.2
  • @babel/preset-env ^7.20.2
  • @babel/preset-react ^7.18.6
  • @babel/preset-typescript ^7.24.1
  • @svgr/core ^7.0.0
  • @svgr/plugin-jsx ^7.0.0
  • @tanstack/react-query 4.28.0
  • @testing-library/dom ^8.19.0
  • @testing-library/jest-dom ^5.16.5
  • @testing-library/react ^13.4.0
  • @testing-library/user-event ^14.4.3
  • assert ^2.0.0
  • autoprefixer ^10.4.2
  • babel-jest ^29.3.1
  • babel-plugin-transform-import-meta ^2.2.0
  • esbuild ^0.19.5
  • jest ^29.3.1
  • jest-environment-jsdom ^29.3.1
  • luxon ^2.3.0
  • postcss ^8.4.21
  • postcss-url ^10.1.3
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • react-test-renderer 18.2.0
  • sapcc-k8sclient ^1.0.2
  • sass ^1.60.0
  • shadow-dom-testing-library ^1.7.1
  • tailwindcss ^3.3.1
  • util ^0.12.4
  • zustand 4.3.7
  • juno-ui-components *
  • luxon ^2.3.0
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • url-state-provider *
  • zustand 4.3.7
ui/secret-admin/package.json
  • github-markdown-css ^5.5.1
  • kubernetes-types ^1.30.0
  • react-markdown ^9.0.1
  • rehype-raw ^7.0.0
  • remark-gfm ^4.0.0
  • @babel/core ^7.20.2
  • @babel/preset-env ^7.20.2
  • @babel/preset-react ^7.18.6
  • @babel/preset-typescript ^7.24.1
  • @svgr/core ^7.0.0
  • @svgr/plugin-jsx ^7.0.0
  • @tanstack/react-query 4.28.0
  • @testing-library/dom ^8.19.0
  • @testing-library/jest-dom ^5.16.5
  • @testing-library/react ^13.4.0
  • @testing-library/user-event ^14.4.3
  • assert ^2.0.0
  • autoprefixer ^10.4.2
  • babel-jest ^29.3.1
  • babel-plugin-transform-import-meta ^2.2.0
  • esbuild ^0.19.12
  • jest ^29.3.1
  • jest-environment-jsdom ^29.3.1
  • luxon ^2.3.0
  • postcss ^8.4.21
  • postcss-url ^10.1.3
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • react-test-renderer 18.2.0
  • sapcc-k8sclient ^1.0.2
  • sass ^1.60.0
  • shadow-dom-testing-library ^1.7.1
  • tailwindcss ^3.3.1
  • util ^0.12.4
  • zustand 4.3.7
  • luxon ^2.3.0
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • zustand 4.3.7
ui/team-admin/package.json
  • lodash ^4.17.21
  • @babel/core ^7.20.2
  • @babel/preset-env ^7.20.2
  • @babel/preset-react ^7.18.6
  • @svgr/core ^7.0.0
  • @svgr/plugin-jsx ^7.0.0
  • @testing-library/dom ^8.19.0
  • @testing-library/jest-dom ^5.16.5
  • @testing-library/react ^13.4.0
  • @testing-library/user-event ^14.4.3
  • assert ^2.0.0
  • autoprefixer ^10.4.2
  • babel-jest ^29.3.1
  • babel-plugin-transform-import-meta ^2.2.0
  • esbuild ^0.17.19
  • jest ^29.3.1
  • jest-environment-jsdom ^29.3.1
  • luxon ^2.3.0
  • postcss ^8.4.21
  • postcss-url ^10.1.3
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • react-test-renderer 18.2.0
  • sapcc-k8sclient ^1.0.2
  • sass ^1.60.0
  • shadow-dom-testing-library ^1.7.1
  • tailwindcss ^3.3.1
  • zustand 4.5.2
  • juno-ui-components *
  • luxon ^2.3.0
  • messages-provider *
  • prop-types ^15.8.1
  • react 18.2.0
  • react-dom 18.2.0
  • url-state-provider *
  • utils *
  • zustand 4.5.2
ui/types/package.json
  • kubernetes-types ^1.30.0
ui/utils/package.json
  • ts-luxon ^4.5.2


[FEAT] - Validation and Generation of ServiceProxy URLs

Priority

(Medium) I'm annoyed but I'll live

Description

The service-proxy allows exposing Kubernetes Service resources from an onboarded cluster. In the past this has caused trouble when the combined $service-$namespace-$cluster name is longer than 63 characters.

The labels must follow the rules for ARPANET host names. They must
start with a letter, end with a letter or digit, and have as interior
characters only letters, digits, and hyphen. There are also some
restrictions on the length. Labels must be 63 characters or less.
(RFC 1035)

From an initial discussion with @databus23 this solution is proposed:

  • generate shorter links smarter: 😄 Instead of always generating cryptic short links, we only do it when required.
    Ideally we want the service and cluster name to be part of the URL because it's useful information. So we can try to generate names like $service--$cluster and fall back to $service-$hash only if we would otherwise end up with more than 63 characters.

With some proposals for the generated name:

  1. We should add an additional annotation to the service that can specify a short service name. The name of the service object is too long as it contains the Helm release, e.g. kube-monitoring-external-prometheus --> external-prometheus.
    So by specifying something like greenhouse.sap/expose-name: "external-prometheus" in addition to greenhouse.sap/expose: "true", we can generate a URL with a shorter, more meaningful name
  2. We should maybe constrain the length of cluster names so that we don't end up with names like garden-greenhouse--monitoring-external. This cluster should really be named monitoring-external in the greenhouse context.
  3. As long as there isn't a clash of service names in a cluster (the same service name in multiple namespaces), we use $service--$cluster as the hostname. If there is a clash, we add the namespace to the hostname: $service--$cluster--$namespace. This might change the URL of an existing service if a second one with the same name is created in the cluster, but at least we can resolve the conflict.
  4. If the resulting string is > 63 characters, we hash the name, similar to how the Deployment pod-template hashes are generated: we take a 52-character prefix of the generated name and append "-[10 digit hash]" to it. Because we changed the ordering in 3., there is a higher chance that the service and cluster name will still be visible and only the namespace will be replaced by the hash.
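The fallback logic in steps 3 and 4 can be sketched as follows. This is a minimal stdlib-only sketch; the exact hash function and truncation points are assumptions, not the final implementation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// buildHostname prefers the readable $service--$cluster form, adds the
// namespace only on a clash, and falls back to prefix+hash when the
// result would exceed the 63-character DNS label limit.
func buildHostname(service, cluster, namespace string, clash bool) string {
	name := service + "--" + cluster
	if clash {
		// Same service name exists in multiple namespaces: disambiguate.
		name = name + "--" + namespace
	}
	if len(name) <= 63 {
		return name
	}
	// Too long for a DNS label: keep a 52-character prefix and append a
	// short hash, similar to how Deployment pod-template hashes work.
	sum := fmt.Sprintf("%x", sha256.Sum256([]byte(name)))
	return name[:52] + "-" + sum[:10]
}

func main() {
	fmt.Println(buildHostname("external-prometheus", "monitoring-external", "kube-monitoring", false))
}
```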

Reference Issues

No response

[FEAT] - Rollout status for a Plugin

Priority

(Low) Something is a little off

Description

As an Operator, I would like to see the rollout status of the deployed resources in order to know whether the Plugin was configured and deployed correctly.

Our internal pipelines have an additional step that looks at the rollout status for Deployments, StatefulSets, and DaemonSets that were deployed with a Helm release.

Acceptance Criteria:

  • Plugin has an additional status field for the rollout status
  • After a Deployment of the Helm Chart the rollout status for Deployments, StatefulSets, and DaemonSets contained in the release are checked
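For reference, the check that `kubectl rollout status` performs for a Deployment boils down to comparing a few status counters (ignoring progress deadlines). A stdlib-only sketch against a simplified stand-in struct, not the real appsv1.Deployment type:

```go
package main

import "fmt"

// deploymentStatus mirrors the handful of fields the rollout check reads
// from a Deployment (simplified stand-in, not the real API type).
type deploymentStatus struct {
	Generation         int64 // metadata.generation
	ObservedGeneration int64 // status.observedGeneration
	DesiredReplicas    int32 // spec.replicas
	UpdatedReplicas    int32 // status.updatedReplicas
	AvailableReplicas  int32 // status.availableReplicas
}

// rolledOut reports whether the Deployment has fully rolled out, roughly
// following the conditions `kubectl rollout status` checks.
func rolledOut(d deploymentStatus) bool {
	if d.ObservedGeneration < d.Generation {
		return false // controller has not yet seen the latest spec
	}
	if d.UpdatedReplicas < d.DesiredReplicas {
		return false // some replicas still run the old template
	}
	if d.AvailableReplicas < d.UpdatedReplicas {
		return false // updated replicas are not yet available
	}
	return true
}

func main() {
	fmt.Println(rolledOut(deploymentStatus{1, 1, 3, 3, 3}))
}
```

The same counters exist on StatefulSets and DaemonSets (with slightly different field names), so the Plugin status field could aggregate one such boolean per workload in the release.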

Reference Issues

No response

Alertmanager receiver config

Greenhouse Alertmanager Config should provide a basic framework to set up a list of different receivers like Slack channels or webhooks including authentication.

  • create a new AlertmanagerConfig manifest and template it accordingly
  • Accept authentication data of type secret
  • Perform plugin integration and upgrade tests
  • Provide documentation and references

[FEAT] - Plugin Bundle/ Cluster Preset Concept

Priority

(Medium) I'm annoyed but I'll live

Description

As a _ClusterAdmin_, I would like to maintain a Plugin preset for a type of Cluster, so that I do not have to configure every Plugin manually for every Cluster that is onboarded.

An Organization can have multiple Kubernetes Clusters that should be set up in a unified way. To reduce the toil of doing the Plugin configuration manually, it would be great if the ClusterAdmin could configure a bundle/preset.
This bundle/preset should contain a list of Plugins, default configuration, etc.
It should be possible to override or complement these values with values specific to the cluster.

Acceptance criteria:

  • Propose a concept with an ADR
  • Implement the solution

Reference Issues

No response

๐Ÿ› [BUG] - dev-env image build failing

Priority

(Medium) I'm annoyed but I'll live

Description

this is the first failing run:
https://github.com/cloudoperators/greenhouse/actions/runs/8601060342
which points to this commit:
a43ac46

We need to investigate what makes the docker buildx fail for the Dockerfile.dev-env as the commit does not include changes to either of:

  • ./Dockerfile.dev-env
  • ./cmd/dev-env

Reproduction steps

1. Go to https://github.com/cloudoperators/greenhouse/actions/runs/8601060342
2. Check logs for `Build and push Docker Image` step of https://github.com/cloudoperators/greenhouse/actions/runs/8601060342/job/23567388327

Manifests

No response

Screenshots

No response

[FEAT] - Prettify DEX organization selection screen

Priority

(Low) Something is a little off

Description

The login page provided by dex is not styled in the same way as the rest of the Greenhouse UIs.

As a Greenhouse user, I want to easily identify what the Login page asks, so that I can access Greenhouse with the correct context.

It would be nice if we could have

  • Greenhouse-like theming
  • (optional) Login to Greenhouse instead of dex
  • (optional) visually signify that the User selects a Greenhouse Organization

Reference Issues

No response

[FEAT] - Team RBAC use LabelSelector for Cluster & Team

Priority

(Medium) I'm annoyed but I'll live

Description

Currently, Team RBAC uses a direct mapping of a TeamRoleBinding to a Cluster and Team by name.
It should be made possible to set a LabelSelector instead for either referenced resource. This will reduce the configuration effort for the ClusterAdmin and make RBAC work out of the box for newly onboarded clusters and teams.

Acceptance Criteria:

  • Add a LabelSelector for Cluster into the TeamRoleBinding CRD #238
  • Add a LabelSelector for Team into the TeamRoleBinding CRD
  • Validate per Webhook that only clusterName or clusterSelector is set #238
  • Make these LabelSelectors immutable
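The mutual-exclusion rule from the acceptance criteria could look like this in the validating webhook. A sketch only; field names are taken from the issue text, not the final CRD:

```go
package main

import (
	"errors"
	"fmt"
)

// validateClusterRef enforces that exactly one of clusterName or
// clusterSelector is set on a TeamRoleBinding (sketch, not webhook code).
func validateClusterRef(clusterName string, selectorSet bool) error {
	hasName := clusterName != ""
	switch {
	case hasName && selectorSet:
		return errors.New("only one of clusterName or clusterSelector may be set")
	case !hasName && !selectorSet:
		return errors.New("one of clusterName or clusterSelector must be set")
	}
	return nil
}

func main() {
	fmt.Println(validateClusterRef("monitoring-external", false))
}
```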

Reference Issues

No response

[FEAT] - Plugin Overview: MVP Checklist and Post-MVP Tasks

Priority

(Medium) I'm annoyed but I'll live

Description

MVP

Definition based on:

  • the live running implementation (view mode)
  • the PR #142 (view, edit and delete mode)

Tasks:

  • Remove the main tabs and make the existing plugin list view the landing page. Add a DataGridToolbar with an Add New Plugin button that displays the available plugin definitions from PR #142 in a panel
  • Display a message It seems that no plugins have been created yet. Do you want to create a new one? with a button to the available plugin definitions (panel) from PR #142 when no plugins are found.
  • Redesign the available plugins view using a template similar to volta.
  • When a plugin definition is selected, the form from PR #142 is displayed in the same panel.
  • Add an option/button to edit/remove to the enabled plugins list. Remove should have a confirmation dialog.
  • New plugin view: relocate the cancel and create buttons.
  • New plugin view: display error messages at the top of the view
  • Actually create a secret from an OptionValue with type secret
  • Watch Plugins instead of getting them, so we get updates without a browser reload
  • Second confirmation on Plugin delete

Post-MVP

  • Display an authentication error message. Right now, if no authentication is set up or an authentication error occurs, nothing is shown and the end user sees a black screen.
  • Unify stores
  • Smooth scroll on Panel open within PluginDefinition screens
  • Hide empty entries in Plugin detail view.
  • Use URLstate everywhere
  • Show plugin preset in plugin list if plugin is part of a preset

Reference Issues

No response

[FEAT] - Move special Greenhouse PluginValues to global scope

Priority

(Medium) I'm annoyed but I'll live

Description

Currently, the additional Greenhouse values are added in the scope of a plugin's parent Helm chart.
In order to use these values across the parent Helm chart and all sub-charts of a plugin, they must be moved into the global values scope.
Helm Doc Reference

Acceptance Criteria:

  • Add the additional values provided to each Plugin to the global values scope, while keeping the existing ones #200
  • Migrate all existing Plugins to use the global scoped values
  • Remove the additional values from the parent chart scope
  • Documentation of the Feature and of the provided values cloudoperators/greenhouse-extensions#96

Reference Issues

No response

๐Ÿ› [BUG] - Fix docker buildx for multiarch for docker images

Priority

(Low) Something is a little off

Description

Found the issue: go: cannot install cross-compiled binaries when GOBIN is set.
There is already an open issue for this: golang/go#57485

Options to mitigate the issue:

  • remove $GOBIN
  • add -o to the build, i.e. use go build -o out ... instead of go install

Reproduction steps

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Manifests

  apiVersion: greenhouse.sap/v1alpha1
  kind: ...

Screenshots

No response

๐Ÿ› [BUG] - service proxy plugin has fixed container registry and tag

Priority

(Low) Something is a little off

Description

The ServiceProxyReconciler automatically deploys a service-proxy Plugin for each organization. The chart defines defaults for the image repository and tag.
Currently, the reconciler overwrites the tag with the git version of the currently deployed Greenhouse, and it is not possible to configure the registry.

  {
    Name:  "image.tag",
    Value: &apiextensionsv1.JSON{Raw: versionJSON},
  },

It should be possible to configure these values in the greenhouse manager deployment, to enable using alternative registries to pull the container image from.

Reproduction steps

1. Deploy a new version of Greenhouse
2. The ServiceProxyReconciler updates the Plugins with the latest git version
3. service-proxy pods cannot pull the image

Manifests

No response

Screenshots

No response

[FEAT] - TeamRole should support aggregationRules

Priority

(Medium) I'm annoyed but I'll live

Description

Currently, the TeamRole enables users to specify permissions using inclusive Kubernetes RBAC policy rules.
To support the common use case of granting view-only permissions on most resources while excluding some (e.g. secrets), aggregation rules can be used instead of specifying the policy rules explicitly.
See the ClusterRole example:

aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.authorization.k8s.io/aggregate-to-view: "true"

TODO:

  • The TeamRole should support aggregation rules as per the rbac/v1 specification, so that the full set of policy rules is generated by Kubernetes accordingly in the central/remote clusters.
  • Document the behaviour on where the aggregation takes place
  • Introduce a default, cluster-scoped view-only TeamRole in each organization via org rbac seeder controller
  • Assign the view-only TeamRole to each team and cluster within an org

Reference Issues

#90

๐Ÿ› [BUG] - PluginDefinition deletion breaks reconciliation of Plugins

Priority

None

Description

Currently it is possible for a Greenhouse Admin to delete a PluginDefinition for which Plugins still exist. The reconciliation of these Plugins then fails, since the Webhook denies admission of Plugins referencing non-existing PluginDefinitions.

Acceptance Criteria:

  • Deny PluginDefinition deletion via Webhook, if Plugins referencing the PluginDefinition exist
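The proposed webhook check can be sketched as follows. As a simplification, the `plugins` map stands in for listing Plugins via a client and reading their PluginDefinition reference:

```go
package main

import "fmt"

// denyDeleteIfReferenced rejects deletion of a PluginDefinition while
// Plugins still reference it. plugins maps Plugin name to the name of
// the PluginDefinition it references (stand-in for a client List call).
func denyDeleteIfReferenced(def string, plugins map[string]string) error {
	referencing := 0
	for _, ref := range plugins {
		if ref == def {
			referencing++
		}
	}
	if referencing > 0 {
		return fmt.Errorf("PluginDefinition %q is still referenced by %d Plugin(s)", def, referencing)
	}
	return nil
}

func main() {
	plugins := map[string]string{"alerts": "alerts-def"}
	fmt.Println(denyDeleteIfReferenced("alerts-def", plugins))
}
```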

Reproduction steps

1. Delete any PluginDefinition
2. Plugins cannot be reconciled, as the referenced PluginDefinition is missing.

[FEAT] - Plutono PluginDefinition

Priority

(Medium) I'm annoyed but I'll live

Description

Organization administrators can deploy a Plutono Plugin into their namespace on the Greenhouse central cluster.
Plutono comes with a set of pre-installed kube-monitoring dashboards and has the capability to load custom dashboards from a git repo.

Acceptance criteria:

  • Plugin contains

    1. Single Plutono instance
    2. Configuration for Prometheus datasources
    3. Dashboards
      • Dashboards for kube-metrics pre-installed, use Plutono built-in capabilities to import/export dashboards
      • Advanced: Automatically sync dashboards from a customer repo.
  • Ingress configured per convention
    1. plutono.<org_name>.greenhouse.<TLD> (with OIDC)
    2. plutono-internal.<org_name>.greenhouse.<TLD> (w/o OIDC)

cc @artherd42 @richardtief

Reference Issues

No response

End-to-end test framework

Umbrella issue for tracking e2e test framework efforts.
Granular sub-issues can be found via the label WP2.3.4.6 End-to-end test framework.

Greenhouse should have an e2e test framework to ensure the stability and reliability of our platform.

  1. Define test framework
  2. Define test scenarios
  3. Define test setup. Should run locally + in PR as well.
  4. Define integration in CI
  5. Documentation

Acceptance criteria:

  • E2E tests are implemented and cover the identified critical scenarios.
  • E2E tests run successfully on the CI/CD pipeline.
  • Documentation for running E2E tests is provided.
  • Metrics, alerts and visualization integration. Showcasing stability + trend

[FEAT] - PluginDefinition & Plugin Admin view

Priority

(Medium) I'm annoyed but I'll live

Description

In the organization tab all administrators and members can see a list of available PluginDefinitions. See the PluginDefinition CRD.

MVP:

  • List of PluginDefinitions showing the .metadata.name, .spec.description, .spec.version

Vision:

  • Include detailed PluginDefinition description. Either include link to an external source like the PluginDefinition catalog on the website, or include via configmap that is deployed to the cluster.
  • Organization administrators can click a PluginDefinition to start the wizard to instantiate it (creating a Plugin)

Reference Issues

No response

[FEAT] - add "AdminPlugins" to manage Plugins running on the Greenhouse central cluster

Priority

(Medium) I'm annoyed but I'll live

Description

There is a limited number of PluginDefinitions that an Organization may deploy to their namespace in the Greenhouse central cluster (currently restricted in code: #41).

These PluginDefinitions should only be configurable by an Organization Admin and may only be installed once in certain cases, such as Teams2Slack, Alerts, etc.

Acceptance Criteria:

  • Document the concept/decisions for a new CRD
  • Ensure access is limited to Organization Admins
  • Document the feature

Reference Issues

No response

OCM evaluation

Umbrella issue for tracking the OCM evaluation.
Granular sub-issues can be found via label WP2.3.4.5 OCM evaluation.

Evaluate OCM integration in Greenhouse as an addition or replacement of the Helm controller logic aiming to achieve a unified specification and management of software of the ORA stack.

[FEAT] - Cluster registry unified authentication

Priority

(Low) Something is a little off

Description

Managing authentication for a fleet of Kubernetes clusters and a broad user base can be quite challenging.
Greenhouse can help here, as it has a holistic view of all clusters and their configuration (kubeconfigs) within an organization, and should be used to generate kubeconfigs for users as shown below.

The task is to provide unified authentication as a core feature/admin plugin by implementing a

  • kubeconfig generator (open-source internal version + integration with Greenhouse as core plugin/controller)
  • kubectl plugin managing connected clusters (open-source kubectl-sync; rename to cloudctl)

Example generated kubeconfig.yaml:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:  <CA>
    server: <K8S API server>
  name: <cluster name>
contexts:
- context:
    cluster: <cluster name>
    user: oidc@<cluster name>
  name: <cluster name>
current-context: <cluster name>
kind: Config
preferences: {}
users:
- name: oidc@<cluster name>
  user:
    auth-provider:
      config:
        client-id: <client id>
        client-secret:  <client secret>
        idp-issuer-url: <idp issuer url>
      name: oidc

The OIDC settings can be consumed from the organization CRD (default).
Optionally, an org-wide alternative clientID/clientSecret should be configurable in case different IDS applications are used.
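A generator along these lines could render the kubeconfig shown above with Go's text/template. This is a sketch under the assumption that the per-cluster data and the Organization's OIDC settings are already available; the struct and field names are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// kubeconfigTmpl mirrors the example kubeconfig.yaml from this issue.
const kubeconfigTmpl = `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {{ .CAData }}
    server: {{ .Server }}
  name: {{ .Cluster }}
contexts:
- context:
    cluster: {{ .Cluster }}
    user: oidc@{{ .Cluster }}
  name: {{ .Cluster }}
current-context: {{ .Cluster }}
kind: Config
preferences: {}
users:
- name: oidc@{{ .Cluster }}
  user:
    auth-provider:
      config:
        client-id: {{ .ClientID }}
        client-secret: {{ .ClientSecret }}
        idp-issuer-url: {{ .IssuerURL }}
      name: oidc
`

// clusterAuth carries the per-cluster data plus the org's OIDC settings
// (illustrative shape, not a real Greenhouse type).
type clusterAuth struct {
	Cluster, Server, CAData           string
	ClientID, ClientSecret, IssuerURL string
}

func renderKubeconfig(c clusterAuth) (string, error) {
	t, err := template.New("kubeconfig").Parse(kubeconfigTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, c); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderKubeconfig(clusterAuth{
		Cluster: "monitoring-external", Server: "https://k8s.example",
		CAData: "Q0E=", ClientID: "greenhouse", ClientSecret: "s3cret",
		IssuerURL: "https://idp.example",
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```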

Reference Issues

No response
