
eventing-manager's Introduction

Eventing Manager


Overview

Eventing Manager is a standard Kubernetes operator that observes the state of Eventing resources and reconciles their state according to the desired state. It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached in the cluster.

This project is scaffolded using Kubebuilder, and all the Kubebuilder makefile helpers mentioned here can be used.

Get Started

You need a Kubernetes cluster to run against. You can use k3d to get a local cluster for testing, or run against a remote cluster.

Note

Your controller automatically uses the current context in your kubeconfig file, that is, whatever cluster kubectl cluster-info shows.

Install

  1. To install the latest version of the Eventing Manager in your cluster, run:

    kubectl apply -f https://github.com/kyma-project/eventing-manager/releases/latest/download/eventing-manager.yaml
  2. To install the latest version of the default Eventing custom resource (CR) in your cluster, run:

    kubectl apply -f https://github.com/kyma-project/eventing-manager/releases/latest/download/eventing-default-cr.yaml

Development

Prerequisites

Run Eventing Manager Locally

  1. Install the CRDs into the cluster:

    make install
  2. Run Eventing Manager. It runs in the foreground, so if you want to leave it running, switch to a new terminal.

    make run

Note

You can also run this in one step with the command: make install run.

Run Tests

Run the unit and integration tests:

make generate-and-test

Linting

  1. Fix common lint issues:

    make imports
    make fmt
    make lint

Modify the API Definitions

If you are editing the API definitions, generate the manifests such as CRs or CRDs:

make manifests

Note

Run make help for more information on all potential make targets.

For more information, see the Kubebuilder documentation.

Build Container Images

Build and push your image to the location specified by IMG:

make docker-build docker-push IMG=<container-registry>/eventing-manager:<tag> # If using docker, <container-registry> is your username.

NOTE: For MacBook M1 devices, run:

make docker-buildx IMG=<container-registry>/eventing-manager:<tag>

Deployment

You need a Kubernetes cluster to run against. You can use k3d to get a local cluster for testing, or run against a remote cluster.

Note

Your controller automatically uses the current context in your kubeconfig file, that is, whatever cluster kubectl cluster-info shows.

Deploy in the Cluster

  1. Download Go packages:

    go mod tidy && go mod vendor
  2. Install the CRDs to the cluster:

    make install
  3. Build and push your image to the location specified by IMG:

    make docker-build docker-push IMG=<container-registry>/eventing-manager:<tag>
  4. Deploy the eventing-manager controller to the cluster:

    make deploy IMG=<container-registry>/eventing-manager:<tag>
  5. [Optional] Install Eventing Custom Resource:

    kubectl apply -f config/samples/default.yaml
  6. For the EventMesh backend: if the Kyma Kubernetes cluster is managed by Gardener, Eventing Manager reads the cluster's public domain from the ConfigMap kube-system/shoot-info. Otherwise, set spec.backend.config.domain in the Eventing custom resource to the cluster's public domain; for example:

    spec:
      backend:
        type: "EventMesh"
        config:
          domain: "example.domain.com"
          eventMeshSecret: "kyma-system/eventing-backend"
          eventTypePrefix: "sap.kyma.custom"

Undeploy Eventing Manager

Undeploy Eventing Manager from the cluster:

make undeploy

Uninstall CRDs

To delete the CRDs from the cluster:

make uninstall

End-to-End Tests

See hack/e2e/README.md

Contributing

See CONTRIBUTING.md

Code of Conduct

See CODE_OF_CONDUCT.md

Licensing

See the License file


eventing-manager's Issues

Deprovision of resources deployed by eventing-manager

Description:

Delete resources created by the eventing manager.

Note: Deleting the Eventing CR does not remove ClusterRoles and ClusterRoleBindings from the cluster, even though the owner references are set. Kubernetes does not garbage-collect cluster-scoped resources that reference a namespaced owner, so the deletion logic must delete ClusterRoles and ClusterRoleBindings explicitly.

Acceptance Criteria:

  • If the Eventing CR is deleted, all correlated resources must be deleted as well.
  • Existing Subscriptions in the cluster must block Eventing from being deprovisioned.
  • A condition must give the customer useful information about why deletion is not feasible.

Implement the Status Handling for Eventing Manager

Description
The basic scaffolding of the Eventing Manager is already done. Now, the basic status handling has to be implemented. You can refer to the implementation in the NATS Manager here.

Tasks

  • Write a status subresource status.state needed by the lifecycle manager
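As an illustration, the status handling boils down to a state field on a status subresource that the lifecycle manager reads. A minimal Go sketch, assuming hypothetical type and field names (the real API is defined by the module's CRD):

```go
package main

import "fmt"

// EventingStatus is a hypothetical sketch of the status subresource
// expected by the lifecycle manager; the field names are assumptions.
type EventingStatus struct {
	// State is one of "Ready", "Processing", "Error", "Deleting".
	State string `json:"state"`
}

type Eventing struct {
	Status EventingStatus `json:"status"`
}

// setState records the new state so the lifecycle manager can read it
// from the status subresource after the reconciler persists it.
func (e *Eventing) setState(state string) {
	e.Status.State = state
}

func main() {
	cr := &Eventing{}
	cr.setState("Processing")
	fmt.Println(cr.Status.State)
}
```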

[EPIC] Eventing Manager

Description

The eventing component of the Kyma project needs to be transformed into a Kyma module. The initial plan and details are outlined in https://github.tools.sap/skydivingtunas/proposals/tree/main/eventing-modularization. This epic turns the initial plan into reality by managing and tracking the actual action items.

The eventing module will implement ideas described in the backend switching epic: kyma-project/kyma#13800

Actions:

Needed for Dev:

Needed for Stage:

Needed for Prod:

Watch NATS Resource for NATS Mode Only

Description
Watch the NATS CR to reflect its readiness state in the Eventing CR. If EventMesh mode is enabled, the manager must not depend on the existence of the NATS CRD and must not react to NATS resource changes.

Hint:
Check the nats-manager's DestinationRule watching mechanism.

Acceptance Criteria

  • The Eventing CR must have the state not ready if NATS is unavailable in NATS mode
  • Eventing Manager in EventMesh mode must not watch the NATS CR

Expose subscription status as metric

Description

Our current alert for identifying pending consumers works by checking whether incoming events are getting dispatched. For a ready subscription this is the correct approach, but for a non-ready subscription the consumer may still exist in JetStream while messages cannot be dispatched because the sink is not valid.

Expose a per-consumer metric that reflects the status of the subscription that caused the creation of the consumer:

eventing-ec-nats-subscription-ready

As labels, it has the name of the subscription and all the consumers created by it. Its value is a simple 1/0 (ready/not ready).

The metric can then be used to suppress the alert for pending messages on a consumer in case a subscription is not ready.

Acceptance

  • expose new metric
  • update alert rule to use the new metric
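A minimal Go sketch of how such a sample could be rendered in the Prometheus text exposition format. Note that Prometheus metric names cannot contain dashes, so the name from this issue is shown with underscores; the label names and helper functions are assumptions, not the actual implementation:

```go
package main

import "fmt"

// readyValue maps a subscription's readiness to the 1/0 gauge value
// described in the issue (1 = ready, 0 = not ready).
func readyValue(ready bool) int {
	if ready {
		return 1
	}
	return 0
}

// expositionLine renders one sample in the Prometheus text exposition
// format, labeled with the subscription and the consumer it created.
func expositionLine(subscription, consumer string, ready bool) string {
	return fmt.Sprintf(
		`eventing_ec_nats_subscription_ready{subscription=%q,consumer=%q} %d`,
		subscription, consumer, readyValue(ready),
	)
}

func main() {
	// A non-ready subscription exposes 0 for its consumer.
	fmt.Println(expositionLine("my-sub", "my-consumer", false))
}
```

An alert for pending messages can then be suppressed whenever this sample is 0 for the consumer in question.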

Get rid of current certificate handling for Webhooks

Description
Currently, the API Gateway cert handler is used. Because it is a temporary solution that will be deleted by the other team, we need to switch to another cert handling solution, for example, the central cert-manager or the monitoring cert handler.

Reasons

The current solution is temporary and will be deleted by the other team.

add new metric that mirrors the state of the health check in EPP

Description

EPP availability is currently based on kube-state-metrics. To remove this dependency, expose a new metric,
eventing-epp-health, set to the same value as the regular health endpoint.
In practice, this currently means setting it to 2xx during startup and never changing it, because that is the exact behaviour of the health endpoint. Alert queries can then check whether the value is present.

Acceptance

  • expose new metric eventing-epp-health

[TB: 3 days] How to upgrade EC to Eventing Manager in existing clusters?

Supposition: NATS-manager is already released.

Questions:

  • How to upgrade existing Kyma clusters to Eventing Manager without losing events?
  • Do we need any migration script?
    • Should we remove the eventing-controller deployment and other resources before provisioning eventing-manager?
    • What should the Eventing CR spec be? Should a script identify the active backend and create an Eventing CR accordingly?
    • Where should this migration script reside: in the reconciler or in a separate script? How are other teams (Telemetry/Istio/Serverless) doing this?
  • Check whether the upgrade needs any different steps for the following (if it does, investigate it in a new ticket; it is not in the scope of this ticket):
    • EC with NATS to eventing-manager with NATS
    • EC with EventMesh to eventing-manager with EventMesh
  • Check whether this point helps in the upgrade flow.

Check answers here!

Hint:

  • Pairing up with someone is preferred for working on this ticket.
  • Stay in sync with the team and the PB while working on this ticket.
  • If the time-box is exceeded, discuss with the team how to move forward.

Acceptance Criteria:

  • A document to answer the above questions.
  • Test the proposed solutions for upgrade.

Pause/unpause functionality for NATS Dispatcher

Description

We want to be able to pause and unpause the Subscription in the case of an invalid sink to avoid retries to the unreachable sink.

Pause/un-pause scenarios:

  1. Automatic pause by the reconciler
  • In that case, the reconciler automatically renders a subscription not ready if the sink is not valid
  • un-pause as soon as the sink passes the validity check
  • It must NOT put a pause label on the subscription, as this would force the user to interfere with the subscription before it is resumed
  • requeue the paused subscription
  2. Manual pause
  • In that case, any user of eventing can add a "pause" label to the Subscription
  • Un-pause once the label is gone

Implementation Tasks

This feature should only be implemented for the v1alpha2 Subscription version.

  • in the paused state Subscription should be not-ready
  • delete the NATS-server Subscription and leave the consumer (hence, the interest)
  • add a paused sub condition to the Kyma Subscription status

Note: check if closing the Subscription on the NATS side would help

Acceptance Criteria

  • Pause/unpause functionality is implemented
  • A test case is written which proves that a NATS Subscription can be paused and resumed without losing any events.
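The pause decision described above can be sketched in Go. The label key and function names are hypothetical, not the actual Subscription API; the point is that a manual label always pauses, while the automatic path pauses only while the sink is invalid and resumes without user interference:

```go
package main

import "fmt"

// pauseLabel is a hypothetical label key; the real key would be
// defined by the Subscription API.
const pauseLabel = "eventing.kyma-project.io/pause"

// shouldPause combines the two scenarios from the issue: an automatic
// pause when the sink fails validation, and a manual pause via a label
// on the Subscription. The reconciler never sets the label itself, so
// fixing the sink automatically resumes dispatching.
func shouldPause(labels map[string]string, sinkValid bool) bool {
	if _, manual := labels[pauseLabel]; manual {
		return true
	}
	return !sinkValid
}

func main() {
	fmt.Println(shouldPause(map[string]string{}, false))                  // auto pause: invalid sink
	fmt.Println(shouldPause(map[string]string{pauseLabel: "true"}, true)) // manual pause: label set
	fmt.Println(shouldPause(map[string]string{}, true))                   // running normally
}
```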

Follow-ups

kyma-project/kyma#16329

move event type prefix from ConfigMap to Eventing-CR

Description
In the eventing component, the user can configure the event-type prefix by modifying a ConfigMap. With the new Eventing CR, a better location for this configuration is inside the Eventing CR.

To ease future development of eventing-controller and eventing-manager, as part of this story, the currently reused parts of eventing-controller shall be duplicated into the eventing-manager codebase. Subscription managers will still be imported into the new eventing-manager.

Acceptance

  • expose event-type-prefix as a property in the Eventing CR
  • the default value is sap.kyma.custom
  • update Busola in case it still requires this setting
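Assuming the prefix lands next to the other backend config fields shown in the EventMesh example earlier on this page, the CR could look like the following sketch (the exact field path is an assumption):

```yaml
spec:
  backend:
    type: "NATS"
    config:
      # Defaults to sap.kyma.custom when omitted.
      eventTypePrefix: "sap.kyma.custom"
```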

Change Eventing CRD to a single backend

Description
Change the CRD to a single backend and get rid of the array. Adapt the defaulting/validation and the proposal.

Note
Follow up to #17

Tasks

  • Change to a single backend
  • Adapt defaulting
  • Adapt validation

Implement e2e tests using golang for eventing-manager

Description:

Implement E2E tests using Golang for eventing-manager (follow the similar approach as in nats-manager).

Tasks:

  • Implement the following test cases in golang:
    • Check if Eventing is configured according to the Eventing CR (Note: Eventing CR status ready -> EPP pods ready).
    • Verify that cleanup works (after deleting Eventing CR -> no resources should be left behind in the cluster)
    • Event publishing and receiving works. [Both backends] (Partially blocked by: #77)
      • Port forward EPP to publish events.
      • use the same test cases for event types as in fast-integration tests:
        • v1alpha1 subscriptions with CE (structured+binary) and legacy events.
        • v1alpha2 subscriptions with CE (structured+binary) and legacy events.

NOTE:

  • Backend switching/tracing/dashboards are not part of this ticket.
  • Implementing the GitHub Actions pipelines to run these tests is not part of this ticket.

Acceptance criteria:

  • The tests should be able to run for the specified backend NATS/EventMesh.
  • Developers must be able to execute these tests locally as well as create a job that runs them centrally.
  • Clear indication of errors: developers must not be required to scan through lengthy output to find the actual error.
  • Have make targets to easily run the test steps (e.g., prep, run, etc.).
  • Add documentation on how to run tests

Code refactoring: Separation of the dispatcher from jetstream.go

Description
Separate the dispatching part from jetstream.go so that it does not have direct access to the sinks map.

The dispatching part before and after the refactoring:

[image: before]

[image: after]

NOTE: The end goal (though not the goal of this story) is that the dispatcher becomes a component separate from the controller. The image still helps explain how the refactoring shall take place so that the dispatcher no longer has access to the sinks.

[image: dispatcher as a separate component]

AC:

  • The dispatcher does not have access to the sinks map.
    • Instead a dispatcher is created per subscription.
    • If the subscription sink changes, the existing dispatcher is stopped and a new dispatcher with the new URL is started.
  • getCallback in jetstream.go does not contain the main dispatching logic anymore.
  • Unit test for the dispatcher (there is a unit test basis for this already in this PR)

Reasons

Decoupling improves our code quality.


Reconcile a single eventing CR. Multiple eventing CRs are not supported (would a cluster scope cr be an option?)

Description
At the moment, we support reconciling multiple instances of the Eventing CR. For now, we will limit this functionality to reconciling a single instance (with a specific name within the kyma-system namespace). All other instances of the Eventing CR will not become ready and will carry a clear condition telling the user about the limitation.

Hint

  • Check similar implementation in nats-manager here
  • In the reconciler, get the name and namespace; if the values are not allowed, set the state to error but let the reconciliation pass. Have a flag to disable this feature for tests.

Acceptance

  • Make the name and namespace of the CR configurable in NATS-Manager.
  • limit reconciling Eventing CRs to one specific instance
  • add a condition with a clear message to inform the user about the limitation
  • set other instances of the Eventing CR to an error state
  • integration tests should still be able to create Eventing CRs with different names and namespaces
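A minimal sketch of the single-instance guard; the allowed name and namespace values below are assumptions (per the hint above, the actual values would be configurable):

```go
package main

import "fmt"

// allowedName and allowedNamespace identify the single Eventing CR
// instance the manager reconciles; the values mirror the issue's
// "specific name within kyma-system namespace" and are assumptions.
const (
	allowedName      = "eventing"
	allowedNamespace = "kyma-system"
)

// isAllowedInstance implements the guard described above: a CR with a
// different name or namespace is not reconciled and is instead set to
// an error state with a condition explaining the limitation.
func isAllowedInstance(name, namespace string) bool {
	return name == allowedName && namespace == allowedNamespace
}

func main() {
	fmt.Println(isAllowedInstance("eventing", "kyma-system"))
	fmt.Println(isAllowedInstance("eventing-2", "default"))
}
```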

Give restricted RBAC permissions to eventing-manager

Description

Currently, the controller in eventing-manager is given full access to cluster resources. Restrict these permissions and grant RBAC only for the resources required by the controller. Grant access per resource name as done in nats-manager.

Resources managed/accessed by eventing-manager include:

  • K8s Service
  • ServiceAccount
  • ClusterRoles
  • ClusterRoleBindings
  • NATS CR
  • Applications.kyma-project
  • API Rule

There may be resources with dynamic names where access per resource name will not be possible.

Additionally, restrict the RBAC permissions of the webhook cert job.

Tasks

  • Restrict RBAC for eventing-manager resources
  • Restrict RBAC for webhook cert job
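As an illustration only, a restricted ClusterRole rule granting access per resource name might look like the following sketch; the resource names here are assumptions. Note that Kubernetes ignores resourceNames for the create verb (the name is unknown at creation time), which is why creates need a separate rule and why resources with dynamic names cannot be restricted this way:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eventing-manager
rules:
  # Mutating verbs restricted to the specific resources the controller manages.
  - apiGroups: [""]
    resources: ["services", "serviceaccounts"]
    verbs: ["get", "update", "patch", "delete"]
    resourceNames: ["eventing-publisher-proxy"]
  # create (and list/watch) cannot be limited by resourceNames.
  - apiGroups: [""]
    resources: ["services", "serviceaccounts"]
    verbs: ["create", "list", "watch"]
```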

Enable EventMesh Backend

Enable the EventMesh backend for Eventing Manager. We'll have one controller in Eventing Manager.
Backend switching is excluded because it will be implemented in separate issues. The prerequisite is the implementation of subscriptions in NATS mode (#8).

Steps:

  • Create EPP for EventMesh Mode (logic is copied into eventing-manager, probably needs to be improved)
  • Start Subscriber for EventMesh Mode (EC package is imported and called)

Acceptance Criteria

  • Eventing Manager should work in EventMesh mode

Fix defaulting rules to set defaults depending on NATS or EventMesh backend

Questions:

  • Is it possible to do it using the defaulting rules?

Description

  • The NATS backend is set as the default
  • NATS-related backend config values are also defined via defaulting rules (NATSStreamStorageType, etc.)
  • If the backend is switched to EventMesh, the NATS backend config details remain in the CR and are always recreated upon deletion
  • => The defaulting rules for the backend config values need to differ depending on the backend

Acceptance Criteria

  • When NATS Backend is set, only NATS related backend config details are automatically set as default
  • When EventMesh Backend is set, only EventMesh related backend config details are automatically set as default
  • Backend config details not relevant to the active backend can be removed without their recreation

Add Publisher Proxy Suffix and Fill Values from Eventing CR Directly

Currently, the publisher and NATS configs are set in the PublisherConfig and NatsConfig structs as env vars and read as env vars. We want them (resource limits and requests, NATS URL, etc.) to be set directly on the PublisherProxy deployment. Hence, we need to get rid of these structs.

The Publisher Proxy deployment and HPA names should have the publisher-proxy suffix.

Update Eventing CR Defaulting values

Description

  • Eventing CRD uses defaulting rules (here) to define Eventing resources
  • Take the values from eventing overrides defined in management-plane-configs.

Task

  • Update defaulting rules with current values for the resources

Block eventing-CR removal if resources owned by it exist in the cluster

Description

Same as with nats-manager: you cannot remove the module as long as Subscriptions are configured in the cluster, regardless of whether they are NATS or EventMesh subscriptions. Once everything is cleaned up, the module can go.

Tasks

  • block deletion of the Eventing CR until all resources created by it are removed from the cluster
    • Subscriptions
    • APIRules
  • DO NOT clean up Subscriptions; this task is for the user. As soon as they remove all Subscriptions, the Eventing CR is ready to be deleted (the finalizer is removed from it)

Proposal: How to migrate nats and eventmesh reconciler into eventing manager

Description

  • When the eventing CR reconciliation gets triggered, how should the NATS or EventMesh infrastructure be provisioned?
  • When the user changes the backend, how will the old backend stop and how would the new get started?
  • Should we migrate the subscription managers from the eventing-controller?
  • Should we convert from the current codebase (same codebase in a modular fashion)?
  • What about timing concerns? How should the migration be done?

Acceptance Criteria

  • Find out whether we already documented the needed changes for a modular codebase
  • Have a flow chart of how the migration should look

Implement a prow job to build the Eventing module artifact (on PRs and main).

Description
After building the eventing-manager image we need to prepare an actual module out of it.
The build process is similar to the one used by nats-manager.

Hints

  • manually test the resulting OCI image with lifecycle manager
  • may require changing the kustomize setup

Acceptance

  • Define the default.yaml in config/samples/
  • OCI image that can be used by kyma-lifecycle manager
  • Job to run on PRs and main
  • The Job should publish the module-template.yaml in the job artifacts. (example)

nats-manager issue

Setup CI job to execute the e2e test without lifecycle manager

Setup CI

Note for grooming: we discovered that this ticket is not closed. However, it looks like it was done already. Let's verify this together.

Setup the CI job to execute the e2e test from #74.

Prerequisite: #74

Job Steps:

  • Install k3d
  • Install nats-module (latest release) with lifecycle-manager and wait for readiness.
  • Deploy eventing-manager without lifecycle manager.
  • Run end-to-end tests

Acceptance

  • Execute the e2e script using github action
  • Run it on PRs and main:
    • [pre-main jobs] Two pipelines with E2E test for each backend.
    • [post-main jobs] Two pipelines with E2E test for each backend.

Proposal of Eventing Manager Architecture

Description
Using the previous discussion as a basis, create a proposal for the eventing manager architecture.

Acceptance Criteria

  • TAM Block Diagram for reconcilers (Eventing CR, Subscription CR) (optional create a sequence diagram to visualize how the various reconcilers interact)

Create/Delete Publisher Proxy k8s resources with Eventing CR

Currently, the Publisher Proxy Kubernetes resources (Service, Roles, etc.) are created with the eventing-manager deployment. We should create/delete them only when the Eventing CR is created/deleted. Moreover, the ConfigMap containing the event type prefix can be specified in the Eventing CR with a default value.

Acceptance Criteria

  • create/delete EPP resources during eventing-CR reconciliation
    already implemented:

    • EPP deployment
    • hpa for EPP

    still required:

    • EPP services (metrics, publish, health)
    • service account
    • roles / bindings
    • peer authentication (only if istio is available, similar code is available in NATS-manager)
      • Note: We decided to not manage peer authentication in eventing-manager. It will be created by the consumers of metrics.

    • use Go structs to specify/deploy these resources
    • resources are owned by eventing-manager
    • changes should trigger reconciliation of the Eventing CR

Reconcile Eventing CR in NATS Mode

Description
The main difference between the old eventing-controller and the new eventing-manager is that the eventing backend configuration is driven by a new CR, the Eventing CR. This CR needs to be reconciled in order to set up the manager for further reconciliation of Subscriptions later in its lifecycle.

Reconcile the Eventing CR so that the backend is available and configured correctly according to the configuration specified in the eventing CR. This ticket focuses on NATS as the active backend.

Acceptance

  • check if the NATS module is deployed in the cluster
  • deploy an EPP instance configured to use NATS as its publishing target
  • deploy EPP with a configured HPA so that it can be scaled automatically according to the configuration in the eventing CR
  • Reconciling Subscriptions is NOT part of this story

React to NATS+EventMesh backend config modifications on Eventing CR

Currently, when an Eventing CR is created, the NATS/EventMesh subscription-manager object is initialized and started with the configuration from the CR. But if someone later changes the configs in the CR, the subscription-manager object is not updated.

Tasks:

  • Identify whether the CR spec changed.
    • Keep an in-memory hash to identify whether the spec has changed. The hash should only be based on the NATS/EventMesh-relevant fields in the spec, e.g., backend, logging, etc.
    • If the spec changed, set the Eventing CR status to Processing.
  • If it changed, restart/re-init the NATS/EventMesh subscription-manager object with the new configs.
    • Do not do cleanup when stopping the NATS/EventMesh subscription-manager.
  • If changing some config for NATS/EventMesh causes the subscription manager to restart (because that config cannot be changed on the backend), discuss with the team whether that field should be made immutable.
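The in-memory hash mentioned above can be sketched with the standard library; the set of fields fed into the hash is an assumption based on the issue text (only backend-relevant fields should participate):

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// backendSpec holds only the backend-relevant fields that should feed
// the hash; the field set here is an assumption, not the actual API.
type backendSpec struct {
	Type    string `json:"type"`
	Domain  string `json:"domain,omitempty"`
	Logging string `json:"logging,omitempty"`
}

// specHash returns a stable fingerprint of the relevant spec fields.
// Comparing the stored hash with the hash of the incoming CR tells the
// reconciler whether the subscription manager must be re-initialized.
func specHash(s backendSpec) (string, error) {
	b, err := json.Marshal(s)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return fmt.Sprintf("%x", sum), nil
}

func main() {
	old, _ := specHash(backendSpec{Type: "NATS"})
	cur, _ := specHash(backendSpec{Type: "EventMesh"})
	// A differing hash means the spec changed and the subscription
	// manager must be restarted with the new configs.
	fmt.Println(old != cur)
}
```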

Acceptance Criteria:

  • Changing the Spec should update the relevant NATS/EventMesh eventing infra.
  • Do the implementation in separate PRs for NATS and EventMesh.

Support switching between NATS and EventMesh

Implement backend switching from NATS to EventMesh and from EventMesh to NATS. The Eventing CR is allowed to have only one backend; changing it to a different backend constitutes switching.
#37 is a prerequisite for this issue.

Some implementation details:

  • cleaning up the subscriptions when we switch -> reuse the logic from EC (import packages)
    • NATS -> EventMesh: delete the NATS event subscriptions
    • EventMesh -> NATS: delete the EventMesh event subscriptions
  • persist the previous backend type to identify that switching is happening
    • Add the backend type to the Eventing CR status to persist it there
  • status.state should have the value Processing during switching

Acceptance Criteria

  • Backend switching works as in non-modular Eventing
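The switch detection via the backend type persisted in the status can be sketched as follows; the names and state values are illustrative, not the actual API:

```go
package main

import "fmt"

// detectSwitch compares the backend persisted in the CR status with
// the backend requested in the spec. A mismatch means a switch is in
// progress, so the old backend's subscriptions must be cleaned up and
// status.state set to Processing, as the issue requires. An empty
// status backend means initial provisioning, not a switch.
func detectSwitch(statusBackend, specBackend string) (switching bool, state string) {
	if statusBackend != "" && statusBackend != specBackend {
		return true, "Processing"
	}
	return false, "Ready"
}

func main() {
	switching, state := detectSwitch("NATS", "EventMesh")
	fmt.Println(switching, state)
}
```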

implement working eventing with the backend configured to JetStream

Description:
When we apply an Eventing CR configured to use JetStream as the active backend, we want the Eventing Manager to consume an existing NATS/JetStream module in the cluster: it deploys EPP and configures it to publish to NATS/JetStream. Finally, it starts reconciling Subscriptions and sets the Eventing CR state to Ready. It sets the state to Error if no NATS/JetStream module can be found in the cluster. During the overall setup, the state is set to Processing.

Tasks:

Acceptance:

  • a CR set to use NATS leads to a working eventing infrastructure, including EPP
  • if NATS is not available, the CR state shows an error

Scaffold a new controller using kubebuilder

Follow the Kubebuilder documentation to scaffold a new Kubebuilder project.

Tasks:

  • Scaffold a new controller project using Kubebuilder with Eventing CRD as mentioned in the proposal.
  • Dummy controller doing nothing.

React to changes on ClusterRole and ClusterRoleBindings

The watchers for ClusterRoles and ClusterRoleBindings do not work the way they do for other Kubernetes resources. For example, deleting a ClusterRole/ClusterRoleBinding does not trigger reconciliation.

Hint: watching cluster-scoped resources may need a different approach.

AC:

  • Any changes on deployed ClusterRoles and ClusterRoleBindings should trigger reconciliation.
  • Uncomment disabled tests for ClusterRoles and ClusterRoleBindings (here).

Setup validation and conversion webhook for Subscription CR

Description

Enable Subscription resource validation and conversion webhooks. Use the EC validation and conversion logic in Eventing Manager.

Steps:

  • Deploy the eventing-cert-handler jobs. They are currently deployed as a CronJob in the eventing Helm chart and need to be converted to Kustomize
  • Webhook resources need to be converted to Kustomize
    • Set the created webhook names as ENV vars, as done in ENV Vars
  • Reuse EC webhooks: call v1alpha1 webhook & v1alpha2 webhook SetupWebhookWithManager() function in main.go

Note: we don't need to write additional integration tests. When we move code from EC to eventing-manager we will provide the tests.

Acceptance Criteria:

  • Validation and conversion webhooks should work as in non-modular Eventing

Validation and defaulting rules for the CR

Description:

Implement CEL based validation and defaulting rules for the eventing-CR.

Hint: will be tested in the integration test

Acceptance:

  • define validation and defaulting rules for the CR and update the proposal
  • use CEL to validate eventing CR
  • use CEL to apply default values to the eventing CR
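As a hypothetical illustration (the field names are not the actual CRD schema): in a CRD, CEL rules are expressed via x-kubernetes-validations, while defaulting uses the OpenAPI default keyword rather than CEL itself:

```yaml
# Sketch of a fragment of the Eventing CRD schema; field names are illustrative.
properties:
  backend:
    type: object
    properties:
      type:
        type: string
        default: "NATS"
        x-kubernetes-validations:
          - rule: "self == 'NATS' || self == 'EventMesh'"
            message: "backend type must be NATS or EventMesh"
```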

Enable Subscription Manager in NATS Mode

Description

The current implementation of the reconciler for Eventing CRs ensures that all infrastructure for eventing in NATS mode is set up correctly. With this ticket, we need to enable reconciling Subscriptions in NATS mode. As this logic already exists in the eventing-controller, the way to migrate it into the new reconciler is to import the eventing-controller package and reuse it.

Note
The existing logic is imported as a package to avoid code duplication as long as there is still work to be done on the eventing-controller. This ticket is only about starting the subscription manager.

Acceptance Criteria:

  • reconcile kyma subscription and create nats-infrastructure (stream, consumers, dispatcher)
  • existing unit and integration tests should continue to pass as before
  • apply the configuration from the eventing CR as defined in the proposal

Implement initial Eventing-manager using Kubebuilder

Description

As we start from scratch the first step to a fully implemented eventing module is handling the CR. Handling the CR consists of two parts.

  1. Lifecycle manager deploys the module and determines its state using the state field in the status subresource.
  2. Eventing-Manager reacts to the Eventing-CR and updates the state field in the status subresource.

After implementation of this ticket the eventing module will be considered ready by the lifecycle manager. At this time no actual eventing related behaviour is active.

Tasks to be implemented

Acceptance:

  • No actual resources need to be active (api rules, deployments, services).
