xk6-kubernetes's Introduction

Go Reference Version Badge Build Status

xk6-kubernetes

A k6 extension for interacting with Kubernetes clusters while testing.

Build

To build a custom k6 binary with this extension:

  1. Install xk6:

    go install go.k6.io/xk6/cmd/xk6@latest

  2. Build the k6 binary:

    xk6 build --with github.com/grafana/xk6-kubernetes

    The xk6 build command creates, in your local folder, a k6 binary that includes the xk6-kubernetes extension. This binary can then run k6 tests that use the xk6-kubernetes APIs.

Development

To make development a little smoother, use the Makefile in the root folder. The default target will format your code, run tests, and create a k6 binary with your local code rather than from GitHub.

git clone [email protected]:grafana/xk6-kubernetes.git
cd xk6-kubernetes
make

Using the k6 binary built with xk6-kubernetes, run a k6 test as usual:

./k6 run k8s-test-script.js

Usage

The API assumes a kubeconfig file is available at one of the following default locations:

  • the location pointed to by the KUBECONFIG environment variable
  • $HOME/.kube/config

APIs

Generic API

This API offers methods for creating, retrieving, listing and deleting resources of any of the supported kinds.

Method   Parameters               Description
apply    manifest string          creates a Kubernetes resource given a YAML manifest, or updates it if it already exists
create   spec object              creates a Kubernetes resource given its specification
delete   kind, name, namespace    removes the named resource
get      kind, name, namespace    returns the named resource
list     kind, namespace          returns a collection of resources of the given kind
update   spec object              updates an existing resource

The kinds of resources currently supported are:

Examples

Creating a pod using a specification

import { Kubernetes } from 'k6/x/kubernetes';

const podSpec = {
    apiVersion: "v1",
    kind:       "Pod",
    metadata: {
        name:      "busybox",
        namespace: "testns"
    },
    spec: {
        containers: [
            {
                name:    "busybox",
                image:   "busybox",
                command: ["sh", "-c", "sleep 30"]
            }
        ]
    }
}

export default function () {
  const kubernetes = new Kubernetes();

  kubernetes.create(podSpec)

  const pods = kubernetes.list("Pod", "testns");

  console.log(`${pods.length} Pods found:`);
  pods.forEach(function(pod) {
    console.log(`  ${pod.metadata.name}`)
  });
}

Creating a job using a YAML manifest

import { Kubernetes } from 'k6/x/kubernetes';

const manifest = `
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox
  namespace: testns
spec:
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "300"]
    restartPolicy: Never
`

export default function () {
  const kubernetes = new Kubernetes();

  kubernetes.apply(manifest)

  const jobs = kubernetes.list("Job", "testns");

  console.log(`${jobs.length} Jobs found:`);
  jobs.forEach(function(job) {
    console.log(`  ${job.metadata.name}`)
  });
}

Helpers

The xk6-kubernetes extension offers helpers to facilitate common tasks when setting up tests. All helper functions work within a namespace, to facilitate the development of tests segregated by namespace. The helpers are accessed using the following method:

Method    Parameters    Description
helpers   namespace     returns helpers that operate in the given namespace. If none is specified, "default" is used

The methods above return an object that implements the following helper functions:

Method             Parameters                          Description
getExternalIP      service name, timeout in seconds    returns the external IP of a service, if any is assigned before the timeout expires
waitPodRunning     pod name, timeout in seconds        waits until the pod is in 'Running' state or the timeout expires. Returns a boolean indicating whether the pod was ready. Throws an error if the pod is in 'Failed' state
waitServiceReady   service name, timeout in seconds    waits until the given service has at least one endpoint ready or the timeout expires

Examples

Creating a pod and waiting until it is running

import { Kubernetes } from 'k6/x/kubernetes';

let podSpec = {
    apiVersion: "v1",
    kind:       "Pod",
    metadata: {
        name:      "busybox",
        namespace:  "default"
    },
    spec: {
        containers: [
            {
                name:    "busybox",
                image:   "busybox",
                command: ["sh", "-c", "sleep 30"]
            }
        ]
    }
}

export default function () {
  const kubernetes = new Kubernetes();

  // create the pod
  kubernetes.create(podSpec)

  // get helpers for the test namespace
  const helpers = kubernetes.helpers()

  // wait for the pod to be running
  const timeout = 10
  if (!helpers.waitPodRunning(podSpec.metadata.name, timeout)) {
      console.log(`pod ${podSpec.metadata.name} not ready after ${timeout} seconds`)
  }
}

xk6-kubernetes's People

Contributors

alrsorokin, codebien, davidpst, javaducky, jorturfer, kamontat, lxkuz, mstoykov, pablochacin, ppcano, simskij, toddtreece, zv0n


xk6-kubernetes's Issues

Add option for ImagePullPolicy for container creation

When testing in local clusters (for example, minikube) it is convenient to be able to run containers using local images (i.e. images loaded into the local cluster) instead of pulling them from a repository. In this case, setting the imagePullPolicy to IfNotPresent allows using the local image.

Add integration tests

Some features, such as those proposed in #47 are intended to trigger complex behaviours in Kubernetes beyond the creation of some objects.

It can be argued that these features should be tested using unit tests that only verify that the xk6-kubernetes code calls the Kubernetes API with the expected parameters (for example, creating an object with the correct definition). However, this type of test cannot guarantee that the resulting behavior in a live cluster is the expected one. For instance, it cannot catch cases where the developer has an incorrect understanding of the object's parameters. Therefore, integration tests against a live Kubernetes cluster may be desirable.

One key question to resolve as a project is which tools to use for such tests in order to automate their execution. Multiple tools exist in the Kubernetes ecosystem, such as:

Even when it is not strictly necessary to agree on a tool (as tests should only rely on standard Kubernetes APIs), it is convenient to document the process of setting up tests for at least one tool. It is also convenient for these tools to be executed in the CI pipelines, so compatibility with the CI/CD infrastructure is a key requirement.

Redesign scope of xk6-kubernetes API

The xk6-kubernetes extension was initially developed with the intention of being used for chaos experiments, providing basic functionalities such as deleting resources (e.g. namespaces, secrets) and running workloads (for example, jobs). It also provided some other basic functions for creating resources, but added little or no abstraction over the native k8s API.

The functions provided by xk6-kubernetes can also be used when setting up tests. However, some common tasks, such as exposing a deployment as a service, require considerable boilerplate code and can become tedious to program.

At the same time, as the requirements for running chaos experiments have become more complex, some specialized functions have been added, such as attaching ephemeral containers to running pods or executing commands in a container.

It seems clear that the current design of xk6-kubernetes mixes functions that are too generic to be useful when setting up tests with functions that are too specialized to belong in a general-purpose k8s extension. Therefore, the following changes are proposed:

  1. Focus on providing helpers that abstract common tasks, such as those proposed in #62
  2. Cover other requirements by providing a generic apply function for creating resources from YAML definitions
  3. Move specialized functionality introduced to support chaos experiments (ephemeral containers, command execution) to xk6-chaos

Regarding the generic apply method: as handling inline YAML can be cumbersome (in particular if we want to substitute certain fields using variables), other alternatives can be explored:

  • rendering inline YAML using inline text templates (similar to using helm charts)
  • generating YAML using tools such as kustomize, embedding the tool(s) into the library to avoid dependencies on external binaries in the test environment.
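The inline-template alternative can be sketched in plain JavaScript using template literals. The renderJob function and its fields below are illustrative, not part of the extension:

```javascript
// Hypothetical sketch (not part of the extension): render an inline YAML
// manifest with a JS template literal so fields can be substituted from
// variables before passing the result to kubernetes.apply().
function renderJob(name, namespace, image) {
  return `
apiVersion: batch/v1
kind: Job
metadata:
  name: ${name}
  namespace: ${namespace}
spec:
  template:
    spec:
      containers:
      - name: ${name}
        image: ${image}
        command: ["sleep", "30"]
      restartPolicy: Never
`;
}

const manifest = renderJob("load-job", "testns", "busybox");
// In a k6 script the rendered string would then be: kubernetes.apply(manifest)
```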

Edit:
The generic interface could also be provided by using a generic golang client that handles resources as generic structs, which should be easily mapped to JavaScript objects.

One potential benefit of using a generic apply method is the possibility of removing the dependencies to many of the k8s api packages, reducing the size of the extension.

#66 (comment)

`kubernetes.*.get methods` should not throw GoError exceptions when the resource is not available

Each existing module provides a get method to return the resource info:

  • deployments.get(name, nameSpace)
  • config_maps.get(name, nameSpace)
  • ingresses.get(name, nameSpace)
  • jobs.get(name, nameSpace)
  • namespaces.get(name)
  • pods.get(name, nameSpace)
  • secrets.get(name, nameSpace)
  • services.get(name, nameSpace)

When the resource is not available, the call throws an exception.

ERRO[0000] GoError: deployments.apps "hello-mnikube" not found
	at reflect.methodValueCall (native)
	at file:///Users/ppcano/dev/li/k8s/xk6-kubernetes/examples/get-deployment.js:10:55(15)  executor=per-vu-iterations scenario=default source=stacktrace

If the resource is not available, the method should return null.
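Until the behavior changes, a script could wrap get calls so the exception becomes a null return. This getOrNull helper is a hypothetical workaround, not part of the extension:

```javascript
// Hypothetical workaround: wrap a get() call so a thrown "not found" GoError
// becomes a null return instead of aborting the iteration.
function getOrNull(getFn) {
  try {
    return getFn();
  } catch (e) {
    if (String(e).includes("not found")) {
      return null; // resource does not exist
    }
    throw e; // propagate unexpected errors
  }
}

// Usage in a k6 script would look like:
//   const dep = getOrNull(() => deployments.get("hello-minikube", "default"));
//   if (dep === null) { /* resource missing */ }
```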

Add helper function for waiting a Deployment is ready

When setting up the resources for a test, it is common that after deploying an application as a Deployment, the test must wait until all the replicas are ready. This is a simple task, but having a helper function will prevent repeating the code across tests.

Add options for job cleanup

By default, when a job terminates, it is not deleted. Similarly, when a job is deleted, the pods owned by it are not automatically deleted by default.

In order to address this issue, it will be convenient to add options for:

  • Automatically deleting completed jobs
  • Deleting in cascade the pods owned by a job
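For reference, Kubernetes already has native mechanisms these options could map to: the ttlSecondsAfterFinished field of the Job spec for auto-deleting finished jobs, and the propagationPolicy delete option for cascading deletion of owned pods. A sketch of a Job spec using the TTL field (the names and values here are standard Kubernetes API fields, not extension options):

```javascript
// Sketch of a Job spec using the standard Kubernetes field that the first
// option could map to: ttlSecondsAfterFinished auto-deletes the Job after
// it finishes. Cascading pod deletion would instead map to a delete option
// such as propagationPolicy: "Background".
const jobSpec = {
  apiVersion: "batch/v1",
  kind: "Job",
  metadata: { name: "cleanup-demo", namespace: "testns" },
  spec: {
    ttlSecondsAfterFinished: 60, // delete the Job 60s after completion
    template: {
      spec: {
        containers: [
          { name: "sleeper", image: "busybox", command: ["sleep", "5"] }
        ],
        restartPolicy: "Never"
      }
    }
  }
};
```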

Remove Deprecated API

Since the introduction of the Generic API, the previous API around specific resource types was deprecated.

There are several reasons for removing the deprecated API:

  • Simplify the code base
  • Simplify documentation (as the generic API allows doing almost everything the K8s API allows, using YAML manifests)
  • Hopefully, reduce the size of the resulting binary, or at least make it easier to identify the dependencies with the most impact

One of the main blockers so far for removing the deprecated API is that most of the examples depend on that API. Most of these examples now seem superfluous and would add no benefit if converted to the new API, because the tasks they implement are covered by the official k8s documentation.

Another blocker for removing the API is that some functions, such as jobs.wait and pods.exec, should first be moved to helper functions. The pods.addEphemeralContainer function should probably be removed, as it was introduced as a requirement for the xk6-disruptor and has since been moved to that extension.

The suggested plan is:

Use camelCase for properties instead of snake_case.

We are discussing the following default; opening the issue so we don't forget it.

The docs say:

Similarly, Go field names will be converted from Pascal case to Snake case. For example, the struct field SomeField string will be accessible in JS as the some_field object property. This behavior is configurable with the js struct tag, so this can be changed with `SomeField string js:"someField"` or the field can be hidden with `js:"-"`.

To align with common JS conventions, I suggest naming the property configMaps instead of config_maps:

  • kubernetes.configMaps.get(name)
  • kubernetes.configMaps.list()
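For illustration, the snake_case-to-camelCase mapping being proposed is mechanical:

```javascript
// Illustrative helper: convert a snake_case property name to the proposed
// camelCase form (e.g. "config_maps" -> "configMaps"). This is not part of
// the extension; it just demonstrates the naming convention.
function toCamelCase(name) {
  return name.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
}
```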

Bump dependencies and CI

We've fallen behind on the latest version of k6 dependency (and others). CI standards based upon grafana/k6 have also been updated.

Error: the server could not find the requested resource

After running the xk6-kubernetes example script add-ephemeral.js, I noticed an error on stdout:

ERRO[0002] GoError: the server could not find the requested resource
running at reflect.methodValueCall (native)

The pod gets created, though, so the functionality of the script works as expected. This error just makes it seem like there's a failure somewhere...

When I try to kill (kill-pod.js) that pod created with the add-ephemeral.js script, I also get the error message. The pod does get killed.

The create-pod.js script does not produce this error message. Killing that pod (with kill-pod.js) comes back successful as well (no error message).

building with make fails because local module is not used

After forking the repository to add a new package, nodes, the make command failed because it was still referencing the xk6-kubernetes package from the GitHub repository:

github.com/grafana/xk6-kubernetes/pkg/nodes: module github.com/grafana/xk6-kubernetes@latest found (v0.0.0-20220330204656-f40616db6d32), but does not contain package github.com/grafana/xk6-kubernetes/pkg/nodes

Removing the --with option in the k6 build command in the Makefile solves the issue.

--- a/Makefile
+++ b/Makefile
@@ -19,7 +19,7 @@ clean:
 ## build: Builds a custom 'k6' with the local extension. 
 build:
        go install go.k6.io/xk6/cmd/xk6@latest
-       xk6 build --with github.com/grafana/xk6-output-prometheus-remote=.
+       xk6 build

This behavior is unexpected as xk6 should add the current module to the list of replaced modules.

Pod status doesn't change on terminating

If we check the pods after kubernetes.pods.kill(podName, nameSpace), we will see 2 pods: a new one that was automatically created by k8s to restart the process, and the old one that we just killed. Both pods will have Running status... I don't know if it is an issue of the golang k8s client. My local kubectl shows the correct pod statuses: Terminated for the old one and Running for the new one.

Update to latest extension module API in preparation for v0.38.0

Running the extension produces the following warning:

WARN[0000] Module 'k6/x/kubernetes' is using deprecated APIs that will be removed in k6 v0.38.0, for more details on how to update it see https://k6.io/docs/extensions/guides/create-an-extension/#advanced-javascript-extension

Need to upgrade module code for latest standard.

xk6-kubernetes extension can't be used in-cluster

We are using in KEDA this extension to populate k8s resources for the load tests and it worked nice when we executed the tests locally, but when we have tried to move the test from local to k6-operator, we have faced with the problem that the extension can init the Kubernetes client with this error:

time="2023-08-22T18:39:57Z" level=error msg="GoError: stat /home/k6/.kube/config: no such file or directory

I think this error occurs because, during the extension init, the path is forced to $HOME/.kube/config even though I intentionally left it empty in order to use in-cluster mode.

func getClientConfig(options KubeConfig) (*rest.Config, error) {
    kubeconfig := options.ConfigPath
    if kubeconfig == "" {
        home := homedir.HomeDir()
        if home == "" {
            return nil, errors.New("home directory not found")
        }
        kubeconfig = filepath.Join(home, ".kube", "config")
    }
    return clientcmd.BuildConfigFromFlags("", kubeconfig)
}

Managing this at this level doesn't make sense (IMHO), because there is already a helper in controller-runtime that allows reading the kubeconfig file (wherever it is) or falling back to the in-cluster configuration.
https://github.com/kubernetes-sigs/controller-runtime/blob/116a1b831fffe7ccc3c8145306c3e1a3b1b14ffa/alias.go#L84-L97

With just this small change, the extension should work in both cases, outside and inside the cluster.

I'm trying this change in my fork; I can draft a PR if it works and you find this change useful.

Add function for retrieving replicas (pods) of a deployment

A very basic requirement when executing chaos tests on deployments is to act on one of its replicas (for example, to kill it).
Even when it is possible to obtain the list of replicas by listing all pods in a namespace and filtering those that match the label selector of the deployment, this is inconvenient.

Therefore, it would be convenient to have a function that, given a deployment, returns the existing list of pods. This function could optionally accept some flag(s) for common field selectors, such as state (e.g. to filter out non-running replicas).
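The proposed lookup amounts to label-selector matching, which can be sketched in plain JavaScript. The function name and option flag below are invented for illustration; the data shapes mirror Kubernetes API objects:

```javascript
// Illustrative sketch of the proposed lookup: select the pods whose labels
// match a deployment's label selector, optionally keeping only running
// replicas. Not an existing extension API.
function podsForDeployment(deployment, pods, { runningOnly = false } = {}) {
  const selector = deployment.spec.selector.matchLabels;
  return pods.filter((pod) => {
    const labels = pod.metadata.labels || {};
    const matches = Object.entries(selector)
      .every(([key, value]) => labels[key] === value);
    if (!matches) return false;
    return !runningOnly || pod.status.phase === "Running";
  });
}
```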

Re-implement generic Apply method using dynamic client's apply

Presently, the Apply method provided by the generic API is implemented as a Create from a YAML file. This is not consistent with the experience users may have with the kubectl apply command, which allows both creating new resources and modifying existing ones.

Starting with v1.25, the dynamic client offers an Apply method which implements the desired functionality, allowing both the creation and the modification of resources.

Therefore, for the sake of consistency and convenience, it would be desirable to re-implement the Apply method in the generic API using this newly provided Apply method in the dynamic client.

Rename `kill` to `Delete`

This library wraps the k8s.io/client-go to expose the APIs in a k6 extension.

In k8s.io/client-go, the method name to delete a resource is Delete, not Kill. I think we should align with the method names of the original library.

Apply fails if the resources already exist

Self-explanatory, I think 😄

The error:

ERRO[0003] GoError: pods "httpbin" already exists
Run     at reflect.methodValueCall (native)
base    at setup (file:///Users/dgzlopes/go/src/github.com/grafana/xk6-disruptor/examples/httpbin/disrupt-pod.js:18:14(7))
disrupt at native  hint="script exception"

I have the feeling this shouldn't fail. Likewise, kubectl doesn't fail if the resources already exist.

Add helper function for creating random namespaces

The main use case for the xk6-kubernetes extension is facilitating the setup of tests, by providing a simple API for creating Kubernetes resources such as secrets, pods, services and others required for running a test application.

When running multiple concurrent tests, it is convenient to isolate tests by using different namespaces when creating the resources they need. Moreover, it is convenient to use randomly generated namespaces for each test, to prevent interference from other instances of the same test running concurrently, or from previous runs that were not properly torn down.

This means that each such test script must include a sequence of code similar to the example below:

import { Kubernetes } from 'k6/x/kubernetes'
import { randomString } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

const namespace = randomString(8)

const nsObj = {
    apiVersion: "v1",
    kind: "Namespace",
    metadata: {
        name: namespace
    }
}


export  function setup() {
  const k8s = new Kubernetes()
  k8s.create(nsObj)
}

As this is a very common use case, in order to avoid repeating this code in each test, it would be convenient to provide a helper function that creates a new namespace with a random name and returns that name, reducing the above sequence to the one shown below:

import { Kubernetes } from 'k6/x/kubernetes'


export function setup() {
  const k8s = new Kubernetes()
  const namespace = k8s.randomNamespace()
}
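Internally, such a helper would mainly need a unique, DNS-compatible name generator; a sketch in plain JavaScript (the function name is illustrative; the proposed k8s.randomNamespace() would additionally create the namespace in the cluster):

```javascript
// Illustrative sketch of the name generation a randomNamespace() helper
// could perform: a fixed prefix plus a random base36 suffix, compatible
// with DNS-1123 namespace naming (lowercase alphanumerics).
function randomNamespaceName(prefix = "test") {
  // padEnd guards against rare short Math.random() string representations
  const suffix = Math.random().toString(36).slice(2, 10).padEnd(4, "0");
  return `${prefix}-${suffix}`;
}
```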

Add examples for CRDs

After #85 it should be possible to manipulate CRDs using the generic API. Therefore, it would be useful for users to have this capability documented, with some examples provided.
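A minimal sketch of what such an example could look like, assuming the CRD is already installed in the cluster. The group, kind, and fields here are invented for illustration; the object follows the same shape the generic API accepts for built-in kinds:

```javascript
// Hypothetical custom resource, in the same plain-object shape the generic
// API accepts for built-in kinds. The apiVersion/kind/spec values are
// invented for illustration only.
const customResource = {
  apiVersion: "example.grafana.com/v1",
  kind: "LoadProfile",
  metadata: { name: "spike", namespace: "testns" },
  spec: { rate: 100, duration: "5m" }
};
// In a k6 script: kubernetes.create(customResource)
```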

Implement generic k8s interface

Implement a set of generic methods for creating/deleting/getting/listing objects in k8s, instead of the current set of methods per object type offered by the xk6-kubernetes extension. In this way, the extension is simplified and the code base is reduced significantly, as one set of methods covers all resource types.

For example, for creating a job:

import { Kubernetes } from 'xk6-kubernetes'

const jobResource = {
    group:    "batch",
    version:  "v1",
    resource: "jobs"
}

const jobDef = {
    apiVersion: "batch/v1",
    kind: "Job",
    metadata: {
        name: "demo-job",
    },
    spec: {
        template: {
            metadata: {
                labels: {
                    app: "demo",
                }
            },
            spec: {
                containers: [
                    {
                        name:  "sleeper",
                        image: "busybox",
                        command: ["sh",  "-c", "sleep 3"]
                    }
                ]
            }
        }
   }
}

const k8s = new Kubernetes()
let job = k8s.create(jobResource, jobDef)

Similarly, for retrieving resources, such as listing jobs:

import { Kubernetes } from 'xk6-kubernetes'

const jobDef = {
    …                          // omitted for brevity
}

const k8s = new Kubernetes()
const namespace = 'default'

let jobs = k8s.list(jobResource, namespace)

support only some specific kubernetes version?

I am trying to run tests with Kubernetes version v1.19.13 using most of the example scripts, but I don't think it's working. For example, with get-pod.js: when I run it locally and the pod is not present in the namespace, the test should display: pod not found

Wait for ephemeral container to be ready

Trying to execute a command in an ephemeral container immediately after creating it may fail with the error ERRO[0000] GoError: unable to upgrade connection: container not found ("k6-chaos"). It would be convenient to add an option for waiting for the ephemeral container to be ready.

Create Helpers for exposing pod/deployment in a cluster

When running chaos tests, this is a common pattern:

  • Create a target pod or deployment (commonly, on its own namespace)
  • Subject the target pod/deployment to a certain attack (e.g. kill a replica, stress resources, delay network traffic)
  • Submit load to the target and measure behavior

A similar requirement may arise in other contexts, such as integration tests.

However, in order to reach the target from outside a k8s cluster, it must be exposed as a service and optionally (depending on the cluster's infrastructure) the service exposed as an ingress.

Presently, the xk6-kubernetes extension offers the basic building blocks for doing so, but it requires considerable programming effort. It can be done by applying a raw YAML manifest or by creating a Kubernetes service object "manually" and submitting it to the cluster via a create operation. Neither option is convenient or idiomatic in the context of k6 testing.

Therefore, it would be convenient to create a helper method in the ingress package that allows exposing a pod/deployment as a service and, optionally, as an ingress.

The intended API should look similar to the example below:

import { Kubernetes } from 'k6/x/kubernetes'

const k8s = new Kubernetes()

const pod = k8s.pods.create({
  name: "my-pod",
  namespace: "my-namespace",
  image: "my-image",
})

k8s.helpers.expose({
  selector: { run: "my-app" },
  service: "my-service", 
  type: "ClusterIP",
  port: 80, 
  ingress: "my-ingress",
  path: "/my-app"
})
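For reference, a minimal sketch of the Service object such an expose() helper might submit under the hood, using the values from the example above (the helper itself does not exist yet; this is only the standard Kubernetes object it would generate):

```javascript
// Sketch of the Service object a hypothetical expose() helper could build
// from its arguments. Field names follow the standard Kubernetes Service
// API; only the values come from the example above.
const serviceSpec = {
  apiVersion: "v1",
  kind: "Service",
  metadata: { name: "my-service", namespace: "my-namespace" },
  spec: {
    type: "ClusterIP",
    selector: { run: "my-app" },
    ports: [{ port: 80 }]
  }
};
```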

jobs.create: check the successful execution

Problem

At this moment, jobs.create does not guarantee that a command is properly executed on a node.

Case 1: Given a wrong image name, busybo:

import { sleep } from 'k6';
import { Kubernetes } from 'k6/x/kubernetes';

export default function () {

  const k = new Kubernetes({});

  k.jobs.create({
    namespace: 'default',
    node_name: 'minikube',
    name: 'my-new-job3',
    image: 'busybo',
    command: ["sh", "-c", "sleep 20"]
  });

  sleep(10);
}

It will fail.

(screenshot omitted)

The error is muted and not propagated to the k6 code.

Case 2 - Given a wrong command name sherror:

import { sleep } from 'k6';
import { Kubernetes } from 'k6/x/kubernetes';

export default function () {

  const k = new Kubernetes({});

  k.jobs.create({
    namespace: 'default',
    node_name: 'minikube',
    name: 'my-new-job3',
    image: 'busybox',
    command: ["sherror", "-c", "sleep 20"]
  });

  sleep(10);
}

It will fail.

(screenshot omitted)

The error is muted and not propagated to the k6 code.

Case 3 - Given a wrong node_name

(screenshot omitted)

The error is muted and not propagated to the k6 code.

Exploring possible solutions

  1. Review the golang Kubernetes client to identify these errors
  2. Create an API to check that the job completed successfully, given a timeout
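The second option could be sketched as a polling loop over the job's status. Here getJob and sleep are stand-ins for a kubernetes get call and k6's sleep; this is not an existing API:

```javascript
// Hypothetical sketch of a job-completion check: poll the job's status until
// it reports success, a pod failure, or the timeout expires. getJob stands in
// for something like kubernetes.get("Job", name, namespace); sleep for k6's
// sleep(seconds).
function waitJobCompleted(getJob, sleep, timeoutSeconds) {
  for (let i = 0; i < timeoutSeconds; i++) {
    const job = getJob();
    if (job.status && job.status.succeeded > 0) {
      return true; // at least one pod completed successfully
    }
    if (job.status && job.status.failed > 0) {
      throw new Error(`job failed: ${JSON.stringify(job.status)}`);
    }
    sleep(1);
  }
  return false; // timed out
}
```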

Deleting namespace using the generic interface fails

When trying to delete a namespace using the generic interface introduced in #70, the operation fails with the message object is being deleted: namespaces "namepspace-name" already exists.

This issue is probably related to an issue found in the create function that led to introducing logic for handling namespaces as a special case. The underlying problem is that namespaces cannot be accessed using a namespaced resource interface in the dynamic Kubernetes client.

Error authenticating to cloud (GCP)

While running the basic list-pods script against a cluster in GCP, I encounter the following error:

ERRO[0000] no Auth Provider found for name "gcp"
        at file:///home/vw/dev/k6/scripts/list-pods.js:5:21(4)
        at native  executor=per-vu-iterations scenario=default source=stacktrace

From my understanding, the expected action is to read from the kubeconfig filepath provided (or take from the default path).

Is the default incorrect, or does the documentation need to be updated differently?

Arrays of objects returned by xk6-kubernetes do not have the length property

When developing a function that waits for a service to get its load balancer IP address, I found that the array of IP addresses did not have a length property, causing the code below to fail:

let svc = k8sClient.services.apply( ..... )
while (svc.status.load_balancer.ingress.length == 0) {       // length is 'undefined' which is != '0'
    sleep(1)
    svc = k8sClient.services.get("my-service", "my-namespace")
}
let svcIp = svc.status.load_balancer.ingress[0].ip

This is due to issues in the way goja defines the length property (thanks to @mstoykov for pointing to this) and is solved by bumping the k6 version used in xk6-kubernetes.

Enable linting and address discovered errors

PR #40 attempted to include golangci-lint in the CI/CD workflow for the project, using rules similar to those of k6 and other extensions. This raised several issues which warrant correction in a separate issue.

Port forwarding feature is not available to expose a service to the local host

We are testing gRPC calls using k6. For that, we need to port-forward a particular service before we can call a gRPC method (currently we are doing this manually).
I saw that port forwarding is not yet available in xk6-kubernetes.
Are you planning to introduce this capability any time soon? :)

Thanks in advance.

Regards,
SaifAli Sanadi
