Home Page: https://cncf.ci

License: Apache License 2.0

Cross-cloud - multi-cloud provisioner

The multi-cloud Kubernetes provisioner component of the Cross-Cloud CI project.

What is Cross-cloud?

A Kubernetes provisioner supporting multiple clouds (e.g. AWS, Azure, Google, Equinix Metal) which:

  • Creates K8s clusters on cloud providers
  • Supplies conformance-validated Kubernetes endpoints for each cloud provider, with cloud-specific features enabled

How to Use Cross-cloud

You need a working Docker environment.

Note: 147.75.69.23 is the IP address of the DNS server for Cross-Cloud-deployed nodes. If you want to reach your nodes by name from outside the cluster, that IP needs to be in your /etc/resolv.conf; however, it is not a delegating resolver, so it should not be the only nameserver in your resolv.conf.
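
If you want to check name resolution before changing your resolver configuration, you can query the node DNS directly; the hostname below is purely hypothetical and should be replaced with one of your actual node names:

# Sketch: ask the Cross-Cloud node DNS for one of your deployed nodes
dig @147.75.69.23 test-master1.example.internal +short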

Quick start for AWS

Pre-reqs: an IAM user with the following permissions (a CLI sketch for creating such a user follows this list):

  • AmazonEC2FullAccess
  • AmazonS3FullAccess
  • AmazonRoute53DomainsFullAccess
  • AmazonRoute53FullAccess
  • IAMFullAccess
  • IAMUserChangePassword
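
If you manage IAM from the command line, here is a sketch of creating such a user with the aws CLI (the user name is an example; the console works equally well):

# Sketch only: create an IAM user and attach the managed policies listed above
aws iam create-user --user-name cross-cloud-ci
for policy in AmazonEC2FullAccess AmazonS3FullAccess AmazonRoute53DomainsFullAccess \
              AmazonRoute53FullAccess IAMFullAccess IAMUserChangePassword; do
  aws iam attach-user-policy \
    --user-name cross-cloud-ci \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
# Create an access key for the new user; its output supplies the values exported below
aws iam create-access-key --user-name cross-cloud-ci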

AWS credentials

export AWS_ACCESS_KEY_ID="YOUR_AWS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_KEY"
export AWS_DEFAULT_REGION="YOUR_AWS_DEFAULT_REGION" # e.g. ap-southeast-2
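
Optionally, sanity-check the exported credentials with the AWS CLI before provisioning:

# Should print the account and user ARN for the credentials above
aws sts get-caller-identity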

Run the following to provision a Kubernetes cluster on AWS:

docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=aws \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:production
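
When the run completes, the Terraform state and cluster artifacts are written under the mounted data directory (/tmp/data in this example). The exact layout can vary between releases, so the sketch below searches for a kubeconfig rather than assuming a fixed path:

# Sketch: locate the generated kubeconfig under the mounted data directory and query the cluster.
# The file name is an assumption; inspect /tmp/data yourself if nothing is found.
KUBECONFIG_FILE=$(find /tmp/data -type f -name 'kubeconfig*' | head -n 1)
kubectl --kubeconfig "$KUBECONFIG_FILE" get nodes

To tear the cluster down again, re-run the same command with -e COMMAND=destroy (see the Equinix Metal section below for a complete destroy example).
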
Quick start for GCE

Pre-reqs: Project created on Google Cloud (eg. test-cncf-cross-cloud)

Google Cloud JSON configuration file for authentication. This file is downloaded directly from the Google Developers Console (or can be created with the gcloud CLI, as sketched after the steps below).

  1. Log into the Google Developers Console and select a project.
  2. With the API Manager view selected, click "Credentials" on the left, then "Create credentials", and finally "Service account key".
  3. Select "Compute Engine default service account" in the "Service account" dropdown, and select "JSON" as the key type.
  4. Clicking "Create" will download your credentials.
  5. Rename this file to credentials-gce.json and move it to your home directory (~/credentials-gce.json).
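
If you prefer the gcloud CLI, the same key can be created non-interactively; the service account email below follows the default Compute Engine pattern (PROJECT_NUMBER-compute@developer.gserviceaccount.com) and must be adjusted for your project:

# Sketch: create a JSON key for the Compute Engine default service account
gcloud iam service-accounts keys create ~/credentials-gce.json \
  --iam-account "PROJECT_NUMBER-compute@developer.gserviceaccount.com"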

Google Project ID

  1. Log into the Google Developers Console; you will land on the Google API library page.
  2. Click the "Select a project" drop-down in the upper left.
  3. Copy the Project ID for the desired project from the window that appears, e.g. test-cncf-cross-cloud (or use the gcloud sketch below).
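
The same information is available from the gcloud CLI:

# Lists project IDs and names for the account you are logged into
gcloud projects list --format="table(projectId,name)"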

Run the following to provision a Kubernetes cluster on GCE:

export GOOGLE_CREDENTIALS=$(cat ~/credentials-gce.json)
docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=gce \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e GOOGLE_REGION=us-central1 \
  -e GOOGLE_ZONE=us-central1-a \
  -e GOOGLE_PROJECT=test-cncf-cross-cloud \
  -e GOOGLE_CREDENTIALS="${GOOGLE_CREDENTIALS}" \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:production

Quick start for OpenStack

You will need a full set of credentials for an OpenStack cloud, including the authentication endpoint.
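
Most OpenStack clouds provide a downloadable "OpenStack RC" file that exports the OS_* variables referenced below; the filename here is only an example:

# Sketch: source the RC file from your cloud's dashboard (typically prompts for your password)
source ./my-cloud-openrc.sh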

Run the following to provision a Kubernetes cluster on OpenStack:

docker run \
  -v $(pwd)/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=openstack \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e TF_VAR_os_auth_url=$OS_AUTH_URL \
  -e TF_VAR_os_region_name=$OS_REGION_NAME \
  -e TF_VAR_os_user_domain_name=$OS_USER_DOMAIN_NAME \
  -e TF_VAR_os_username=$OS_USERNAME \
  -e TF_VAR_os_project_name=$OS_PROJECT_NAME \
  -e TF_VAR_os_password=$OS_PASSWORD \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:production

Quick start for vSphere via VMware Cloud (VMC) on AWS

The vSphere provider requires vSphere host and credential information, as well as credentials for the linked AWS account.
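
These map onto the environment variables consumed by the command below; a sketch with placeholder values:

# Placeholders only: substitute your VMC/vSphere endpoint and the linked AWS account's credentials
export VSPHERE_SERVER="vcenter.example.com"
export VSPHERE_USER="administrator@example.local"
export VSPHERE_PASSWORD="YOUR_VSPHERE_PASSWORD"
export VSPHERE_AWS_ACCESS_KEY_ID="YOUR_AWS_KEY_ID"
export VSPHERE_AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_KEY"
export VSPHERE_AWS_REGION="us-west-2"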

Run the following to provision a vSphere cluster:

docker run \
  --rm \
  --dns 147.75.69.23 --dns 8.8.8.8 \
  -v $(pwd)/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=vsphere \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e VSPHERE_SERVER=$VSPHERE_SERVER \
  -e VSPHERE_USER=$VSPHERE_USER \
  -e VSPHERE_PASSWORD=$VSPHERE_PASSWORD \
  -e VSPHERE_AWS_ACCESS_KEY_ID=$VSPHERE_AWS_ACCESS_KEY_ID \
  -e VSPHERE_AWS_SECRET_ACCESS_KEY=$VSPHERE_AWS_SECRET_ACCESS_KEY \
  -e VSPHERE_AWS_REGION=$VSPHERE_AWS_REGION \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:ci-stable-v0-2-0

Quick start for Equinix Metal (metal.equinix.com)

Equinix Metal (formerly Packet.net) requires an auth token and a project ID.
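
Export them before running the container (placeholder values shown; the provider still uses the legacy PACKET_* variable names):

# Both values come from the Equinix Metal (formerly Packet) console
export PACKET_AUTH_TOKEN="YOUR_AUTH_TOKEN"
export PACKET_PROJECT_ID="YOUR_PROJECT_ID"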

To deploy to Equinix Metal:

docker run \
  -v /tmp/data:/cncf/data \
  --dns 147.75.69.23 --dns 8.8.8.8 \
  -e NAME=cross-cloud \
  -e CLOUD=packet    \
  -e COMMAND=deploy \
  -e BACKEND=file  \
  -e PACKET_AUTH_TOKEN=${PACKET_AUTH_TOKEN} \
  -e TF_VAR_packet_project_id=${PACKET_PROJECT_ID} \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:production

To destroy your cluster on Equinix Metal:

docker run \
  -v /tmp/data:/cncf/data \
  --dns 147.75.69.23 --dns 8.8.8.8 \
  -e NAME=cross-cloud \
  -e CLOUD=packet    \
  -e COMMAND=destroy \
  -e BACKEND=file  \
  -e PACKET_AUTH_TOKEN=${PACKET_AUTH_TOKEN} \
  -e TF_VAR_packet_project_id=${PACKET_PROJECT_ID} \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:production

General usage and configuration

Minimum required configuration to use Cross-cloud to deploy a Kubernetes cluster on a given cloud:

docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=<aws|gke|gce|openstack|packet> \
  -e COMMAND=<deploy|destroy> \
  -e BACKEND=<file|s3> \
  <CLOUD_SPECIFIC_OPTIONS> \
  <KUBERNETES_CLUSTER_OPTIONS> \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:production

Common Options

  • -e CLOUD=<aws|gke|gce|openstack|packet> # Choose the cloud provider, then add the appropriate cloud-specific options below.
  • -e COMMAND=<deploy|destroy>
  • -e BACKEND=<file|s3> # file stores the Terraform state file on local disk; s3 stores it in an AWS S3 bucket.

Cloud Specific Options

AWS:

  • -e AWS_ACCESS_KEY_ID=secret
  • -e AWS_SECRET_ACCESS_KEY=secret
  • -e AWS_DEFAULT_REGION=ap-southeast-2

Equinix Metal:

  • -e PACKET_AUTH_TOKEN=secret
  • -e TF_VAR_packet_project_id=secret

GCE/GKE:

  • -e GOOGLE_CREDENTIALS=secret
  • -e GOOGLE_REGION=us-central1
  • -e GOOGLE_PROJECT=test-163823
  • -e GOOGLE_ZONE=us-central1-a

OpenStack:

  • -e TF_VAR_os_auth_url=$OS_AUTH_URL
  • -e TF_VAR_os_region_name=$OS_REGION_NAME
  • -e TF_VAR_os_user_domain_name=$OS_USER_DOMAIN_NAME
  • -e TF_VAR_os_username=$OS_USERNAME
  • -e TF_VAR_os_project_name=$OS_PROJECT_NAME
  • -e TF_VAR_os_password=$OS_PASSWORD

vSphere via VMware Cloud (VMC) on AWS:

  • -e VSPHERE_SERVER=1.2.3.4
  • -e VSPHERE_USER=admin
  • -e VSPHERE_PASSWORD=notblank
  • -e VSPHERE_AWS_ACCESS_KEY_ID=public
  • -e VSPHERE_AWS_SECRET_ACCESS_KEY=secret
  • -e VSPHERE_AWS_REGION=us-west-2

Kubernetes Cluster Options

Custom configuration options for the Kubernetes cluster (a combined example follows this list):

  • -e TF_VAR_pod_cidr=10.2.0.0/16 # Set the Kubernetes cluster pod CIDR
  • -e TF_VAR_service_cidr=10.0.0.0/24 # Set the Kubernetes cluster service CIDR
  • -e TF_VAR_worker_node_count=3 # Set the number of worker nodes to deploy in the cluster
  • -e TF_VAR_master_node_count=3 # Set the number of master nodes to deploy in the cluster
  • -e TF_VAR_dns_service_ip=10.0.0.10 # Set the Kubernetes DNS service IP
  • -e TF_VAR_k8s_service_ip=10.0.0.1 # Set the Kubernetes service IP
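
For example, combining some of these with the AWS quick start above (all option values are illustrative):

docker run \
  -v /tmp/data:/cncf/data \
  -e NAME=cross-cloud \
  -e CLOUD=aws \
  -e COMMAND=deploy \
  -e BACKEND=file \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -e TF_VAR_pod_cidr=10.2.0.0/16 \
  -e TF_VAR_service_cidr=10.0.0.0/24 \
  -e TF_VAR_master_node_count=3 \
  -e TF_VAR_worker_node_count=3 \
  -ti registry.cncf.ci/cncf/cross-cloud/provisioning:production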

Additional Documentation

  • FAQ - Frequently Asked Questions

cross-cloud's People

Contributors

agentpoyo, akutz, chandankumar4, dankohn, denverwilliams, doberloh, edwarnicke, figo, hh, ibreakthecloud, lixuna, mrhillsman, namliz, taylor, thewolfpack, wvwatson, yikun, zhengzhenyu

cross-cloud's Issues

Update Packet to use Data Sources

Packet Terraform code should use data sources and not depend on files being on disk, as any stored file will be lost when a CI job is re-run.

Add charts for prometheus to charts.cncf.ci

Prometheus can go here: https://gitlab.com/cncf/cncf.gitlab.io/tree/master/charts
It's currently hosted at https://cncf.gitlab.io

Here are the container image repositories/tags we need to find current releases for:

$ grep -A2 image: charts/stable/prometheus/values.yaml 
  image:
    repository: prom/alertmanager
    tag: v0.5.1
--
  image:
    repository: jimmidyson/configmap-reload
    tag: v0.1
--
  image:
    repository: gcr.io/google_containers/kube-state-metrics
    tag: v0.4.1
--
  image:
    repository: prom/node-exporter
    tag: v0.13.0
--
  image:
    repository: prom/prometheus
    tag: v1.5.2

Add CNI Project https://github.com/containernetworking/cni

We should look at adding Builds for CNI https://github.com/containernetworking/cni

Currently our deploys use HyperKube, and the CNI release is included in the image:

test-master1 net.d # rkt list
UUID		APP		IMAGE NAME					STATE	CREATED		STARTED		NETWORKS
751bb79c	flannel		quay.io/coreos/flannel:v0.6.2			exited	30 minutes ago	30 minutes ago	
7aed1f74	flannel		quay.io/coreos/flannel:v0.6.2			running	30 minutes ago	30 minutes ago	
97556306	oem-gce		coreos.com/oem-gce:1298.7.0			running	31 minutes ago	31 minutes ago	
efb555ea	hyperkube	gcr.io/google-containers/hyperkube:v1.6.3	running	29 minutes ago	29 minutes ago	
test-master1 net.d # rkt enter efb555ea /bin/bash
root@test-master1:/# ls -la /opt/cni/bin/
total 41728
drwxr-xr-x. 2 root root    4096 May 10 16:07 .
drwxr-xr-x. 3 root root    4096 May 10 16:07 ..
-rwxr-x---. 1 root root 4026452 Mar 22 20:04 bridge
-rwxr-x---. 1 root root 2901956 Mar 22 20:03 cnitool
-rwxr-x---. 1 root root 9636499 Mar 22 20:04 dhcp
-rwxr-x---. 1 root root 2910884 Mar 22 20:03 flannel
-rwxr-x---. 1 root root 3102946 Mar 22 20:04 host-local
-rwxr-x---. 1 root root 3609358 Mar 22 20:04 ipvlan
-rwxr-x---. 1 root root 3170507 Mar 22 20:04 loopback
-rwxr-x---. 1 root root 3640336 Mar 22 20:04 macvlan
-rwxr-x---. 1 root root 2733314 Mar 22 20:04 noop
-rwxr-x---. 1 root root 4000236 Mar 22 20:04 ptp
-rwxr-x---. 1 root root 2909669 Mar 22 20:03 tuning

We should figure out how the CNI release is tied in, but we are in the process of moving to using K8s binaries for our deploys, so this isn't too important yet.

Enable unauthenticated docker pull/run registry.cncf.ci/repo/proj:tag

Neither docker run nor docker login works against registry.cncf.ci:

docker pull:

$ docker run registry.cncf.ci/coredns/coredns:c737c741
Unable to find image 'registry.cncf.ci/coredns/coredns:c737c741' locally
docker: Error response from daemon: Get https://registry.cncf.ci/v2/coredns/coredns/manifests/c737c741: Get https://gitlab.cncf.ci/jwt/auth?scope=repository%3Acoredns%2Fcoredns%3Apull&service=container_registry: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.

We get the same error trying to docker login:

$ docker login registry.cncf.ci
Username: hh
Password: 
Error response from daemon: Get https://registry.cncf.ci/v2/: Get https://gitlab.cncf.ci/jwt/auth?account=hh&client_id=docker&offline_token=true&service=container_registry: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (Client.Timeout exceeded while awaiting headers)

Design Doc?

Is there any documentation on what this is or how it works?

Fork Necessary Prometheus Repos to gitlab.cncf.ci

We need to figure out which repos are necessary to build the Prometheus containers used in the stable Prometheus Helm chart and fork them to gitlab.cncf.ci.

We should probably go ahead and create a prometheus org and mirror the project names.

Prometheus Server is Failing to Start

Prometheus server is failing to start with the following error:

[dlx@W541 aws-mvci-stable]$ docker run -ti registry.cncf.ci/prometheus/prometheus:ci-v1.5.2.cd47ea7b.6027
panic: standard_init_linux.go:175: exec user process caused "no such file or directory" [recovered]
	panic: standard_init_linux.go:175: exec user process caused "no such file or directory"

goroutine 1 [running, locked to thread]:
panic(0x7ddca0, 0xc820117c70)
	/usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/urfave/cli.HandleAction.func1(0xc8200ed2e8)
	/go/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/app.go:478 +0x38e
panic(0x7ddca0, 0xc820117c70)
	/usr/local/go/src/runtime/panic.go:443 +0x4e9
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization.func1(0xc8200ecbf8, 0xc8200220c8, 0xc8200ecd08)
	/go/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:259 +0x136
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization(0xc820061630, 0x7fcf1a319728, 0xc820117c70)
	/go/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:277 +0x5b1
main.glob.func8(0xc820080a00, 0x0, 0x0)
	/go/src/github.com/opencontainers/runc/main_unix.go:26 +0x68
reflect.Value.call(0x7446a0, 0x8f09d0, 0x13, 0x839798, 0x4, 0xc8200ed268, 0x1, 0x1, 0x0, 0x0, ...)
	/usr/local/go/src/reflect/value.go:435 +0x120d
reflect.Value.Call(0x7446a0, 0x8f09d0, 0x13, 0xc8200ed268, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/local/go/src/reflect/value.go:303 +0xb1
github.com/urfave/cli.HandleAction(0x7446a0, 0x8f09d0, 0xc820080a00, 0x0, 0x0)
	/go/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/app.go:487 +0x2ee
github.com/urfave/cli.Command.Run(0x83c628, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8cff80, 0x51, 0x0, ...)
	/go/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/command.go:191 +0xfec
github.com/urfave/cli.(*App).Run(0xc820001b00, 0xc82000a100, 0x2, 0x2, 0x0, 0x0)
	/go/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/app.go:240 +0xaa4
main.main()
	/go/src/github.com/opencontainers/runc/main.go:137 +0xe24

Kubelet --cloud-provider flag is not supported on GCE with the CNI plugin

When the kubelet has the --cloud-provider flag set on GCE and the CNI plugin is in use, the kube-controller-manager RouteController attempts to create routes instead of delegating that responsibility to the CNI network plugin, leaving nodes unavailable for scheduling.

bash-4.3# kubectl describe pods kube-dns-v20-mzrgb --namespace=kube-system
Name:		kube-dns-v20-mzrgb
Namespace:	kube-system
Node:		/
Labels:		k8s-app=kube-dns
		version=v20
Status:		Pending
Tolerations:	CriticalAddonsOnly=:Exists
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----			-------------	--------	------			-------
  50s		19s		7	{default-scheduler }			Warning		FailedScheduling	no nodes available to schedule pods

Update AWS deploys to use hyperkube v1.6.3

Currently we are using 1.4.7 and 1.5.1; we'd like to get to 1.6.3.

$ grep kubelet_image_ */input.tf 
aws/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
aws/input.tf:variable "kubelet_image_tag" { default = "v1.5.1_coreos.0"}
azure/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
azure/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}
cross-cloud/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
cross-cloud/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}
gce/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
gce/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}
packet/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
packet/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}

Update Packet deploys to use hyperkube v1.6.3

Currently we are using 1.4.7 and 1.5.1; we'd like to get to 1.6.3.

$ grep kubelet_image_ */input.tf
aws/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
aws/input.tf:variable "kubelet_image_tag" { default = "v1.5.1_coreos.0"}
azure/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
azure/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}
cross-cloud/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
cross-cloud/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}
gce/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
gce/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}
packet/input.tf:variable "kubelet_image_url" { default = "quay.io/coreos/hyperkube"}
packet/input.tf:variable "kubelet_image_tag" { default = "v1.4.7_coreos.0"}

Add GCE Support

GCE support has been added:

bash-4.3# kubectl get nodes
NAME                                  STATUS                     AGE
test-master1.c.test-163823.internal   Ready,SchedulingDisabled   28m
test-master2.c.test-163823.internal   Ready,SchedulingDisabled   28m
test-master3.c.test-163823.internal   Ready,SchedulingDisabled   28m
test-worker1.c.test-163823.internal   Ready                      24m
test-worker2.c.test-163823.internal   Ready                      24m
test-worker3.c.test-163823.internal   Ready                      24m
bash-4.3# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
bash-4.3# kubectl get pods --namespace=kube-system
NAME                                                          READY     STATUS    RESTARTS   AGE
kube-apiserver-test-master1.c.test-163823.internal            1/1       Running   0          29m
kube-apiserver-test-master2.c.test-163823.internal            1/1       Running   1          29m
kube-apiserver-test-master3.c.test-163823.internal            1/1       Running   0          27m
kube-controller-manager-test-master1.c.test-163823.internal   1/1       Running   0          29m
kube-controller-manager-test-master2.c.test-163823.internal   1/1       Running   1          29m
kube-controller-manager-test-master3.c.test-163823.internal   1/1       Running   0          29m
kube-proxy-test-master1.c.test-163823.internal                1/1       Running   0          29m
kube-proxy-test-master2.c.test-163823.internal                1/1       Running   1          29m
kube-proxy-test-master3.c.test-163823.internal                1/1       Running   0          29m
kube-proxy-test-worker1.c.test-163823.internal                1/1       Running   0          24m
kube-proxy-test-worker2.c.test-163823.internal                1/1       Running   0          23m
kube-proxy-test-worker3.c.test-163823.internal                1/1       Running   0          24m
kube-scheduler-test-master1.c.test-163823.internal            1/1       Running   1          29m
kube-scheduler-test-master2.c.test-163823.internal            1/1       Running   1          28m
kube-scheduler-test-master3.c.test-163823.internal            1/1       Running   0          28m
