robscott / kube-capacity
A simple CLI that provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster.
License: Apache License 2.0
kubectl krew install resource-capacity
Updated the local copy of plugin index.
Updated the local copy of plugin index "kvaps".
Installing plugin: resource-capacity
W0829 14:00:23.068357 12540 install.go:164] failed to install plugin "resource-capacity": plugin "resource-capacity" does not offer installation for this platform
F0829 14:00:23.068404 12540 root.go:79] failed to install some plugins: [resource-capacity]: plugin "resource-capacity" does not offer installation for this platform
Is it possible to add darwin/arm64 (I think that's the right name for Go?) to the builds to make it work?
$ kubectl resource-capacity --pods
shows too many pods, so I want to filter them with a given namespace-labels argument. If you try this:
$ kubectl resource-capacity --pods -n kube-system
it shows everything as empty for some reason. What I expect is that it would gather the exact same values as the previous run and list only the given namespace. But it prints * in the NAMESPACE column, not kube-system.
In some cases, memory values for a node do not include the Mi suffix:
10.145.197.168 42125m (75%) 148700m (265%) 221838Mi (82%) 416923Mi (154%)
10.145.197.169 45325m (80%) 121200m (216%) 62346Mi (23%) 180263Mi (66%)
10.145.197.170 14425m (25%) 37700m (67%) 45346Mi (16%) 100345Mi (37%)
162.150.14.214 13790m (24%) 45700m (81%) 39411368960000m (29%) 106336625408000m (78%)
162.150.14.215 13790m (24%) 39700m (70%) 38874498048000m (28%) 90767368960000m (67%)
162.150.14.216 16790m (29%) 42700m (76%) 46390690816000m (34%) 98283561728000m (72%)
162.150.14.217 12490m (22%) 39200m (70%) 38606062592000m (28%) 91841110784000m (68%)
In these cases, the report is wrong. The logic needs to change here:
https://github.com/robscott/kube-capacity/blob/master/pkg/capacity/resources.go#L356
For example, use more specific requestString and limitString variants so the code does not fall back to the wrong unit. Example: add requestStringM() and limitStringM() that only convert memory units, to avoid the problem:
func (tp *tablePrinter) printClusterLine() {
	tp.printLine(&tableLine{
		node:           "*",
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    tp.cm.cpu.requestString(tp.availableFormat),
		cpuLimits:      tp.cm.cpu.limitString(tp.availableFormat),
		cpuUtil:        tp.cm.cpu.utilString(tp.availableFormat),
		memoryRequests: tp.cm.memory.requestStringM(tp.availableFormat),
		memoryLimits:   tp.cm.memory.limitStringM(tp.availableFormat),
		memoryUtil:     tp.cm.memory.utilString(tp.availableFormat),
		podCount:       tp.cm.podCount.podCountString(),
	})
}

func (tp *tablePrinter) printNodeLine(nodeName string, nm *nodeMetric) {
	tp.printLine(&tableLine{
		node:           nodeName,
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    nm.cpu.requestString(tp.availableFormat),
		cpuLimits:      nm.cpu.limitString(tp.availableFormat),
		cpuUtil:        nm.cpu.utilString(tp.availableFormat),
		memoryRequests: nm.memory.requestStringM(tp.availableFormat),
		memoryLimits:   nm.memory.limitStringM(tp.availableFormat),
		memoryUtil:     nm.memory.utilString(tp.availableFormat),
		podCount:       nm.podCount.podCountString(),
	})
}
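As a rough illustration of the normalization such helpers would need, here is a minimal, self-contained sketch. The function name toMi and its handling of the milli-byte suffix are assumptions for illustration, not kube-capacity's actual code; a real fix would go through resource.Quantity scaling.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// toMi normalizes a memory quantity string to whole Mi. Metrics APIs can
// report memory in milli-bytes (trailing "m"), plain bytes, or "Mi"; this
// sketch handles only those three cases.
func toMi(s string) (int64, error) {
	switch {
	case strings.HasSuffix(s, "Mi"):
		return strconv.ParseInt(strings.TrimSuffix(s, "Mi"), 10, 64)
	case strings.HasSuffix(s, "m"):
		v, err := strconv.ParseInt(strings.TrimSuffix(s, "m"), 10, 64)
		if err != nil {
			return 0, err
		}
		return v / 1000 / (1024 * 1024), nil // milli-bytes -> bytes -> Mi
	default:
		v, err := strconv.ParseInt(s, 10, 64)
		if err != nil {
			return 0, err
		}
		return v / (1024 * 1024), nil // bytes -> Mi
	}
}

func main() {
	// The first anomalous value from the report above, normalized to Mi.
	mi, err := toMi("39411368960000m")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%dMi\n", mi)
}
```

With this kind of normalization, both the `Mi`-suffixed and the raw milli-byte rows above would render in the same unit.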
We have started to use Azure OIDC to authenticate to our clusters. Whenever I execute the kube-capacity command against a cluster, it modifies my kubeconfig file and removes a setting (environment: AzurePublicCloud), which breaks kubectl authentication.
Original content of kubeconfig:
...
users:
- name: oidc_user
  user:
    auth-provider:
      config:
        access-token: <TOKEN>
        apiserver-id: <APISERVER ID>
        client-id: <CLIENT ID>
        environment: AzurePublicCloud
        expires-in: "3599"
        expires-on: "1579869933"
        refresh-token: <REFRESH TOKEN>
        tenant-id: <TENANT ID>
      name: azure
...
kubeconfig contents after running kube-capacity:
...
users:
- name: oidc_user
  user:
    auth-provider:
      config:
        access-token: <TOKEN>
        apiserver-id: <APISERVER ID>
        client-id: <CLIENT ID>
        expires-in: "3599"
        expires-on: "1579869933"
        refresh-token: <REFRESH TOKEN>
        tenant-id: <TENANT ID>
      name: azure
...
The same bug seems to exist in Stern: https://github.com/wercker/stern/issues/119
A Stern fork seems to have fixed the issue by upgrading to a newer Kubernetes API.
Only a minor issue, but the 0.3.3 release prints 0.3.2 when run with the version argument.
https://github.com/robscott/kube-capacity/releases/download/0.3.3/kube-capacity_0.3.3_Linux_x86_64.tar.gz
Great tool!
Being able to sort by some field is great, but when we have mixed node sizes the sort ends up being a bit weird. So if we could sort by the percentage of the metric, that would be really nice.
This is helpful because I want to see which nodes are close to being full, but because the sort uses absolute values, this becomes a bit harder.
See below what I mean: the requests are "out of order" based on percent!
➜ k resource-capacity --util --sort cpu.request
NODE CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
* 93424m (38%) 177600m (72%) 25953m (10%) 129164Mi (25%) 290377Mi (58%) 172597Mi (34%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 4555m (57%) 7000m (88%) 171m (2%) 8668Mi (59%) 11392Mi (77%) 2579Mi (17%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 4555m (57%) 7100m (89%) 242m (3%) 9052Mi (61%) 12160Mi (83%) 2811Mi (19%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3905m (49%) 4800m (60%) 1012m (12%) 4700Mi (31%) 11264Mi (76%) 6233Mi (42%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3795m (96%) 9800m (250%) 874m (22%) 6436Mi (44%) 13764Mi (94%) 8882Mi (60%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3545m (90%) 7900m (201%) 795m (20%) 5726Mi (39%) 15922Mi (108%) 7536Mi (51%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3513m (89%) 8050m (205%) 3386m (86%) 7492Mi (51%) 11180Mi (76%) 10849Mi (74%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3165m (40%) 3800m (48%) 197m (2%) 2688Mi (18%) 6944Mi (46%) 3320Mi (22%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3165m (40%) 3800m (48%) 372m (4%) 2688Mi (18%) 6944Mi (46%) 4207Mi (28%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3075m (38%) 3700m (46%) 351m (4%) 2752Mi (18%) 6944Mi (46%) 4489Mi (30%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx 3033m (77%) 6450m (164%) 3081m (78%) 5317Mi (36%) 7785Mi (53%) 10692Mi (73%)
....
When I have some time, I might look into whether I can implement it myself, but I am not sure I will have enough time to understand everything!
Trying to install with
kubectl krew install resource-capacity
I get this error:
Updated the local copy of plugin index.
Installing plugin: resource-capacity
W0611 20:42:27.316608 79007 install.go:164] failed to install plugin "resource-capacity": plugin "resource-capacity" does not offer installation for this platform
F0611 20:42:27.316669 79007 root.go:79] failed to install some plugins: [resource-capacity]: plugin "resource-capacity" does not offer installation for this platform
Newer versions of Go support the Apple M1 architecture; any hope of getting this compiled for that too?
Hello
I've just upgraded from version 0.4.0 to 0.6.0 and I'm noticing garbage when requesting the CPU utilization:
$ kubectl resource-capacity -u
NODE CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
* 147620m (80%) 474660m (260%) 77370554329n (42%) 233900Mi (33%) 650466Mi (93%) 135871836Ki (19%)
EDIT: this issue appeared in v0.5.0
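For reference, the trailing n is the metrics API's nanocore unit, so the value just needs scaling to millicores before printing. A dependency-free sketch of that conversion (the helper nanoToMilli is hypothetical; the plugin's real fix would scale a resource.Quantity instead):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nanoToMilli converts a CPU quantity reported in nanocores
// (e.g. "77370554329n") to whole millicores.
func nanoToMilli(s string) (int64, error) {
	v, err := strconv.ParseInt(strings.TrimSuffix(s, "n"), 10, 64)
	if err != nil {
		return 0, err
	}
	return v / 1_000_000, nil // 1 millicore = 1e6 nanocores
}

func main() {
	// The garbage value from the table above, scaled to millicores.
	m, err := nanoToMilli("77370554329n")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%dm\n", m)
}
```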
Please add a --context option like in kubectl.
I get this result in some cases, with 1Mi in the CPU column:
x.x.x.x rio segment-recorder-2-579c6b8cdf-5mp8m 55980m/56000m 53000m/56000m 289375Mi/289575Mi 284455Mi/289575Mi
x.x.x.x rio segment-recorder-6979fc899c-869mm 55980m/56000m 53000m/56000m 289375Mi/289575Mi 284455Mi/289575Mi
x.x.x.x rio stream-coordinator-9 55980m/56000m 52000m/56000m 289375Mi/289575Mi 281383Mi/289575Mi
x.x.x.x kube-system sumatra-daemonset-855gg 1Mi/1Mi 1Mi/1Mi 289575Mi/289575Mi 289575Mi/289575Mi
Taking kube-capacity for a spin, and getting an error:
$ kube-capacity
Error getting metrics
panic: the server could not find the requested resource (get nodes.metrics.k8s.io)
goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.getMetrics(0xc000116700, 0xc000371b90)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:66 +0x30a
github.com/robscott/kube-capacity/pkg/capacity.List(0x28b4dd0, 0x0, 0x0, 0x0)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:29 +0x3e
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x288d420, 0x28b4dd0, 0x0, 0x0)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:38 +0xe1
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x288d420, 0xc0000381b0, 0x0, 0x0, 0x288d420, 0xc0000381b0)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2cc
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x288d420, 0x288d680, 0xc000133f50, 0x1b6061e)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(0x288d420, 0x10053b0, 0xc00009c058)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/robscott/kube-capacity/pkg/cmd.Execute()
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x2d
main.main()
/Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20
kubectl get nodes
does work, however. Am I missing a dependency?
Currently it is possible to filter node labels with --node-labels.
In reality, some nodes have two labels, like node-role.kubernetes.io/worker and node-role.kubernetes.io/infra.
To exclude a node by a second label, I would like to propose a feature to exclude labels with --exclude-node-labels.
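A sketch of what the proposed exclusion could look like internally, assuming simple "key" or "key=value" filters (the helper name and behavior are hypothetical; a real implementation would likely reuse Kubernetes label selectors):

```go
package main

import (
	"fmt"
	"strings"
)

// excluded reports whether a node's labels match any "key" or "key=value"
// exclude filter: the inverse of how --node-labels selects nodes.
func excluded(nodeLabels map[string]string, filters []string) bool {
	for _, f := range filters {
		key, want, hasValue := strings.Cut(f, "=")
		got, ok := nodeLabels[key]
		if ok && (!hasValue || got == want) {
			return true
		}
	}
	return false
}

func main() {
	labels := map[string]string{
		"node-role.kubernetes.io/worker": "",
		"node-role.kubernetes.io/infra":  "",
	}
	// A node carrying both role labels is dropped by an infra exclude filter.
	fmt.Println(excluded(labels, []string{"node-role.kubernetes.io/infra"}))
}
```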
I have some pods that stay in Completed status after a CronJob ran, but they are still shown in the list.
Any plans to add a feature for namespace resource quotas?
Thank you for a great tool!
First of all - great tool - simple to use and powerful.
We noticed that in some cases the totals that the tool rolls up at the pod level do not match the total cpu limits/requests of the actual containers in the pods. This seems to happen when the pod has init containers that specify cpu requests/limits. For example:
{
  "name": "zen-core-api-6bb6b6d64c-p624c",
  "namespace": "cp4ba",
  "cpu": {
    "requests": "100m",
    "requestsPercent": "0%",
    "limits": "2",
    "limitsPercent": "12%"
  },
  "memory": {
    "requests": "256Mi",
    "requestsPercent": "0%",
    "limits": "2Gi",
    "limitsPercent": "3%"
  },
  "containers": [
    {
      "name": "zen-core-api-container",
      "cpu": {
        "requests": "100m",
        "requestsPercent": "0%",
        "limits": "400m",
        "limitsPercent": "2%"
      },
      "memory": {
        "requests": "256Mi",
        "requestsPercent": "0%",
        "limits": "1Gi",
        "limitsPercent": "1%"
      }
    }
  ]
}
In this case, there's only one active container in the pod and its cpu.limits are 400m, but the total reported at the pod level says cpu.limits is 2. We looked at the pod definition on the actual cluster and saw that it has an init container whose cpu.limits are in fact 2.
At this point we are left wondering whether this is expected behavior, and if it is, whether the tool picks the greater of the two values or just picks the first one for the pod.
Thanks.
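For what it's worth, this matches the documented Kubernetes rule for effective pod resources: because init containers run to completion before app containers start, a pod's effective request/limit is the maximum of the sum over app containers and the largest single init container, so reporting the larger value (2) at the pod level is arguably correct. A minimal sketch of the rule (hypothetical helper, values in millicores):

```go
package main

import "fmt"

// effectiveRequest computes a pod's effective resource request per the
// Kubernetes rule: max(sum of app containers, max of init containers).
// The same rule applies to limits and to memory.
func effectiveRequest(app, init []int64) int64 {
	var sum, maxInit int64
	for _, v := range app {
		sum += v
	}
	for _, v := range init {
		if v > maxInit {
			maxInit = v
		}
	}
	if maxInit > sum {
		return maxInit
	}
	return sum
}

func main() {
	// One app container limited to 400m and one init container limited to
	// 2000m, as in the pod above: the init container dominates.
	fmt.Printf("%dm\n", effectiveRequest([]int64{400}, []int64{2000}))
}
```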
Great plugin, very useful. It's just missing one thing: is it possible to add the number of pods on each node and show the node's max pod limit?
For example, a column:
pods
99/110
I am trying to check the available request/limit resources in my cluster with a script, to determine whether I have enough resources to deploy a new pod. I want to get this information from YAML or JSON output so I can easily manipulate it with a jq or yq tool.
The option -a is ignored when the output is not table.
$ kube-capacity
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS
* 2939m (30%) 12744m (134%) 28368Mi (50%) 52476Mi (94%)
aks-default-xxxxxxx-vmss000000 1024m (53%) 9344m (491%) 2320Mi (43%) 10260Mi (191%)
aks-spot-yyyyyyyy-vmss000000 510m (26%) 850m (44%) 12224Mi (97%) 16266Mi (129%)
aks-spot-yyyyyyyy-vmss000018 485m (25%) 850m (44%) 5128Mi (40%) 9170Mi (72%)
aks-spot-yyyyyyyy-vmss00001g 460m (24%) 850m (44%) 4336Mi (34%) 8378Mi (66%)
aks-spot-yyyyyyyy-vmss00001t 460m (24%) 850m (44%) 4360Mi (34%) 8402Mi (66%)
$ kube-capacity -a
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS
* 6561m/9500m -3244m/9500m 27387Mi/55755Mi 3279Mi/55755Mi
aks-default-xxxxxxx-vmss000000 876m/1900m -7444m/1900m 3045Mi/5365Mi -4895Mi/5365Mi
aks-spot-yyyyyyyy-vmss000000 1390m/1900m 1050m/1900m 374Mi/12598Mi -3668Mi/12598Mi
aks-spot-yyyyyyyy-vmss000018 1415m/1900m 1050m/1900m 7470Mi/12598Mi 3428Mi/12598Mi
aks-spot-yyyyyyyy-vmss00001g 1440m/1900m 1050m/1900m 8262Mi/12598Mi 4220Mi/12598Mi
aks-spot-yyyyyyyy-vmss00001t 1440m/1900m 1050m/1900m 8238Mi/12598Mi 4196Mi/12598Mi
$ kube-capacity -a -o json
{
"nodes": [
{
"name": "aks-default-xxxxxxx-vmss000000",
"cpu": {
"requests": "1024m",
"requestsPercent": "53%",
"limits": "9344m",
"limitsPercent": "491%"
},
"memory": {
"requests": "2320Mi",
"requestsPercent": "43%",
"limits": "10260Mi",
"limitsPercent": "191%"
}
},
{
"name": "aks-spot-yyyyyyyy-vmss000000",
"cpu": {
"requests": "510m",
"requestsPercent": "26%",
"limits": "850m",
"limitsPercent": "44%"
},
"memory": {
"requests": "12224Mi",
"requestsPercent": "97%",
"limits": "16266Mi",
"limitsPercent": "129%"
}
},
{
"name": "aks-spot-yyyyyyyy-vmss000018",
"cpu": {
"requests": "485m",
"requestsPercent": "25%",
"limits": "850m",
"limitsPercent": "44%"
},
"memory": {
"requests": "5128Mi",
"requestsPercent": "40%",
"limits": "9170Mi",
"limitsPercent": "72%"
}
},
{
"name": "aks-spot-yyyyyyyy-vmss00001g",
"cpu": {
"requests": "460m",
"requestsPercent": "24%",
"limits": "850m",
"limitsPercent": "44%"
},
"memory": {
"requests": "4336Mi",
"requestsPercent": "34%",
"limits": "8378Mi",
"limitsPercent": "66%"
}
},
{
"name": "aks-spot-yyyyyyyy-vmss00001t",
"cpu": {
"requests": "460m",
"requestsPercent": "24%",
"limits": "850m",
"limitsPercent": "44%"
},
"memory": {
"requests": "4360Mi",
"requestsPercent": "34%",
"limits": "8402Mi",
"limitsPercent": "66%"
}
}
],
"clusterTotals": {
"cpu": {
"requests": "2939m",
"requestsPercent": "30%",
"limits": "12744m",
"limitsPercent": "134%"
},
"memory": {
"requests": "28368Mi",
"requestsPercent": "50%",
"limits": "52476Mi",
"limitsPercent": "94%"
}
}
}
When -a is specified with JSON or YAML output, replace requests and limits (or add a field) with requestsAvailable and limitsAvailable.
$ kube-capacity version
kube-capacity version v0.7.4
Hi!
I'd like the possibility to print some label value (like beta.kubernetes.io/instance-type) alongside the rest of the node data, like this:
$ kubectl resource-capacity --show-label "beta.kubernetes.io/instance-type"
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS beta.kubernetes.io/instance-type
* 29695m (62%) 116800m (245%) 126647Mi (40%) 379666Mi (121%) *
ip-10-210-1-115.us-east-1.compute.internal 5075m (31%) 18300m (115%) 25870Mi (41%) 51144Mi (81%) t3.medium
ip-10-210-1-82.us-east-1.compute.internal 6035m (76%) 26900m (340%) 26672Mi (42%) 93640Mi (150%) t3.medium
ip-10-210-3-174.us-east-1.compute.internal 5435m (68%) 19000m (240%) 20781Mi (33%) 71265Mi (114%) t3.small
ip-10-210-3-89.us-east-1.compute.internal 5975m (75%) 26300m (332%) 26432Mi (42%) 92104Mi (147%) t3.large
ip-10-210-8-230.us-east-1.compute.internal 7175m (90%) 26300m (332%) 26894Mi (43%) 71513Mi (114%) g4dn.xlarge
What do you think about this (sorry about the formatting)? Is this feasible?
Thank you!
It seems we currently don't have network-related information (like total incoming/outgoing bandwidth) shown in the CLI. We could add it as separate columns. Please let me know what you think.
Hi Rob
I've created this GitHub Action, krew-plugin-release, which can be used to auto-bump the new version in krew-index when you publish a new release of your plugin.
Right now I am using it to publish the modify-secret krew plugin.
If this is something that interests you, I will be happy to open a PR for you.
Thanks
Rajat Jindal
https://danp.net/posts/macos-dns-change-in-go-1-20/
I'm a Mac user with an /etc/resolver/inhousedomain that works via a VPN connected to our VPC, but the resource-capacity release binary, built with Go 1.19.5, can't seem to recognize it.
https://github.com/search?q=repo%3Arobscott%2Fkube-capacity%201.19&type=code
> brew info robscott/tap/kube-capacity
robscott/tap/kube-capacity: stable 0.5.0
kube-capacity provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster
/opt/homebrew/Cellar/kube-capacity/0.5.0 (5 files, 31.0MB) *
Built from source on 2021-04-27 at 13:56:04
From: https://github.com/robscott/homebrew-tap/blob/HEAD/Formula/kube-capacity.rb
Since kube-capacity is using Go 1.17 and client-go v0.23.4 is compatible with Go 1.17, I think the client-go dependency should be bumped to take advantage of the latest release. @robscott If you assign this to me, I can handle the upgrade process. Regards! I can also upgrade all direct dependencies if you want.
$ kube-capacity -pu
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cd9f2]
goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.(*clusterMetric).addPodMetric(0xc000290a38, 0xc0002914f0, 0x0, 0x0, 0x0, 0x0, 0xc0006bda80, 0x1b, 0x0, 0x0, ...)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:175 +0x932
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc000136930, 0xc000f748c0, 0xc000139d50, 0x0, 0x0, 0x0)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:105 +0x82e
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010100, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x13670c5, ...)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:50 +0x216
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20c18a0, 0xc000370840, 0x0, 0x1)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20c18a0, 0xc0000b2030, 0x1, 0x1, 0x20c18a0, 0xc0000b2030)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20c18a0, 0xc000427f68, 0x10e368e, 0x20c18a0)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
/Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20
$ kube-capacity -u
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cd9f2]
goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.(*clusterMetric).addPodMetric(0xc0004f0a38, 0xc0004f14f0, 0x0, 0x0, 0x0, 0x0, 0xc00061b300, 0x1b, 0x0, 0x0, ...)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:175 +0x932
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc000102930, 0xc000933c70, 0xc00011ecb0, 0x0, 0x0, 0x0)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:105 +0x82e
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x13670c5, ...)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:50 +0x216
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20c18a0, 0xc000344860, 0x0, 0x1)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20c18a0, 0xc00000c0b0, 0x1, 0x1, 0x20c18a0, 0xc00000c0b0)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20c18a0, 0xc0003fbf68, 0x10e368e, 0x20c18a0)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
/Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20
$ kube-capacity version
kube-capacity version 0.3.1
Metrics-server is working for me with kubectl:
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
h1355 1693m 21% 5304Mi 67%
h1879 2835m 35% 5200Mi 66%
h230 3412m 42% 4765Mi 60%
h303 1792m 22% 5162Mi 65%
h504 2599m 32% 4739Mi 60%
h5345 79m 0% 11161Mi 46%
h71 1598m 19% 11731Mi 73%
h783 1161m 14% 4008Mi 50%
h834 1263m 15% 5533Mi 70%
h911 1763m 22% 5406Mi 68%
s234 839m 10% 3980Mi 50%
s237 975m 12% 5451Mi 69%
s238 399m 4% 2985Mi 37%
s239 526m 6% 3192Mi 40%
GET https://api.example.com:8443/apis/metrics.k8s.io/v1beta1/nodes 200 OK in 98 milliseconds
$ kubectl version --short
Client Version: v1.15.0-alpha.3
Server Version: v1.14.1
When using the following command:
~$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ip-10-1-0-100.eu-central-1.compute.internal 589m 7% 12429Mi 81%
there is a difference in the utilization:
~$ kubectl resource-capacity --util --sort cpu.util | sed -r '/^\s*$/d'
NODE CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
ip-10-1-0-100.eu-central-1.compute.internal 6245m (78%) 9400m (117%) 346m (4%) 13177Mi (86%) 18224Mi (119%) 10593Mi (69%)
Why is that the case? Is this a bug, or do we need to take anything else into consideration?
Thanks.
Hello there,
It would be very convenient if you added an option to specify the path to the Kubernetes config file.
Example: --kubeconfig="path-to-kubeconfig-file"
Most tools allow username and group impersonation, like kubectl does.
--as string Username to impersonate for the operation
--as-group stringArray Group to impersonate for the operation
Would be cool.
Hi, I set up this plugin following the instructions. When I issue
kubectl resource-capacity
I get an output as follows
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS
* 1700m (34%) 0m (0%) 330Mi (1%) 340Mi (1%)
kubernetes 250m (12%) 0m (0%) 0Mi (0%) 0Mi (0%)
kubernetes-node2 350m (35%) 0m (0%) 90Mi (1%) 0Mi (0%)
kubernetes2 1100m (55%) 0m (0%) 240Mi (4%) 340Mi (5%)
However, whatever command I use, I get errors as follows
kubectl resource-capacity ‐‐sort cpu.limit
Error: unknown command "‐‐sort" for "kube-capacity"
kubectl resource-capacity ‐‐sort cpu.util --util
Error: unknown command "‐‐sort" for "kube-capacity"
kube-capacity
kube-capacity: command not found
Is there any config to add or update following the setup? Thanks
Hello @robscott
It would be nice to be able to select only a particular column, like MEM UTIL, as a filtering option.
Showing all the columns sometimes makes the output harder to read, as the screen cannot fit them all without wrapping lines, which hurts readability.
Cheers.
kube-capacity -n ocsl-dev -p
configuration-server-92-42zwb 250m (1%) 1000m (6%) 750Mi (0%) 1024Mi (0%)
But the actual request is 200m. There are init containers, but none of them are 50m, so I don't understand where it's getting the 250m from.
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 200m
memory: 750Mi
I was expecting to see cpu/memory util only for pods in the given namespace, but it seems to include all pods, e.g.:
$ kube-capacity -u -n my-namespace
NODE CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
* 8250m (1%) 20000m (4%) 26768m (5%) 36508Mi (1%) 36508Mi (1%) 434069Mi (17%)
node-1 0Mi (0%) 0Mi (0%) 1124m (32%) 0Mi (0%) 0Mi (0%) 6105Mi (40%)
node-2 0Mi (0%) 0Mi (0%) 1957m (55%) 0Mi (0%) 0Mi (0%) 9312Mi (61%)
node-3 0Mi (0%) 0Mi (0%) 843m (24%) 0Mi (0%) 0Mi (0%) 4347Mi (28%)
node-4 0Mi (0%) 0Mi (0%) 910m (3%) 0Mi (0%) 0Mi (0%) 21097Mi (15%)
node-5 0Mi (0%) 0Mi (0%) 3766m (50%) 0Mi (0%) 0Mi (0%) 8770Mi (27%)
Similar output when filtering with --namespace-labels.
WDYT?
Hello folks,
I get different results from the JSON and non-JSON outputs for the following command:
kube-capacity --util --sort cpu.util --output json
JSON output:
{ "nodes": [ { "name": "gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-t6t4", "cpu": { "requests": "2062m", "requestsPercent": "52%", "limits": "5544m", "limitsPercent": "141%", "utilization": "1013446562n", "utilizationPercent": "25%" }, "memory": { "requests": "5821Mi", "requestsPercent": "43%", "limits": "6151Mi", "limitsPercent": "46%", "utilization": "3707128Ki", "utilizationPercent": "27%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-e4e0068f-6zts", "cpu": { "requests": "1803m", "requestsPercent": "45%", "limits": "3100m", "limitsPercent": "79%", "utilization": "787564923n", "utilizationPercent": "20%" }, "memory": { "requests": "2768Mi", "requestsPercent": "20%", "limits": "3438Mi", "limitsPercent": "25%", "utilization": "2532912Ki", "utilizationPercent": "18%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-e4e0068f-8xjc", "cpu": { "requests": "2083m", "requestsPercent": "53%", "limits": "3410m", "limitsPercent": "86%", "utilization": "626234143n", "utilizationPercent": "15%" }, "memory": { "requests": "3802Mi", "requestsPercent": "28%", "limits": "3458Mi", "limitsPercent": "26%", "utilization": "2229032Ki", "utilizationPercent": "16%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-8w0p", "cpu": { "requests": "2113m", "requestsPercent": "53%", "limits": "4600m", "limitsPercent": "117%", "utilization": "597228442n", "utilizationPercent": "15%" }, "memory": { "requests": "4172Mi", "requestsPercent": "31%", "limits": "4252Mi", "limitsPercent": "31%", "utilization": "2956028Ki", "utilizationPercent": "21%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-292x", "cpu": { "requests": "1813m", "requestsPercent": "46%", "limits": "3400m", "limitsPercent": "86%", "utilization": "538324970n", "utilizationPercent": "13%" }, "memory": { "requests": "4696Mi", "requestsPercent": "35%", "limits": "3228Mi", "limitsPercent": "24%", "utilization": "2081224Ki", "utilizationPercent": "15%" } }, { "name": 
"gke-snapblocs-dpstudi-default-f5be526-e4e0068f-p2m0", "cpu": { "requests": "1713m", "requestsPercent": "43%", "limits": "3200m", "limitsPercent": "81%", "utilization": "528323068n", "utilizationPercent": "13%" }, "memory": { "requests": "3762Mi", "requestsPercent": "28%", "limits": "3228Mi", "limitsPercent": "24%", "utilization": "2317608Ki", "utilizationPercent": "17%" } } ], "clusterTotals": { "cpu": { "requests": "11587m", "requestsPercent": "49%", "limits": "23254m", "limitsPercent": "98%", "utilization": "4091122108n", "utilizationPercent": "17%" }, "memory": { "requests": "25021Mi", "requestsPercent": "31%", "limits": "23755Mi", "limitsPercent": "29%", "utilization": "15823932Ki", "utilizationPercent": "19%" } } }
Non-JSON output:
As you can notice, for the node `gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-t6t4` the CPU and memory utilization in the JSON output use `n` and `Ki` units, which do not match the `m` and `Mi` units in the non-JSON output.
Version:
kubernetes: 1.20
kube-capacity: 0.6.1
It would be nice if we could add support for node taints. For example, if a node has been cordoned and is set to NoSchedule/NoExecute, it should be possible to exclude it.
e.g.
kubectl get nodes
NAME
example-node-1 Ready,SchedulingDisabled <none> 425d v1.24.6
example-node-2 Ready <none> 227d v1.24.6
kube-capacity
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS
* 560m (28%) 130m (7%) 572Mi (9%) 770Mi (13%)
example-node-1 220m (22%) 10m (1%) 192Mi (6%) 360Mi (12%)
example-node-2 340m (34%) 120m (12%) 380Mi (13%) 410Mi (14%)
Now if we exclude cordoned nodes:
kube-capacity --exclude-noschedule-nodes
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS
* 340m (34%) 132m (12%) 380Mi (13%) 410Mi (13%)
example-node-2 340m (34%) 120m (12%) 380Mi (13%) 10Mi (14%)
We can see we have less capacity available than we thought.
#49 was fixed by #60, but it shows the wrong number of pods, bigger than the limit, because it counts all kinds of pods: Completed, Error, etc.
It should show the same number as the "Non-terminated Pods" field of kubectl describe node; that field reflects whether it is possible to schedule a pod on the node.
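A sketch of the filtering being asked for, counting only pods that still hold node resources (the phase names Succeeded/Failed are the standard Kubernetes ones; the helper itself is hypothetical):

```go
package main

import "fmt"

// countNonTerminated mirrors the "Non-terminated Pods" count in
// kubectl describe node: pods in Succeeded (Completed) or Failed (Error)
// phase no longer hold resources and should not count against the limit.
func countNonTerminated(phases []string) int {
	n := 0
	for _, p := range phases {
		if p != "Succeeded" && p != "Failed" {
			n++
		}
	}
	return n
}

func main() {
	phases := []string{"Running", "Succeeded", "Pending", "Failed", "Running"}
	fmt.Printf("%d non-terminated pods\n", countNonTerminated(phases))
}
```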
Just an idea, but vpa-recommender could be used to generate some extra info for pods (if the user is running it in their cluster). The output might look like this:
NODE NAMESPACE POD CPU REQUESTS CPU LIMITS CPU UTIL CPU RECOMMENDATION MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL MEMORY RECOMMENDATION
...
ip-12-34-567-89.eu-west-2.compute.internal my-namespace my-pod-7d95ccc554-2ltsq 600m (3%) 0m (0%) 4m (0%) 123m 1024Mi (1%) 1024Mi (1%) 431Mi (0%) 789Mi
Hi community,
With my limited permissions, I get this error:
kubectl resource-capacity --sort cpu.limit --util --pods
Error listing Nodes: nodes is forbidden: User "320144150" cannot list resource "nodes" in API group "" at the cluster scope
Is it possible to use the tool with my limited RBAC permissions?
Best regards,
Jizu
$ kube-capacity --node-labels 'kubernetes.io/hostname=h1349' -u
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cf0c5]
goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc00028a9a0, 0xc0013a0700, 0xc00028a380, 0xc001464150, 0x0, 0x1c, 0x0)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:97 +0x4c5
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010000, 0x0, 0x0, 0x7fff578efb99, 0x1c, 0x0, 0x0, 0x0, 0x0, 0x136a805, ...)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:53 +0x286
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20ca900, 0xc00043c300, 0x0, 0x3)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20ca900, 0xc0000be010, 0x3, 0x3, 0x20ca900, 0xc0000be010)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20ca900, 0xc00043bf68, 0x10e5cae, 0x20ca900)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20
$ kube-capacity version
kube-capacity version 0.3.2
Hi, great tool, but I have problems using it with Active Directory authentication. I get:
$ kube-capacity --pods
Error connecting to Kubernetes: No Auth Provider found for name "service"
I use this to connect:
user:
  auth-provider:
    config:
      access-token:
      apiserver-id
      client-id
One ask here (which I'm happy to help with) is a sort feature. Right now I pipe the output through the sort
command on Linux, and that works OK, but it would be nice to sort things by default or with a flag. Bubbling a signal of "oversubscription" to the top would be nice. In the output with --usage
, the parentheses cause column-based sort
some issues, since sort includes the parentheses in its comparison. As a minimal change, it would be helpful to not wrap the percentages in parentheses.
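For reference, the workaround described above can look like the following, using hypothetical sample rows shaped like the node list (the column index is an assumption):

```shell
# Sample rows shaped like kube-capacity's node list (made-up values),
# sorted descending by the CPU-requests column (field 2).
# sort -n stops parsing at the "m" suffix, so same-unit columns work,
# but parentheses around percentages would break this comparison.
printf 'node-a 250m 500m\nnode-b 900m 1200m\nnode-c 100m 200m\n' \
  | sort -k2 -rn
```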
Here kube-capacity sets the resource unit:
kube-capacity/pkg/capacity/resources.go
Lines 103 to 112 in 9557a28
If the cpu
or memory
unit is a blank string (for example, at 0% usage), this code makes the wrong judgement (this applies to every check of Format
; here is just one example):
kube-capacity/pkg/capacity/resources.go
Lines 366 to 372 in 9557a28
Because the value of cpu
or memory
is a blank string, its type won't be resource.DecimalSI
, so it always falls into the else
code block. The subsequent calculation is wrong too, since the else
code block calls formatTomegiBytes
with an incorrect unit:
kube-capacity/pkg/capacity/resources.go
Lines 387 to 393 in 9557a28
The wrong output looks like this (the red values):
I will try to fix it and open a PR; my skills are not great, but I will do my best.
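A minimal sketch of the kind of guard that could avoid the blank-unit case; the function name and the "0Mi" fallback are hypothetical, not the project's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// formatMemory is a hypothetical formatter: when the raw value is
// blank (e.g. 0% usage), return an explicit "0Mi" instead of letting
// downstream unit detection misclassify the empty string.
func formatMemory(raw string) string {
	if strings.TrimSpace(raw) == "" {
		return "0Mi"
	}
	// Unit detection would run here; blank input never reaches it.
	return raw
}

func main() {
	fmt.Println(formatMemory(""))     // prints "0Mi"
	fmt.Println(formatMemory("64Mi")) // prints "64Mi"
}
```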
When $KUBECONFIG
holds multiple colon-separated paths to multiple config files, kube-capacity doesn't seem to handle this:
➞ echo $KUBECONFIG
/home/serge/.kube/config.d/civo-civo-k3s-t0-kubeconfig:/home/serge/.kube/config.d/kube_config_rk0.yml
➞ kubectx
civo-k3s-t0
rk0
➞ kube-capacity
/home/serge/.kube/config.d/civo-civo-k3s-t0-kubeconfig:/home/serge/.kube/config.d/kube_config_rk0.yml does not exist - please make sure you have a kubeconfig configured.
panic: stat /home/serge/.kube/config.d/civo-civo-k3s-t0-kubeconfig:/home/serge/.kube/config.d/kube_config_rk0.yml: no such file or directory
goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/kube.getKubeConfig(0x203000, 0x2, 0x0)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/kube/clientset.go:65 +0x2ea
github.com/robscott/kube-capacity/pkg/kube.NewClientSet(0xc000423ad8, 0x40b79f, 0xc0002a2e20)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/kube/clientset.go:33 +0x22
github.com/robscott/kube-capacity/pkg/capacity.getPodsAndNodes(0x0, 0x12d538a)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:40 +0x34
github.com/robscott/kube-capacity/pkg/capacity.List(0x1f3f558, 0x0, 0x0, 0x0)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:29 +0x37
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x1f13680, 0x1f3f558, 0x0, 0x0)
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:38 +0xe1
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x1f13680, 0xc0000381b0, 0x0, 0x0, 0x1f13680, 0xc0000381b0)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2cc
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1f13680, 0x1f138e0, 0xc000423f50, 0x105aeae)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(0x1f13680, 0x4056b0, 0xc00009c058)
/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/robscott/kube-capacity/pkg/cmd.Execute()
/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x2d
main.main()
/Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20
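One plausible direction for a fix: client-go's default loading rules already merge colon-separated $KUBECONFIG entries, but even a plain split with the standard library handles the path list. A sketch, not the project's actual code:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// splitKubeconfig splits a $KUBECONFIG value into individual file
// paths using the OS path-list separator (":" on Unix, ";" on
// Windows), the same convention kubectl follows.
func splitKubeconfig(value string) []string {
	return filepath.SplitList(value)
}

func main() {
	// Hypothetical two-entry $KUBECONFIG value:
	paths := splitKubeconfig("/home/serge/.kube/a:/home/serge/.kube/b")
	for _, p := range paths {
		fmt.Println(p) // each path can then be stat'ed individually
	}
}
```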
Outputting the same data in JSON would be awesome for leveraging jq
or other tools. Thoughts welcome on formatting, or on whether this is a good feature request.
Trying to use these together results in the following error message:
% kubectl resource-capacity --node-labels 'kubernetes.io/role=node' -u
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cf0c5]
goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc0001324d0, 0xc000269ab0, 0xc000492d20, 0xc0001344d0, 0x0, 0x17, 0x0)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:97 +0x4c5
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010000, 0x0, 0x0, 0x7ffd49634e63, 0x17, 0x0, 0x0, 0x0, 0x0, 0x136a805, ...)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:53 +0x286
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20ca900, 0xc00038c450, 0x0, 0x3)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20ca900, 0xc00003a090, 0x3, 0x3, 0x20ca900, 0xc00003a090)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20ca900, 0xc00041bf68, 0x10e5cae, 0x20ca900)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20
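The trace suggests buildClusterMetric dereferences a per-node entry that the label selector filtered out. A hypothetical guard, with types and names invented for illustration (the real code in pkg/capacity differs):

```go
package main

import "fmt"

// nodeMetric stands in for the per-node aggregate the panic points
// at; it is not the project's actual type.
type nodeMetric struct {
	podCount int
}

// addPodSafely skips pods whose node was excluded by --node-labels
// instead of dereferencing a missing or nil map entry.
func addPodSafely(nodes map[string]*nodeMetric, nodeName string) {
	nm, ok := nodes[nodeName]
	if !ok || nm == nil {
		return // node filtered out; ignore its pods
	}
	nm.podCount++
}

func main() {
	nodes := map[string]*nodeMetric{"node-a": {}}
	addPodSafely(nodes, "node-a")   // counted
	addPodSafely(nodes, "filtered") // safely ignored
	fmt.Println(nodes["node-a"].podCount) // prints "1"
}
```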
Using kops-deployed Kubernetes on AWS; installed this plugin via krew.
Warning: Calling bottle :unneeded is deprecated! There is no replacement.
Please report this issue to the robscott/tap tap (not Homebrew/brew or Homebrew/core):
/usr/local/Homebrew/Library/Taps/robscott/homebrew-tap/Formula/kube-capacity.rb:10
I see that any time I do a brew update or install now.
I would like to be able to display non-allocated (non-requested) resources (allocatable - allocated)
This is useful to better understand why the scheduler is not able to schedule a pod due to lack of allocatable resources.
Great tool, thank you!
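In case it helps scope the request, the computation is a per-resource subtraction in base units (millicores for CPU, bytes for memory); the names below are illustrative:

```go
package main

import "fmt"

// available returns allocatable minus requested for one resource,
// both expressed in the same base unit (e.g. millicores).
func available(allocatable, requested int64) int64 {
	return allocatable - requested
}

func main() {
	// A node with 4000m allocatable CPU and 2650m already requested
	// still has 1350m available for the scheduler.
	fmt.Printf("%dm\n", available(4000, 2650)) // prints "1350m"
}
```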
This is probably a dumb question, but I didn't see it in the help or README. What do the percentages mean for req/limit/usage?
I see that the homebrew tap has not been updated for 0.6.0; this should happen automatically, right? The following worked for me:
if OS.mac?
  url "https://github.com/robscott/kube-capacity/releases/download/v0.6.0/kube-capacity_0.6.0_Darwin_x86_64.tar.gz"
  sha256 "db9161dc99fd217e2f2d4b9c7423d28150a9f47ddce0f8ce8ba8d0c36de06ec3"
end
if OS.linux? && Hardware::CPU.intel?
  url "https://github.com/robscott/kube-capacity/releases/download/v0.6.0/kube-capacity_0.6.0_Linux_x86_64.tar.gz"
  sha256 "250ae3b2e179c569cdb10b875ed49863d678297d873bfd3d3520c2f8a3f3ebcc"
end
What's the limiting factor for resource-capacity to be supported in Krew for Windows?
Hi, first of all thanks for this super useful plugin, it's exactly what I was missing for some time!
To reduce the memory footprint of apps on my cluster, I'd like to tune memory requests, and I propose an option that calculates the memory metrics not against the total node memory, but against the requests/limits (the same should be possible for CPU metrics, but my focus is on memory).
Right now kube-capacity shows 2%
mem usage of the total node memory:
❯ kubectl resource-capacity --pods --util --sort mem.util | grep -E '(NODE|kustom)'
NODE NAMESPACE POD CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
bitrigger flux-system kustomize-controller-7dd58878b8-7jmnb 100m (2%) 0Mi (0%) 3m (0%) 64Mi (3%) 1024Mi (51%) 49Mi (2%)
What I'd be interested in is the percentage of, e.g., the mem requests (which is what k9s shows in its default pod list), which is actually 77%:
│ NAMESPACE↑ NAME PF READY RESTARTS STATUS CPU MEM %CPU/R %CPU/L %MEM/R %MEM/L IP NODE AGE │
│ flux-system kustomize-controller-7dd58878b8-7jmnb ● 1/1 0 Running 3 50 3 n/a 78 4 10.42.0.38 bitrigger 3h4m
So a flag could be --percentage=[node|req|limit]
, which would apply to both CPU and mem metrics.
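The proposed flag boils down to choosing the denominator; a sketch with invented names, using the numbers from the example above:

```go
package main

import "fmt"

// percentOf returns usage as a percentage of base, where base would
// be the node allocatable, the request, or the limit depending on
// the proposed --percentage flag. Returns 0 when base is unset.
func percentOf(usage, base float64) float64 {
	if base == 0 {
		return 0
	}
	return usage / base * 100
}

func main() {
	// 49Mi usage against a 64Mi request is ~77%, matching what k9s
	// reports in its %MEM/R column.
	fmt.Printf("%.0f%%\n", percentOf(49, 64)) // prints "77%"
}
```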
Hello,
Would it be possible to get a container view?
Thanks for the work you've done here, by the way.