k8s-at-home / charts
⚠️ Deprecated: Helm charts for applications you run at home
Home Page: https://docs.k8s-at-home.com
License: Apache License 2.0
Helm chart name and version: [email protected]
Container name and tag: bitwardenrs/server:latest
What steps did you take and what happened:
Deploying the chart with the default persistence option (statefulset) yields the following error:
Screenshots from Lens:
Looking at the chart, the target for the HPA is always:
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
This means that the HPA looks for a target deployment when scraping metrics, even if the chart is deployed as a StatefulSet.
What did you expect to happen: HPA to reference the StatefulSet when deploying this way.
Anything else you would like to add:
Impending PR to resolve this.
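A sketch of what the fix might look like in the chart template, assuming the workload kind is derived from the same value that selects between a Deployment and a StatefulSet (the value path and helper name are illustrative, not the chart's actual API):

```yaml
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    # illustrative: pick the kind from the persistence/workload setting
    kind: {{ ternary "StatefulSet" "Deployment" (eq .Values.persistence.type "statefulset") }}
    name: {{ include "bitwardenrs.fullname" . }}
```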
Additional Information:
The author of Frigate, @blakeblackshear is now hosting a charts repo for his charts, and we should 'move' frigate to the new home. This issue is to handle that process.
See also blakeblackshear/frigate#156
Hey! I was trying to add a chart with this image.
I was able to modify your qbittorrent chart with some success. The issue: I could curl the service from one node but not the other (I have a 2-node k3s setup; curl on the master node gave an operation timeout, whereas curl on the worker node worked just fine for the same ClusterIP). As a result, the nginx ingress was giving timeouts.
Other charts seem to work fine under the same conditions.
Hope you can help me. Thank you.
As this project largely serves home setups, a fair number of the audience may be using ARM-based SBC clusters (especially now that there is an 8GB Raspberry Pi).
I am aware that the Plex-maintained Docker images only cater to x86; however, the linuxserver.io images include ARM images.
While it may require significant effort to migrate cleanly, my proposal would be to do so in order to cater to a wider audience.
While it would mean moving away from the official images, linuxserver.io has a good reputation for maintaining high-quality images.
I'm new to Kubernetes, so I'm not sure if this is expected or not. I'm trying to add the plex chart to my cluster, and I have only a single PV and PVC. After installing the chart it's stuck at ContainerCreating because it cannot attach the volumes:
Unable to attach or mount volumes: unmounted volumes=[data transcode], unattached volumes=[data config transcode shared default-token-8s7wb]: timed out waiting for the condition
It seems to me that this might be because the deployment.yaml file hardcodes values for volumeMounts that don't exist in my cluster, e.g. https://github.com/billimek/billimek-charts/blob/master/charts/plex/templates/deployment.yaml#L196
Is there a reason we are hardcoding those values? Do I need to set up extra PVCs named as they are?
There is a similar issue here which seems to have the same problem.
Here is the kubectl describe pod ... output:
Name: plex-69fb48c94f-tf56c
Namespace: media
Priority: 0
Node: mona-2/192.168.0.224
Start Time: Tue, 18 Aug 2020 20:57:35 +0200
Labels: app.kubernetes.io/instance=plex
app.kubernetes.io/name=plex
pod-template-hash=69fb48c94f
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/plex-69fb48c94f
Containers:
plex:
Container ID:
Image: plexinc/pms-docker:1.19.1.2645-ccb6eb67e
Image ID:
Ports: 32400/TCP, 32469/TCP, 1900/UDP, 32410/UDP, 32412/UDP, 32413/UDP, 32414/UDP
Host Ports: 0/TCP, 0/TCP, 0/UDP, 0/UDP, 0/UDP, 0/UDP, 0/UDP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Liveness: http-get http://:32400/identity delay=15s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:32400/identity delay=15s timeout=5s period=10s #success=1 #failure=5
Environment:
TZ: Europe/London
PLEX_CLAIM: REDACTED
PMS_INTERNAL_ADDRESS: http://plex:32400
PMS_IMAGE: plexinc/pms-docker:1.19.1.2645-ccb6eb67e
KUBE_NAMESPACE: media (v1:metadata.namespace)
TRANSCODE_PVC: media-ssd
DATA_PVC: media-ssd
CONFIG_PVC: media-ssd
PLEX_UID: 1000
PLEX_GID: 1000
Mounts:
/config from config (rw)
/data from data (rw)
/shared from shared (rw)
/transcode from transcode (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8s7wb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: media-ssd
ReadOnly: false
config:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: media-ssd
ReadOnly: false
transcode:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: media-ssd
ReadOnly: false
shared:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-8s7wb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8s7wb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned media/plex-69fb48c94f-tf56c to hydra-2
Warning FailedMount 22m (x3 over 35m) kubelet, hydra-2 Unable to attach or mount volumes: unmounted volumes=[data transcode], unattached volumes=[shared default-token-8s7wb data config transcode]: timed out waiting for the condition
In the media-common chart the image definitions are as follows:
image:
organization: ""
repository: ""
pullPolicy: IfNotPresent
tag: ""
This separates the organisation part from the image-name part. For update automation, Flux requires the repository value to be in the owner/image-name format. It might be possible to work around that by creating a dummy value for the repository and pointing Flux at it, but that is hacky: one would need to maintain three repetitive fields for custom images, plus additional annotations for Flux. Were there any specific design decisions behind separating them?
Hello,
I have a 4-node RPi K3s cluster and I'm running your chart for home-assistant. When setting esphome.enabled: true, by default it tries to mount the home-assistant PVC so that it can access the home-assistant secrets.yaml. This does not work if the esphome pod is scheduled on a node where the home-assistant PVC does not exist.
I was able to work around this with the following settings to the home-assistant values:
esphome:
enabled: true
extraVolumes: null
extraVolumeMounts: null
If you decide to keep this cross-mounting behavior in the home-assistant chart, you should add an affinity rule when esphome is enabled as well so that the pods will both co-locate. I think that ideally the secrets.yaml would be provided via a kubernetes secret which would allow the file to be mounted into both containers with no affinity concerns.
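A sketch of the Secret-based approach suggested above (the Secret name and key are illustrative, not the chart's actual API):

```yaml
# Hypothetical: store secrets.yaml in a Kubernetes Secret and mount it
# into both the home-assistant and esphome containers, avoiding the
# shared-PVC scheduling problem entirely.
apiVersion: v1
kind: Secret
metadata:
  name: hass-secrets        # illustrative name
stringData:
  secrets.yaml: |
    wifi_password: "example"
---
# In each pod spec that needs the file:
# volumes:
#   - name: hass-secrets
#     secret:
#       secretName: hass-secrets
# volumeMounts:
#   - name: hass-secrets
#     mountPath: /config/secrets.yaml
#     subPath: secrets.yaml
```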
Several of my deploys are failing recently and I just dug into what's going on. It appears media-common had some updates to the ingress to meet new 1.19 requirements. That change is here.
The change bumped the minor version from 1.1.1 to 1.2.0. Since this is not backwards compatible, I believe it should have bumped the major version to 2.0.0 instead.
The charts repository then had this change landed which switches the media-common from "~" to "^" versioning which allows Minor and Patch instead of just Patch upgrades.
The result is that all media-common charts with the 2nd change (sonarr, radarr, organizr) have a backwards incompatible change to their values.yaml. The ingress used in those charts require that the 'paths' key have a map rather than list of paths as was previously in the chart.
It's a good change, I believe the error in this is actually the first commit which should have bumped the Major version, so that anyone using the supporting charts with ~ or ^ matching wouldn't receive a backwards incompatible change. For now I've pinned my chart directly to the release from a few days ago.
I think rolling back the 1.2.0 media-common, bumping the change to 2.0 and then bumping the umbrella charts major versions would be the correct way to version this change such that existing charts are aware this is a non-backwards compatible change and won't pull it in automatically.
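For reference, the difference between the two range operators in a Chart.yaml dependency (the repository URL here is illustrative):

```yaml
dependencies:
  - name: media-common
    repository: https://example.com/charts   # illustrative URL
    version: ~1.1.1   # tilde: patch-level only, matches >=1.1.1 <1.2.0
    # version: ^1.1.1 # caret: minor and patch, matches >=1.1.1 <2.0.0
```

With "^" matching, the backwards-incompatible 1.2.0 release is pulled in automatically; it would not be under "~" matching or if the release had been versioned 2.0.0.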
When Plex is managing large libraries, it can take a really long time to start (so readiness checks fail and restart the pod). This is because it tries to chown everything inside of /config, and that directory might be big (in terms of the number of files) if you have a large library.
I've created a PR in upstream plex Docker image which should hopefully speed it up a bit.
I've also resorted to setting CHANGE_CONFIG_DIR_OWNERSHIP to false (this surfaced a templating issue, for which I'll create a PR), which works but IMHO isn't ideal.
I was thinking the better approach might be to create an init container or possibly even a job that sets the ownership correctly... This way the readiness/liveness probes only are dealing with actual Plex starting up, not any preamble that might need to happen.
I would be interested in ideas.
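A minimal sketch of the init-container idea: fix ownership of /config before Plex starts, so the main container's probes only cover Plex itself. The UID/GID values are illustrative.

```yaml
# Hypothetical init container; runs as root, chowns the config volume,
# then exits before the Plex container starts.
initContainers:
  - name: fix-config-ownership
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /config"]
    securityContext:
      runAsUser: 0
    volumeMounts:
      - name: config
        mountPath: /config
```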
Hello,
I am missing some config options for the cloudflare dyndns chart, specifically CF_APITOKEN. This is used to provide a more restricted token so the app has much more limited access.
Because it is not documented, I took a look at the chart code, but I was not able to find it.
Where is it? And would you consider adding this configuration option?
Thanks and regards
Hi guys, just saw that there is a new common library chart that replaces media-common.
There's also something called The Common Helm Helper Chart in the Helm incubator repository. Maybe this could be used and extended by the k8s-at-home chart? It could reduce the amount of code in k8s-at-home chart and at the same time ensure we follow Helm best practices.
There would be a name clash, so the k8s-at-home chart would need a new unique name (as common is already used by The Common Helm Helper Chart).
@billimek how do you feel about removing this field from the Chart.yaml? The field seems useless for a bunch of applications that do not require a chart update when an image is updated.
My line of thinking is that helm list --all-namespaces -a, or the helm-exporter, does not return the correct appVersion if we update the image, which makes this metadata wrong.
Of course, the proper way to handle this is to release a new chart version on every image update, but we don't really have the time to do that since new versions get released all the time, almost daily.
Created this as a tracking issue to more easily keep track of the things we need/want to fix in the common library chart, but that are not strictly necessary right away and/or don't really warrant an issue of their own.
- The {{ template common.whatever . }} pattern is best replaced by {{ include common.whatever . }}, according to the official Helm docs.
- service.port.targetPort is a string in the provided values.yaml file. Merging in user overrides could cause an error when an integer is given, due to conflicting data types.
- README.md improvements are required.
- hostNetwork
- hostAliases
Currently we only check the root path; maybe there is room for improvement here:
https://github.com/OpenZWave/Zwave2Mqtt#health-check-endpoints
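The linked README documents dedicated health-check endpoints. A probe pointed at one might look like the following sketch (verify the path against the README; port 8091 is the Zwave2Mqtt default web port):

```yaml
livenessProbe:
  httpGet:
    path: /health   # dedicated health endpoint per the Zwave2Mqtt README
    port: 8091      # default Zwave2Mqtt web port
  initialDelaySeconds: 15
  periodSeconds: 10
```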
Hi
Apologies if this is not the right place to ask, since this is probably more due to my lack of experience with Helm rather than any issue with this chart.
I am running a clean k3s environment on a Pi4. Just the one node for now.
The only values I have overridden in the adguard-home chart are the image tag (armhf-latest) and the existingClaim.
I can port-forward to the pod on port 3000 and access and complete the setup wizard at /install.html, but after this it says the connection was reset. Then eventually the liveness probe fails with e.g. Get http://10.42.0.70:3000/login.html: dial tcp 10.42.0.70:3000: connect: connection refused.
Am I missing something in my environment setup?
Environment:
Raspberry Pi4
5.4.73-v8+ #1360 SMP PREEMPT Thu Oct 29 16:00:37 GMT 2020 aarch64 GNU/Linux
Running k3s version v1.18.9+k3s1 (630bebf9)
Many thanks
Traefik-forward-auth does not currently implement health or readiness HTTP handlers. The configured handlers at "/" actually trigger a forward-auth check, which calls the auth provider and fails due to incorrect headers. These handlers should instead use tcpSocket probes rather than httpGet handlers.
Closes #106
Is there any plan to add vpn support ?
Looking up existingClaim instead of the specified claimName causes the PVC to get created no matter what.
Default tag (https://github.com/billimek/billimek-charts/blob/master/charts/esphome/values.yaml#L7) is invalid.
Not a huge fan of the esphome tags, looks like they're commit IDs which will make automated updates a nightmare.
I'm probably missing something obvious, but I don't see a way to specify the public-facing domain for callbacks from third parties. I do have ingress.hosts set, but when I add Plex as a vendor, that third-party site tries to redirect me back to https://<local_ip>:8123/auth/plex/<etc>. If I swap out the local_ip:8123 with the proper domain, I can continue the auth flow.
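If the redirect URL is derived from Home Assistant's own notion of its base URL, setting it explicitly in configuration.yaml may help (external_url and internal_url exist in Home Assistant 0.110+; earlier versions used base_url under the http: block; the hostnames below are illustrative):

```yaml
# configuration.yaml
homeassistant:
  external_url: "https://hass.example.com"      # public-facing URL for callbacks
  internal_url: "http://192.168.1.10:8123"      # LAN URL
```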
As far as I can tell, there is currently no possibility to add annotations or labels to the StatefulSets or Deployments managed by the media-common chart. Is that correct?
Use case:
Setting my configuration in the zwave2mqtt UI, results in exception.
Kubernetes version v1.17.0
Cluster of 3 bare metal nodes running ubuntu server 18.04. Aeotec z-stick gen 5, tested on two different nodes.
Deploying with
helm install --namespace=default zwave2mqtt -f ./helm-configs/zwave2mqtt-values.yaml billimek/zwave2mqtt
Deploying the helm chart with default values except I have:
I can provide zwave2mqtt-values.yaml in full if needed.
Setting my configurations in the zwave2mqtt UI results in the following log in the container:
2020-04-04T18:56:23.404Z z2m:Zwave Connecting to /dev/ttyACM0
2020-04-04 18:56:23.408 Always, OpenZwave Version 1.6-1061-g14f2ba74 Starting Up
2020-04-04 18:56:23.408 Warning, Unable to load Localization file /usr/local/etc/openzwave/Localization.xml: Failed to open file
2020-04-04 18:56:23.409 Warning, Exception: Localization.cpp:794 - 1 - Cannot Create Localization Class! - Missing/Invalid Config File?
If I downgraded to 2.2.0 the gateway would work and nodes would report values, but the gateway would not recognize devices; e.g. an Aeotec Siren 6 would show up as "unknown" and would not expose all expected entities. The log would show warnings related to reading XML files.
Could this be related to some missing securityContext? It seems it's having issues reading expected configuration that should have been written at some point; perhaps OpenZwave was not allowed to write it?
The /usr/local/etc/openzwave/ dir is empty.
I assume I'm hitting this line in OpenZwave: https://github.com/OpenZWave/open-zwave/blob/e3bae88f29139032c736144b67e7c67d3a764b2b/cpp/src/Localization.cpp#L794
Candidate charts that would work as an umbrella templated media-common chart
(are there more? will add to the list as needed)
Exploring the 'how' to do this now. Looks like one of the options in settings.js needs to be updated with a value.
At startup Node-RED emits this warning:
---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.
If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.
You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------
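Recent official Node-RED images read the credential key from the NODE_RED_CREDENTIAL_SECRET environment variable in their default settings.js (verify against the image in use), so the chart could expose it like this sketch (the Secret name and key are illustrative):

```yaml
env:
  - name: NODE_RED_CREDENTIAL_SECRET
    valueFrom:
      secretKeyRef:
        name: node-red-secrets    # illustrative Secret name
        key: credentialSecret     # illustrative key
```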
Currently there are significant amounts of documentation duplication which inevitably leads to drift, copy/paste errors, and toil.
Is there a better way? Current thoughts:
Possible remediation options:
In addition, as we lean into artifacthub.io, we will also have a metadata file which will contain basically all of the above information and more. How do we keep the toil down there?
Mostly for myself
I believe that charts which deploy stateful applications, like for example Home Assistant, radarr, et al., should deploy StatefulSets instead of Deployments. I know that one can get almost the same result using a hardcoded replica number and a deployment strategy that guards against two copies accessing the same files from a PVC (like on a new version rollout). I also realise that most of these applications have no business being deployed in more than one copy from the same Deployment, since they don't cluster in any meaningful way.
Meanwhile, StatefulSets have these things built in. They also have a neat optional bonus (or a hindrance, depending on one's use case): because they bring PVC templates instead of PVCs, Helm will not delete the PVC outright on uninstall.
This is not a critique; it's a call for comments: are people using Deployments instead of StatefulSets out of familiarity, or is there something I'm not seeing?
I plan on working on munnerz/kube-plex#12. I figured if the issue of getting worker pods to consume GPU resources works, a chart to make deployment of the Intel GPU plugin easier could be helpful.
This would also work with k8s-at-home/charts/plex, even for those not using worker pods, by adding:
resources:
requests:
gpu.intel.com/i915: 1
Documentation: https://www.home-assistant.io/integrations/homekit/
I have tried enabling the HomeKit integration, but it seems the container is not exposing the required ports:
UDP: 5353, TCP: 51827
Please allow exposing the HomeKit bridge.
Please allow passing a dnsPolicy in the Jackett chart.
There are a few ports missing from the Service in the Helm chart and in the Docker images:
https://help.ui.com/hc/en-us/articles/218506997-UniFi-Ports-Used
TCP 6789 - used for mobile speedtest
UDP 1900 - makes the controller discoverable on the L2 network
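A sketch of the missing Service entries (port names are illustrative; the numbers come from the UniFi documentation linked above):

```yaml
ports:
  - name: speedtest       # mobile speedtest
    port: 6789
    protocol: TCP
  - name: discovery-l2    # L2 controller discovery
    port: 1900
    protocol: UDP
```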
The home-assistant chart (https://github.com/k8s-at-home/charts/tree/master/charts/home-assistant) is missing an option to define serviceAccount.
Many of the other charts have that, like https://github.com/k8s-at-home/charts/tree/master/charts/homebridge.
Seems like the behavior of additional mounts has changed from the original use case on things like ombi/sonarr/etc. Can someone add a description of (or potentially fix) additionalVolumeMounts?
Previous method:
persistence:
extraExistingClaimMounts:
- name: "downloads"
mountPath: "/downloads"
existingClaim: "downloads"
readOnly: false
New method:
persistence:
additionalVolumeMounts:
- name: "downloads"
mountPath: "/downloads"
existingClaim: "downloads"
readOnly: false
That results in:
Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[1]): unknown field "existingClaim" in io.k8s.api.core.v1.VolumeMount
Removing the existingClaim results in the mount not being found. I'm sure my formatting is poor, but this is not my area of expertise.
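The validation error makes sense in plain Kubernetes terms: existingClaim is chart sugar, not a VolumeMount field. The PVC reference and the mount live in two separate lists, roughly like this (a plain-Kubernetes sketch, not the chart's values schema):

```yaml
# The claim is referenced under the pod's volumes list...
volumes:
  - name: downloads
    persistentVolumeClaim:
      claimName: downloads
# ...while the container's volumeMounts list only carries name/mountPath.
containers:
  - name: app
    volumeMounts:
      - name: downloads
        mountPath: /downloads
        readOnly: false
```

So additionalVolumeMounts likely expects only VolumeMount fields, with the claim itself declared elsewhere in the values.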
I just noticed the following warning in the GitHub CI:
The set-env command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
Should probably address this before they decide to disable the functionality completely 🙂
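For reference, the migration is a one-line change in the workflow step:

```yaml
# Before (deprecated workflow command):
#   - run: echo "::set-env name=MY_VAR::some value"
# After (environment files):
- run: echo "MY_VAR=some value" >> "$GITHUB_ENV"
```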
The livenessProbe in media-common-openvpn should not be populated if it's not present in the values
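A common template pattern for omitting the probe when it is not set, sketched against a hypothetical .Values.livenessProbe path (the chart's actual value path may differ):

```yaml
{{- with .Values.livenessProbe }}
livenessProbe:
  {{- toYaml . | nindent 2 }}
{{- end }}
```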
The PowerDNS chart does not work with a Postgres database.
When configuring the chart to use a postgres database, for example with the following values:
powerdns:
postgres:
username: pdns
password: mysecret
database: pdns
mariadb:
enabled: false
postgres:
enabled: true
postgresqlUsername: pdns
postgresqlPassword: mysecret
postgresqlDatabase: pdns
The powerdns pod ends up with the following errors in its logs:
ERROR 2005 (HY000): Unknown MySQL server host 'mysql' (-2)
Waiting for database to come up
It looks like the underlying image used by this chart (psitrax/powerdns) only supports MySQL: psi-4ward/docker-powerdns#22
As the title says, this is a feature request to enable deploying an external DB: either as a separate pod (e.g. a Helm chart dependency) or completely external via e.g. host/user/pass configuration.
Was IM'ing with @onedr0p on Discord, chatting about how we could lock down the secrets.yml file so that everything can be stored in public git.
Considering the use of an initContainer that pulls down an entire hass config from a public git repo. The secrets.yml file would be encrypted with either git-crypt (https://github.com/AGWA/git-crypt) or SOPS (https://github.com/mozilla/sops) and would be unlocked as part of the init process.
Would require us to maintain an initContainer that could perform most of this heavy lifting, although I anticipate that will be small.
Thoughts?
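A sketch of the initContainer idea with git-crypt (the image, repo URL, and Secret/volume names are all illustrative):

```yaml
# Hypothetical init container: clone the public config repo, then unlock
# the encrypted secrets with a git-crypt key mounted from a Secret.
initContainers:
  - name: fetch-config
    image: example/git-crypt:latest   # illustrative image with git + git-crypt
    command:
      - sh
      - -c
      - |
        git clone https://github.com/example/hass-config.git /config
        cd /config && git-crypt unlock /keys/git-crypt.key
    volumeMounts:
      - name: config          # shared with the hass container
        mountPath: /config
      - name: git-crypt-key   # Secret holding the symmetric key
        mountPath: /keys
        readOnly: true
```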
A couple of things could be done in this area; I could have a stab at it when/if I have time, but I'm reporting it here for now:
See billimek/billimek-charts#204 for context
There's a bunch of charts out there, however there is only one on helm hub:
https://github.com/halkeye-helm-charts/powerdns
https://github.com/halkeye-helm-charts/powerdnsadmin
It would be nice to merge these into one like https://github.com/aescanero/helm-charts
When adding the following to the values:
nodeSelector:
kubernetes.io/hostname: k8s-staticwan
tolerations:
- effect: NoSchedule
operator: Exists
it gets rendered as:
nodeSelector: kubernetes.io/hostname: k8s-staticwan
tolerations: - effect: NoSchedule
operator: Exists
Gotta fix that indenting 😮
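That output is consistent with the template inlining the maps instead of re-indenting them. The usual fix is to pipe the values through toYaml with nindent (a sketch; the exact indent count depends on where the block sits in the manifest):

```yaml
{{- with .Values.nodeSelector }}
nodeSelector:
  {{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
  {{- toYaml . | nindent 8 }}
{{- end }}
```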
The common library has a service.enabled flag, but the generated container spec doesn't check that flag when setting the ports, causing it to fail when applying.
Originally posted by @bjw-s in #167 (comment)
With the release of the Node-RED template 4.0.0 / "common library", the recent PR for hostAliases support looks to have been lost.
Describe the bug
I'm instantiating HASS in a kubernetes cluster using the following definition. https://github.com/billimek/billimek-charts/tree/master/charts/home-assistant
Homeassistant fails to start with AppDaemon enabled.
Version of Helm and Kubernetes:
Helm: v3.2.1
Kubernetes: v1.18.2
What happened:
➜ kubernetes git:(master) ✗ kubectl -n homeassistant logs homeassistant-home-assistant-8ccc9d79-f22bk -c appdaemon
2020-06-14 16:35:45.284536 INFO AppDaemon Version 3.0.5 starting
2020-06-14 16:35:45.284770 INFO Configuration read from: /conf/appdaemon.yaml
2020-06-14 16:35:45.286422 INFO AppDaemon: Starting Apps
2020-06-14 16:35:45.289228 INFO AppDaemon: Loading Plugin HASS using class HassPlugin from module hassplugin
2020-06-14 16:35:45.329504 INFO AppDaemon: HASS: HASS Plugin Initializing
2020-06-14 16:35:45.329869 INFO AppDaemon: HASS: HASS Plugin initialization complete
2020-06-14 16:35:45.330040 INFO Starting Dashboards
2020-06-14 16:35:45.330182 WARNING ------------------------------------------------------------
2020-06-14 16:35:45.330251 WARNING Unexpected error during run()
2020-06-14 16:35:45.330311 WARNING ------------------------------------------------------------
2020-06-14 16:35:45.330992 WARNING Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/appdaemon/admain.py", line 82, in run
self.rundash = rundash.RunDash(self.AD, loop, self.logger, self.access, **hadashboard)
File "/usr/local/lib/python3.6/site-packages/appdaemon/rundash.py", line 130, in __init__
dash_net = url.netloc.split(":")
TypeError: a bytes-like object is required, not 'str'
2020-06-14 16:35:45.331065 WARNING ------------------------------------------------------------
2020-06-14 16:35:45.331140 INFO AppDeamon Exited
Relevant Configuration Section
appdaemon:
enabled: true
## code-server container image
##
image:
repository: acockburn/appdaemon
tag: 3.0.5
pullPolicy: IfNotPresent
## Home Assistant API token
# haToken:
## Additional hass-vscode container environment variable
## For instance to add a http_proxy
##
extraEnv: {}
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- appdaemon.local
tls: []
# - secretName: appdaemon-tls
# hosts:
# - appdaemon.local
service:
type: ClusterIP
port: 5050
annotations: {}
labels: {}
clusterIP: ""
## List of IP addresses at which the hass-appdaemon service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
# nodePort: 30000
I install the chart with the default values, but the pod constantly restarts. It comes up, but right after [s6-finish] waiting for services. the pod logs [s6-finish] sending all processes the TERM signal.
kubectl -n calibre logs calibre-calibre-web-58c874b9d8-5fssm calibre-web -f
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...
-------------------------------------
_ ()
| | ___ _ __
| | / __| | | / \
| | \__ \ | | | () |
|_| |___/ |_| \__/
Brought to you by linuxserver.io
-------------------------------------
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 1001
User gid: 1001
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-config: executing...
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
Thus web-test-connection always fails:
kubectl -n calibre logs calibre-calibre-web-test-connection
Connecting to calibre-calibre-web:8083 (10.43.52.188:8083)
The codercom/code-server image doesn't support ARM, and I'm trying to install this on a k3s Raspberry Pi cluster. It would be nice to have the option to forgo the arguments passed to the image or to define your own. That way you could use your own image or have greater customization. In my case I'm trying to use the linuxserver image.