kubernetes / release
Release infrastructure for Kubernetes and related components
License: Apache License 2.0
Example:
anago::push_git_objects(): git push origin release-1.4
remote: error: GH006: Protected branch update failed for refs/heads/release-1.4.
remote: error: You're not authorized to push to this branch. Visit https://help.github.com/articles/about-protected-branches/ for more information.
To [email protected]:kubernetes/kubernetes
! [remote rejected] release-1.4 -> release-1.4 (protected branch hook declined)
error: failed to push some refs to '[email protected]:kubernetes/kubernetes'
Right now the anago release tool has a disk check in it to make sure we have enough disk to do a full release build, but the threshold is static, while the size of Kubernetes (and of its builds) is wildly in flux.
I'd like to dynamically set the disk requirements based on a recent continuous build directory (cross-build?).
Looking for a number like this:
# From a recent release build on my machine
$ du -sh anago-v1.5.0-alpha.1
32G anago-v1.5.0-alpha.1
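A minimal sketch of what a dynamic check could look like (the reference-build path and the 20% headroom factor are assumptions, not anago's actual logic):

# Hypothetical: derive the disk requirement from a recent build tree
# instead of a hard-coded constant.
recent_build="${RECENT_BUILD_DIR:-/var/cache/anago/last-build}"  # assumed path
needed_kb=$(du -sk "$recent_build" | awk '{print $1}')
needed_kb=$(( needed_kb * 12 / 10 ))   # add ~20% headroom (arbitrary factor)
avail_kb=$(df -k --output=avail . | tail -1)
if (( avail_kb < needed_kb )); then
  echo "FATAL: need ${needed_kb}K free, only ${avail_kb}K available" >&2
  exit 1
fi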
Some thoughts from the seeding email thread:
@ixdy said:
The crossbuild is happening inside docker, and there are likely simultaneous builds happening at the same time. We might be able to query docker for what the volume sizes are, though I'm not sure exactly how best to do that.
As a point of reference for places we might look for info, cross-build is currently running on agent-heavy-6.
# docker ps -a -s
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
d3e8badf2d4d kube-build:build-6f4b9d8412 "make cross" 19 minutes ago Up 18 minutes kube-build-6f4b9d8412 268.9 MB (virtual 2.292 GB)
262c23accbb7 kube-build:build-6f4b9d8412 "chown -R 1014.1014 /" 19 minutes ago Exited (0) 19 minutes ago kube-build-data-6f4b9d8412 0 B (virtual 2.049 GB)
b92f1e75aa75 kube-build:build-e1f19b0056 "chown -R 1014.1014 /" About an hour ago Exited (0) About an hour ago kube-build-data-e1f19b0056 0 B (virtual 2.049 GB)
5fc2028614c7 kube-build:build-eb31ccc440 "true" About an hour ago Exited (0) About an hour ago kube-build-data-eb31ccc440 0 B (virtual 1.676 GB)
d4ac7913d125 kube-build:build-2c9f945771 "chown -R 0.0 /go/src" 2 hours ago Exited (0) 2 hours ago kube-build-data-2c9f945771 0 B (virtual 1.944 GB)
2267a26d712f kube-build:build-6bf291a109 "chown -R 0.0 /go/src" 2 hours ago Exited (0) 2 hours ago kube-build-data-6bf291a109 0 B (virtual 1.944 GB)
root@agent-heavy-6:/var/lib/jenkins/workspace/kubernetes-cross-build# docker ps -a -s | grep 6f4b9d8412
d3e8badf2d4d kube-build:build-6f4b9d8412 "make cross" 20 minutes ago Up 19 minutes kube-build-6f4b9d8412 498.2 MB (virtual 2.532 GB)
262c23accbb7 kube-build:build-6f4b9d8412 "chown -R 1014.1014 /" 20 minutes ago Exited (0) 20 minutes ago kube-build-data-6f4b9d8412 0 B (virtual 2.049 GB)
# du -sh _output
3.3G _output
We could maybe run du on the _output dir at the end of runs and save that in metadata somewhere? Not perfect, but it's a start of a metric.
Disk usage shoots up a bit when producing the docker images and tarballs:
5.9G _output/dockerized
20M _output/images
9.2G _output/release-stage
2.6G _output/release-tars
Anyway, we should probably figure out some way of saving the size of _output in structured metadata. At the very least, we could just run du -sh _output in the build script, so a human can easily update the thresholds periodically. (We already run sha256sum, presumably for similar reasons.)
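For example, a rough sketch of recording that metric at the end of a build (the file name and JSON shape are assumptions):

# Hypothetical: record the size of _output in structured metadata so a
# later run (or a human) can read the threshold back.
size_kb=$(du -sk _output | awk '{print $1}')
cat > _output/build-metadata.json <<EOF
{"output_size_kb": ${size_kb}, "recorded_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"}
EOF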
cc @kubernetes/test-infra-maintainers
anago should check for this prerequisite at the start of the run instead of failing at announce time:
Announcing k8s v1.4.0-beta.1 to pwittroc...
Checking required system packages: FAILED
PREREQ: Missing prerequisites: sendgmr
Run the following and try again:
sudo goobuntu-add-repo sendgmr && sudo apt-get update
sudo apt-get install sendgmr
Exiting...
anago::announce() finished in 0s
I'm getting lots of failures late in the mock workflow:
Send docker containers to gcr.io/kubernetes-release-test...
Release kube-apiserver-amd64:v1.5.0-alpha.2:
- Pushing: .....FAILED
Release legacy kube-apiserver:v1.5.0-alpha.2:
- Tagging: OK
- Pushing: .....FAILED
Release kube-controller-manager-amd64:v1.5.0-alpha.2:
- Pushing: .....FAILED
Release legacy kube-controller-manager:v1.5.0-alpha.2:
- Tagging: OK
- Pushing: .....FAILED
Release kube-scheduler-amd64:v1.5.0-alpha.2:
- Pushing: .....FAILED
etc.
The handling of GitHub authentication is much like the rest of the tooling prereqs in this repo: it should be a no-assumed-knowledge, guided process that handholds the user through getting their auth set up correctly so they can get on with their lives (running the tool).
We make a few attempts to get auth set up correctly.
Additional things we should be doing:
See https://github.com/kubernetes/release/blob/master/docs/branching.md for details.
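For instance, a pre-flight permission probe could fail fast before any work starts. A sketch using the GitHub API (the GITHUB_TOKEN variable and the jq dependency are assumptions):

# Hypothetical pre-flight: confirm the token can push to the repo before
# starting a release run, instead of failing at push time.
perm=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
  https://api.github.com/repos/kubernetes/kubernetes | jq -r '.permissions.push')
if [[ "$perm" != "true" ]]; then
  echo "FATAL: current token cannot push to kubernetes/kubernetes; fix auth first." >&2
  exit 1
fi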
This breaks pre-flight checks. I only tested on Debian. It seems like the package creates these, not kubelet itself, but we should check.
The kubelet Debian package depends on kubectl and kubernetes-cni.
kubectl? It looks like an optional dependency and is not required to be installed on each minion.
kubernetes-cni? What do we currently have there? If kubelet can work without kubernetes-cni, we may consider adding it to the Recommends section.
On the kubelet and kubernetes-cni packages: we should have separate RPMs for CNI plugins and kubelet. They should depend on a compatible docker version. They should be built and published during the release process.
cc @dgoodwin
────────────────────────────────────────────────────────────────────────────────
CHECK CREDENTIALS
────────────────────────────────────────────────────────────────────────────────
Checking for valid github credentials: OK
Releases restricted to certain users!
Signal ERR caught!
I've got an error installing kubelet on an RPi 3:
Platform: RPI3
OS: HypriotOS 1.1.0
$ sudo apt-get install kubelet
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
kubelet
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/11.9 MB of archives.
After this operation, 95.3 MB of additional disk space will be used.
Selecting previously unselected package kubelet.
(Reading database ... 20308 files and directories currently installed.)
Preparing to unpack .../kubelet_1.4.3-00_armhf.deb ...
Unpacking kubelet (1.4.3-00) ...
Setting up kubelet (1.4.3-00) ...
/var/lib/dpkg/info/kubelet.postinst: 38: /var/lib/dpkg/info/kubelet.postinst: [[: not found
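The "[[: not found" strongly suggests the postinst runs under /bin/sh (dash on Debian-family systems) while using the bash-only [[ test. A hedged sketch of the kind of fix (the variable and action are placeholders, not the real script):

# Bashism that breaks under dash:
#   [[ -n "$SOME_VAR" ]] && do_something
# POSIX-portable equivalent that any /bin/sh accepts:
if [ -n "$SOME_VAR" ]; then
  do_something
fi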
It looks like we went a new route with release notes for 1.4. We'll need to make sure we get these new steps and process integrated into the e2e workflow so that 1.5 Just Works.
Addon components get updated in Kubernetes clusters by updating manifests referring to new image tags. The images don't need to be bundled directly with the Kubernetes release because they are just pulled from the registry.
Tools, however, typically can't be pulled in that way, yet we would like to make it easy for users to get popular tools. Helm is a prime candidate; it should be ready for bundling with K8s 1.4. Kompose is another possibility.
Recent invocations of anago have created incomplete releases. This is likely due to running older copies of the tooling. Add a check and sync to ensure the user is running the most up-to-date copy.
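A minimal sketch of such a check (the remote and branch names are assumptions):

# Hypothetical freshness gate: refuse to release from a stale checkout.
git fetch origin master
behind=$(git rev-list --count HEAD..origin/master)
if (( behind > 0 )); then
  echo "Tooling is ${behind} commit(s) behind origin/master; update and re-run." >&2
  exit 1
fi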
We shouldn't require adding external repos
There's no way to target different release-note states for multiple branches, because the release-note state is based on the one source PR, not the individual cherry-picks.
For example, #27861 seems like a good release note for 1.3 but we don't want it as a relnote for 1.2.5 (since it was rolled back).
The fix probably entails checking the cherry-pick for a release-note state, letting it override the source PR's state, and then updating the cherry-pick doc to allow for that in these special circumstances. I can't immediately think of any issues with that approach.
It will mean more processing time and GitHub polling to check labels on both PRs for every merged pull.
It has to do with github::last_releases() and how GitHub's release page is our canonical source for releases. We currently treat a branch event as a non-release event, so GitHub's release page won't show a new branch beta tag as a release. We either need to treat beta.0's as releases (probably best) or work around the fact that beta.0 is not a release.
cc @pwittrock @ixdy
It would be very nice to have an official package for kube-proxy. I can provide a pull request for this if needed.
A way to inform issues/pulls that the item has been fixed in release X.Y.Z.
In SIG-cluster-lifecycle, we discussed and agreed that we should adhere to local conventions with our packages.
In Debian-based distributions, this means auto-starting daemons when you install them.
The reason that RedHat don't start things is that their default approach has been to install a whole load of stuff that you might possibly want, and allow you to enable it when you are inspired to give some new service a try.
The Debian approach has always been to not install anything that you don't intend to use. It is also to ensure that if you do choose to install something, it should be doing something useful by the end of the install (if possible, security considerations allowing).
– https://lists.debian.org/debian-devel/2012/06/msg00047.html
In the case of the kubelet, it should result in the kubelet entering a crashloop until kubeadm or an administrator has given it working flags.
Does anyone know how to do this? cc @errordeveloper @luxas @mikedanese
[root@k8s-master ~]#kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.0.0.10 53/UDP,53/TCP 2d
[root@k8s-master ~]#cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.12.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_EXTRA_ARGS=--v=4"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
relnotes has a mechanism for generating a release preview that is currently used internally at Google, but it would be great to expose this externally so the whole community can keep a regular eye on a release all along the way until it is cut. The report provides the following:
An example run looks like this:
$ relnotes --htmlize-md --branch=release-1.6 --preview --markdown-file=/tmp/k8s-release-1.6.md --html-file=/tmp/k8s-release-1.6.html
or
$ relnotes --htmlize-md --branch=master --preview --markdown-file=/tmp/k8s-master.md --html-file=/tmp/k8s-master.html
This issue will track this current list of wants:
cc: @ixdy @dchen1107 @caesarxuchao @kubernetes/release-maintainers @dclaar @javier-b-perez @krzyzacy
I followed the instructions to set up the security layer. The new security-layer check passes, but the ACL check fails:
$ ./anago master --nomock
anago: BEGIN main on saadali-dev Wed Oct 12 17:08:05 PDT 2016
────────────────────────────────────────────────────────────────────────────────
CHECK CREDENTIALS
────────────────────────────────────────────────────────────────────────────────
Checking for valid github credentials: OK
Live releases restricted to certain users!
Signal ERR caught!
Traceback (line function script):
171 main ./anago
Exiting...
anago: DONE main on saadali-dev Wed Oct 12 17:08:05 PDT 2016 in 0s
As a sanity check, I modified the anago script to print out the USER and ACL_LIST:
Live releases restricted to certain users! USER is saadali ACL_LIST is
djmm|etune|fabioy|filipg|jessfraz|jgrafton|pwittroc|robertbailey|saadali|stclair
Looks like something is wrong with this script:
check_acls () {
case "$USER" in
$ACL_LIST) ;;
*) logecho "Live releases restricted to certain users!"
return 1
;;
esac
}
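A likely cause: when $ACL_LIST is expanded inside a case pattern, the | characters come from the expansion and are matched literally rather than acting as pattern alternation, so every user falls through to *. A hedged sketch of one possible fix, using a regex match instead (not necessarily the fix that landed):

# Hypothetical: match $USER against the pipe-delimited list as an
# anchored regex, where | really is alternation.
check_acls () {
  if [[ ! "$USER" =~ ^(${ACL_LIST})$ ]]; then
    logecho "Live releases restricted to certain users!"
    return 1
  fi
}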
kubernetes/kubernetes#28132 has a multiline release note, which renders like this in the CHANGELOG.md:
cluster/saltbase/salt/kube-dns, i.e. cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}, either substitute one of __PILLAR__FEDERATIONS__DOMAIN__MAP__ or {{ pillar['federations_domain_map'] }} with the corresponding federation name to domain name value or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for {{ pillar['federations_domain_map'] } (#28132, @madhusudancs) myfederation is the name of the federation and federation.test is the domain name registered for the federation.
I want it to render like this:
cluster/saltbase/salt/kube-dns, i.e. cluster/saltbase/salt/kube-dns/{skydns-rc.yaml.base,skydns-rc.yaml.in}, either substitute one of __PILLAR__FEDERATIONS__DOMAIN__MAP__ or {{ pillar['federations_domain_map'] }} with the corresponding federation name to domain name value or remove them if you do not support cluster federation at this time. If you plan to substitute the parameter with its value, here is an example for {{ pillar['federations_domain_map'] } (#28132, @madhusudancs) myfederation is the name of the federation and federation.test is the domain name registered for the federation.
I think the carriage returns need to indent but not have leading stars.
See also #17
While doing the 1.4.3 release it errored out on the changelog, so I re-ran it with --noclean, and I realize now it basically did the changelog twice: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md. I will submit a PR to fix it up. Not sure if that is avoidable, though.
Moved from kubernetes/kubernetes#23070
Currently, PRs with the release-note label are automatically included as part of the release notes. This process does not work for big features, though.
We need to track issues that represent features as well and include a gist from those issues in the release notes.
Because the kubeadm deb drops in some kubelet configuration, we must run
systemctl daemon-reload && systemctl restart kubelet
in its postinst configure step; otherwise kubelet starts up with no flags and doesn't crashloop, so kubeadm setup fails.
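A hedged sketch of what that postinst fragment might look like (standard deb maintainer-script shape; not the actual package code):

# Hypothetical kubeadm.postinst fragment:
case "$1" in
  configure)
    # Pick up the kubelet drop-in this package just installed and
    # restart kubelet so it runs with the new flags.
    systemctl daemon-reload
    systemctl restart kubelet
    ;;
esac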
It would be very convenient to see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#changelog-since-v143 here: http://gcsweb.k8s.io/gcs/kubernetes-release/release/v1.4.4/
This came up today as we're approaching the 1.3 release. There's not much time to do anything (carefully) for this release, but for 1.4 let's collect the requirements here for allowing a staged release process, if needed.
This is related to the timing of press announcements and PM engagements as we approach a release. To move ahead here we'll need to collect some requirements.
We want the release process to run continuously on Jenkins or elsewhere so we can validate the final state before pushes occur.
cc @lavalamp
It seems like when a new release is tagged, the release scripts can't parse the version correctly.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-build-1.3/138
Unable to get latest version from build tree!
kubectl version output:
Client Version: version.Info{Major:"1",
Minor:"3+", GitVersion:"v1.3.11-beta.0",
GitCommit:"759dd0dfb826658c78db42d97b77b9594cd43333", GitTreeState:"clean",
BuildDate:"2016-11-03T19:27:58Z", GoVersion:"go1.6.2", Compiler:"gc",
Platform:"linux/amd64"}
The issue seems to be that there's no .#commits+sha at the end of the version string, since there are no additional commits on the tag, but https://github.com/kubernetes/release/blob/master/lib/gitlib.sh#L42 expects the .#commits+sha suffix.
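A sketch of the kind of pattern that would tolerate both forms (this is an illustrative regex, not the one in gitlib.sh):

# Hypothetical version regex: the trailing .#commits+sha group is
# optional, so an exact tag like v1.3.11-beta.0 still parses.
VER_REGEX='v([0-9]+)\.([0-9]+)\.([0-9]+)(-[a-zA-Z0-9.]+)?(\.[0-9]+\+[0-9a-f]{7,40})?'
if [[ "v1.3.11-beta.0" =~ ^${VER_REGEX}$ ]]; then
  echo "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]} patch=${BASH_REMATCH[3]}"
fi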
Ran ./anago master. All steps completed successfully, except ANNOUNCE RELEASE got stuck for a very long time making no progress. Terminated with ctrl-c:
>>>>>>>> anago::release::gcs::publish_version() finished in 12s
────────────────────────────────────────────────────────────────────────────────
ANNOUNCE RELEASE
────────────────────────────────────────────────────────────────────────────────
Announcing k8s v1.5.0-alpha.1 to saadali...
^C
Signal SIGINT caught!
Traceback (line function script):
1 common::sendmail /usr/local/google/home/saadali/go/src/k8s.io/release/lib/common.sh
Signal ERR caught!
Traceback (line function script):
318 common::trapclean /usr/local/google/home/saadali/go/src/k8s.io/release/lib/common.sh
1 common::sendmail /usr/local/google/home/saadali/go/src/k8s.io/release/lib/common.sh
529 announce ./anago
740 common::runstep /usr/local/google/home/saadali/go/src/k8s.io/release/lib/common.sh
1017 main ./anago
Exiting...
Copying /tmp/anago.log to /usr/local/google/saadali/anago-v1.5.0-alpha.1: OK
anago: DONE main on saadali-dev Wed Oct 12 15:08:30 PDT 2016 in 2h3m8s
From /tmp/anago.log:
────────────────────────────────────────────────────────────────────────────────
anago::common::stepheader(): ANNOUNCE RELEASE
────────────────────────────────────────────────────────────────────────────────
anago::announce(): Announcing k8s v1.5.0-alpha.1 to saadali...
anago::common::trapclean(): Signal SIGINT caught!
anago::common::trapclean(): Traceback (line function script):
anago::common::trapclean(): Signal ERR caught!
anago::common::trapclean(): Traceback (line function script):
anago::common::exit(): Exiting...
anago::common::cleanexit(): Copying /tmp/anago.log to /usr/local/google/saadali/anago-v1.5.0-alpha.1:
anago::common::cleanexit(): cp -f /tmp/anago.log /usr/local/google/saadali/anago-v1.5.0-alpha.1
We want releases to happen more or less automatically by:
Something (probably Anago?) is trashing the "latest.txt" files in the release gsutil bucket and making them useless for Jenkins. It's probably been happening for a while, but only the upgrade tests actually care, and they've been woefully ignored.
This is not supposed to point at a build that doesn't exist in GCS:
$ gsutil cat gs://kubernetes-release/release/latest-1.3.txt
v1.3.5-beta.0
I'm going to shift the Jenkins builds over to the CI versions instead, but it should be possible to use the release bucket.
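A small consistency check like the following could guard publication (the artifact path used to probe existence is an assumption):

# Hypothetical: verify latest-1.3.txt points at a release that actually
# exists in the bucket before trusting or publishing it.
ver=$(gsutil cat gs://kubernetes-release/release/latest-1.3.txt)
if ! gsutil -q stat "gs://kubernetes-release/release/${ver}/kubernetes.tar.gz"; then
  echo "FATAL: latest-1.3.txt names ${ver}, which has no artifacts in GCS." >&2
  exit 1
fi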
I'd really like to keep the arg parsing simple and lightweight, but we should find a way to detect invalid flags too. I fear this might be the straw that bloats things, but let's have a look.
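A lightweight shape that stays simple but still rejects typos (the flag names here are just ones seen elsewhere in this doc; the real set would differ):

# Hypothetical: whitelist known flags, fail on anything else flag-shaped.
for arg in "$@"; do
  case "$arg" in
    --nomock|--noclean) ;;                       # known flags (assumed set)
    --*) echo "FATAL: unknown flag: $arg" >&2; exit 1 ;;
    *) ;;                                        # positional args pass through
  esac
done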
We've had to put CNI bits in /opt, as there was no way to tell kubelet to look in other places. As of kubernetes/kubernetes#32151 (already in 1.4), we can pass --cni-bin-dir= as well as --cni-conf-dir= to kubelet. It makes sense to put the plugins into /usr/lib/cni or /usr/lib/kubelet/cni/bin or /usr/libexec/cni - anything other than /opt is good. The reason for this is that traditionally /opt is used for installing ad-hoc pieces, not for things installed by the package manager. We could also move /etc/cni to /etc/kubernetes/cni, so we have everything in one directory.
It seems like kubernetes-cni would be something a user has to look up; calling it kubernetes-cni-basic-network-plugins (or just kubernetes-basic-network-plugins) would make it more obvious. A word like "basic" (or "vanilla" or "generic" or something similar) is also good, so we don't give the user the impression that they'll have all the plugins.
The beginnings of a very useful dashboard exist at https://x20web.corp.google.com/~djmm/k8s-master.html (internal only). What else does it need?
Closing. Dup of #2
Ivan Žužak [email protected] said:
For example, I need to search for PRs between tag v1.0..v2.0 and find specific labels in PRs merged within that range.
Today I'm getting the list of PRs to look up with:
git log $range --format="%s" --grep="Merge pull"
And then curl'ing each one to see if it has the label I need.
Is there a way to do this with some api to github so I don't have to make
hundreds of curl execs?
No, there's no GitHub API that allows you to do exactly what you want, but there's still a way you can help yourself and both speed things up + reduce the number of API calls. Instead of hitting the API over and over again, add a webhook to the project which notifies you when a label has been added or removed from an issue or pull request: http://developer.github.com/webhooks/. That webhook would be pointed at the same system that's doing the API calls right now and that system would store that information locally so that when it needs to know if a pull request has a label -- it doesn't need to make the API call because it already has all the information it needs. In short, you'd cache the information you need locally instead of hitting the API. That should be both faster and reduce the number of API requests.
An alternative to this would be to do what I suggested before -- I think that suggestion is still valid. If you know which label you're looking for, then you can fetch the list of pull requests with that label via the API (by not fetching them individually, but by fetching 100 at a time, which will be 100 times more efficient than what you're doing now). If you then locally do an intersection of that list with the list of pull requests you compiled with the git log command you shared -- that should give you the list of pull requests you need (pull requests that are between some tags AND have label X). Even if you have multiple labels you need to check for -- this approach should still help you significantly reduce the number of API calls you need to make (you could do additional filtering locally, for example, because the list of pull requests you'd get from the API would include the full list of labels, so no need to hit the API again). Does that make sense for your use-case? If not, can you share more details so that we can understand why it doesn't?
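A rough sketch of that intersection approach (the repo, label, and single-page fetch are assumptions; real use needs pagination and auth):

# 1. PR numbers merged in the range, from git history.
git log v1.0..v2.0 --format="%s" --grep="Merge pull" \
  | grep -oE '#[0-9]+' | tr -d '#' | sort > /tmp/range-prs.txt
# 2. PR numbers carrying the label, 100 per API call instead of 1.
curl -s "https://api.github.com/repos/kubernetes/kubernetes/issues?labels=release-note&state=all&per_page=100" \
  | jq -r '.[].number' | sort > /tmp/labeled-prs.txt
# 3. PRs that are both in the range and labeled.
comm -12 /tmp/range-prs.txt /tmp/labeled-prs.txt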
Setting up kubelet (1.4.3-00) ...
/var/lib/dpkg/info/kubelet.postinst: 38: /var/lib/dpkg/info/kubelet.postinst: [[: not found
Setting up kubectl (1.4.3-00) ...
There is a great spec file for building RPMs for the kubelet. It would be nice to have the same for these:
It would be nice to have atomic operations for packaging as well as the ability to publish to an edge channel (perhaps per commit in master?), rollbacks, and other features.
I can find someone to help mentor.
07800600576: Preparing
Post https://gcr.io/v2/kubernetes-release-test/hyperkube-arm64/blobs/uploads/: token auth attempt for registry: https://gcr.io/v2/token?account=oauth2accesstoken&scope=repository%3Akubernetes-release-test%2Fhyperkube-arm64%3Apush%2Cpull&service=gcr.io request failed with status: 403 Forbidden
anago::logrun(): .
anago::release::docker::release(): /usr/local/google-cloud-sdk/bin/gcloud docker push gcr.io/kubernetes-release-test/hyperkube-arm64:v1.4.0-beta.1
Looking at https://github.com/kubernetes/kubernetes/releases, the current "latest release" is v1.3.8, which was built 21h ago, rather than v1.4.0, which was built 2 days ago.
I'm guessing this is because v1.3.8 is newer, and the "latest" tag is automatically applied by GitHub? Is there any way to fix this?
Not sure what happened, but the official announcement for 1.4.0 had a bunch of TBDs/TODOs - see the email and CHANGELOG.
@pwittrock has the anago.log and will upload it somewhere so we can analyze.
It really confuses the build cop and could interfere with what they're doing at the moment.
I want to install Kubernetes 1.5.0-alpha.1 on CentOS 7.
The getting-started guide shows me where the stable RPM repo is:
...so I go poking around for an unstable repo and find one at
...but it seems to have the same versions of packages as the stable repo.
# repoquery --location --show-duplicates --disablerepo '*' --enablerepo kubernetes-unstable --repofrompath 'kubernetes-unstable,https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64-unstable' '*'
https://packages.cloud.google.com/yum/pool/bbad6f8b76467d0a5c40fe0f5a1d92500baef49dedff2944e317936b110524eb-kubeadm-1.5.0-0.alpha.0.1534.gcf7301f.x86_64.rpm
https://packages.cloud.google.com/yum/pool/fac5b4cd036d76764306bd1df7258394b200be4c11f4e3fdd100bfb25a403ed4-kubectl-1.4.0-0.x86_64.rpm
https://packages.cloud.google.com/yum/pool/c37966352c9d394bf2cc1f755938dfb679aa45ac866d3eb1775d9c9b87d5e177-kubelet-1.4.0-0.x86_64.rpm
https://packages.cloud.google.com/yum/pool/5ce829590fb4d5c860b80e73d4483b8545496a13f68ff3033ba76fa72632a3b6-kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
vs
# repoquery --location --show-duplicates --disablerepo '*' --enablerepo kubernetes-stable --repofrompath 'kubernetes-stable,https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64' '*'
https://packages.cloud.google.com/yum/pool/bbad6f8b76467d0a5c40fe0f5a1d92500baef49dedff2944e317936b110524eb-kubeadm-1.5.0-0.alpha.0.1534.gcf7301f.x86_64.rpm
https://packages.cloud.google.com/yum/pool/fac5b4cd036d76764306bd1df7258394b200be4c11f4e3fdd100bfb25a403ed4-kubectl-1.4.0-0.x86_64.rpm
https://packages.cloud.google.com/yum/pool/c37966352c9d394bf2cc1f755938dfb679aa45ac866d3eb1775d9c9b87d5e177-kubelet-1.4.0-0.x86_64.rpm
https://packages.cloud.google.com/yum/pool/5ce829590fb4d5c860b80e73d4483b8545496a13f68ff3033ba76fa72632a3b6-kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
Am I missing something?
It'd be nice if the build system created RPM and DEB artifacts for every branch it builds so that I can easily install any branch of kubernetes.