nodejs / build
Better build and test infra for Node.
We currently have one project operating as a sub-WG of the io.js Build Team WG. Do we need something in our GOVERNANCE.md file that addresses these types of groups?
Our production server only has gcc 4.1.2 and we can't upgrade gcc for various reasons. Is it possible to build io.js on such an old server?
http://doodle.com/d7q3x4ezrze7x73b
Sorry folks, it's totally my fault that we don't have momentum here, I struggle to find good slots in my schedule where I can even propose a meeting let alone make it work for others. Can @iojs/build have a look at the above Doodle and record whether it works for you or not? If it doesn't we can easily push to next week where it might be easier for me.
I'd also like to ask that someone else take the reins for keeping the momentum of meetings up; perhaps @jbergstroem is interested in doing this? If it's left to me then it'll slip too far.
@rvagg you ready to hook this up yet? We've got a node repo in node-forward that people are going to be working in. What do we have to do to get this running on it?
Would be nice to have pull requests to iojs/website automatically trigger a build and deployment to a staging server, and then comment on the PR with the link to an ephemeral domain.
Continuing nodejs/node#40 here. The summary is that to help version managers do their job with io.js we need to provide parsable catalog files detailing what releases are available (and perhaps other metadata).
Suggestion for now is to provide both a simple .txt file list in the same directory as release tarballs as well as a .json file that is extensible so we can put in additional metadata like shasums. See the comments in nodejs/node#40 for some great ideas.
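For illustration only, the extensible .json catalog might look something like this (the field names here are assumptions, not a settled schema):

[
  {
    "version": "v1.0.0",
    "date": "2015-01-14",
    "files": ["src", "linux-x64", "linux-x86"],
    "shasums": "https://iojs.org/dist/v1.0.0/SHASUMS256.txt"
  }
]

The .txt variant would then just be the version strings, one per line, for easy parsing by shell-based version managers.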
The build team need to come up with a proposal for how it'll work before passing it back to the TC for acceptance. Be sure to include other interested parties in the discussion.
/cc @smikes @ljharb @kenperkins @alexgorbatchev @keithamus @naholyr @mostman79 @gkatsev @Fishrock123 @arb
Some help would really be appreciated in getting an Alpine image for io.js to pass the test suite.
Ideally there would be two build images: one with just node, and another with everything needed for node-gyp and nan to run.
Also, it would be nice to see tests for building common binary modules (sqlite3, expat and a few others).
It would be grand if the same build infrastructure used to compile node could be provided as a service for developers of native modules that implement/utilize node-pre-gyp. Happy to recommend/advise on this from experience with node-serialport, but the ideal case would be: on npm publish, a webhook is triggered that builds and stores the compiled module for each of the target platforms (or a subset thereof).
A thought.
Use ccache to speed up CI builds. The ARM buildbot in particular is hellishly slow.
I don't know how much storage the buildbots have but if space is at a premium, it may make sense to clean out ~/.ccache once a month or so.
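For reference, wiring ccache in is mostly a matter of wrapping the compilers; a sketch (the cache size is a guess):

# on each buildbot, before running configure/make
export CC="ccache gcc"
export CXX="ccache g++"
ccache --max-size=10G    # cap the cache so it can't fill the disk

# monthly crontab entry to clean out the cache if space is at a premium
0 3 1 * * ccache --clear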
/cc @rvagg
I've been doing some work to make it so npm tests can be run as part of the overall build process for Node / io.js, preferably without too many side effects. It's kinda tough to mirror the behavior of a CI environment, so it would be great if somebody could either point me at a Vagrant image / Docker container, or get me access to a subset of the build environments so I have something to hammer on while I'm getting things sorted out.
Also, a Windows CI environment would be great, thanks.
I'm making an issue here mainly to collect names of people who can legitimately help with making Windows a first-class citizen. It's such a different beast that it really requires people who spend their time in it and understand the ecosystem rather than drop-ins (like me) who make do with their historic knowledge and occasional googling.
Examples of things that we need people for:
This just came up in the TC meeting. It would be a good idea for the build team to assign someone to sit on the TC in a "non-voting" capacity (not that we've actually brought anything to a vote yet, we've been finding a pretty easy consensus most of the time).
We'll leave it to you to figure out who that is and then we can work on finding a time that we can schedule that works for Europe, USA and wherever the build person is :)
While running the Windows tests: https://jenkins-node-forward.nodesource.com/job/node-forward+libuv+v1.x+multi/nodes=node-forward-rackspace-win2012r2-msvs2013/lastBuild/console
Is this a one-off? Is there a way to trigger a new build?
Currently we're only doing x64 builds and tests on Windows 2008 and 2012, but we are doing binaries for x86 as well.
We currently have two test boxes each for 2008 and 2012, set up redundantly so that if Jenkins has more work to do we can overflow and handle more capacity, since Windows is one of the slowest platforms to build & test. We could either re-purpose one box of each pair for 32-bit and lose the redundancy, or spin up a duplicate set to do 32-bit builds & tests.
Thoughts? Specifically @kenperkins
I want to continue the conversation here. CC'ing nodejs/node#1404
With both Node and libuv being very widely adopted across disparate platforms, it's time for a CI system to match that spread. We should be able to define a list of primary targets that are essential as part of the mix, and secondary targets that add additional value but are not a main focus of the core team.
Current Node.js core and libuv Jenkins build bot list: http://jenkins.nodejs.org/computer/
Let's try and limit this discussion to CI as much as possible and leave release build platforms for another discussion.
Likely using Jenkins with a very distributed collection of build bots. I've been in contact with DigitalOcean, IBM and @mmalecki so far on hardware provisioning, looking forward to Rackspace and any others that want to step up. NodeSource is happy to cop the maintenance burden and likely some of the cost and do the bidding of the core team(s).
Here's my straw-man, to start discussion off:
Looking for input from anyone, but particularly the core team, who need to be the ones deciding which primary platforms they actually care about; we're considering both Node and libuv here. I'm happy to do a bunch of the legwork for you but I'll need your guidance, because choosing build targets is not my decision to make.
@tjfontaine @bnoordhuis @piscisaureus @trevnorris @TooTallNate @saghul @indutny
Others who have shown an interest in this discussion (or I just feel like pulling in!):
@ingsings @pquerna @voodootikigod @mmalecki @andrewlow @guille @othiym23 @dshaw @wblankenship @wolfeidau
Please subscribe to https://github.com/node-forward/build for further notifications from other issues if you're interested so we don't have to go and pull everyone in each time.
I'm trying to access Jenkins to test a PR but I found that the previous URL gets redirected to http://jenkins-iojs.nodesource.com/ which gives me some hello-world nginx page.
/cc @rvagg
Things we need to make a first official io.js release in mid-January (this is off the top of my head, please contribute if you see something that I don't have).
Target
At a minimum we need to release a solid source tarball that's tested, tagged and good to compile and use as a fully compatible version of joyent/node, v0.12-worthy. Version will be 1.0.0, perhaps with an -alpha.x suffix; that'll be up to the TC.
Binaries would be good but may be practical only for Linux at this stage, for lack of signing keys.
Need
Nice but not essential
Decisions
Currently, the single server for iojs.org is the authoritative source for builds. When a build server finishes, it directly scp's the build to iojs.org.
I'm proposing we have the build servers store their outputs in the cloud, and then sync back down to the iojs.org webserver. This would allow a quick recovery should we lose our webserver, or if we need to spin up additional capacity.
This could theoretically work in conjunction with #55.
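A sketch of what the sync step could look like with an S3-compatible object store and s3cmd (the bucket name is made up):

# on a build server, after a successful build
s3cmd put iojs-v1.1.0-linux-x64.tar.gz s3://iojs-builds/staging/v1.1.0/

# on the webserver (or a fresh replacement), sync everything back down
s3cmd sync s3://iojs-builds/ /home/iojs/dist/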
I noticed that https://iojs.org/download/release/v1.1.0/iojs-v1.1.0-linux-x86.tar.gz unpacks as iojs-v1.1.0-linux-ia32 instead of iojs-v1.1.0-linux-x86. Ideally this would be consistent with node, which uses x86 for the directory inside the tarball.
https://cloud.online.net/ is probably the way to go, but it's slower than the ODROID XU3 we're using in CI now; that can mostly be solved with ccache, I think.
@rvagg brought up that some (most?) buildslaves lack init scripts for starting Jenkins after a reboot. Let's add these to ansible.
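A minimal sketch of what that could look like on the Ubuntu 14.04 (upstart) slaves; the user, paths and JNLP URL are assumptions:

# /etc/init/jenkins-slave.conf
description "jenkins build slave"
start on runlevel [2345]
stop on runlevel [016]
respawn
setuid iojs
exec java -jar /home/iojs/slave.jar -jnlpUrl https://jenkins-iojs.nodesource.com/computer/HOSTNAME/slave-agent.jnlp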
CI servers should respond to a notification that the docker image iojs/build has been pushed to the docker registry. This will keep our containers on the CI servers in sync with the docker registry and, in turn, with our github repos.
What is needed:
Implement a webhook endpoint for the repository and have it initiate a docker pull iojs when hit. Then register the endpoint at: https://registry.hub.docker.com/u/iojs/build/settings/webhooks/
Should work on >1 server.
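A rough sketch of what the listener side could look like; the port is made up, and a real endpoint should at least verify a shared-secret path before pulling:

#!/bin/sh
# answer each webhook POST with an empty 200, then refresh the image
# (netcat-traditional flags; adjust for other nc variants)
while true; do
  printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n' | nc -l -p 9898 -q 1 > /dev/null
  docker pull iojs/build
done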
This discussion came up on the TC meeting today, prompted by a question on IRC, noting it here as a TODO if someone has spare energy and time to devote to starting this effort.
It would be ideal if io.js was regularly tested against a list of npm packages to test for breakage. Perhaps the list could comprise some of the most popular packages and/or some of the most interesting use-cases of Node/io.js to test for edge-cases. The tests could be simply running the test suites of specific versions against the given version of io.js.
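As a sketch of the shape such a job could take (the repo list is illustrative; many packages would need a git checkout because published tarballs often omit their tests):

#!/bin/bash
# run each package's own test suite against the io.js binary on PATH
for repo in strongloop/express lodash/lodash caolan/async; do
  git clone --depth 1 "https://github.com/$repo.git" smoke-pkg
  (cd smoke-pkg && npm install && npm test) || echo "FAIL: $repo"
  rm -rf smoke-pkg
done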
Recently, we've had 2? failures due to release bugs.
I propose we set up CI tasks to run the release process in a purely testing way (so that we can go back and check through commits with it, unlike the nightly, which auto-publishes).
cc @rvagg
The blame is on me for not organising this sooner; I've been shouldering too much of this effort on my own and would love to make space for others to help out.
This is an open meeting to anyone who feels they have something to contribute. My preference is to include those who have already stepped up with code or help with Build, Docker or other parts of io.js but I also recognise the lack of obvious ways to contribute so far may have held back additional contributors. So if you have some skills and interest in this space then you're welcome too.
Meeting via Google Hangouts, fill in your details here if you want to attend: http://doodle.com/r5cz2dq6rcpd9b5e
The Docker sub-WG is the most active group so I'd love for at least these people to be involved in this meeting:
Other people who have had some involvement with Build, mainly through contributing to discussions and showing an interest in the build repo, you may or may not have an interest in joining us:
(just calling out names here to get the ball rolling, this is not an exhaustive list of people that can be involved by any means)
I'm also interested in having some libuv input since we're taking responsibility for libuv CI.
The proposed meeting dates are a couple of weeks away, mainly selfishly due to constraints on my part but also to give us time to discuss possible agenda items here.
Continued from libuv/libuv#12 also see #1 for additional context.
Here's a strawman proposal for architectures libuv should be tested against. They are split into 3 classes, based mainly on how difficult they will be to set up and include in the build set and how important it is to have solidly tested builds against them.
Class A
Class B
Class C
The open questions, for me at least, are: do we just build debug and test that in all cases, or do we need some testing of release builds somewhere? Are there any concerns here held by the libuv team? /cc @indutny @saghul @bnoordhuis @piscisaureus (I'm guessing here at who constitutes libuv-core, btw).
Instead of having these in .gitignore and requiring them to be on a local machine to deploy, how about we store them in Ansible Vault?
Example:
ssl_certificate: |
  -----BEGIN CERTIFICATE-----
  blahblahblah
  -----END CERTIFICATE-----
ssl_certificate_key: |
  -----BEGIN RSA PRIVATE KEY-----
  blahblahblah
  -----END RSA PRIVATE KEY-----
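The workflow around the vaulted file would then be (file names are placeholders):

ansible-vault encrypt group_vars/www/secrets.yml    # encrypt in place
ansible-vault edit group_vars/www/secrets.yml       # edit later without decrypting to disk
ansible-playbook site.yml --ask-vault-pass          # supply the vault password at deploy time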
Hi hi,
Not sure exactly how to word this, but it would be great to pick some choice benchmarks and track performance from build to build.
Especially if they represented benchmarks that developers sometimes use to pick languages/stacks/frameworks because we're all idiots^H^H^H^H^H^H really interested in incredibly specific use cases.
For example, I'd love to see Node climb a bit higher in some of these tests: http://www.techempower.com/benchmarks/ – and it would be great to be able to track any efforts involved in getting there.
Also, for regression obvs.
Since we already have an issue to kick-start the performance tracking (#11), I thought it might be good to kick-start one about finding ways to reliably measure this over time. I think we can all agree that the lowest possible requirement is full access and control of hardware; so the discussion is rather about what would warrant benchmarking.
My end goal would be to measure the improvement (or decrease) in how io.js performs in "real" environments. This would include being run on different OSes, on different hardware, or under emulation/virtualisation/jails. In terms of prioritisation, my hunch is that the most common scenario would be a virtualised Linux environment (kvm and xen), followed by Linux hardware, then Windows and other derivatives (fbsd, docker, ..). Since each environment requires a different "warm up" phase, it might take a while to get this right. Additionally, we should probably try to reuse the build artefacts.
Using parts of "are we fast yet" could be a quick way to get a frontend rolling.
I think this could be a relevant topic for the upcoming build meeting.
Today Node is built using Jenkins.
Given the scale of what we want to do with Node, should we take this opportunity to consider a new CI tool?
If yes, we should generate a list of candidates and discuss.
Two issues, possibly connected:
This is a summary of activity and resources within the io.js Build WG. I'm doing this to present to the WG meeting that's coming up but also to shine a bit of light into things that are mostly in my head. Some of this information could go on the README or other documentation for the project. I'd like to update this information each month so we can see how it evolves over time. Summarising in this way shows up a few TODO items that we need to tackle as a group and should also show us where our priorities should be going forward.
We have a fairly open account with DigitalOcean and this is where we do all of our non-ARM Linux computing. We also run https://iojs.org/ from here.
Currently myself, @wblankenship and now @jbergstroem have access to all of these machines.
We have a somewhat open account with Rackspace and have @kenperkins on the team who is able to give us more resources if we need them.
Currently I'm the only one who has the Administrator passwords for these boxes; I need to identify someone else on the build team who is competent on Windows so we can reduce our bus factor here. The release build machines contain signing keys so I'd like to keep that somewhat restricted, and will likely share access with @wblankenship, who is also at NodeSource.
Voxer have a primary interest in FreeBSD support in io.js for their own use, which is where the FreeBSD machines come in. They are very fast because they are not virtualised at all. The FreeBSD machines are behind the Voxer VPN and the Mac Mini servers will be soon.
Currently only @jbergstroem and I have VPN access into the Voxer network to connect to the FreeBSD machines. Only I have access to the Mac Mini servers, but I need to get @wblankenship on to them as well at some point. They contain our signing keys in the release VMs so I'll need to keep access somewhat restricted.
Joyent have provided two zones for test builds; they are multiarch and we are using them to do both 64-bit and 32-bit builds.
Currently myself, @geek and @jbergstroem have access to these machines.
Scaleway, formerly Online Labs, have provided us with a 5-server account on their ARMv7 cluster. We are using them to run plain Debian Wheezy (armhf) on ARMv7 but could potentially be running other OS combinations as well. The ARMv7 release binaries will eventually come from here as Wheezy represents the oldest libc I think we're likely to want to support on ARM.
Currently only I have access to these machines but I should share access with someone else from the build team.
Linaro exists to help open source projects prepare for ARM support. We are being supported by ARM Holdings in this, as they have an interest in seeing ARMv8/AArch64 support improved (we now have working ARMv8 builds!). Our access is on a monthly renewal basis, so I just need to keep requesting continued access.
Currently only I have access; it's via an SSH jump-host so it's a little awkward to just give others access. I haven't asked about getting other keys into that arrangement but it would likely be OK. An interim measure is to create an SSH tunnel for access to this server, which I have done previously for io.js team members needing to test & debug their work.
I'm still investigating further ARMv8 hardware so we can expand our testing but low cost hardware is hard to get hold of at the moment and I'd really like to find a corporate partner that we can work with on this (WIP).
The rest of the io.js ARM cluster is running in my office and consists of hardware donated by community members and NodeSource. I'm still looking for further donations here because the more the better, particularly for the slow hardware. Not included in this list is a Beagle Bone Black donated by @julianduque that I haven't managed to hook up yet, but will do because of an interesting OS combination it comes with (and also its popularity amongst NodeBots users).
Currently only I have access to these machines but have given SSH tunnel access to io.js team members in the past for one-off test/debug situations.
We are only running a single Ubuntu 14.04 4G instance on DigitalOcean for the website; it holds all of the release builds too. The web assets are served via nginx, with http redirected to https using a certificate provided by @indutny.
Only myself, @wblankenship, @indutny and @kenperkins have full access to this machine and I'd like to keep that fairly restricted because of the security implications for the builds.
All of the release build servers in the CI cluster have access to the staging user on the server in order to upload their build artifacts. A job in crontab promotes nightly builds to the appropriate dist directory to be publicly accessible.
The 3 individuals authorised to create io.js releases (listed on the io.js README) have access to the dist user on the server in order to promote release builds from staging to the dist directory, where they become publicly accessible. Release builds also have their SHASUMS256.txt files signed by the releasers.
The iojs/website team only have access via a GitHub webhook to the iojs user. The webhook responds to commits on master of their repo and performs an install and build of their code in an unprivileged account within a Docker container. A successful build results in a promotion of the website code to the public directory. A new release will also trigger a website rebuild via a job in crontab that checks the index.tab file's last update date.
This week I upgraded this machine to a 60G from a 30G because we filled up the disk with nightly, next-nightly and release builds. We'll need to come up with a scalable solution to this in the medium-term.
Jenkins is run on an 80G instance on DigitalOcean with Ubuntu 14.04. It's using the NodeSource wildcard SSL cert so I need to restrict access to this machine. It no longer does any slave work itself but is simply coordinating the cluster of build slaves listed above.
We now have automation of nightly and next-nightly builds via a crontab job running a node program that checks if one should be created at the end of each day UTC and triggers a build via Jenkins if it needs to.
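For context, the crontab side of that is just a daily trigger; the node program decides whether a build is actually warranted. A sketch (the script path is hypothetical):

# run shortly after the end of each day UTC; the script checks whether a
# nightly/next-nightly should be created and triggers Jenkins if so
5 0 * * * /usr/bin/node /home/iojs/tools/check-nightly.js >> /home/iojs/check-nightly.log 2>&1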
We also have the beginnings of automation for PR testing for io.js. I have yet to publish the source I have for this, but it currently triggers either a full test run or a containerised test run depending on whether you are in the iojs/Collaborators team or not. New PRs and any updates to commits on PRs will trigger new test runs. Currently there is no reporting of activity back to the PRs, so you have to know this is happening and know where to look to see your test run. This is a work in progress, but at least there's progress.
I know @pquerna has facilitated some of this directly, but my team (Developer Experience) has an official program for providing capacity to OSS projects: for CI/CD, web, builds, etc.
Let me know if I can help with this in any way...
In order to get some things done, it'd be good to get a full list of what resources are available (and where). Based on that, I'd like to:
I can create a list from what @rvagg mentioned at the meeting, but I think it'd be even better if @rvagg perhaps kicked this list off? How about:
Making a note here for myself: @piscisaureus wanted TAP output re-enabled on builds.
pr+multi still targets v1.x by default :)
In case you are following this repo and didn't notice, we've moved to the iojs org, where we'll be tracking the io.js project.
I believe ARM is an important target but I'm unsure of the best way to tackle CI with it. We either need to find some beefy ARM CPUs somewhere that don't take 12 hours to compile Node, do a cross-compile and then push to actual boxes to test, use distcc as @mmalecki has suggested, or perhaps try some crazy virtualisation?
Need some input from cleverer people than me on this.
/cc @wolfeidau
Any thoughts on adding an rsync endpoint for this:
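If this is about the release downloads area, a read-only rsyncd module might be all that's needed; here's a sketch of an rsyncd.conf (the path is an assumption):

# /etc/rsyncd.conf on the download server
[iojs-dist]
    path = /home/dist/iojs
    comment = io.js release builds (read-only mirror endpoint)
    read only = yes
    list = yes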
From what I understand, the current Jenkins setup for Node.js and libuv run by Joyent will do builds for commits and pull requests. I'm having a hard time figuring out how this can be made secure outside of the unixes with containerisation (Solaris, Linux, ...?). The hole I see: running builds for pull requests basically opens these boxes up to executing arbitrary code from anybody with a GitHub account, which could potentially compromise the machines themselves. That's a particular concern if some of these builds will end up being actual releases.
Looking for insight from people with more experience on this than me. The most common use-case for Jenkins is in-house builds rather than open source projects so I'm not sure if this comes up a whole lot.
So, GitHub has this "releases" feature. Every tag automatically becomes a "release" but there is also an API to add other resources to that release like our binary builds and even the Changelog.
This came up recently when people started asking for more features from the website's release section like an RSS feed, nodejs/iojs.org#79, which we would actually get for free if we were using the GitHub Releases API.
Also, because these tags/releases already exist it would be great if people found all the relevant resources there if that's the place they decide to look for releases.
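For reference, attaching a binary to an existing release is a single call to the GitHub uploads endpoint; a sketch with a placeholder release id and token:

curl -H "Authorization: token $GITHUB_TOKEN" \
     -H "Content-Type: application/gzip" \
     --data-binary @iojs-v1.0.0-linux-x64.tar.gz \
     "https://uploads.github.com/repos/iojs/io.js/releases/RELEASE_ID/assets?name=iojs-v1.0.0-linux-x64.tar.gz"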
I ran across this tweet:
I really don’t like the fact that in @official_iojs people have company names associated to them =[ https://t.co/UkuwQ0HO0T
tomgco sent Jan 20, 2015
I think that the company affiliations send the wrong message. All these people are here on their own merit and would retain membership were they to change companies.
cc @iojs/website
Currently we are using github-webhook with the following configuration on the server:
{
"port": 9999,
"path": "/webhook",
"secret": "orly?",
"log": "/home/iojs/github-webhook.log",
"rules": [{
"event": "push",
"match": "ref == \"refs/heads/master\" && repository.full_name == \"iojs/website\"",
"exec": "cd /home/iojs/website.github/ && git reset --hard && git clean -fdx && git fetch origin && git checkout origin/master && rsync -avz --delete --exclude .git /home/iojs/website.github/public/ /home/iojs/www/"
}]
}
i.e. the "build" process is:
What I want to suggest we add is a build step between 3 and 4 here, but it needs to be done inside a container so we don't give free rein to code in the website repo to run on the server.
Something like this:
docker pull iojs:latest && \
docker run \
--rm \
-v /home/iojs/website.github/:/website/ \
-v /home/iojs/.npm:/npm/ \
iojs:latest \
bash -c " \
adduser iojs --gecos iojs --disabled-password && \
su iojs -c ' \
npm config set loglevel http && \
npm config set cache /npm/ && \
cd /website/ && \
npm install && \
node_modules/.bin/gulp build \
' \
"
I've just run this and it seems to work fine and I could enable it right now if that's suitable to the website team.
Note for the build team (@kenperkins in particular): our Ansible script for the website needs an initial git clone of iojs/website to /home/iojs/website.github/; I don't think we are doing that currently. The above command will also need /home/iojs/.npm/ to be created and owned by iojs.
This is the error they give:
+ mkdir -p build
+ cp -a /home/iojs/gyp/ build/gyp
cp: cannot stat ‘/home/iojs/gyp/’: No such file or directory
Full console output: https://jenkins-iojs.nodesource.com/view/libuv/job/libuv+any-pr+multi/50/nodes=iojs-ubuntu1404-gyp-64/console
/cc @rvagg
PS: Is this the right way to report issues with the build infra?
Voxer was kind enough to donate a couple of Mac Minis for the purpose of building. We need to build out OS X-oriented targets within VMs for security purposes, in a similar fashion to the Linux counterparts.
The current state of the build infrastructure can be seen here: http://jenkins.node-forward.nodesource.com/
Summary:
Currently we also have the ability to trigger builds on either the full "multi" or "containers" from any repo on GitHub but they must be triggered manually by someone who has access to Jenkins. So far that's only myself, @ryanstevens and @indutny but I can expand that list to the full TC and other trusted helpers.
Short-term goals
Pre-first-release goals
Mid-term goals
Miscellaneous goals
Beyond that, we want to eventually build our own CI tooling and deprecate Jenkins, but that's a lower priority than just moving io.js forward.