Huh?
goingnative is a workshopper for learning how to write native Node.js addons; it's slated for release and use at NodeConf (next week!). I was supposed to start this long ago but ... you know how it goes.
Who?
I've added a bunch of people to this repo. As far as I'm concerned this will be an open project (well, it's technically "closed" until next week) and I'm happy to share ownership and leadership with anyone who has significant contributions to make.
However, I know I've just thrown a bunch of you in here without asking, so if you don't have time or interest in this at the moment and don't want to be spammed, I'd be happy to take you off the collaborators list; just let me know.
@ceejbot and @tjfontaine are here because they're down as running this workshop at NodeConf with me. Unfortunately I know they're both super busy with their respective jobs, but I'm hoping to be able to squeeze some blood out of that stone!
@wblankenship and @expr are here because I know, through my work with them at NodeSource, that they're awesome and should have a lot to offer here in terms of the learning experience.
@trevnorris and @thlorenz are here because I know they have a ton to offer on the C++ side. Realistically, I don't know how much time I can expect to squeeze out of them, though, and @thlorenz is already scheduled to help with another workshop at NodeConf anyway.
@TooTallNate and @kkoopa are here because I know from experience that they have a ton of skill and knowledge in this area that would be hugely valuable if they are able to contribute.
What?
The goal here is pretty ambitious, particularly for NodeConf. We need to teach Node/JS programmers how to write native addons. Unfortunately, there is a strong allergy to C++ amongst that cohort in general, so we'll have to do lots of handholding and provide lots of up-front boilerplate to point people in the right direction. Success will come from striking the right balance between boilerplate and documentation on one side and actual code challenges on the other.
learnyounode is the foundational example for building this type of thing: it's basically a terminal-based, self-directed programming learning tool that presents challenges ramping up in complexity, and you have to complete each challenge successfully to proceed. The expectation is not that people will finish this in one sitting, or even want to finish it all; it's just to give them a taste, get them started, and provide enough of a learning path for those wanting to dig deeper.
I'm also thinking of ditching hopes of Windows support, for NodeConf at least. I have a Docker build script that can build a dev environment; I can spin up enough instances of it and hand out SSH access to separate containers for people in the workshop (at NodeConf) who don't have suitable dev environments, or who would otherwise waste the whole time trying to get their dev environment ready. Unfortunately this assumes you're comfortable with dev over SSH and using a unixy code editor!
Another complication is V8 API variability across Node versions, so I'm going to lean on nan to get this done in a way that doesn't involve having to explain all that junk. Unfortunately nan is quite heavy, so this will almost make it a "nan workshopper", but I don't see a good way around that; besides, a good portion of current native addons use nan, so it's not an isolated ecosystem. It might be possible to offer raw 0.10 and 0.12 versions that don't use nan without too much hassle, but that's too complex for now.
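To make that concrete, here's roughly what nan buys us. This is just a sketch assuming nan's current 1.x-era macros, not final exercise code:

```cpp
#include <nan.h>

// With raw V8 this method would need a different shape per node version:
//   0.10:  v8::Handle<v8::Value> Hello(const v8::Arguments& args)
//   0.12:  void Hello(const v8::FunctionCallbackInfo<v8::Value>& args)
// nan's macros expand to whichever one the headers we're compiling
// against actually want, so we only write it once:
NAN_METHOD(Hello) {
  NanScope();
  NanReturnValue(NanNew<v8::String>("world"));
}
```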
Structure
Here are my current thoughts on an exercise structure. This is really rough and will likely change as we develop it, and I'm hoping that you all have ideas on how to improve it:
- AM I READY?: There's no coding involved in this one; typing `goingnative verify` will invoke a checker that determines whether you have a compiler, node-gyp, Python 2.x, and anything else that might be needed to get started.
- LET'S MAKE SOMETHING COMPILE!: Provide the user with a directory containing the basics needed to make an addon, including a package.json, a binding.gyp and an addon.cc that is partially complete and won't compile without some really basic additions. We might even want package.json and binding.gyp to be incomplete too, so they have to complete all three to pass the exercise. I'm thinking the code can just use a simple `printf` to print something to stdout, so there's very little complex C++ or V8 involved in making this work. Validation will be tricky, but it'll have to at least invoke a build and run the resulting addon to make sure it works as expected. (See the first sketch after this list.)
- IT'S A TWO-WAY STREET: Get the user to extend the previous exercise to accept a method argument and return something back to the calling function. This could be split into two exercises ("IT'S A ONE-WAY STREET"?). (Sketched after the list.)
- IT'S ALL ABOUT SCOPE: It might be interesting to leave `NanScope()` (`HandleScope`) off the previous exercises completely, perhaps leaving a note in the boilerplate file saying something like `// this code is intentionally incomplete and requires a Scope`, and point them to a later exercise that introduces the concept of a scope. This exercise could provide them with an addon that has no scope and can be observed leaking memory when run; their job is to stop the leak by adding a `NanScope()`. Maybe too simple? (Sketched after the list.)
- CALL ME MAYBE: Invoking a callback argument using `MakeCallback` (well, `NanMakeCallback` anyway). (Sketched after the list.)
- OBJECTIFICATION: Create a `v8::Object` and populate it with something: a `String` property, a `Number` property and maybe even a `Function`? (Sketched after the list.)
- OBJECTIFY ALL THE THINGS: Using `ObjectWrap` to wrap up a C++ object for JS use. (Sketched after the list.)
- OFFLOADING THE GRUNT WORK: A precursor to the next exercise: I'm thinking we could get them to do some CPU-intensive work in C++ and pass the result back to JS.
- TEAM GRUNT WORK: Take the previous exercise and split the work off into a worker thread, providing the result via a callback. Pi estimation is the example I keep using for "CPU-intensive" work and it could be applicable here. (Sketched after the list.)
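To give the above a bit more shape, here are some very rough sketches. They all assume nan's 1.x-era API (`NanScope()`, `NanReturnValue()` and friends), every function and property name in them is a placeholder rather than settled exercise code, and the `Init`/`NODE_MODULE` registration boilerplate is shown once here and elided from the rest. First, the kind of addon.cc that LET'S MAKE SOMETHING COMPILE might ship:

```cpp
// addon.cc
#include <stdio.h>
#include <nan.h>

NAN_METHOD(Run) {
  NanScope();
  printf("hello from C++ land!\n");  // plain stdio, no V8 gymnastics
  NanReturnUndefined();
}

void Init(v8::Handle<v8::Object> exports) {
  exports->Set(NanNew<v8::String>("run"),
               NanNew<v8::FunctionTemplate>(Run)->GetFunction());
}

// the module name here must match the target name in binding.gyp
NODE_MODULE(addon, Init)
```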
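IT'S A TWO-WAY STREET could then be as small as reading an argument and returning a value (`args` is the name nan 1.x gives the method's arguments object):

```cpp
NAN_METHOD(DoubleIt) {
  NanScope();
  double n = args[0]->NumberValue();          // JS number in...
  NanReturnValue(NanNew<v8::Number>(n * 2));  // ...JS number back out
}
```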
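For IT'S ALL ABOUT SCOPE, the shipped file could look something like this with the `NanScope()` line removed and the "intentionally incomplete" note in its place. I'm hand-waving on how exactly we'd make the leak observable, so treat this as a shape rather than a working demo:

```cpp
NAN_METHOD(Churn) {
  NanScope();  // <-- the one-line fix: this function now owns a handle
               //     scope, so the Locals made below are released when
               //     it closes rather than accumulating in whatever
               //     scope encloses the call
  for (int i = 0; i < 1000000; i++)
    NanNew<v8::String>("a temporary handle");
  NanReturnUndefined();
}
```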
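CALL ME MAYBE is mostly about `NanMakeCallback` (I'm assuming the global object as the receiver here; we may want something that makes more pedagogical sense):

```cpp
NAN_METHOD(CallMe) {
  NanScope();
  v8::Local<v8::Function> callback = args[0].As<v8::Function>();
  v8::Handle<v8::Value> argv[] = { NanNew<v8::String>("maybe") };
  // NanMakeCallback rather than a bare Function::Call so that node's
  // domains/async bookkeeping behave properly
  NanMakeCallback(NanGetCurrentContext()->Global(), callback, 1, argv);
  NanReturnUndefined();
}
```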
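OBJECTIFICATION might look like this (property names are obviously placeholders):

```cpp
NAN_METHOD(Shout) {
  NanScope();
  NanReturnValue(NanNew<v8::String>("OBJECTIFIED!"));
}

NAN_METHOD(MakeObject) {
  NanScope();
  v8::Local<v8::Object> obj = NanNew<v8::Object>();
  obj->Set(NanNew<v8::String>("name"),   NanNew<v8::String>("goingnative"));
  obj->Set(NanNew<v8::String>("answer"), NanNew<v8::Number>(42));
  obj->Set(NanNew<v8::String>("shout"),  // and a Function property too
           NanNew<v8::FunctionTemplate>(Shout)->GetFunction());
  NanReturnValue(obj);
}
```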
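OBJECTIFY ALL THE THINGS would follow the standard `node::ObjectWrap` pattern; a counter is about the simplest stateful thing I can think of (again, just a sketch):

```cpp
#include <nan.h>
#include <node_object_wrap.h>

class Counter : public node::ObjectWrap {
 public:
  static void Init(v8::Handle<v8::Object> exports) {
    v8::Local<v8::FunctionTemplate> tpl = NanNew<v8::FunctionTemplate>(New);
    tpl->SetClassName(NanNew<v8::String>("Counter"));
    tpl->InstanceTemplate()->SetInternalFieldCount(1);  // room for `this`
    NODE_SET_PROTOTYPE_METHOD(tpl, "increment", Increment);
    exports->Set(NanNew<v8::String>("Counter"), tpl->GetFunction());
  }

 private:
  int count_;
  Counter() : count_(0) {}

  static NAN_METHOD(New) {
    NanScope();
    Counter* counter = new Counter();
    counter->Wrap(args.This());  // tie the C++ instance to the JS object
    NanReturnValue(args.This());
  }

  static NAN_METHOD(Increment) {
    NanScope();
    Counter* counter = node::ObjectWrap::Unwrap<Counter>(args.This());
    NanReturnValue(NanNew<v8::Number>(++counter->count_));
  }
};
```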
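And TEAM GRUNT WORK maps straight onto `NanAsyncWorker` / `NanAsyncQueueWorker`, which hide the `uv_queue_work` plumbing; pi estimation via the Leibniz series as the stand-in CPU burner:

```cpp
#include <nan.h>

class PiWorker : public NanAsyncWorker {
 public:
  PiWorker(NanCallback* callback, int iterations)
    : NanAsyncWorker(callback), iterations_(iterations), estimate_(0) {}

  // runs on the worker thread: no V8 calls allowed in here
  void Execute() {
    for (int i = 0; i < iterations_; i++)
      estimate_ += (i % 2 ? -4.0 : 4.0) / (2 * i + 1);
  }

  // back on the main thread: safe to touch V8 and call into JS
  void HandleOKCallback() {
    NanScope();
    v8::Local<v8::Value> argv[] = { NanNew<v8::Number>(estimate_) };
    callback->Call(1, argv);
  }

 private:
  int iterations_;
  double estimate_;
};

NAN_METHOD(EstimatePi) {
  NanScope();
  int iterations = args[0]->Int32Value();
  NanCallback* callback = new NanCallback(args[1].As<v8::Function>());
  NanAsyncQueueWorker(new PiWorker(callback, iterations));  // fire & forget
  NanReturnUndefined();
}
```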
Validation in most existing workshoppers is done by running the solution code in parallel with the submission code and comparing the results on stdout. Sometimes this involves hijacking stdout to replace it with some kind of reporter; the latest workshopper incarnation (v1) doesn't force this as a requirement, and I don't imagine we'll have use for it here. Instead, validation will take the form of a script that performs a series of actions to confirm that individual components of the work are complete and correct. The user gets feedback in the form of ticks and crosses in their console for each of these, so they can clearly see where they messed up if they get a failure.
Action
I'm diving right in to this, but I'll be doing a rough job of each exercise as I go and coming back to perfect them later. Help would be appreciated with wording the questions, coding the solutions, and making the validations fine-grained enough that the user gets clear feedback about what they've done wrong and finds it very difficult to cheat the system.
Could you please let me know if you don't want to be involved? And if you do want to be involved: in what capacity do you think you could be helpful, and what would you like to try to tackle?