
decking's People

Contributors

ahmedomarjee, asbjornenge, guillaumelecerf, krohrsb, makeusabrew, mercyfulfate, mrflip, page-, stephenmelrose, toecto


decking's Issues

Sort out container namespacing

A cluster name shouldn't necessarily namespace a container; it's quite feasible that differently named clusters might still want to use the same containers. This is the current behaviour, but the docs contradict it (I was going to make the cluster name a suffix of each container in it).

The only time namespacing should get involved is:

  1. multi-node containers in a cluster: container.[n]
  2. group override opt-ins: container.[group]
  3. both 1 & 2: container.[group].[n]
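
For illustration, here's a minimal sketch of how those three forms might be derived - the buildName helper and its signature are hypothetical, not decking's actual code:

function buildName(container, group, index) {
  // case 2/3: a group override opt-in adds the group name
  var parts = [container];
  if (group) parts.push(group);
  // case 1/3: multi-node containers get a numeric suffix
  if (index) parts.push(index);
  return parts.join(".");
}

buildName("api", null, 2);     // "api.2"
buildName("api", "dev", null); // "api.dev"
buildName("api", "dev", 2);    // "api.dev.2"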

A recent change meant a cluster would opt in to a group of the same name. This one needs some thought; is that dangerous? The way I use decking, a cluster == a mode, so it works. But that doesn't allow for other uses where clusters aren't directly related to a group. However, those cases could be solved by not naming clusters & groups the same thing...

Whatever happens, this all needs tests around it so I can rework it confidently.

Make create container output more context aware

This will need doing after #31 is implemented; it'd be a lot nicer to show exactly what part of the creation process a container is in. At the moment the text is completely static, so in the event that the target image doesn't exist but can be downloaded (say, from the Docker registry), the user will just see "creating container" for a very, very long time...

Create container should use API

Decking's current behavior is to use the CLI to run the container for a few seconds and then stop it, rather than issuing a Remote API create. I think there is a misunderstanding in the code comments regarding what create does, insofar as it is never meant to handle links etc.

I think you can use the API as it currently is for the create, just withhold the stuff that doesn't need to be there. Then, when starting the container, add the other stuff (links, etc...).

If you look at the documentation for the API, the docker run command actually does quite a few things behind the scenes that interact with the API in order to create, then start, the container, and I think this is meant to illuminate how to implement all the command line features that seem to be missing: https://docs.docker.com/reference/api/docker_remote_api_v1.14/#31-inside-docker-run

It took me quite a few minutes to figure this out myself, but having discovered the magic, I realize both that this should be better documented in Docker and that decking should be using this interface. In my experience the current behavior causes lots of problems at startup: I have to put in a bunch of waits to circumvent issues that arise from a container being shut down immediately after it is created.
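
For reference, a rough sketch of the create-then-start flow via dockerode (which decking already uses); option names follow Remote API v1.14, and the container/image names here are invented:

var Docker = require('dockerode');
var docker = new Docker({ socketPath: '/var/run/docker.sock' });

// create carries the image, env, command etc. - no links yet
docker.createContainer({
  name: 'web.1',
  Image: 'example/web',
  Env: ['NODE_ENV=production']
}, function (err, container) {
  if (err) throw err;
  // links and binds are start-time options in Remote API v1.14,
  // so creation order doesn't matter
  container.start({
    Links: ['db.1:db'],
    Binds: ['/host/logs:/var/log/web']
  }, function (err) {
    if (err) throw err;
  });
});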

Docker and/or bash don't like containers with dashes in their names

Thought I might note this down here. A linked container with dashes in its name leads to environment variables that look like this:

CONTAINER-NAME_PORT=1234

The problem there is that the environment variable can't be referenced, as bash sees $CONTAINER-NAME as ending at the dash, and therefore replaces it with the value of $CONTAINER (usually empty) followed by -NAME.

I'm a Linux beginner, so I may be missing something obvious, but I solved it by renaming my containers to use underscores instead of dashes, and everything worked fine.

If I'm not missing anything, maybe best to disallow dashes in container names to begin with, or at least warn the user, as it will cause them pain down the road.
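
If a warning is the preferred route, here's a sketch of the sort of guard decking could run over decking.json container names (the validateName helper is hypothetical):

function validateName(name) {
  // a dash survives into the linked env var names (CONTAINER-NAME_PORT),
  // which bash cannot reference, so flag it up front
  if (name.indexOf('-') !== -1) {
    console.warn("[WARN] container name '" + name +
      "' contains a dash; its linked env vars won't be usable in bash");
  }
  return /^[A-Za-z0-9_]+$/.test(name);
}

validateName('container-name'); // warns, returns false
validateName('container_name'); // returns true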

Out of order issue in Cluster.resolveContainers()

I think there might be an issue in Cluster.resolveContainers().

Its order of operations is:

  • Determine if there's a group override for the cluster
  • Determine dependencies for each container in the cluster
  • If there is a group, merge in group overrides and rename containers so they are namespaced in the group

I came across this problem while working on the "volumes-from" work. In my example decking.json I have mount-from defined in my main container config, but in my "dev" group overrides I remove the mount-from and instead use mount.
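
For context, roughly the shape of the config in question - container names are invented and the group-override schema here is from memory, so treat it as illustrative only:

"containers": {
  "web": {
    "image": "example/web",
    "mount-from": ["data"]
  }
},
"groups": {
  "dev": {
    "containers": {
      "web": {
        "mount-from": [],
        "mount": ["./src:/app/src"]
      }
    }
  }
}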

Because the group overrides aren't applied until after dependencies are generated, the container listed in mount-from is still treated as a dependency even though it no longer is one.

I think we should just be able to move the dependency resolution logic after the group overrides merge, as long as we ensure it uses the original container name.

Does this make sense @makeusabrew? Any issues? I'll do it as part of the volumes-from work if so.

Image naming

Hi!
I'm using decking from npm (v0.2.1) with docker (v1.0.1), and I found a strange behaviour.
In my decking.json I have this image declaration:

"images":{
    "identity-postgres":"./identity-postgres"
},
(etc...)

I've got a valid Dockerfile in the identity-postgres folder.
When I run decking build all it builds the image, but doesn't give it a name or a tag.

REPOSITORY                              TAG                 IMAGE ID
<none>                                  <none>              ba303aaec020

It doesn't work even if I try it with the exact name (decking build identity-postgres).
Because of this, decking cannot instantiate a container from it in the decking create phase...

What am I doing wrong?
Thanks!

PS: sorry for my bad English.

Ability to Specify registry per-image

Would it be possible to, say, add a flag that would push the images to a private registry upon a successful start?

This would be helpful for CI environments, where your container runs unit/acceptance tests first before starting the main application. By pushing to the registry, you're able to create an artifact of the successful image.

See below response for additional details.

Allow individual + dependency restarts

This relates to decking watch, but I still want to capture it separately - a user should be able to restart an individual container, which will also restart its dependents. Needs care on behalf of the user; restarting a heavily depended-on DB will also restart everything which depends on it, etc.

Support --help for helpful help

If you try to pass --help to any of the subcommands, you just get a lovely error message:

Error: Cluster --help does not exist in decking.json
    at Decking._run (/usr/local/lib/node_modules/decking/lib/decking.js:231:11)
    at Decking.commands.status (/usr/local/lib/node_modules/decking/lib/decking.js:108:17)
    at Decking.execute (/usr/local/lib/node_modules/decking/lib/decking.js:57:13)
    at Object._onImmediate (/usr/local/lib/node_modules/decking/bin/decking:25:20)
    at processImmediate [as _immediateCallback] (timers.js:330:15)

Would be nice to actually produce help output. Maybe also prevent anything that starts with a - or -- from being treated as a cluster name?

Seems kind of evil to name your cluster --help.
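
A sketch of that guard - the placement and the printUsage helper are hypothetical:

var args = process.argv.slice(2);

if (args.indexOf('--help') !== -1 || args.indexOf('-h') !== -1) {
  printUsage(); // hypothetical helper
  process.exit(0);
}

// never treat a flag-looking argument as a cluster name
var cluster = args.filter(function (arg) {
  return arg.indexOf('-') !== 0;
})[0];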

Changes to decking.js being ignored until you remove the container

I realise this issue is a combination of #4 and #40 but add it here for the benefit of anyone looking for a fast solution without digging through the long discussions on those issues.

I've just spent 4 hours trying to determine why changes to the "env" for a container were being ignored. I wanted to run a simple mysql container...

"containers": {
    "mysql": "mysql"
    ...

and after running decking create dev I realised I was missing a necessary environment variable.

"containers": {
    "mysql": {
            "image": "mysql",
            "env": ["MYSQL_ROOT_PASSWORD=xyz"]
    },
    ...

This change and others I made to the container config were completely ignored.

After running the create command without errors, my expectation was that my containers would match my decking.json file. I got the message already exists - running in case of dependents, but otherwise no indication of what operations were being performed.

After forking the code and inserting debug statements, I realised that the container's run parameters are only defined when the container is initially created in Docker, which only happens the first time the decking create command is run. Of course, once I realised that, it was a simple matter to delete the container and run the command again.
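
For anyone after the quick fix, the workaround amounts to this (assuming the container is named mysql, as above):

docker stop mysql
docker rm mysql
decking create dev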

My concern is that there was no indication on the project page, or in the output of the commands, that decking.json is a use-once config. Once I knew what the problem was, I was able to find the solution confirmed in issue #40. Decking does a great job of the set up and tear down lifecycle, except when you change the config. For that, you need to delete everything by hand and start from scratch, which diminishes the overall Decking experience.

I am quite familiar with Docker, and understand (and agree) that Decking commands should mirror those of Docker. However this is something of a trap, and tweaking parameters like this during development to get a container running should not be considered aberrant behaviour.

In order to keep command semantics as they are, but help others avoid my experience, may I suggest the following:

  1. Change the message mentioned above to Container already exists (config will not be updated)
  2. As per issue #4, it would be great to have a way to easily destroy existing containers without doing docker ps -a and cutting and pasting ids to a docker rm command.
  3. Maybe a sentence or two to describe this issue on the home page.
  4. A debug option to display the underlying docker commands would make solving a problem like this much easier.

In every other way I think Decking is great, and that's why I persisted in working through this problem.
Cheers and thanks.

Support to provide "-h" hostname in containers section of decking.json

Hi,
Would it be possible to provide a hostname "-h" argument when starting a cluster? My use case requires running Puppet scripts to install packages on startup, and hence a valid FQDN is a must. I don't have much experience in JavaScript, but I think adding hostname functionality in runner.js will do the trick.
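
By way of illustration, the config might end up looking something like this - note the hostname key doesn't exist yet, it's what this issue is proposing:

"containers": {
  "puppet_node": {
    "image": "example/puppet-node",
    "hostname": "node01.example.com"
  }
}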

Allow dynamic environment variables based on links

This is a hard one to explain, but I'll give it a shot. I'm not using RoR but it will work as an example.

Ruby on Rails applications expect a DATABASE_URL to be set which points them at the DB instance to use. If the database is in a different container, launched as a dependency and part of the same cluster, there's no easy way of mapping the environment variables that Docker creates to the environment variable RoR expects to find. Some hack with the Dockerfile could probably work, but this feels like cluster configuration, so something that should live in decking.json, not in the Dockerfile. Presumably the same Dockerfile could be used in different contexts/clusters where these wirings wouldn't make sense.

I would implement this by allowing env variables to reference container.inspect data of (dependency/child) containers using underscore template syntax. So in the case of DATABASE_URL, I would do something like

"env": [
  "DATABASE_URL=postgres://docker:docker@$<%= containers['db_container_name'].NetworkSettings.IPAddress%>:5432/docker"
]

Bit wordy, but super powerful. The .NetworkSettings.IPAddress model is just what docker/dockerode returns on container inspection. I actually have to create four dynamic env variables for one of my containers, as it connects to four other child containers and the application inside needs to know where to find them.
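
A minimal sketch of the proposed substitution, assuming underscore templates and the inspect data dockerode returns (the exact _.template call varies by underscore version):

var _ = require('underscore');

// shaped like container.inspect() output for each dependency
var containers = {
  db_container_name: { NetworkSettings: { IPAddress: '172.17.0.2' } }
};

var raw = "DATABASE_URL=postgres://docker:docker@" +
  "<%= containers['db_container_name'].NetworkSettings.IPAddress %>:5432/docker";

var rendered = _.template(raw)({ containers: containers });
// DATABASE_URL=postgres://docker:docker@172.17.0.2:5432/docker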

Thoughts?

Example Needed of Multiple Mounts

Please show an example of a decking.json dealing with multiple mounts per container:

docker run -d -p 3306:3306 \
  -v /docker/var/lib/mysql:/var/lib/mysql \
  -v /docker/var/log/mysql:/var/log/mysql \
  --name clmmysql quasaur/mysqlimage

...thank you!
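
For what it's worth, the equivalent decking.json fragment should look something like the following - mount takes an array just like port does (untested sketch; the cluster name is invented):

{
  "containers": {
    "clmmysql": {
      "image": "quasaur/mysqlimage",
      "port": ["3306:3306"],
      "mount": [
        "/docker/var/lib/mysql:/var/lib/mysql",
        "/docker/var/log/mysql:/var/log/mysql"
      ]
    }
  },
  "clusters": {
    "main": ["clmmysql"]
  }
}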

volumes-from

I don't see any option for the --volumes-from setting for containers. Is there another way to set it?
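
As mentioned in the resolveContainers() issue above, there is volumes-from work in progress under a mount-from key; if your version has it, usage would presumably look like this (container names invented):

"containers": {
  "app": {
    "image": "example/app",
    "mount-from": ["data_container"]
  }
}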

decking start gives "Error: HTTP code is 500 ..." with docker version 1.3.0 while 1.1.2 works

With docker 1.3.0, decking doesn't start the cluster, while with 1.1.2 it works. Building and creating work. If I start the container with "docker start" manually, everything works fine. The full error message is:

Error: HTTP code is 500 which indicates error: server error - Content-Type specified () must be 'application/json'

    at Modem.buildPayload (/home/decking/nodejs/lib/node_modules/decking/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:157:15)
    at IncomingMessage.<anonymous> (/home/decking/nodejs/lib/node_modules/decking/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:133:14)
    at IncomingMessage.emit (events.js:117:20)
    at _stream_readable.js:943:16
    at process._tickDomainCallback (node.js:463:13)

Maybe the Docker API changed?

Restart container with a different settings

Here is the sample decking.json I have.

{
  "containers": {
    "web": {
      "image": "satyrius/my_web_project",
      "port": ["2200:22", "8000:80"],
      "mount": ["/var/log/my_web_project/nginx:/var/log/nginx"]
    }
  },
  "clusters": {
    "my": ["web"]
  }
}

It works, and I have the container running. I decided to change the port mapping to

      "port": ["2201:22", "8001:80"],

and ran decking restart, but after the restart the container still has the old port mapping.

Promises?

Just wanted to check what your attitude towards using promises to manage asynchronous calls in decking is.

We've recently converted pretty large internal codebases from async.js to promises and the code became smaller, more declarative, and cleaner at the same time.

Then, moving to bluebird as a promise library meant even the performance is superior to async.js, which is completely counterintuitive given the extra machinery, but the author of bluebird is pretty crazy/awesome.

Not sure if I or someone else in the resin.io team can put in the time to do the conversion for decking, but I can try and answer that question depending on your response.

I can certainly work with async.js, so it's not a problem if you want to keep on using it for decking. Just thought I'd check.
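
For concreteness, the kind of conversion meant here, sketched against a hypothetical decking-ish loop (plain promise constructor, so it works with bluebird or native promises):

// async.js style
async.eachSeries(containers, function (container, done) {
  container.start(done);
}, callback);

// promise style: start each container in sequence
containers.reduce(function (chain, container) {
  return chain.then(function () {
    return new Promise(function (resolve, reject) {
      container.start(function (err) {
        return err ? reject(err) : resolve();
      });
    });
  });
}, Promise.resolve());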

Race condition between copying Dockerfile and tarring context

I discovered a race condition today between copying the image Dockerfile to the project root, and tarring of the context.

Basically if you have a really small context, the tar action completes before the Dockerfile has had chance to copy to the project root, causing the following error,

Error: HTTP code is 500 which indicates error: server error - null
    at Modem.buildPayload (/usr/local/lib/node_modules/decking/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:134:15)
    at ClientRequest.<anonymous> (/usr/local/lib/node_modules/decking/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:95:12)
    at ClientRequest.emit (events.js:117:20)
    at HTTPParser.parserOnIncomingClient [as onIncoming] (http.js:1688:21)
    at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:121:23)
    at Socket.socketOnData [as ondata] (http.js:1583:20)
    at Pipe.onread (net.js:527:27)

Looking in the docker logs you see the following confirming this,

[d82a2482] +job build()
Dockerfile cannot be empty
[d82a2482] -job build() = ERR (1)
[error] server.go:1048 Error making handler: Dockerfile cannot be empty
[error] server.go:90 HTTP Error: statusCode=500 Dockerfile cannot be empty

The solution is to make sure docker.buildImage() is not called until the Dockerfile has been copied and the context is fully tarred up.

This also seems to be the cause of #35.
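
A sketch of that ordering fix, using async.series so the build can't start early (the paths, image name, and tarContext helper are all invented):

var fs = require('fs');
var async = require('async');

async.series([
  function (done) {
    // wait for the Dockerfile copy to actually finish
    fs.createReadStream('./docker/api/Dockerfile')
      .pipe(fs.createWriteStream('./Dockerfile'))
      .on('finish', done);
  },
  function (done) {
    tarContext('./', './context.tar', done); // hypothetical tar step
  }
], function (err) {
  if (err) throw err;
  // only now is it safe to upload the context
  docker.buildImage('./context.tar', { t: 'example/api' }, function (err, stream) {
    if (err) throw err;
  });
});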

error running decking on Centos 6

I'm getting the below error running decking. I've got nodejs-0.10.28-1.el6.x86_64 installed.

[ben@localhost decking-example-master]$ decking build all
Looking up build data for decking/example-api
Building image decking/example-api from ./docker/api/Dockerfile
Uploading compressed context...
Error: HTTP code is 500 which indicates error: server error - null
at Modem.buildPayload (/usr/lib/node_modules/decking/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:134:15)
at ClientRequest.<anonymous> (/usr/lib/node_modules/decking/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:95:12)
at ClientRequest.EventEmitter.emit (events.js:117:20)
at HTTPParser.parserOnIncomingClient [as onIncoming]
at HTTPParser.parserOnHeadersComplete [as onHeadersComplete]
at Socket.socketOnData [as ondata]
at Pipe.onread (net.js:527:27)
Error: read ECONNRESET
at errnoException (net.js:904:11)
at Pipe.onread (net.js:558:19)
[WARN] Could not remove Dockerfile

Decking build creates no-name images

Hi
First of all, thanks for this nifty tool, it's great!
I have an issue when building my images with decking: no name or tag is assigned to the image. If I build them manually using the docker command line, it works fine and I can create and start my cluster with decking.

decking build all --no-cache

REPOSITORY                    TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
<none>                        <none>              9dafa7533412        24 seconds ago      840.8 MB

Using boot2docker on OSX 10.9
image declaration in decking.json:

"images": {
    "instaguide/wizard-api": "../../wizard-api"
}

Output:

Looking up build data for instaguide/wizard-api
Building image instaguide/wizard-api from ../../wizard-api/Dockerfile
Uploading compressed context...

introduce concept of groups

Allow base container definitions to be merged with a group, which can then be run with different group-level params.

Specify custom Tags

Currently it seems all images are built with the tag "latest." It would be nice to be able to specify tags within the decking.json file, and even as command line overrides.

For example, being able to tag a "dev" image separately from a "test" build without having to name the image differently. This would produce:

my-image:dev
my-image:test

vs

my-image-dev:latest
my-image-test:latest

Might even just produce the tag based on the cluster/group name?

The ability to override tags via the command line could be used in a CI environment to name the tag after the SHA-1 hash of a git commit or a branch name, for example.

Rename `decking attach` to `decking tail`?

We're trying to abstract away Docker terminology, so does attach make sense?

If it does get renamed to tail, I'd love it to log first then attach - effectively like tail -f. I really miss those last 10 or so lines when attaching.

ENOENT/socket hangup on Mac OSX + boot2docker

On cluster start and restart (and I'm sure on most other commands), I get "connect ENOENT" and "socket hang up" errors. The containers started fine the first time but I am unable to stop them through decking (only docker itself). I'm assuming this is because I am running everything on Mac OSX + boot2docker and the IPs in decking are probably expected to be local. Is anyone else experiencing this?

What is going on with versions?

This repo has only one tag, which is version 0.0.4. The package.json of the master branch is at version 0.0.16. The latest version in npm registry is at version 0.0.17.

All three of these should be in sync.

`decking attach` errors when trying to reconnect

Knee deep in hurriedly debugging another project and happened to notice decking throwing this error:

(proxy.build)      Proxy server listening on port 8887
(proxy.build)      {"level":"debug","message":"Redis ready for connections","timestamp":"2014-08-04T18:33:02.454Z"}
(proxy.build)      gone away, will try to re-attach for two minutes...
TypeError: Object [object Object] has no method 'isRunning'
    at null._onTimeout (/usr/lib/node_modules/decking/lib/decking.js:576:17)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)

I've literally not looked into this at all; I suspect a method just went missing during the recent refactoring. Just making a mental note-to-self as I don't have any time to start looking just yet.

/cc @stephenmelrose

Support launching privileged containers

I am planning to hack in support for privileged containers, probably by adding a privileged: (true|false) flag in the container settings. Let me know if there is a different way to do this.
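
As proposed, the container settings would gain something like this (not yet implemented; the image name is invented):

"containers": {
  "builder": {
    "image": "example/builder",
    "privileged": true
  }
}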

Add better example project

Nodeflakes just isn't a good showcase for decking, but sadly the project which spawned it is private. Need to strike a balance and create an example which flexes decking's muscles a bit. Ideally would use a DB (sharded? Rethink?), Redis, web tier (load balanced would be even better), would demonstrate some good dependency stuff and some useful log tracing.

1.0.0 notes

I made some notes on a train the other day - I had no internet access so there is some overlap with issues we've already noted, and they're pretty haphazard, but still wanted to type them up. While there's no obligation at all, I wondered if any contributors (particularly @stephenmelrose) wanted to add any important features or disagreed with anything in the list. There are some questions which I'll remove when I create a proper task list (as per e.g. bootboxjs/bootbox#220) but thought they were worth keeping for now as discussion points.

I'd like to get 1.0.0 out by the end of the month ideally (I'm away next week). Nothing in the list is too daunting so just need to carve out some time. Slipping to October is no big deal; just nice to set some targets (for myself more than anything).

First class data containers (see #56)

Do data containers ever need running or just creating? If they don't need to be run, how do they remount a host volume (e.g. when -v has /foo:/bar as opposed to just -v /foo). Do people ever use a data container for more than just data? Would we need to explicitly mark data containers as such, and if so would an attribute like "data": true against the container(s) suffice?

decking logs (see #9, sort of)

Need to create a command like decking logs [-f] [-n X] to accompany (or really, to improve on) decking attach. When this feature was first built you couldn't choose to follow logs so attach was the only option available. Now in practice attach is a bit useless as you rarely want all output since container creation.

decking attach

Reattach logic is poor and gives up after something like two minutes. Just attempt it forever but with exponential backoff. Do we even need to log the warnings when a container goes away and/or comes back? Perhaps gate them behind a new switch, e.g. -q or --quiet?

decking create (see #31)

Decking create should use the API as opposed to shelling out to docker run <args>. If it did we could create every container asynchronously without having to make sure dependencies are a) created and b) running when creating a dependent container. This would also mean we could ditch running in case of dependents which has always been pretty amateur.

Also, it'd be good to be a bit more granular with the creation status if we can; sometimes downloading an image takes ages, so it looks like it's hanging on creating when in fact it's in the middle of a docker pull.

decking start

Should create missing containers after warning that it's going to do so. It's particularly annoying, when messing about with definitions and manually docker rm-ing a specific container, to have to run decking create again just to create one missing container. This would allow users to just run decking start rather than decking create then decking start when creating a new cluster.

Detect deltas on start or restart (#40). Big task, but it'd be good to do, or at least try. Will need some configuration to avoid always prompting. Current thinking is to add a persistent boolean to the container definition (#5), overridable per group. Not backwards compatible, but better? This flag would also come in handy for...

decking destroy (#4)

Stop and remove all containers in a cluster. Would ignore persistent and data containers. Add flags like --include-persistent to nuke persistent containers too, possibly a separate one for data containers, and a third to nuke both.

decking status

Can we tabularise? Improve output for data containers. Note which containers are persistent. Split columns into ip, ports, etc. Can we show dependencies here? Quite hard to scan at the moment sometimes.

decking recreate

Wrapper for destroy then create.

Miscellaneous

  • debug mode
  • optional ready parameter per container (see the sketch after this list). If omitted, behave as now - just assume the container is 'ready' after a fixed period of time. If port:(n), wait for port (n) to be listening (assume TCP). If a string, watch the logs for that string to appear and assume ready when found. Time out after X. Since a lot of containers will expose a single port, it'd be nice to make this automagical if poss... (#21)
  • ability to specify custom tag when running decking build <image> (#53)
  • add Travis build
  • add build to Jenkins, include coverage
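
On the ready parameter above, a sketch of what the proposed variants might look like in config - the key name and value syntax are just the proposal, nothing is implemented, and the image names are invented:

"containers": {
  "redis": {
    "image": "example/redis",
    "ready": "port:6379"
  },
  "api": {
    "image": "example/api",
    "ready": "Server listening"
  }
}

Omitting ready keeps the current fixed-delay behaviour.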

Multi-machine support ?

Hi, I've read a lot about decking, since I am looking for an orchestration tool for Docker.

I can't seem to figure out whether multi-machine clusters are something that is supported by the decking tool.

decking watch?

Don't want to bloat config, but this might be useful; the ability to restart containers if any files in a given path change. Naturally a container's dependents would be restarted too. Could specify container or cluster level restarts.

Wait for service to start?

Maestro has this feature whereby it waits for containers to actually be ready to accept connections before starting to launch others that depend on them. The implementation for that is here. This feels like a feature decking should have, especially considering the strong support for links.

Add support for '--sticky' (or whatever) flag on decking attach

If present - or perhaps, by default - this should take note of when a container goes away and then quietly try and re-attach when it comes back. I hate having to stop and restart the attach when a container is restarted. For now it can poll but we should be able to use the events endpoint.

Dynamic Environment Variables - Advanced

I'm trying to understand the way Dynamic Environment Variables works and apply it for my situation. I'm setting up two containers. ContainerA is a web host, ContainerB is a MongoDB.

I can see that Decking will leverage Docker's built-in link functionality, and that will cause a bunch of environment variables to be written to ContainerA's environment. However, none of those env vars is the specific variable name I need (MONGOHQ_URL). Now... as far as I can tell, dynamic environment variables require that the host machine (in my case boot2docker-vm) have the environment variable defined, and it can then be mapped to the environment variables for the container being defined.

I need to be able to either:

  • conveniently set host machine environment variables upon completion of any one container, and have control over the order of container instantiations so that I can depend on a variable set by container A being available for container B
    OR
  • use dynamic environment variables to map the contents of VariableB on container A into VariableA on container A; both as a simple map (ENVAR_A=${ENVVAR_B}) and as a token in a more complex string (ENVAR_A=mongodb://${ENVVAR_B}:27099/mydatabase).

Am I missing some functionality that is already available? I feel like I must be, because Decking provides exactly the orchestration I need but this crucial linkage is a deal-breaker - in any composition of sufficiently atomic processes this will invariably be an issue.

Use remote API to create container

The two things I thought it couldn't do:

  1. Name containers
  2. Specify links

It can. Note that 2) only happens at run time, not create; meaning containers can be created asynchronously as deps don't matter - bonus!

Support for first-class data-only containers

This is currently a bit rubbish; data-only containers are started with a cluster, and then show up as stopped on a decking status call. We should build in some extra logic (either automatic or part of the config) so we can treat them separately and not include them in these commands, or at least render them differently so it's clear they don't need to be running as part of a cluster.

They still need to be included in a decking create, though.
