jupyterhub / kubespawner

Kubernetes spawner for JupyterHub

Home Page: https://jupyterhub-kubespawner.readthedocs.io

License: BSD 3-Clause "New" or "Revised" License

Topics: jupyterhub, spawner, kubernetes-cluster, jupyter, jupyterhub-kubernetes-spawner

kubespawner's Introduction

kubespawner (jupyterhub-kubespawner @ PyPI)


The kubespawner (also known as the JupyterHub Kubernetes Spawner) enables JupyterHub to spawn single-user notebook servers on a Kubernetes cluster.

See the KubeSpawner documentation for more information about features and usage. In particular, here is a list of all the spawner options.

Features

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. If you want to run a JupyterHub setup that needs to scale across multiple nodes (anything with over ~50 simultaneous users), Kubernetes is a wonderful way to do it. Features include:

  • Easily and elastically run anywhere between 2 and thousands of nodes with the same set of powerful abstractions. Scale up and down as required by simply adding or removing nodes.

  • Run JupyterHub itself inside Kubernetes easily. This allows you to manage many JupyterHub deployments with only Kubernetes, without requiring an extra layer of Ansible / Puppet / Bash scripts. This also provides easy integrated monitoring and failover for the hub process itself.

  • Spawn multiple hubs in the same kubernetes cluster, with support for namespaces. You can limit the amount of resources each namespace can use, effectively limiting the amount of resources a single JupyterHub (and its users) can use. This allows organizations to easily maintain multiple JupyterHubs with just one kubernetes cluster, allowing for easy maintenance & high resource utilization.

  • Provide guarantees and limits on the amount of resources (CPU / RAM) that single-user notebooks can use. Kubernetes has comprehensive resource control that can be used from the spawner (see the config sketch after this list).

  • Mount various types of persistent volumes onto the single-user notebook's container.

  • Control various security parameters (such as userid/groupid, SELinux, etc) via flexible Pod Security Policies.

  • Run easily in multiple clouds (or on your own machines). Helps avoid vendor lock-in. You can even spread out your cluster across multiple clouds at the same time.

In general, Kubernetes provides a ton of well thought out, useful features - and you can use all of them along with this spawner.
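
For a taste of the configuration style, here is a minimal sketch of setting resource guarantees and limits in a jupyterhub_config.py. The trait names below match current KubeSpawner releases; treat this as an illustration, and see the linked spawner options page for the authoritative list.

c.KubeSpawner.cpu_guarantee = 0.5     # reserve half a CPU per user server
c.KubeSpawner.cpu_limit = 2           # hard cap at 2 CPUs
c.KubeSpawner.mem_guarantee = '512M'  # reserve 512 MB of RAM
c.KubeSpawner.mem_limit = '2G'        # hard cap at 2 GB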

Requirements

JupyterHub

Requires JupyterHub 4.0+

Kubernetes

Everything should work from Kubernetes v1.24+.

The Kube DNS addon is not strictly required - the spawner uses environment variable based discovery instead. Your kubernetes cluster will need to be configured to support the types of volumes you want to use.

If you are just getting started and want a kubernetes cluster to play with, Google Container Engine is probably the nicest option. For AWS/Azure, kops is probably the way to go.

Getting help

We encourage you to ask questions on the Jupyter mailing list. You can also participate in development discussions or get live help on Gitter.

License

We use a shared copyright model that enables all contributors to maintain the copyright on their contributions.

All code is licensed under the terms of the revised BSD license.

Resources

JupyterHub and kubespawner

Jupyter

kubespawner's People

Contributors

analect, athornton, batpad, bertr, betatim, bitnik, captnbp, choldgraf, clkao, consideratio, danielfrg, dependabot[bot], dolfinus, droctothorpe, dtaniwaki, foxish, georgianaelena, grahamdumpleton, gsemet, keithcallenberg, ktongsc, manics, minrk, mriedem, pre-commit-ci[bot], rmoe, saladraider, stv0g, willingc, yuvipanda


kubespawner's Issues

Add support for setting singleuser service account

Currently we disable all service account mounting from inside the pod.

We should allow people to:

  1. Specify that the default serviceaccount needs to be mounted
  2. Specify that an arbitrary service account should be created and then mounted.

The default would still be no service account, but it would be something you can override. This would be great for people who want to launch helm charts from inside the notebook.
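
For readers landing here later: modern KubeSpawner exposes a service_account trait for exactly this. A hedged sketch (the account name is hypothetical, and the ServiceAccount is assumed to be pre-created):

# Mount a pre-created service account into user pods (name is hypothetical).
c.KubeSpawner.service_account = 'notebook-sa'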

Return statement inside a generator error

Hi Yuvi,

Thanks for putting together this nice piece of code.
I wonder if the repo is in a funny state, since I set up JupyterHub and a Kubernetes cluster but I get this error from the KubeSpawner:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "build/bdist.linux-x86_64/egg/kubespawner/__init__.py", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/kubespawner-0.1-py2.7.egg/kubespawner/spawner.py", line 215
SyntaxError: 'return' with argument inside generator

Do you know what is going on?

Cheers,
D

KubeSpawner and HTTP 409 errors

From @ryanlovett on January 24, 2017 5:26

I'm seeing HTTP 409: Conflict errors generated by kubespawner in datahub's hub log:

$ kubectl --namespace=datahub log hub-deployment-2122412827-21jsq | grep HTTP.409
[E 2017-01-24 00:18:40.950 JupyterHub user:251] Unhandled error starting user1's server: HTTP 409: Conflict
    tornado.httpclient.HTTPError: HTTP 409: Conflict
[E 2017-01-24 00:59:51.329 JupyterHub user:251] Unhandled error starting user2's server: HTTP 409: Conflict
    tornado.httpclient.HTTPError: HTTP 409: Conflict
[E 2017-01-24 05:10:16.857 JupyterHub user:251] Unhandled error starting user3's server: HTTP 409: Conflict
    tornado.httpclient.HTTPError: HTTP 409: Conflict
[E 2017-01-24 05:10:52.445 JupyterHub user:251] Unhandled error starting user4's server: HTTP 409: Conflict
    tornado.httpclient.HTTPError: HTTP 409: Conflict

These errors coincide with users who report that "the server redirected you too many times". Here are some sanitized log entries:

[E 2017-01-24 00:18:41.012 JupyterHub web:1548] Uncaught exception POST /hub/api/users/jeffrey_wang1998/server (10.44.12.7)
    HTTPServerRequest(protocol='http', host='datahub.berkeley.edu', method='POST', uri='/hub/api/users/userN/server', version='HTTP/1.1', remote_ip='10.44.12.7', headers={'X-Forwarded-Proto': 'http', 'Accept-Language': 'en-US,en;q=0.8', 'Accept-Encoding': 'gzip, deflate', 'X-Original-Uri': '/hub/api/users/userN/server', 'Dnt': '1', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36', 'X-Requested-With': 'XMLHttpRequest', 'Host': 'datahub.berkeley.edu', 'Origin': 'http://datahub.berkeley.edu', 'Accept': '*/*', 'Content-Length': '0', 'Referer': 'http://datahub.berkeley.edu/hub/admin', 'Cookie': 'jupyter-hub-token="..."; III_EXPT_FILE=aa22678; III_SESSION_ID=...; __cfduid=...; _xsrf=2|7de5eeae|...|1484682597; SESSION_LANGUAGE=eng', 'Content-Type': 'application/json', 'Connection': 'close'})
    Traceback (most recent call last):
      File "/usr/local/lib/python3.4/dist-packages/tornado/web.py", line 1469, in _execute
        result = yield result
      File "/usr/local/lib/python3.4/dist-packages/jupyterhub/apihandlers/users.py", line 171, in post
        yield self.spawn_single_user(user, options=options)
      File "/usr/local/lib/python3.4/dist-packages/jupyterhub/handlers/base.py", line 328, in spawn_single_user
        yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), f)
      File "/usr/local/lib/python3.4/dist-packages/jupyterhub/user.py", line 261, in spawn
        raise e
      File "/usr/local/lib/python3.4/dist-packages/jupyterhub/user.py", line 229, in spawn
        ip_port = yield gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
      File "/usr/local/lib/python3.4/dist-packages/kubespawner/spawner.py", line 517, in start
        headers={'Content-Type': 'application/json'}
    tornado.httpclient.HTTPError: HTTP 409: Conflict

Copied from original issue: data-8/jupyterhub-k8s#100

Versions & tags

We are just beginning to make use of kubespawner for our Jupyter deployments. (Many thanks, by the way!) The version currently released on PyPI (0.5.1) doesn't work for us, whereas master does. Is there a timeline for cutting the next release from master? Will that likely be 0.5.1 or 0.5.2?

For the moment, we've forked and created a fake pre-release tag (https://github.com/IDR/kubespawner/releases/tag/0.5.2-IDR1) but would like to match your plans to whatever extent is possible.

All the best,
~Josh

Username issues with usernames that are subsets of other usernames

Not 100% sure this is an issue with the spawner, but if a pod is spun up with username test1, then one with the name test11 cannot be created (the system acts like it already exists and redirects to a 404 page).

I tested this with various names, and it seems to always be an issue whenever a pod for a username already exists and the new pod's name starts with the existing name. Guessing it's an issue with some lookup for the pod name.

minikube + kubespawner broke docker.

I am doing the minikube + kubespawner setup for a dev environment as explained in SETUP.md. Works great. However, I think the ip route surgery has broken my docker network, and I'm not knowledgeable enough to fix it offhand.

My $ ip route output:

10.41.1.69 via 10.50.76.1 dev eno1 proto dhcp metric 100
10.50.76.0/24 dev eno1 proto kernel scope link src 10.50.76.167 metric 100
169.254.0.0/16 dev eno1 scope link metric 1000
172.17.0.0/16 via 192.168.99.100 dev vboxnet0
192.168.57.0/24 dev vboxnet1 proto kernel scope link src 192.168.57.1
192.168.99.0/24 dev vboxnet0 proto kernel scope link src 192.168.99.1

My $ tshark -i docker0 output when I try to build a Dockerfile, as it attempts apt-get update (its first attempt to access the internet):

1 0.000000000 :: -> ff02::16 ICMPv6 90 Multicast Listener Report Message v2
2 0.142017149 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
3 0.395986176 :: -> ff02::16 ICMPv6 90 Multicast Listener Report Message v2
4 0.616012275 :: -> ff02::1:ff11:2 ICMPv6 78 Neighbor Solicitation for fe80::42:acff:fe11:2
5 1.139960068 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
6 1.616047736 fe80::42:acff:fe11:2 -> ff02::16 ICMPv6 90 Multicast Listener Report Message v2
7 1.616080098 fe80::42:acff:fe11:2 -> ff02::2 ICMPv6 70 Router Solicitation from 02:42:ac:11:00:02
8 1.751993366 fe80::42:acff:fe11:2 -> ff02::16 ICMPv6 90 Multicast Listener Report Message v2
9 2.139955486 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
10 5.147103305 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
11 5.627963553 fe80::42:acff:fe11:2 -> ff02::2 ICMPv6 70 Router Solicitation from 02:42:ac:11:00:02
12 6.143958903 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
13 7.143982455 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
14 8.147424895 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
15 9.143964178 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
16 9.635993739 fe80::42:acff:fe11:2 -> ff02::2 ICMPv6 70 Router Solicitation from 02:42:ac:11:00:02
17 10.143963422 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2
18 14.147715077 02:42:ac:11:00:02 -> Broadcast ARP 42 Who has 172.17.0.1? Tell 172.17.0.2

... it goes on like that until it times out. My host is on docker0 at 172.17.0.1. For whatever reason, it looks as though my host is refusing to tell my docker container who it is, but why that would be and how to fix it, I do not know.

I've tried restarting docker, and even reinstalling docker, but to no avail.

Did you have any similar issues when setting up your minikube + kubespawner?

Never orphan pods

Right now, if we start a pod but it doesn't successfully launch in the launch timeout, we just sort of let it exist forever, in an orphaned state. This is very problematic!

We should make sure we clean up after ourselves if we do not launch in time.

"stop my server" results in hung docker instance if user clicks on "my server" before it is done

We are testing this on gcloud.

If the user goes to /hub/home, clicks "stop my server", and waits, they eventually see "my server", and when they click on that it does work. BUT if the user clicks away and clicks on "my server" while the server is still stopping, they get into a state where kubectl says the pod for the given username exists (and thus a new one cannot be started), but JupyterHub seems to think none exists. So "start my server" shows up on /hub/home, and when the user clicks on it they get into an infinite redirect loop between
/hub/user/<username>
and
/user/<username>

Some high-level questions around usage of kubespawner

@yuvipanda
Thanks for all your work on kubespawner. I've started experimenting with running jupyterhub on kubernetes, largely thanks to this spawner, but I wanted to get some guidance around my use-cases / workflow from someone a bit more seasoned in this technology. I'm structuring these as a series of high-level questions, where your input would be much appreciated. For ease of explanation, I may refer to the rough sketch below.

[sketch omitted: (1) private/shared repos → (2) image registry → (3) users → (4) spawned compute environments on kubernetes → (5) repo cloning → (6) persisted work]

My efforts so far, for context:
I was working through the data-8/jupyterhub-k8s implementation, which I think bases itself off your work, since its structure in chart form (for helm) is the easiest to work with, compared to some of the other implementations I've found out there.

I modified that set-up slightly to handle gitlab authentication (rather than google), which worked OK, but I wasn't able to get the spawning of their large user image (>5GB), based on this Dockerfile, and their hub image to work. It was constantly stuck in a Waiting: ContainerCreating state and would then try to re-spawn itself. I haven't figured out what the problem is, but there appears to be plenty of space on the cluster. I'm using v1.5.1 of kubernetes on GCE.

Anyway, I ended up getting things working using instead the hub image (Dockerfile below), a variation of the data-8 one, in conjunction with your yuvipanda/simple-singleuser:v1 user image.

FROM jupyterhub/jupyterhub-onbuild:0.7.1
# Install kubespawner and its dependencies
RUN /opt/conda/bin/pip install \
    oauthenticator==0.5.* \
    git+https://github.com/derrickmar/kubespawner \
    git+https://github.com/yuvipanda/jupyterhub-nginx-chp.git
ADD jupyterhub_config.py /srv/jupyterhub_config.py
ADD userlist /srv/userlist
WORKDIR /srv/jupyterhub
EXPOSE 8081
CMD jupyterhub --config /srv/jupyterhub_config.py --no-ssl

This was able to spawn new user persistent volumes, bind them to PVCs and obviously spawn user jupyter notebook servers, which could be stopped/started and re-use the same PV. My initial tests as to whether new files/notebooks were getting persisted on the PV were failing, since I wasn't saving them under /home, which is where the binding to the volume is happening.

i. user management / userid - After various aborted attempts to get the larger data-8 user image working (in which user PVs weren't deleted), I noticed that the userid appended to the username for naming the PV incremented upward, but it wasn't clear where this numbering logic was coming from, as it wasn't an env variable in any of the manifests. Is this some fail-safe of some sort?

Currently, I'm using a whitelist userlist for users (see code from jupyterhub_config.py below), and these correspond with my users' gitlab logins that I'm authenticating against. However, it's probably not a clean solution. I see you are working on another approach with the fsgroup and just wanted to get a better understanding of the context of that solution?

# Whitelist users and admins
import os

c.Authenticator.whitelist = whitelist = set()
c.Authenticator.admin_users = admin = set()
c.JupyterHub.admin_access = True
pwd = os.path.dirname(__file__)
with open(os.path.join(pwd, 'userlist')) as f:
    for line in f:
        if not line.strip():  # skip blank lines (a bare `if not line` never fires here)
            continue
        parts = line.split()
        name = parts[0]
        whitelist.add(name)
        if len(parts) > 1 and parts[1] == 'admin':
            admin.add(name)

ii. possibility for interchangeable images - I find the current default set-up with Jupyterhub allowing for spawning a single image very limiting. I can see from #14 that you are considering extending functionality in the kubespawner to allow for an image to be selected. @minrk was able to confirm over here that it could be possible to pass this image selection programmatically via the jupyterhub API, although I'm not sure, as per this issue, as to whether the hub API will work in a kubernetes context.

You pointed to an implementation by Google here. It's not clear to me where they are deriving their list of available images. How do you think something like this should work?

As per the sketch up top, I'm looking to handle a set-up where users have various private/shared repos (marked 1 above in the sketch), from which docker images are generated and stored in a registry (2 above). Then my users (3 above) would be able to spawn a compute environment for their chosen repo and have it spawned in kubernetes (4 above), with the possibility, from 5 above, of having the repo cloned (maybe leveraging gitRepo), and of any incremental work performed on it while on the notebook server being persisted (6).

iii. multiple simultaneous servers per user based on different images - As far as I understand, it's not presently possible with jupyterhub to allow a user to have multiple instances of a notebook server, each running a different image? Do the tools exist within kubernetes to potentially facilitate this? Thinking out loud, could this be facilitated by having multiple smaller persistent volumes for a user, based on the repo from which the server image is derived? Or maybe this could be achieved within a single PV, by using the subPath functionality?

c.KubeSpawner.volumes = [
    {
        'name': 'volume-{username}-{repo-namespace}-{repo-name}',
        'persistentVolumeClaim': {
            'claimName': 'claim-{username}-{repo-namespace}-{repo-name}'
        }
    }
]

iv. ideas around version-control - Given the various advantages derived from using kubernetes to host jupyter, I would be curious whether you have thoughts on whether kubernetes also potentially makes it easier to manage version control for notebooks and other files created while a user works in a notebook server environment. Perhaps something like preStop hooks could be used to commit and push changes prior to a container shutting down.

Even facilitating a user to be able to run git commands from a notebook server terminal .. and have SSH keys back to the version-control system handled via the kubernetes secrets/config maps might be a start. Have you seen any implementations solving this?

Thanks for your patience in reading through this!

Deprecate redundant singleuser_ prefix on config

When configuring the Spawner, several traits have a singleuser_ prefix. This is mostly redundant, given that a Spawner is all about configuring a singleuser server. Most (probably all) of these should be deprecated in favor of the same name without singleuser_. The singleuser_ names can be kept as deprecated aliases in config to avoid breaking existing config.
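
A hedged illustration of the rename pattern, using the image trait as an example; the post-deprecation spelling shown is how things eventually landed, not a spec from this issue:

# Before (deprecated alias kept for backwards compatibility):
c.KubeSpawner.singleuser_image_spec = 'jupyter/base-notebook'
# After:
c.KubeSpawner.image_spec = 'jupyter/base-notebook'  # later shortened again to plain `image`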

Specify container command

Is it possible to specify the container command via config? To use the docker stacks, command must be set to /usr/local/bin/singleuser.sh, but I can't see how to do it here. In DockerSpawner, it's passed to docker.create
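
For context, the generic Spawner.cmd trait is how this is spelled in jupyterhub_config.py today; whether the kubespawner of this era honored it is exactly what the issue is asking. A hedged sketch:

# Override the container's default command (list form, like a Docker CMD).
c.KubeSpawner.cmd = ['/usr/local/bin/singleuser.sh']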

kubespawner could not be imported

Hello everyone. I was trying to use the kubespawner, but I got an error when I tried to start the hub server. First I cloned the repo, edited the jupyterhub_config.py file (I only removed the authentication method part), then executed jupyterhub -f jupyterhub_config.py, but I got this output: The 'spawner_class' trait of <jupyterhub.app.JupyterHub object at 0x7efeb59e4240> instance must be a type, but 'kubespawner.KubeSpawner' could not be imported

What can it be? My version of jupyterhub is 0.7.2

Add support for specifying a hash of username in pod name / pvc templates

Currently we support substituting {userid} with the id of the user in the jupyterhub database in pod names / PVCs. This is bad since losing the database now means we can't match users with their pods again, and more importantly we can not match them with their PVCs. This is what happened in https://github.com/data-8/infrastructure/blob/master/incidents/2017-02-09-datahub-db-outage.md.

We should strive to make the database not contain any irreplaceable information. Since the username is already provided to us by the authenticator, the userid is the only bit of information that doesn't exist outside the database. Just using the username in pod names isn't good enough, since we have to restrict pod names to a subset of ASCII, which could cause clashes. Userid prevents these clashes, but causes problems when your db goes away.

So let's add support for {USERNAMEHASH} or somesuch, that allows us to add a hash of the username to the pod / PVC names. This should pretty much fix all our problems.
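
An illustrative (not canonical) way to derive such a hash, assuming SHA-256 truncated to a short, label-safe hex prefix:

import hashlib

def username_hash(username, length=8):
    # Stable, ASCII-safe digest of the username. It survives database
    # loss, and an 8-hex-char prefix is extremely unlikely to collide
    # at typical hub sizes.
    return hashlib.sha256(username.encode('utf-8')).hexdigest()[:length]

# e.g. expand a hypothetical template key like 'jupyter-{usernamehash}'
# when building pod / PVC names.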

Add ability to handle imagePullSecrets for private docker registries

@yuvipanda
I was wondering if the kubespawner could be extended to handle a new parameter imagePullSecrets.

My use-case is that I'm using a private registry to contain images. I'm running a cluster on GKE, which enables automatic authentication against the Google Cloud Registry (gcr.io), but not private ones.

As per these docs, there are a couple of work-arounds. Specifically at the section Creating a Secret with a Docker Config, it shows how a secret for the private registry can be created.

Then, in order for an image referenced in a kubernetes manifest file to get pulled (in our case, the image used to underpin a notebook server), the imagePullSecrets parameter additionally needs to be supplied.

Here is an example of what I mean.

In the meantime, if I wanted to hard-code in a secret name, could I just add in that parameter here?

Thanks.
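
For readers landing here later: this did become configurable. A hedged sketch, assuming a secret named my-registry-secret was already created with kubectl create secret docker-registry (in some older versions the trait takes a single string rather than a list):

# Reference an existing docker-registry secret when pulling the user image.
c.KubeSpawner.image_pull_secrets = ['my-registry-secret']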

Kubespawner stable release?

Releases tab shows zero. Is there any plan to make a stable release available, or should we just pull from master for production builds? Thanks.

resource variables are not declared in spawner.py

The variables cpu_limit, cpu_guarantee, mem_limit, and mem_guarantee used in spawner.py:328 are not defined and as a result cause a runtime exception:

[E 2016-12-08 19:07:56.594 JupyterHub user:237] Unhandled error starting ashoknn's server: 'KubeSpawner' object has no attribute 'cpu_limit'
[E 2016-12-08 19:07:56.657 JupyterHub web:1548] Uncaught exception GET /hub/user/ashoknn (10.117.4.23)
    HTTPServerRequest(protocol='http', host='a.b.c.d:8000', method='GET', uri='/hub/user/ashoknn', version='HTTP/1.1', remote_ip='e.f.g.h', headers={'X-Forwarded-Proto': 'http', 'Referer': 'http://a.b.c.d:8000/hub/home', 'Accept-Encoding': 'gzip, deflate', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'X-Bluecoat-Via': '175ec149a046caea', 'Connection': 'close', 'Accept-Language': 'en-US,en;q=0.5', 'X-Forwarded-Port': '8000', 'Cache-Control': 'max-stale=0', 'X-Forwarded-Host': 'a.b.c.d:8000', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:50.0) Gecko/20100101 Firefox/50.0', 'Upgrade-Insecure-Requests': '1', 'X-Forwarded-For': '10.249.4.10,10.117.4.23', 'Pragma': 'no-cache', 'Cookie': 'jupyter-hub-token="2|1:0|10:1481069512|17:jupyter-hub-token|44:NDFmNWQwMGRiOGZmNDI0OWExMzZiZDM5Njg4MThlM2M=|3bed416fd1f99cd52cd8c92a8721554b818fff21d72b6fd1274e743a1c539bbc"', 'Host': '100.70.20.226:8000'})
    Traceback (most recent call last):
      File "/usr/lib64/python3.4/site-packages/tornado/web.py", line 1469, in _execute
        result = yield result
      File "/usr/lib/python3.4/site-packages/jupyterhub/handlers/base.py", line 494, in get
        yield self.spawn_single_user(current_user)
      File "/usr/lib/python3.4/site-packages/jupyterhub/handlers/base.py", line 312, in spawn_single_user
        yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), f)
      File "/usr/lib/python3.4/site-packages/jupyterhub/user.py", line 247, in spawn
        raise e
      File "/usr/lib/python3.4/site-packages/jupyterhub/user.py", line 228, in spawn
        yield gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
      File "/home/cloud-user/bqnt-cloud-priv/scripts/bqnt-jupyterhub-testing/kubespawner/kubespawner/kubespawner/spawner.py", line 452, in start
        pod_manifest = self.get_pod_manifest()
      File "/home/cloud-user/bqnt-cloud-priv/scripts/bqnt-jupyterhub-testing/kubespawner/kubespawner/kubespawner/spawner.py", line 328, in get_pod_manifest
        self.cpu_limit,
    AttributeError: 'KubeSpawner' object has no attribute 'cpu_limit'

Documentation and Contribution

Hi,

I'm really interested in trying to get this working, and have no issue contributing code or Docs for this spawner.

If you could let me know how to get this working, that would be great. :) I can then contribute back some docs at least, and fix any bugs I find.

Cheers,

Morgan

Kubespawner fails to spawn singleuser POD

I have been testing the kubespawner for a few weeks now and recently switched to the current master branch. Since then, I am unable to spawn the singleuser pod, while this worked with a previous version of the kubespawner with Jupyterhub 0.7.2 (bare metal cluster with Kubernetes 1.5.2).

The working kubespawner version used httpclient + k8s_url, while the new version uses the kubernetes python-client CoreV1Api directly.
The logs showed that the claim request actually works:

header: Content-Type header: Date header: Transfer-Encoding send: b'POST /api/v1/namespaces/notebook/persistentvolumeclaims HTTP/1.1\r\nHost: 10.10.10.1\r\nAccept-Encoding: identity\r\nContent-Length: 368\r\nAccept: application/json\r\nauthorization: bearer eyJhbGciOiJSUzI1NiIsInR5cCI6Ik...\r\nContent-Type: application/json\r\nUser-Agent: Swagger-Codegen/1.0.0-snapshot/python\r\n\r\n' send: b'{"spec": {"resources": {"requests": {"storage": "1Gi"}}, "accessModes": ["ReadWriteMany"]}, "apiVersion": "v1", "metadata": {"labels": {"heritage": "jupyterhub", "hub.jupyter.org/username": "toto", "app": "jupyterhub"}, "name": "pvcjupyterhub-singleuser", "annotations": {"volume.beta.kubernetes.io/storage-class": "glusterfs"}}, "kind": "PersistentVolumeClaim"}' reply: 'HTTP/1.1 409 Conflict\r\n'

The code I added to retrieve the jupyterhub accessible IP from the service name also works:

header: Content-Type header: Date header: Content-Length send: b'GET /api/v1/namespaces/notebook/services/jupyterhub?exact=True&export=False HTTP/1.1\r\nHost: 10.10.10.1\r\nAccept-Encoding: identity\r\nAccept: application/json\r\nauthorization: bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...\r\nContent-Type: application/json\r\nUser-Agent: Swagger-Codegen/1.0.0-snapshot/python\r\n\r\n' reply: 'HTTP/1.1 200 OK\r\n'

The request to kubernetes to create the POD times out:

header: Content-Type header: Date header: Content-Length send: b'POST /api/v1/namespaces/notebook/pods HTTP/1.1\r\nHost: 10.10.10.1\r\nAccept-Encoding: identity\r\nContent-Length: 1676\r\nAccept: application/json\r\nauthorization: bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9....\r\nContent-Type: application/json\r\nUser-Agent: Swagger-Codegen/1.0.0-snapshot/python\r\n\r\n' /opt/conda/lib/python3.5/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) 2017-07-28 07:15:50,705 DEBUG https://10.10.10.1:443 "POST /api/v1/namespaces/notebook/pods HTTP/1.1" 201 1965 DEBUG:urllib3.connectionpool:https://10.10.10.1:443 "POST /api/v1/namespaces/notebook/pods HTTP/1.1" 201 1965

[W 2017-07-28 07:16:00.552 JupyterHub base:336] User toto's server is slow to start (timeout=10)
[W 2017-07-28 07:20:50.552 JupyterHub user:246] toto's server failed to start in 300 seconds, giving up
[E 2017-07-28 07:20:50.614 JupyterHub gen:878] Exception in Future <tornado.concurrent.Future object at 0x7fd6b44f9898> after timeout
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.5/site-packages/tornado/gen.py", line 874, in error_callback
        future.result()
      File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 261, in spawn
        raise e
      File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 229, in spawn
        ip_port = yield gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
    tornado.gen.TimeoutError: Timeout

At this point I'm not sure what goes wrong so if you have any clue on this that will help.

periodic reflector health check

We've seen a few cases where the reflector started missing events. We should probably have a periodic re-sync to make sure that the reflector is alive and well and in-sync with Kubernetes. While #81 fixes one known race condition, I suspect that there are other ways for events to be missed.

ReadTheDocs for Kubespawner appears to be duplicating various items (PDF and epub)

@yuvipanda @willingc

I was running through the kubespawner docs today, reacquainting myself with some changes (new features) introduced since I last looked at it earlier this year. I notice that under the 'v:latest' Spawners section, many of the config settings are repeated unnecessarily, leading to some bloat in section 4.3 (if you generate the PDF). So, for example, from the link above, if you search for config c.KubeSpawner.cmd = Command(), you'll find two entries in the docs. Not sure if this is meant by design or not.

From the PDF version (below), various config items described on pages 7/8 of the PDF are repeated on page 15. I'm just showing a few here ... I took a look at the /docs folder in the repo, but I'm not familiar enough with the readthedocs protocol to fix this ... if indeed it is an error. Just thought I'd flag it to you.

[screenshot omitted: duplicated config entries in the generated PDF]

jupyterhub not understanding pod network

There's probably a config setting for this...
I have a real network on 10.0.0.0/8 on which I have a master and a node. Weave runs on these giving me a pod network on 192.168.0.0/16. When I launch jupyterhub on the metal on the master machine with kubespawner and the basic configs, it's confused because it's trying to access the 192 network:
Failed to connect to http://192.168.96.1:8888/user/scott
but jupyterhub doesn't have access to that network.
I found a few flags for jupyterhub that could be relevant: "ip", "proxy_api_ip" and "hub_ip", but I'm not super familiar with jupyterhub or kubernetes. Any direction is appreciated.

Reduce number of HTTP requests we make to the k8s API

When the hub starts, it polls to find the state of all user pods. However, it does this by making an individual call per pod, and this causes problems with tornado's HTTP client on large (>1k) installations. You end up with a lot of:

tornado.httpclient.HTTPError: HTTP 599: Timeout in request queue

exceptions.

We should instead move to a model similar to what kubernetes itself does:

  1. Fetch all pods and put them in a local data structure
  2. Use native watch functionality to watch for pod status changes, and update the local data structure
  3. Treat the local data structure as the canonical source.

This would need to be properly threaded and perhaps use the kubernetes API client library.
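
A minimal sketch of that list + watch pattern with the official Python client; the 'jhub' namespace is a placeholder, and real code would also need reconnect/resync handling:

from kubernetes import client, config, watch

config.load_incluster_config()  # or config.load_kube_config() outside the cluster
v1 = client.CoreV1Api()
namespace = 'jhub'  # placeholder

# 1. Fetch all pods once into a local data structure.
pods = {p.metadata.name: p for p in v1.list_namespaced_pod(namespace).items}

# 2. Watch for changes and keep the local copy in sync.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace):
    pod = event['object']
    if event['type'] == 'DELETED':
        pods.pop(pod.metadata.name, None)
    else:  # ADDED / MODIFIED
        pods[pod.metadata.name] = pod

# 3. Polling a user's server is now a local dictionary lookup against `pods`.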

Allow launching user-based images

This is related to #75 & #79 .

We would like to be able to launch a singleuser image that varies based on the authenticated user.

I like the ideas presented in #79 around having a ConfigMap that stores a userId -> option mapping, with a selector presented to the user.

As a stopgap, having singleuser_image_spec accept a Callable value taking the user object would be sufficient.

This is something that I will likely have time to work on implementing in the coming months.

Allow Jupyterhub-kubespawner to connect to an insecure Kubernetes server

I have a Kubernetes server with a self-signed certificate, but by default the connection from the jupyterhub server to kubernetes is refused as insecure.
This is the kube config file:

# cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: $(a_big_cert)
    server: https://10.6.91.18:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: $(a_big_hash)
    client-key-data: $(a_big_hash)

This is the output of the starting server:

# jupyterhub -f /etc/jupyterhub/kubespawner/jupyterhub_config.py
[...]
[W 2017-08-23 18:22:49.567 JupyterHub iostream:1276] SSL Error on 9 ('10.6.91.18', 6443): [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:720)
[E 2017-08-23 18:22:49.567 JupyterHub app:1527]
    Traceback (most recent call last):
      File "/usr/lib/python3.5/site-packages/jupyterhub/app.py", line 1524, in launch_instance_async
        yield self.initialize(argv)
      File "/usr/lib64/python3.5/types.py", line 179, in throw
        return self.__wrapped.throw(tp, *rest)
      File "/usr/lib/python3.5/site-packages/jupyterhub/app.py", line 1315, in initialize
        yield self.init_spawners()
      File "/usr/lib/python3.5/site-packages/jupyterhub/app.py", line 1087, in init_spawners
        status = yield spawner.poll()
      File "/usr/lib/python3.5/site-packages/jupyterhub_kubespawner-0.5.1-py3.5.egg/kubespawner/spawner.py", line 510, in poll
        data = yield self.get_pod_info(self.pod_name)
      File "/usr/lib/python3.5/site-packages/jupyterhub_kubespawner-0.5.1-py3.5.egg/kubespawner/spawner.py", line 433, in get_pod_info
        pod_name,
      File "/usr/lib64/python3.5/site-packages/tornado/stack_context.py", line 314, in wrapped
        ret = fn(*args, **kwargs)
      File "/usr/lib64/python3.5/site-packages/tornado/gen.py", line 267, in <lambda>
        future, lambda future: callback(future.result()))
      File "/usr/lib64/python3.5/site-packages/tornado/tcpclient.py", line 174, in connect
        server_hostname=host)
      File "/usr/lib64/python3.5/site-packages/tornado/iostream.py", line 1259, in _do_ssl_handshake
        self.socket.do_handshake()
      File "/usr/lib64/python3.5/ssl.py", line 996, in do_handshake
        self._sslobj.do_handshake()
      File "/usr/lib64/python3.5/ssl.py", line 641, in do_handshake
        self._sslobj.do_handshake()
    ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:720)

And kubectl doesn't have a problem seeing the cluster:

kubectl get nodes
NAME              STATUS    AGE
cassaca-node004   Ready     39m
cassaca-node005   Ready     44m
cassaca-node007   Ready     45m
cassaca-node008   Ready     1h

Is there a way to tell kubespawner that it can trust the server? A flag or config? Thanks!
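
The kubespawner of this era talked to the API server with Tornado's HTTP client, so the exact knob depends on the version. With the official Python client that later versions use, a heavily hedged sketch of the two usual outs (pointing at the cluster CA is the safer one; disabling verification is a last resort, and the host/CA paths below are placeholders):

import kubernetes.client

conf = kubernetes.client.Configuration()
conf.host = 'https://10.6.91.18:6443'
conf.ssl_ca_cert = '/etc/kubernetes/pki/ca.crt'  # trust the cluster's self-signed CA
# conf.verify_ssl = False                        # insecure last resort
kubernetes.client.Configuration.set_default(conf)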

Long user names used in pod and pvc labels not allowed by k8s

The hub.jupyter.org/username label in pod or pvc manifests (e.g. https://github.com/jupyterhub/kubespawner/blob/master/kubespawner/spawner.py#L663) will make the creation of the pod fail when the user name has more than 63 characters, with an error like this:

error: invalid label value: "hub.jupyter.org/username=notebook-a8e986f5974c1ee896fa7717cad98511be0388b3d667535d9af7fe18feab7e3d": must be no more than 63 characters

The username I'm using comes from an external OAuth2 IdP and has a fixed length of 64 characters so any pod creation will always fail. As a workaround I have changed the label to hub.jupyter.org/userid and use the id which should be small enough. However I'm not sure if that's the right approach.

Allow spawning a namespace per user

Right now we use one namespace for all user pods. It would be great to allow spawning one namespace per user, so users can do additional things in their namespaces (like spawning other clusters).
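
For readers landing here later: this eventually landed as a config flag in later KubeSpawner releases. A hedged sketch:

# Spawn each user's pod into its own namespace instead of a shared one.
c.KubeSpawner.enable_user_namespaces = True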

Jupyterhub: supporting multiple service accounts & namespaces with a single server instance

The readme says:

Spawn multiple hubs in the same kubernetes cluster, with support for namespaces. You can limit the amount of resources each namespace can use, effectively limiting the amount of resources a single JupyterHub (and its users) can use. This allows organizations to easily maintain multiple JupyterHubs with just one kubernetes cluster, allowing for easy maintenance & high resource utilization.

In another model, where there is a single hub, we may have individual user pods (notebook instances) created in different namespaces using different service accounts (on behalf of a user), where the association between JupyterHub identity and pod service-account is maintained separately in a configmap.
Has something like this been explored before? Are you aware of limitations if one were to try and do this?

This would be related to #76 and #75

cc @yuvipanda

Kubespawner jupyterhub dead kernel

Hi,

We tried setting up the kubespawner on two separate local machines running Ubuntu 16.04, but both times got a dead kernel issue. That is, when a Python 3 notebook is chosen, it displays a "kernel starting, please wait" message, and just after the kernel finally starts, it immediately dies. After that it tries to restart without any success.

We followed the exact steps described in the https://github.com/jupyterhub/kubespawner/blob/master/SETUP.md.

The only difference was that in the jupyterhub_config.py, the following changes were carried out (as at first the DummyAuthenticator did not allow signing in):

import os
from dummyauthenticator import DummyAuthenticator
...
c.JupyterHub.authenticator_class = DummyAuthenticator

We tried other images as well by using yuvipanda/tiniest-notebook:v1.3 and yuvipanda/tiniest-notebook:v1.2 instead of the default yuvipanda/simple-singleuser:v1. In the first case the server could not start up, whereas in the second the dead kernel issue persisted.

After some general search on this issue, I modified the argument list of the kernel.json found in the .local/share/jupyter/kernels/python3/ directory so that it contains "python3" instead of "python" (this served as a solution to this issue for others). However, this did not solve the issue either (even though I was not sure if this is the same kernel.json file that is used by kubespawner).

Could you perhaps suggest any solutions to this issue?

Thank you very much.

The server should be self.server to work with 0.8 hub instead of self.user.server

Adding based on an interaction with @minrk on the jupyterhub slack channel here.

It was in the context of testing launching a named-server using the 0.5.x branch of zero-to-jupyterhub set-up (ie. on kubernetes).

I had previously been testing against /hub/api/users/:user/server/:server_name instead of /hub/api/users/:user/servers/:server_name (ie. missing an 's').

Now running this, I still get a 500 error, as per these logs, which complains about a missing JPY_COOKIE_NAME. As per @minrk, this relates to the get_env method here, which he suggests should be removed and which I see has been marked as deprecated.

In the interim, to get around this, is it enough to add something like this to the config.yaml in the helm chart, or would that only partly solve things?

hub:
  extraEnv:
    JPY_COOKIE_NAME: 'hub-secret'

Potential for enabling version-control in conjunction with a notebook server

@yuvipanda
I would value your thoughts on some experimental work I've been doing to try to get a basic version-control for a notebook environment working ... which was part of the 'high-level questions' here. I realise that version-control from the notebook isn't native, yet, at least, but it seems that tooling is in place in a kubernetes context to make it more viable, and I wanted to solicit your input.

Background
Here's a recap of the proposed workflow for getting work performed in a notebook server session captured back to the source git repo.

From a given gitlab/github repo (1), an image gets generated by the CI build and pushed to a registry (2). A notebook server environment gets spawned for userA (3). Source code from the repo (rather than being contained in the image) is pulled using a git-sync sidecar (4) and mounted against /user/jovyan/work (5). In order to persist work performed on the server in case the pod falls over, persistent storage to a pvc (persistent-volume-claim) is set up, using a user_repo-shortname combination (6). git-sync appears to be designed for pulling from a repo rather than pushing back to one, so the step (7) to get changes pushed back to a branch for merging into master isn't fully clear; but git-sync does allow ssh to be used for communication with the repo, so having ssh on board the notebook server pod (as a sidecar) may help with pushing changes back.

[sketch omitted: repo (1) → CI image build → registry (2) → spawned notebook server (3) with git-sync sidecar (4) mounting source (5), per-user PVC (6), and a push-back path (7)]

Just a note ref (6) above, I realise kubespawner is currently set up to handle a single server per user and this also feeds into a model for a single pvc per user. I know work is afoot elsewhere to potentially allow for multiple servers per user, which ties in with the model I describe above, where a user may have a server/pvc for each user-repo combination, although they wouldn't necessarily be running all of them at the same time.

git-sync in conjunction with kubespawner
The basic idea is that instead of kubespawner spawning a pod with a single notebook-server container, it would instead spawn a server and a side-car git-sync container. This second container could be used to (a) clone a specified repo to 'seed' a notebook server working environment and (b) also set up an SSH facility for interacting with the source repo.

I've been doing some experiments on a fork of kubespawner here to try to get this working. At this point, it's not a very generalised approach and I've hard-coded in settings for how that git-sync container gets deployed as well as modifying the notebook-server deployment to allow for various volume mounts so that both containers can have shared volumes ... to share the git repo cloned as well as the git-ssh set-up.

I think I must be missing something important, or perhaps the spawner is not designed to handle what would effectively be two containers in a single pod. I'm getting back these error logs from the hub when I try to spawn a server and sidecar git-sync. By the way, I'm spawning the image using the JupyterHub API, which has been working for me so far (ie. without these kubespawner modifications I've made in the fork discussed above).

2017-02-19T15:23:19.528392217Z [I 2017-02-19 15:23:19.528 JupyterHub spawner:543] Pvc claim-testuser-2 already exists, so did not create new pvc.
2017-02-19T15:23:19.545075936Z [E 2017-02-19 15:23:19.544 JupyterHub user:251] Unhandled error starting testuser's server: HTTP 400: Bad Request
2017-02-19T15:23:19.575581327Z [E 2017-02-19 15:23:19.572 JupyterHub web:1548] Uncaught exception POST /hub/api/users/testuser/server (10.4.6.120)
2017-02-19T15:23:19.575617804Z     HTTPServerRequest(protocol='http', host='jupyterhub.myserver.com', method='POST', uri='/hub/api/users/testuser/server', version='HTTP/1.1', remote_ip='10.4.6.120', headers={'Content-Length': '130', 'User-Agent': 'curl/7.50.2', 'Authorization': 'token testuser-token', 'Accept': '*/*', 'Content-Type': 'application/x-www-form-urlencoded', 'Connection': 'close', 'Host': 'jupyterhub.myserver.com', 'X-Original-Uri': '/hub/api/users/testuser/server', 'X-Forwarded-Proto': 'http'})
2017-02-19T15:23:19.575623427Z     Traceback (most recent call last):
2017-02-19T15:23:19.575626369Z       File "/opt/conda/lib/python3.5/site-packages/tornado/web.py", line 1469, in _execute
2017-02-19T15:23:19.575629377Z         result = yield result
2017-02-19T15:23:19.575631950Z       File "/opt/conda/lib/python3.5/types.py", line 179, in throw
2017-02-19T15:23:19.575634819Z         return self.__wrapped.throw(tp, *rest)
2017-02-19T15:23:19.575637874Z       File "/opt/conda/lib/python3.5/site-packages/jupyterhub/apihandlers/users.py", line 171, in post
2017-02-19T15:23:19.575641496Z         yield self.spawn_single_user(user, options=options)
2017-02-19T15:23:19.575644136Z       File "/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 328, in spawn_single_user
2017-02-19T15:23:19.575647180Z         yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), f)
2017-02-19T15:23:19.575649850Z       File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 261, in spawn
2017-02-19T15:23:19.575653244Z         raise e
2017-02-19T15:23:19.575656676Z       File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 229, in spawn
2017-02-19T15:23:19.575668436Z         ip_port = yield gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
2017-02-19T15:23:19.575671217Z       File "/opt/conda/lib/python3.5/site-packages/kubespawner/spawner.py", line 557, in start
2017-02-19T15:23:19.575673917Z         headers={'Content-Type': 'application/json'}
2017-02-19T15:23:19.575682356Z     tornado.httpclient.HTTPError: HTTP 400: Bad Request
2017-02-19T15:23:19.575685374Z     
2017-02-19T15:23:19.587148679Z [E 2017-02-19 15:23:19.586 JupyterHub log:99] {
2017-02-19T15:23:19.587172100Z       "Content-Length": "130",
2017-02-19T15:23:19.587175436Z       "User-Agent": "curl/7.50.2",
2017-02-19T15:23:19.587178226Z       "Authorization": "token [secret]",
2017-02-19T15:23:19.587190739Z       "Accept": "*/*",
2017-02-19T15:23:19.587194940Z       "Content-Type": "application/x-www-form-urlencoded",
2017-02-19T15:23:19.587197677Z       "Connection": "close",
2017-02-19T15:23:19.587200541Z       "Host": "jupyterhub.myserver.com",
2017-02-19T15:23:19.587203143Z       "X-Original-Uri": "/hub/api/users/testuser/server",
2017-02-19T15:23:19.587205726Z       "X-Forwarded-Proto": "http"
2017-02-19T15:23:19.587208531Z     }
2017-02-19T15:23:19.587507050Z [E 2017-02-19 15:23:19.587 JupyterHub log:100] 500 POST /hub/api/users/testuser/server ([email protected]) 133.65ms

Any thoughts / ideas on what might be going wrong here? Thanks.
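
For anyone reading this later: rather than forking, newer KubeSpawner exposes sidecars directly through an extra_containers trait. A hedged sketch of a git-sync sidecar sharing an emptyDir with the notebook container; the image tag, repo URL, flags, and volume names are all placeholders (check the git-sync docs for the flags your version accepts):

c.KubeSpawner.extra_containers = [
    {
        'name': 'git-sync',
        'image': 'registry.k8s.io/git-sync/git-sync:v4.2.1',  # placeholder tag
        'args': [
            '--repo=https://gitlab.example.com/user/repo',  # placeholder repo
            '--root=/git',
        ],
        'volumeMounts': [{'name': 'repo', 'mountPath': '/git'}],
    }
]
c.KubeSpawner.volumes = [{'name': 'repo', 'emptyDir': {}}]
c.KubeSpawner.volume_mounts = [{'name': 'repo', 'mountPath': '/home/jovyan/work'}]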

Switch to using official kubernetes client library

https://github.com/kubernetes-incubator/client-python didn't exist when we started this spawner. It exists now, and we should use it.

It has async support of sorts, but it merely spawns a new thread per request. I'm not sure that's a valid approach, given we make a ton of requests; we've already had problems with this (see #30). We have a few options:

  1. Determine that spawning a ton of threads is ok, and works fine. Test that we have connection re-use and things like that
  2. Implement the async-ness ourselves, with a threadpool or something like that
  3. Rewrite how we use the client - maintain a global, updated copy of all pods in namespace (with a list + watch), and use this copy to implement the polling operation. Do (1) or (2) for spawning and deletion.

I'd highly prefer (3) - that's what all the kubernetes components use, and it's also super efficient. It'll allow us to scale to a much higher number of users than we can today.

Tag volumes with user logins

We need to be able to unambiguously map persistent volume claims to user login names. The claims are named claim-<usernameish>-NNN where usernameish is the username transformed by safe character rules. It'd be easier to have a definitive mapping rather than trying to undo the transformation.
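
For readers landing here later: later KubeSpawner versions let you attach extra labels to the storage objects, which gets close to this. A hedged sketch; note that the expanded value must itself satisfy Kubernetes label rules, so this only disambiguates logins that survive the escaping:

# Attach a username label to each PVC (template expansion assumed).
c.KubeSpawner.storage_extra_labels = {'hub.jupyter.org/username': '{username}'}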

Request GPU

I cannot find a way to request a GPU via kubespawner; the equivalent kubectl pod specification is:

spec:                                                                                                                               
  containers:                                                                                                                       
    - image: XXXX                                                                             
      resources:                                                                                                                    
        limits:                                                                                                                     
          alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU 
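
For readers landing here later: later KubeSpawner versions expose arbitrary extended resources, which covers GPUs. A hedged sketch using the modern resource name (the alpha.kubernetes.io/nvidia-gpu name above is from the Kubernetes 1.x alpha era):

# Request one GPU per user pod via an extended resource limit.
c.KubeSpawner.extra_resource_limits = {'nvidia.com/gpu': '1'}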

Use Kubespawner in production for user containers with Jupyterhub outside

Kubespawner looks great for my needs, and I have successfully got the dev version and production examples to work. My use case, though, is that I need my setup to authenticate with LDAP, which is only accessible on our [wired] network.

Is it possible to run Jupyterhub on a physical machine to do authentication, and then connect it to a GKE (or other cloud) managed cluster for the user containers?
