jupyterhub-quickstart's Introduction

JupyterHub for OpenShift

This repository contains software to make it easier to host Jupyter Notebooks on OpenShift using JupyterHub.

Because OpenShift is a Kubernetes distribution, you can use the JupyterHub deployment method for Kubernetes created by the Jupyter project team. That deployment method relies on Helm charts to manage the deployment. The use of Helm, and the fact that Kubernetes is primarily a platform for IT operations, means it isn't as easy for end users to deploy as it could be. This repository aims to provide a much easier way of deploying JupyterHub to OpenShift, one which makes better use of OpenShift-specific features, including OpenShift templates and Source-to-Image (S2I) builders. The result is a method for deploying JupyterHub to OpenShift which doesn't require any special admin privileges to the underlying Kubernetes cluster or OpenShift. As long as a user has the necessary quotas for memory, CPU and persistent storage, they can deploy JupyterHub themselves.

Use a stable version of this repository

When using this repository, and the parallel jupyter-notebooks repository, unless you are participating in the development and testing of the images produced from these repositories, always use a tagged version. Do not use master or development branches as your builds or deployments could break across versions.

You should therefore always use the files for creating images or templates from the required tagged version. These will reference the appropriate versions. If you have created your own resource definitions to build from this repository, ensure that the ref field of the Git settings for the build refers to the desired version.
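
For example, in a build configuration resource the Git reference is set under the source settings. The fragment below is only a sketch; the tag shown is illustrative, so substitute whichever tagged version you require:

"source": {
    "type": "Git",
    "git": {
        "uri": "https://github.com/jupyter-on-openshift/jupyterhub-quickstart.git",
        "ref": "3.4.0"
    }
}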

Preparing the Jupyter Images

The first step in deploying JupyterHub is to prepare a notebook image and the image for JupyterHub.

You can use the official Jupyter project docker-stacks images, but some extra configuration is required, as they will not work out of the box with OpenShift. Details on how to use the Jupyter project images are described later.

To load an image stream definition for a minimal Jupyter notebook image designed to run in OpenShift, run:

oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyter-notebooks/master/image-streams/s2i-minimal-notebook.json

An image stream named s2i-minimal-notebook should be created in your project, with tags 3.5 and 3.6, corresponding to Python 3.5 and 3.6 variants of the notebook image. This image is based on CentOS.

For more detailed instructions on creating the minimal notebook image, including how to build it from source code or using a RHEL base image, as well as how to create custom notebook images, see the documentation in the jupyter-notebooks repository.

To load the JupyterHub image, next run:

oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/image-streams/jupyterhub.json

An image stream named jupyterhub should be created in your project, with a tag corresponding to whatever is the latest version. This image is also based on CentOS.

If you are using OpenShift Container Platform, and need to instead build a RHEL based version of the JupyterHub image, you can use the command:

oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/build-configs/jupyterhub.json

Use one method or the other. Do not load the image stream and create a build config for the same image at the same time.

Loading the JupyterHub Templates

To make it easier to deploy JupyterHub in OpenShift, templates are provided. To load the templates run:

oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/templates/jupyterhub-builder.json
oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/templates/jupyterhub-deployer.json
oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/templates/jupyterhub-quickstart.json
oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/templates/jupyterhub-workspace.json

This should result in the creation of the templates jupyterhub-builder, jupyterhub-deployer, jupyterhub-quickstart and jupyterhub-workspace.
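
If you want to confirm the templates have been loaded into your project, you can list them with:

oc get templates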

Creating the JupyterHub Deployment

To deploy JupyterHub with the default configuration, which provides a deployment similar to tmpnb.org using the s2i-minimal-notebook:3.6 image, run:

oc new-app --template jupyterhub-deployer

This deployment requires a single persistent volume of size 1Gi for use by the PostgreSQL database deployed along with JupyterHub. The notebooks which will be deployed will use ephemeral storage.

To monitor progress as the deployment occurs run:

oc rollout status dc/jupyterhub

To view the hostname assigned to the JupyterHub instance by OpenShift, run:

oc get route/jupyterhub

Access the host from a browser and a Jupyter notebook instance will be automatically started for you. Access the site using a different browser, or from a different computer, and you should get a second Jupyter notebook instance, separate from the first.

To see a list of the pods corresponding to the notebook instances, run:

oc get pods --selector app=jupyterhub,component=singleuser-server

This should yield results similar to:

NAME                                                         READY     STATUS    RESTARTS   AGE
jupyterhub-nb-5b7eac5d-2da834-2d4219-2dac19-2dad7f2ee00e30   1/1       Running   0          5m

Note that the first notebook instance deployed may be slow to start up as the notebook image may need to be pulled down from the image registry.

As this configuration doesn't provide access to the admin panel in JupyterHub, you can forcibly stop a notebook instance by running oc delete pod on the specific pod instance.
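
For example, to stop the notebook instance listed above, you could run the following, substituting the actual pod name from your own listing:

oc delete pod jupyterhub-nb-5b7eac5d-2da834-2d4219-2dac19-2dad7f2ee00e30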

To delete the JupyterHub instance along with all notebook instances, run:

oc delete all,configmap,pvc,serviceaccount,rolebinding --selector app=jupyterhub

Deploying with a Custom Notebook Image

To deploy JupyterHub and have it build a custom notebook image for you, run:

oc new-app --template jupyterhub-quickstart \
  --param APPLICATION_NAME=jakevdp \
  --param GIT_REPOSITORY_URL=https://github.com/jakevdp/PythonDataScienceHandbook \
  --param BUILDER_IMAGE=s2i-minimal-notebook:3.5

The s2i-minimal-notebook:3.5 builder image is used in this specific case instead of the default s2i-minimal-notebook:3.6 builder image, as the repository being used as input to the S2I build only supports Python 3.5.

The notebook image will be built in parallel to JupyterHub being deployed. You will need to wait until the build of the image has completed before you can visit JupyterHub the first time. You can monitor the build of the image using the command:

oc logs bc/jakevdp-nb --follow

To deploy JupyterHub using a custom notebook image you had already created, run:

oc new-app --template jupyterhub-deployer \
  --param APPLICATION_NAME=jakevdp \
  --param NOTEBOOK_IMAGE=jakevdp-nb:latest

Because APPLICATION_NAME was supplied, the JupyterHub instance and notebooks in this case will all be labelled with jakevdp.

To get the hostname assigned for the JupyterHub instance, run:

oc get route/jakevdp

To delete the JupyterHub instance along with all notebook instances, run:

oc delete all,configmap,pvc,serviceaccount,rolebinding --selector app=jakevdp

Using the OpenShift Web Console

JupyterHub can also be deployed from the web console by selecting Select from Project from the Add to Project menu, filtering on jupyter and choosing the appropriate template.

Customising the JupyterHub Deployment

JupyterHub, and how notebook images are deployed, can be customised through a jupyterhub_config.py file. The JupyterHub image created from this repository has a default version of this file which sets a number of defaults required for running JupyterHub in OpenShift. You can provide your own customisations, including overriding any defaults, in a couple of ways.

The first is that when using the supplied templates to deploy JupyterHub, you can provide your own configuration through the JUPYTERHUB_CONFIG template parameter. This configuration will be read after the default configuration, with any settings being merged with the existing settings.

The second is to use the JupyterHub image built from this repository as an S2I builder, to incorporate your own jupyterhub_config.py file from a hosted Git repository, or local directory if using a binary input build. This will be merged with the default settings before any configuration supplied via JUPYTERHUB_CONFIG when using a template to deploy the JupyterHub image.

When using an S2I build, the repository can include any additional files to be incorporated into the JupyterHub image which may be needed for your customisations. This includes being able to supply a requirements.txt file for additional Python packages to be installed, as may be required by an authenticator to be used with JupyterHub.
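
As a rough sketch of the S2I approach, assuming the jupyterhub image stream has already been loaded and your jupyterhub_config.py lives in a Git repository (the repository URL and build name here are hypothetical), you could create a build with:

oc new-build --name custom-jupyterhub \
  --image-stream jupyterhub:latest \
  https://github.com/yourname/my-jupyterhub-config

Adjust the image stream tag to match whatever tag was created when the image stream was loaded. The resulting custom-jupyterhub image can then be used in place of the stock JupyterHub image when deploying.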

To illustrate overriding the configuration when deploying JupyterHub using the quick start template, create a local file jupyterhub_config.py which contains:

c.KubeSpawner.start_timeout = 180
c.KubeSpawner.http_timeout = 120

Deploy JupyterHub using the quick start template as was done previously, but this time set the JUPYTERHUB_CONFIG template parameter.

oc new-app --template jupyterhub-quickstart \
  --param APPLICATION_NAME=jakevdp \
  --param GIT_REPOSITORY_URL=https://github.com/jakevdp/PythonDataScienceHandbook \
  --param JUPYTERHUB_CONFIG="`cat jupyterhub_config.py`"

If you need to edit the configuration after the deployment has been made, you can edit the config map which was created:

oc edit configmap/jakevdp-cfg

JupyterHub only reads the configuration on startup, so you will need to trigger a new deployment of JupyterHub.

oc rollout latest dc/jakevdp

Note that triggering a new deployment will result in any running notebook instances being shutdown, and users will need to start up a new notebook instance through the JupyterHub interface.

Providing a Selection of Images to Deploy

When deploying JupyterHub using the templates, the NOTEBOOK_IMAGE template parameter is used to specify the name of the image which is to be deployed when starting an instance for a user. If you want to provide users with a choice of images, you will need to define what is called a profile list in the configuration. The list of images will be presented in a drop down menu when the user requests a notebook instance be started through the JupyterHub web interface. For example:

c.KubeSpawner.profile_list = [
    {
        'display_name': 'Minimal Notebook (CentOS 7 / Python 3.5)',
        'kubespawner_override': {
            'image_spec': 's2i-minimal-notebook:3.5'
        }
    },
    {
        'display_name': 'Minimal Notebook (CentOS 7 / Python 3.6)',
        'default': True,
        'kubespawner_override': {
            'image_spec': 's2i-minimal-notebook:3.6'
        }
    }
]

This will override any image defined by the NOTEBOOK_IMAGE template parameter.

For further information on using the profile list configuration see the KubeSpawner documentation.

Using the Jupyter Project Notebook Images

The official Jupyter Project notebook images:

  • jupyter/base-notebook
  • jupyter/r-notebook
  • jupyter/minimal-notebook
  • jupyter/scipy-notebook
  • jupyter/tensorflow-notebook
  • jupyter/datascience-notebook
  • jupyter/pyspark-notebook
  • jupyter/all-spark-notebook

will not work out of the box with OpenShift. This is because they have not been designed to work with an arbitrarily assigned user ID without additional configuration. The images are also very large and the size exceeds what can be deployed to hosted OpenShift environments such as OpenShift Online.

If you still want to run the official Jupyter Project notebook images, you can, but you will need to supply additional configuration to the KubeSpawner plugin for these images to have them work. For example:

c.KubeSpawner.profile_list = [
    {
        'display_name': 'Jupyter Project - Minimal Notebook',
        'default': True,
        'kubespawner_override': {
            'image_spec': 'docker.io/jupyter/minimal-notebook:latest',
            'supplemental_gids': [100]
        }
    },
    {
        'display_name': 'Jupyter Project - Scipy Notebook',
        'kubespawner_override': {
            'image_spec': 'docker.io/jupyter/scipy-notebook:latest',
            'supplemental_gids': [100]
        }
    },
    {
        'display_name': 'Jupyter Project - Tensorflow Notebook',
        'kubespawner_override': {
            'image_spec': 'docker.io/jupyter/tensorflow-notebook:latest',
            'supplemental_gids': [100]
        },
    }
]

The key setting is supplemental_gids, which needs to include the UNIX group ID 100.

If you want to set this globally for all images instead of defining it for each image, or you are not providing a choice of image, you could instead set:

c.KubeSpawner.supplemental_gids = [100]

Because of the size of these images, you may need to set a higher value for the spawner start_timeout setting to ensure starting a notebook instance from the image doesn't fail the first time a new node in the cluster is used for that image. Alternatively, you could have a cluster administrator pre-pull images to each node in the cluster.
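
For example, to allow up to ten minutes for the image to be pulled and the notebook instance to start (the value is only a suggestion; tune it for your environment):

c.KubeSpawner.start_timeout = 600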

Enabling the JupyterLab Interface

By default, Jupyter notebook images use the classic web interface. If you want to enable the newer JupyterLab web interface, set the JUPYTER_ENABLE_LAB environment variable:

c.KubeSpawner.environment = { 'JUPYTER_ENABLE_LAB': 'true' }

If you are using a profile list and only want the JupyterLab interface enabled for certain images, add an environment setting to the dictionary of settings for just those images.

c.KubeSpawner.profile_list = [
    {
        'display_name': 'Minimal Notebook (Classic)',
        'default': True,
        'kubespawner_override': {
            'image_spec': 's2i-minimal-notebook:3.6'
        }
    },
    {
        'display_name': 'Minimal Notebook (JupyterLab)',
        'kubespawner_override': {
            'image_spec': 's2i-minimal-notebook:3.6',
            'environment': { 'JUPYTER_ENABLE_LAB': 'true' }
        }
    }
]

Controlling who can Access JupyterHub

When the templates are used to deploy JupyterHub, anyone will be able to access it and create a notebook instance. To provide access to only selected users, you will need to define an authenticator as part of the JupyterHub configuration. For example, if using GitHub as an OAuth provider, you would use:

from oauthenticator.github import GitHubOAuthenticator
c.JupyterHub.authenticator_class = GitHubOAuthenticator

c.GitHubOAuthenticator.oauth_callback_url = 'https://<your-jupyterhub-hostname>/hub/oauth_callback'
c.GitHubOAuthenticator.client_id = 'your-client-key-from-github'
c.GitHubOAuthenticator.client_secret = 'your-client-secret-from-github'

c.Authenticator.admin_users = {'your-github-username'}
c.Authenticator.whitelist = {'user1', 'user2', 'user3', 'user4'}

The oauthenticator package is installed by default and includes a number of commonly used authenticators. If you need to use a third party authenticator which requires additional Python packages to be installed, you will need to use the JupyterHub image as an S2I builder, where the source it is applied to includes a requirements.txt file listing the additional Python packages to install. This will create a custom JupyterHub image which you can then deploy by overriding the JUPYTERHUB_IMAGE template parameter.
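
As a sketch, the repository used as input to the S2I build might contain a requirements.txt along the following lines (the authenticator package named here is only an example):

jupyterhub-ldapauthenticator

The custom image produced by that build could then be selected when deploying by overriding the template parameter, for example (the image name here corresponds to the hypothetical build shown earlier):

oc new-app --template jupyterhub-deployer \
  --param JUPYTERHUB_IMAGE=custom-jupyterhub:latest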

Allocating Persistent Storage to Users

When a notebook instance is created and a user creates their own notebooks, they will lose any work they have done if the instance is stopped.

To avoid this, you can configure JupyterHub to make a persistent volume claim and mount storage into the containers when a notebook instance is run.

For the S2I enabled notebook images built previously, where the working directory when the notebook is run is /opt/app-root/src, you can add the following to the JupyterHub configuration.

c.KubeSpawner.user_storage_pvc_ensure = True

c.KubeSpawner.pvc_name_template = '%s-nb-{username}' % c.KubeSpawner.hub_connect_ip
c.KubeSpawner.user_storage_capacity = '1Gi'

c.KubeSpawner.volumes = [
    {
        'name': 'data',
        'persistentVolumeClaim': {
            'claimName': c.KubeSpawner.pvc_name_template
        }
    }
]

c.KubeSpawner.volume_mounts = [
    {
        'name': 'data',
        'mountPath': '/opt/app-root/src'
    }
]

If you are presenting users with a list of images to choose from, you can add the spawner settings only to selected images, and use a different mount path for the persistent volume if necessary, as sketched below.
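
The following is only a sketch of what a per-image override might look like. It assumes the 'data' volume is defined globally via c.KubeSpawner.volumes as in the example above, and uses kubespawner_override to attach the mount only to this image:

c.KubeSpawner.profile_list = [
    {
        'display_name': 'Minimal Notebook (persistent storage)',
        'kubespawner_override': {
            'image_spec': 's2i-minimal-notebook:3.6',
            'volume_mounts': [
                {
                    'name': 'data',
                    'mountPath': '/opt/app-root/src'
                }
            ]
        }
    }
]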

Note that you should only use persistent storage when you are also using an authenticator and you know you have enough persistent volumes available to satisfy the needs of all potential users. This is because once a persistent volume is claimed and associated with a user, it is retained, even if the user's notebook instance is shut down. If you want to reclaim persistent volumes, you will need to delete them manually using oc delete pvc.
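
For example, to find and remove the claim for a single user (the claim name shown is illustrative; it follows the pvc_name_template used above):

oc get pvc
oc delete pvc jupyterhub-nb-user1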

Also be aware that when you mount a persistent volume into a container, it will hide anything that was in the directory it is mounted on. If the working directory for the notebook in the image was pre-populated with files from an S2I build, these will be hidden if you use the same directory. When /opt/app-root/src is used as the mount point, only notebooks and other files created under it will be preserved. If you install additional Python packages, these will be lost when the notebook instance is shut down, and you will need to reinstall them.

If you want to be able to pre-populate the persistent volume with notebooks and other files from the S2I built image, you can use the following configuration. This will also preserve additional Python packages which you might install.

c.KubeSpawner.user_storage_pvc_ensure = True

c.KubeSpawner.pvc_name_template = '%s-nb-{username}' % c.KubeSpawner.hub_connect_ip
c.KubeSpawner.user_storage_capacity = '1Gi'

c.KubeSpawner.volumes = [
    {
        'name': 'data',
        'persistentVolumeClaim': {
            'claimName': c.KubeSpawner.pvc_name_template
        }
    }
]

c.KubeSpawner.volume_mounts = [
    {
        'name': 'data',
        'mountPath': '/opt/app-root',
        'subPath': 'app-root'
    }
]

c.KubeSpawner.singleuser_init_containers = [
    {
        'name': 'setup-volume',
        'image': 's2i-minimal-notebook:3.6',
        'command': [
            'setup-volume.sh',
            '/opt/app-root',
            '/mnt/app-root'
        ],
        'resources': {
            'limits': {
                'memory': '256Mi'
            }
        },
        'volumeMounts': [
            {
                'name': 'data',
                'mountPath': '/mnt'
            }
        ]
    }
]

Because the Python virtual environment and installed packages are kept in the persistent volume in this case, you will need to ensure that you have adequate space in the persistent volume and may need to increase the requested storage capacity.
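
For example, to request a larger volume for each user (the size is only illustrative):

c.KubeSpawner.user_storage_capacity = '5Gi'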

Culling Idle Notebook Instances

When a notebook instance is created for a user, it will keep running until the user stops it, or OpenShift decides for some reason to stop it. In the latter case, if the user was still using it, they would need to start it up again, as notebook instances are not automatically restarted.

If you have more users of the JupyterHub instance than you have memory and CPU resources for, but you know not all users will use it at the same time, that is okay, so long as you shut down notebook instances that have been idle, to free up resources.

To add culling of idle notebook instances, add to the JupyterHub configuration:

c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,
        'command': ['cull-idle-servers', '--timeout=300'],
    }
]

The cull-idle-servers program is provided with the JupyterHub image. Adjust the value for the timeout argument as necessary.

Multi User Developer Workspace

The jupyterhub-workspace template combines a number of the above configuration options into one template. These include:

  • Authentication of users using OpenShift cluster OAuth provider.
  • Optional specification of whitelisted users, including those who are admins.
  • Optional allocation of a persistent storage volume for each user.
  • Optional culling of idle sessions.

Note that the template can only be used with Jupyter notebook images based on the s2i-minimal-notebook images. You cannot use official images from the Jupyter Project.

The jupyterhub-workspace template can only be deployed by a cluster admin, as it needs to create an oauthclient resource definition, which requires cluster admin access.

You will also need to supply template arguments giving the subdomain used by the cluster for hosting applications, and the name of the project the instance is being deployed to.

To deploy the template and provide persistent storage and idle session culling you can use:

oc new-app --template jupyterhub-workspace \
  --param CLUSTER_SUBDOMAIN=A.B.C.D.nip.io \
  --param SPAWNER_NAMESPACE=`oc project --short` \
  --param VOLUME_SIZE=1Gi \
  --param IDLE_TIMEOUT=3600

To delete the deployment first use:

oc delete all,configmap,pvc,serviceaccount,rolebinding --selector app=jupyterhub

You then need to delete the oauthclient resource. Because this is a global resource, verify you are deleting the correct resource first by running:

oc get oauthclient --selector app=jupyterhub

If it is correct, then delete it using:

oc delete oauthclient --selector app=jupyterhub

If there is more than one resource matching the label selector, delete by name the one corresponding to the project you created the deployment in. The project name will be part of the resource name.
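
For example, if the listing showed a single client associated with your project, you could delete it by name (the name shown here is illustrative; use the name from your own listing):

oc delete oauthclient jupyterhub-myproject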

jupyterhub-quickstart's People

Contributors

grahamdumpleton


jupyterhub-quickstart's Issues

Upgrade JupyterHub to 1.0.0

Hi!

Is there a possibility to upgrade JupyterHub from 0.9.4 to 1.0.0?
Reason for the request:
It has features like:

  • c.Authenticator.auth_refresh_age
  • c.Authenticator.refresh_auth
  • c.Authenticator.refresh_user

These could be very well used when creating a custom authenticator.
I already have a custom OpenShiftOAuthenticator class that can restrict access to only those OpenShift users who have a role in the namespace where JupyterHub is running. This is working perfectly fine; however, if a user is removed from membership of the namespace while they are already logged in, I want the user to be forcefully logged out from JupyterHub. I think I could implement this using the latest version of JupyterHub and the above mentioned new features.

Thank you for your help in advance!

Cheers,

Zoltán

How can I get the s2i-minimal-notebook and jupyterhub images

I want to deploy the jupyter-on-openshift project on a private cloud, which means I can't connect to the Internet.
I have tried modifying the "source:git" in "https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/images.json" to "source:contextDir", but I found that the project has more actions needing the Internet, like "pip install" etc.
I have also tried using "manage.openshift.com" to deploy the project, but when I confirm the plan for init, there is an error "There was an error processing your request for access: reCAPTCHA verification failed, please try again." for hours.
May I get the 2 images directly? If you can tell me how I can deploy the project on a private cloud, even better.
my email: [email protected] thank you so much

persist notebooks

As a single user, I want to spawn a notebook using persistent storage, so that my work is not lost if the jupyterhub pod is redeployed.

Move wait-for-database into /opt/app-root/bin so in PATH.

Install wait-for-database into /opt/app-root/bin so it is in PATH and eliminate the need for a full path in the init container of the template. For backwards compatibility with deployments using the old template, the full path needs to keep working for a while.

Sign in issue

Hi Graham,

I was wondering if you could help me out with an issue. I am trying to set up JupyterHub Workspace for a demo on Friday. I followed your documentation, and after deployment I open the URL and get a prompt to sign in with OpenShift, but then receive this message:
{"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method.","state":"eyJzdGF0ZV9pZCI6ICJmZjk4ZjUzYmVlNjQ0ZWU2Yjc4YmE2YTg4OWYxZDdiZSIsICJuZXh0X3VybCI6ICIifQ=="}
I have a feeling it has something to do with oauthclient, but when I run the command to get the oauthclient for my app, I get a message that no resource is found. I am running OpenShift 4.2.8 with crc version 1.2.0+c2e3c0f.

Unable to create JupyterHub deployment using the templates in the tutorial

I am facing a couple of issues trying to deploy JupyterHub in OpenShift using the templates provided in the tutorial.

  1. When I try to run oc new-app --template jupyterhub-deployer, I get the error rolebindings.authorization.openshift.io "jupyterhub-edit" is forbidden: attempt to grant extra privileges, but all the other resources are created successfully and the deployment ends up in Failed status.
  2. Although the deployment ends up in Failed status, when I go to OpenShift and monitor the deployment, the container with the jupyterhub-quickstart image gets created and started, but the pod gets stuck in the pods-initializing state and times out after some time. I am not getting any useful information from the Events log and am not able to get around the issue.

Can someone please help? I am really looking to get this up and running.

Call Program before Jupyter Hub Launch

Disclaimer: I'm an absolute OpenShift newbie, but want to use the JupyterHub Quickstart for an HPC tutorial soon.

Is it possible to execute a program (or Bash script) before JupyterHub is launched? I need to set up environment variables and move things around before the notebook is started.

jupyterhub_config.sh seems to be intended for Shell commands, but I don't know how to use the file. There's also a corresponding entry in the configmap, but I don't know how to use it (and am not sure if this is really intended to be used for this kind of thing).

JupyterHub issue with Authentication

Dear Mr. Dumpleton,

I use OAuth 2.0 authentication against OpenShift with JupyterHub. I experience the issue that JupyterHub does not force the users to authenticate again after logout, but only if the logout happened shortly after a login. I guess it must have something to do with missing invalidation of cookies at JupyterHub's side.
Additional Info:

  • OpenShift Master: v3.10.83
  • Kubernetes Master: v1.10.0+b81c8f8
  • jupyterhub-quickstart:3.0.5

However, the same issue is present with jupyterhub-quickstart:1.0.3 and OpenShift 3.6.

Any ideas what could be the solution for this?

Thank you for your help in advance!

Cheers,

Zoltán

Outdated documentation regarding KubeSpawner

Dear Mr. Dumpleton,

I found a bug in the documentation:
c.KubeSpawner.hub_connect_ip became deprecated in KubeSpawner version 0.10.
See KubeSpawner changelog.

Also:

  • c.KubeSpawner.user_storage_pvc_ensure -> c.KubeSpawner.storage_pvc_ensure
  • c.KubeSpawner.user_storage_capacity -> c.KubeSpawner.storage_capacity

Cheers,

Zoltán

Error Pulling Image s2i-minimal-notebook

[dymurray@dymurray-desktop sbcli]$ oc get is                          
NAME                      DOCKER REPO                                     TAGS      UPDATED                                                 
jupyterhub                172.30.1.1:5000/dylan/jupyterhub                latest    21 minutes ago                                          
s2i-minimal-notebook      172.30.1.1:5000/dylan/s2i-minimal-notebook      3.5       26 minutes ago                                          
s2i-scipy-notebook        172.30.1.1:5000/dylan/s2i-scipy-notebook        3.5       23 minutes ago                                          
s2i-tensorflow-notebook   172.30.1.1:5000/dylan/s2i-tensorflow-notebook 
$ oc describe pod jupyterhub-nb-272442c5-2de4d4-2d40fd-2d8054-2dfcc1349e9c42
  FirstSeen     LastSeen        Count   From                    SubObjectPath                   Type            Reason          Message
  ---------     --------        -----   ----                    -------------                   --------        ------          -------
  11s           11s             1       default-scheduler                                       Normal          Scheduled       Successfully assigned jupyterhub-nb-272442c5-2de4d4-2d40fd-2d8054-2dfcc1349e9c42 to localhost
  10s           10s             1       kubelet, localhost      spec.containers{notebook}       Normal          Pulling         pulling image "s2i-minimal-notebook:3.5"
  5s            5s              1       kubelet, localhost      spec.containers{notebook}       Warning         Failed          Failed to pull image "s2i-minimal-notebook:3.5": rpc error: code = Unknown desc = repository docker.io/s2i-minimal-notebook not found: does not exist or no pull access
  5s            5s              1       kubelet, localhost      spec.containers{notebook}       Warning         Failed          Error: ErrImagePull
  4s            4s              1       kubelet, localhost      spec.containers{notebook}       Normal          BackOff         Back-off pulling image "s2i-minimal-notebook:3.5"
  4s            4s              1       kubelet, localhost      spec.containers{notebook}       Warning         Failed          Error: ImagePullBackOff

Any idea why I am seeing this?

RFE - Support GPUs

Working with OCP 3.10, I am able to get the nvidia device plugin working in my project. This allows me to deploy the standard tensorflow+jupyter image on my GPUs (via OpenShift).

I would like the jupyterhub s2i image for Openshift to have this same capability.

-Nick

Deployment on OpenShift: "wait-for-database" not found

Hi, I am trying to deploy jupyterhub on openshift using the templates and building jupyterhub image using as a reference this template

oc apply -f https://raw.githubusercontent.com/jupyter-on-openshift/jupyterhub-quickstart/master/build-configs/jupyterhub.json

I am able to create the build config and the image stream. My problem is that when I try to deploy the application, the pod for the database runs OK, but for some reason the pod for JupyterHub is not running, and the Events section shows this error:

Error: container create failed: time="2020-09-23T02:02:31Z" level=error msg="container_linux.go:349: starting container process caused "exec: \"wait-for-database\": executable file not found in $PATH"" container_linux.go:349: starting container process caused "exec: "wait-for-database": executable file not found in $PATH"

Could someone please help me understand if this is something related to permissions while building the image? or when I try to deploy the application?

Thanks in advance.

Delete pre-loaded authenticators in config.

This is a hack as a temporary workaround so that users of jupyterhub-quickstart can still set environment variables in jupyterhub_config.py to set up authenticators. Long term, this must move to using the jupyterhub_config.sh file to set them.

jupyterlab support?

Hello

I apologise for posting these general questions in perhaps the wrong location.

Does this project currently support JupyterLab on OpenShift?
Is support for JupyterLab on your roadmap?

Thanks and regards

Upstream failure verifying auth token: [502] Host not found

502 : Bad Gateway
Upstream failure verifying auth token: [502] Host not found

I'm trying to run JupyterHub on OpenShift with the following command.

oc new-app --template jupyterhub-quickstart \
  --param APPLICATION_NAME=app_name \
  --param GIT_REPOSITORY_URL=https://github.com/jakevdp/PythonDataScienceHandbook \
  --param BUILDER_IMAGE=s2i-minimal-notebook:3.5

Below is the output from the log file.
[W 2019-09-27 22:00:24.537 SingleUserNotebookApp configurable:168] Config option open_browser not recognized by SingleUserNotebookApp. Did you mean browser?
[I 2019-09-27 22:00:25.611 SingleUserNotebookApp extension:168] JupyterLab extension loaded from /opt/app-root/lib/python3.5/site-packages/jupyterlab
[I 2019-09-27 22:00:25.612 SingleUserNotebookApp extension:169] JupyterLab application directory is /opt/app-root/share/jupyter/lab
[I 2019-09-27 22:00:25.614 SingleUserNotebookApp singleuser:406] Starting jupyterhub-singleuser server version 0.9.6
[I 2019-09-27 22:00:26.638 SingleUserNotebookApp log:158] 302 GET /user/2fa17b8d-d67e-4e4f-bf0c-62123edb0bdc/ -> /user/2fa17b8d-d67e-4e4f-bf0c-62123edb0bdc/tree? (@102.101.111.17) 0.92ms
[I 2019-09-27 22:00:26.693 SingleUserNotebookApp log:158] 302 GET /user/2fa17b8d-d67e-4e4f-bf0c-62123edb0bdc/?redirects=1 -> /user/2fa17b8d-d67e-4e4f-bf0c-62123edb0bdc/tree?redirects=1 (@10.201.12.1) 0.82ms
[W 2019-09-27 22:00:26.738 SingleUserNotebookApp auth:586] Detected unused OAuth state cookies
[I 2019-09-27 22:00:26.740 SingleUserNotebookApp log:158] 302 GET /user/2fa17b8d-d67e-4e4f-bf0c-62123edb0bdc/tree?redirects=1 -> /hub/api/oauth2/authorize?client_id=jupyterhub-user-2fa17b8d-d67e-4e4f-bf0c-62123edb0bdc&response_type=code&redirect_uri=%2Fuser%2F2fa17b8d-d67e-4e4f-bf0c-62123edb0bdc%2Foauth_callback&state=[secret] (@10.201.16.1) 2.76ms
[E 2019-09-27 22:00:26.886 SingleUserNotebookApp auth:299] Upstream failure verifying auth token: [502] Host not found
[E 2019-09-27 22:00:26.886 SingleUserNotebookApp auth:300]

Lab always enabled

Hi!
I have a config map with some launch items enabling JupyterLab, some not:

c.ProfilesSpawner.profiles = [
    (
        "Minimal Notebook w/ JupyterLab",
        's2i-minimal-notebook',
        'kubespawner.KubeSpawner',
        dict(singleuser_image_spec='s2i-minimal-notebook:3.5', environment=dict(JUPYTER_ENABLE_LAB='true'))
    ),
    (
        "Minimal Notebook",
        's2i-minimal-notebook',
        'kubespawner.KubeSpawner',
        dict(singleuser_image_spec='s2i-minimal-notebook:3.5')
    ),
    ...

However, all my images are launched with the JupyterLab interface. And of course I don't have c.KubeSpawner.environment = dict(JUPYTER_ENABLE_LAB='true') in the configuration.

I'm pretty sure this did not behave like this in my previous installations. What do I miss?

Open port in workspace

Hello,

I would like to know how I can open a port in a Jupyter notebook that is being deployed from JupyterHub. This is for Spark, so it can connect to a Spark cluster.

The ports I want to use are already in my Dockerfile, but when JupyterHub rolls out this container it only opens port 8080. If I run the image standalone the ports are opened. I looked in OpenShift but I can't find the deployment config for the notebook that is being used, so that I could change it.

error starting server on minishift

+ trap 'kill -TERM $PID' TERM INT
+ PID=24
+ wait 24
+ start-jupyterhub.sh
    [W 2018-07-31 05:41:24.944 JupyterHub app:452] JupyterHub.proxy_api_port is deprecated in JupyterHub 0.8, use ConfigurableHTTPProxy.api_url
    [I 2018-07-31 05:41:24.945 JupyterHub app:1667] Using Authenticator: tmpauthenticator.TmpAuthenticator
    [I 2018-07-31 05:41:24.945 JupyterHub app:1667] Using Spawner: kubespawner.spawner.KubeSpawner
    /opt/app-root/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: http://initd.org/psycopg/docs/install.html#binary-install-from-pypi .
    """)
    [I 2018-07-31 05:41:25.010 alembic.runtime.migration migration:117] Context impl PostgresqlImpl.
    [I 2018-07-31 05:41:25.010 alembic.runtime.migration migration:122] Will assume transactional DDL.
    [I 2018-07-31 05:41:25.020 alembic.runtime.migration migration:327] Running stamp_revision -> 896818069c98
    [I 2018-07-31 05:41:25.094 JupyterHub proxy:431] Generating new CONFIGPROXY_AUTH_TOKEN
    [W 2018-07-31 05:41:25.095 JupyterHub app:1171] No admin users, admin interface will be unavailable.
    [W 2018-07-31 05:41:25.095 JupyterHub app:1172] Add any administrative users to c.Authenticator.admin_users in config.
    [I 2018-07-31 05:41:25.095 JupyterHub app:1199] Not using whitelist. Any authenticated user will be allowed.
    [I 2018-07-31 05:41:25.138 JupyterHub app:1849] Hub API listening on http://0.0.0.0:8081/hub/
    [I 2018-07-31 05:41:25.138 JupyterHub app:1851] Private Hub API connect url http://jupyterhub-1-nm8rk:8081/hub/
    [W 2018-07-31 05:41:25.139 JupyterHub proxy:552] Running JupyterHub without SSL. I hope there is SSL termination happening somewhere else...
    [I 2018-07-31 05:41:25.139 JupyterHub proxy:554] Starting proxy @ http://:8080/
    05:41:25.386 - info: [ConfigProxy] Proxying http://*:8080 to (no default)
    05:41:25.388 - info: [ConfigProxy] Proxy API at http://127.0.0.1:8082/api/routes
    05:41:25.506 - info: [ConfigProxy] 200 GET /api/routes
    [I 2018-07-31 05:41:25.514 JupyterHub proxy:301] Checking routes
    [I 2018-07-31 05:41:25.514 JupyterHub proxy:370] Adding default route for Hub: / => http://jupyterhub-1-nm8rk:8081
    05:41:25.519 - info: [ConfigProxy] Adding route / -> http://jupyterhub-1-nm8rk:8081
    05:41:25.520 - info: [ConfigProxy] 201 POST /api/routes/
    [I 2018-07-31 05:41:25.522 JupyterHub app:1906] JupyterHub is now running at http://:8080/
    [I 2018-07-31 05:41:36.524 JupyterHub log:158] 302 GET / -> /hub (@::ffff:172.17.0.1) 1.28ms
    [I 2018-07-31 05:41:36.535 JupyterHub log:158] 302 GET /hub -> /hub/ (@::ffff:172.17.0.1) 1.05ms
    [I 2018-07-31 05:41:36.543 JupyterHub log:158] 302 GET /hub/ -> /hub/login (@::ffff:172.17.0.1) 0.70ms
    [I 2018-07-31 05:41:36.551 JupyterHub log:158] 302 GET /hub/login -> /hub/tmplogin (@::ffff:172.17.0.1) 0.99ms
    [I 2018-07-31 05:41:36.570 JupyterHub log:158] 302 GET /hub/tmplogin -> /user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (@::ffff:172.17.0.1) 12.97ms
    [I 2018-07-31 05:41:36.580 JupyterHub log:158] 302 GET /user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ -> /hub/user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (@::ffff:172.17.0.1) 0.75ms
    [W 2018-07-31 05:41:36.608 JupyterHub configurable:168] Config option common_labels not recognized by KubeSpawner.
    [I 2018-07-31 05:41:36.639 JupyterHub reflector:129] watching for pods with label selector heritage=jupyterhub,component=singleuser-server in namespace datascience
    [W 2018-07-31 05:41:46.645 JupyterHub base:679] User 2e8572d2-7acd-4944-8cec-6b5bd7d39eea is slow to start (timeout=10)
    [I 2018-07-31 05:41:46.645 JupyterHub base:1016] 2e8572d2-7acd-4944-8cec-6b5bd7d39eea is pending spawn
    [I 2018-07-31 05:41:46.674 JupyterHub log:158] 200 GET /hub/user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 10087.45ms
    [W 2018-07-31 05:43:36.671 JupyterHub user:468] 2e8572d2-7acd-4944-8cec-6b5bd7d39eea's server failed to start in 120 seconds, giving up
    [E 2018-07-31 05:43:36.704 JupyterHub gen:974] Exception in Future <Task finished coro=<BaseHandler.spawn_single_user..finish_user_spawn() done, defined at /opt/app-root/lib/python3.5/site-packages/jupyterhub/handlers/base.py:619> exception=TimeoutError('Timeout',)> after timeout
    Traceback (most recent call last):
    File "/opt/app-root/lib/python3.5/site-packages/tornado/gen.py", line 970, in error_callback
    future.result()
    File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
    File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
    File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 626, in finish_user_spawn
    await spawn_future
    File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/user.py", line 486, in spawn
    raise e
    File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/user.py", line 406, in spawn
    url = await gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
    File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 358, in iter
    yield self # This tells Task to wait for completion.
    File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 290, in _wakeup
    future.result()
    File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
    tornado.util.TimeoutError: Timeout

[W 2018-07-31 05:43:36.712 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:43:36.712 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:43:36.713 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[I 2018-07-31 05:43:36.713 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 109504.49ms
[I 2018-07-31 05:43:36.714 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 76344.35ms
[I 2018-07-31 05:43:36.714 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 43148.36ms
[I 2018-07-31 05:43:36.715 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 10099.33ms
[E 2018-07-31 05:43:48.436 JupyterHub gen:974] Exception in Future <Future finished exception=TimeoutError('pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!',)> after timeout
Traceback (most recent call last):
File "/opt/app-root/lib/python3.5/site-packages/tornado/gen.py", line 970, in error_callback
future.result()
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/opt/app-root/lib/python3.5/site-packages/kubespawner/spawner.py", line 995, in start
timeout=self.start_timeout
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/utils.py", line 155, in exponential_backoff
raise TimeoutError(fail_message)
TimeoutError: pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!

[I 2018-07-31 05:44:18.543 JupyterHub log:158] 200 GET /hub/home (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 40.56ms
[I 2018-07-31 05:44:21.278 JupyterHub log:158] 302 GET /hub/spawn -> /user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 6.47ms
[I 2018-07-31 05:44:21.286 JupyterHub log:158] 302 GET /user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ -> /hub/user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (@::ffff:172.17.0.1) 0.63ms
[W 2018-07-31 05:44:31.300 JupyterHub base:679] User 2e8572d2-7acd-4944-8cec-6b5bd7d39eea is slow to start (timeout=10)
[I 2018-07-31 05:44:31.301 JupyterHub base:1016] 2e8572d2-7acd-4944-8cec-6b5bd7d39eea is pending spawn
[I 2018-07-31 05:44:31.302 JupyterHub log:158] 200 GET /hub/user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 10009.35ms
[E 2018-07-31 05:46:20.682 JupyterHub user:474] Unhandled error starting 2e8572d2-7acd-4944-8cec-6b5bd7d39eea's server: pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!
[E 2018-07-31 05:46:20.717 JupyterHub gen:974] Exception in Future <Task finished coro=<BaseHandler.spawn_single_user..finish_user_spawn() done, defined at /opt/app-root/lib/python3.5/site-packages/jupyterhub/handlers/base.py:619> exception=TimeoutError('pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!',)> after timeout
Traceback (most recent call last):
File "/opt/app-root/lib/python3.5/site-packages/tornado/gen.py", line 970, in error_callback
future.result()
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 626, in finish_user_spawn
await spawn_future
File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/user.py", line 486, in spawn
raise e
File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/user.py", line 406, in spawn
url = await gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 358, in iter
yield self # This tells Task to wait for completion.
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 290, in _wakeup
future.result()
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/opt/app-root/lib/python3.5/site-packages/kubespawner/spawner.py", line 995, in start
timeout=self.start_timeout
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/utils.py", line 155, in exponential_backoff
raise TimeoutError(fail_message)
TimeoutError: pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!

[W 2018-07-31 05:46:20.717 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:46:20.718 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:46:20.718 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[I 2018-07-31 05:46:20.718 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 108968.22ms
[I 2018-07-31 05:46:20.719 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 75618.98ms
[I 2018-07-31 05:46:20.719 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 42533.58ms
[I 2018-07-31 05:46:20.720 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 9426.51ms
05:46:25.527 - info: [ConfigProxy] 200 GET /api/routes
[I 2018-07-31 05:46:25.529 JupyterHub proxy:301] Checking routes
[I 2018-07-31 05:51:25.530 JupyterHub proxy:301] Checking routes
05:51:25.528 - info: [ConfigProxy] 200 GET /api/routes
05:56:25.527 - info: [ConfigProxy] 200 GET /api/routes
[I 2018-07-31 05:56:25.529 JupyterHub proxy:301] Checking routes
06:01:25.526 - info: [ConfigProxy] 200 GET /api/routes
[I 2018-07-31 06:01:25.528 JupyterHub proxy:301] Checking routes

Updating jupyterhub image on quay

Hello @GrahamDumpleton

Thanks for the detailed guide on hosting jupyterhub on openshift.

I have a question related to the jupyterhub image available in quay, link below:
https://quay.io/repository/jupyteronopenshift/jupyterhub?tab=tags

  1. Whether the image is managed by you?
  2. If yes, then are you planning to push/update the latest jupyterhub image frequently?

The last push was around 2 years ago and it contains JupyterHub version 1.2.3, but the latest available on Docker Hub is 1.4.2:
https://hub.docker.com/r/jupyterhub/jupyterhub

Thanks & Regards
Sahil Singla

Tracking: Image centos/python-35-centos broken

I've noticed I cannot run jupyterhub anymore due to the error described in sclorg/s2i-python-container#246

Upgrading to 3.6 brings a different error:

+ exec jupyterhub -f /opt/app-root/etc/jupyterhub_config.py
Traceback (most recent call last):
  File "/opt/app-root/bin/jupyterhub", line 3, in <module>
    from jupyterhub.app import main
  File "/opt/app-root/lib/python3.6/site-packages/jupyterhub/app.py", line 118, in <module>
    class NewToken(Application):
  File "/opt/app-root/lib/python3.6/site-packages/jupyterhub/app.py", line 134, in NewToken
    name = Unicode(getuser())
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/getpass.py", line 169, in getuser
    return pwd.getpwuid(os.getuid())[0]
KeyError: 'getpwuid(): uid not found: 1000930000'

500: Internal Server Error - Permission failure checking authorization, I may need a new token

Hello, I'm currently trying to get Jupyterhub to run on Openshift Container Platform. I have managed to follow your guide with some changes and get it to build and deploy.

The change that I made was removing the parts of the templates' YAML related to service accounts, because they kept conflicting with RBAC, causing errors that prevented it from deploying at all.

After it is deployed, when I go to the created route and try to use it the server starts, but then I get this error:

500: Internal Server Error - Permission failure checking authorization, I may need a new token.

When I look at the logs from Openshift, it shows this:

[I 2019-07-24 10:44:56.863 JupyterHub log:158] 302 GET / -> /hub (@::ffff:ip) 1.11ms
[I 2019-07-24 10:44:56.883 JupyterHub log:158] 302 GET /hub -> /hub/ (@::ffff:ip) 0.52ms
[I 2019-07-24 10:44:56.918 JupyterHub log:158] 302 GET /hub/ -> /user/token/ (token@::ffff:ip) 11.60ms
[I 2019-07-24 10:44:57.012 JupyterHub log:158] 302 GET /hub/api/oauth2/authorize?client_id=jupyterhub-user-token&redirect_uri=%2Fuser%2Ftoken%2Foauth_callback&response_type=code&state=[secret] -> /user/token/oauth_callback?code=[secret]&state=[secret] token@::ffff:ip) 23.87ms

I tried looking for similar issues but I couldn't find any. So I decided to ask you directly. What could possibly cause this issue and how could I solve it?

Any help would be appreciated. Thank you very much in advance!
