This repository contains governance-related work for Nebari, including:
⚖️ The complete Nebari Code of Conduct
🚀 Official project roadmap
📈 Details about documentation analytics
💬 Issues that are requests for discussions
✨ Governance-related work for Nebari-dev
License: BSD 3-Clause "New" or "Revised" License
In addition to the CoC, I'd like to adopt explicit inclusivity guidelines like the ones from Kubeflow.
Status | Draft 🚧 |
---|---|
Author(s) | Adam-D-Lewis |
Date Created | 03-31-2023 |
Date Last updated | 03-31-2023 |
Decision deadline | ? |
In Argo Workflows, users with permission to use Argo Workflows can mount any other user's home directory. This is not acceptable. I discuss some options to limit this behavior below. One option is to validate the
workflows.argoproj.io/creator=452fcf19-d3ca-4813-a250-2b2e1bb7bd9d
label on the workflow (the Keycloak user ID). I think the AdmissionController is the best way forward at the moment.
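To make the AdmissionController idea concrete, here is a rough sketch of a validating admission webhook that rejects Workflows mounting a home directory that does not belong to the creator. The `workflows.argoproj.io/creator` label comes from the example above; the home-directory subPath layout, the endpoint, and the Keycloak lookup are assumptions for illustration only.

```python
# Hypothetical sketch: not the final design.
from fastapi import FastAPI

app = FastAPI()

CREATOR_LABEL = "workflows.argoproj.io/creator"


def allowed_home_subpaths(keycloak_user_id: str) -> set[str]:
    # Assumption: each user's home lives at a subPath derived from their Keycloak user ID
    # on the shared NFS PVC; a real implementation would look this up properly.
    return {f"home/{keycloak_user_id}"}


@app.post("/validate")
async def validate(admission_review: dict) -> dict:
    request = admission_review["request"]
    workflow = request["object"]
    creator = workflow["metadata"].get("labels", {}).get(CREATOR_LABEL, "")

    allowed = True
    for template in workflow.get("spec", {}).get("templates", []):
        container = template.get("container") or {}
        for mount in container.get("volumeMounts", []):
            sub_path = mount.get("subPath", "")
            # deny any mount of a home directory that is not the creator's own
            if sub_path.startswith("home/") and sub_path not in allowed_home_subpaths(creator):
                allowed = False

    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request["uid"],
            "allowed": allowed,
            "status": {} if allowed else {"message": "workflow mounts a home directory it does not own"},
        },
    }
```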
Status | Accepted ✅ |
---|---|
Author(s) | @costrouc |
Date Created | 03-28-2023 |
Date Last updated | 03-28-2023 |
Decision deadline | 04-15-2023 |
Extension Mechanism for Nebari
Over the past 3 years we have consistently run into the issue that extending and customizing Nebari is a hard task. Several approaches have been added:
- `terraform_overrides` and `helm_overrides` keywords to allow for arbitrary overrides of Terraform and Helm values
- `helm_extensions` in stage 8, which allow for the addition of arbitrary Helm charts
- `tf_extensions`, which integrate OAuth2 and ingress to deploy a single Docker image

Despite these features we still have needs from users that we are not addressing. Additionally, when we want to add a new service it typically has to be added directly to the core of Nebari. We want to solve this by making extensions first class in Nebari.
I see quite a few benefits from this proposal:
Overall I propose we adopt pluggy. Pluggy has been adopted by many major projects including: datasette, conda, (TODO list more). Pluggy would allow us to expose a plugin interface and "install" extensions via setuptools entry points, making extension installation as easy as `pip install ...`.
Usage from a high level user standpoint
pip install nebari
pip install nebari-ext-clearml
pip install nebari-ext-helm
pip install nebari-ext-cost
Once a user installs the extensions we can view the installed extensions via:
$ nebari extensions list
Name Description
---------------------------------------------------------------------------
nebari-ext-clearml "ClearML integration into nebari"
nebari-ext-helm "Helm extensions"
....
Within Nebari we will expose several plugins:
- A plugin interface for arbitrary additional `typer` commands, e.g. `nebari cost`. All commands will be passed the Nebari config along with all command line arguments specified by the user. Conda has a similar approach for their subcommand system. A sketch of this interface follows.
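A rough sketch of what this interface could look like with pluggy and typer; the hook name, entry-point group, and the `nebari cost` body are illustrative assumptions, not a settled API.

```python
import pluggy
import typer

hookspec = pluggy.HookspecMarker("nebari")
hookimpl = pluggy.HookimplMarker("nebari")


class CLIHookSpec:
    @hookspec
    def nebari_subcommand(self, cli: typer.Typer):
        """Register additional subcommands on the main nebari CLI."""


# In an extension such as nebari-ext-cost:
@hookimpl
def nebari_subcommand(cli: typer.Typer):
    @cli.command(name="cost")
    def cost(config: str = typer.Option("nebari-config.yaml", "-c", help="nebari config file")):
        """Estimate the cost of the deployment described by the nebari config."""
        typer.echo(f"estimating cost for {config} ...")


# In nebari core:
def build_cli() -> typer.Typer:
    cli = typer.Typer()
    pm = pluggy.PluginManager("nebari")
    pm.add_hookspecs(CLIHookSpec)
    # extensions installed via `pip install nebari-ext-...` register themselves
    # through the "nebari" setuptools entry-point group
    pm.load_setuptools_entrypoints("nebari")
    pm.hook.nebari_subcommand(cli=cli)
    return cli
```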
```python
import pathlib
import typing

from nebari import schema  # the nebari configuration schema referenced below


class Stage:
    name: str
    description: str
    priority: str  # defaults to value of name

    def validate(self, config: schema.NebariConfig):
        """Perform additional validation of the nebari configuration specific to this stage"""

    def render(self, config: schema.NebariConfig) -> typing.Union[typing.Dict[str, bytes], pathlib.Path]:
        """Given a configuration, render a set of files

        Returns
        -------
        typing.Union[typing.Dict[str, bytes], pathlib.Path]
            Either a directory to copy files over from or a dictionary mapping filenames to file bytes
        """
        ...

    def deploy(self, directory: pathlib.Path, stages: typing.Dict[str, typing.Any]) -> typing.Any:
        """Deploy all resources within the stage"""
        ...

    def destroy(self, directory: pathlib.Path):
        """Destroy all resources within the stage"""
        ...
```
Nebari will use pluggy within its core and separate each stage into a pluggy `Stage`. Each stage will keep its original name.
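A hedged sketch of how stage discovery could work (the hook name and entry-point group mirror the CLI example above and are assumptions):

```python
import pluggy

hookspec = pluggy.HookspecMarker("nebari")


class StageHookSpec:
    @hookspec
    def nebari_stage(self) -> list:
        """Return the Stage classes contributed by this plugin."""


def collect_stages() -> list:
    pm = pluggy.PluginManager("nebari")
    pm.add_hookspecs(StageHookSpec)
    pm.load_setuptools_entrypoints("nebari")  # stages shipped by installed extensions
    # each hook implementation returns a list of Stage classes; flatten and order them
    stages = [stage for result in pm.hook.nebari_stage() for stage in result]
    return sorted(stages, key=lambda stage: getattr(stage, "priority", stage.name))


# An extension's pyproject.toml advertises its plugin module through the
# "nebari" entry-point group, so `pip install nebari-ext-clearml` is enough:
#
#   [project.entry-points.nebari]
#   clearml = "nebari_ext_clearml.plugin"
```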
As far as plugin/extension systems go I am only aware of two major ones within the python ecosystem:
This will encourage the practice of extending nebari via extensions instead of direct PRs to the core.
It is possible to make this transition seamless to the user without changing behavior.
I feel confident in the approach since I have seen other projects use pluggy successfully for similar work.
Status | Accepted ✅ |
---|---|
Author(s) | @aktech |
Date Created | 04-04-2024 |
Date Last updated | 10-04-2024 |
Decision deadline | 30-04-2024 |
Nebari doesn't have a proper RBAC model yet. As a consequence, providing fine-grained control of access to Nebari's services for users and groups is not possible. This poses the risk that a user might inadvertently access or modify data or services within Nebari, which violates the principle of least privilege.
Role Based Access Control (RBAC) in Nebari will provide fine-grained control of access to Nebari's services.
To understand the proposal for the new RBAC, it's important to understand the current permissions model; we will go through it very briefly here. Also see nebari-dev/nebari#2304 for more context.
RBAC arrived in JupyterHub 2.x, and we only upgraded from 1.5 in August 2023. This means we never got to implement JupyterHub's RBAC and instead have our own limited, in-house permissions model.
Which is basically getting server options for the given user from keycloak
Only two levels of permissions available at this point:
If the user is not in any group, the user will get 403 on accessing Nebari.
At the moment we have the following roles:
This is set in conda-store configuration:
c.CondaStoreServer.authentication_class = KeyCloakAuthentication
The `KeyCloakAuthentication` class fetches the user data from Keycloak via Keycloak's conda-store client, finds the roles the user has, and based on that returns the user's role bindings so that the user has the corresponding permissions on conda-store.
It also fetches the user's groups and creates conda-store namespaces (adding them to the conda-store DB).
Nebari has the following Grafana roles, which in code map to the corresponding Grafana roles (1-to-1 mapping):
Nebari Roles | Grafana roles |
---|---|
grafana_admin | Admin |
grafana_developer | Editor, Viewer |
grafana_viewer | Viewer |
This uses `NebariAuthentication(JupyterHubAuthenticator)` to define custom authentication, which checks whether either of the above is present in the user's roles. If neither is present, the user has no access to create Dask clusters.
This makes calls to JupyterHub's API to get user roles and groups from the following keys in JupyterHub's `/user` endpoint:
auth_state.oauth_user.roles
auth_state.oauth_user.groups
Argo has the following roles:
Three k8s service accounts are created with the above three levels of permissions, and users are assigned permissions based on those. These roles are assigned in Keycloak.
The idea is to not re-invent the wheel but rather to use existing RBAC frameworks wherever possible, with little or no modification, to support a wide range of fine-grained control. We will use JupyterHub's RBAC as the motivation for implementing RBAC in Nebari. We'll also try to use similar conventions to avoid confusion and reduce the learning curve of yet another RBAC system.
Read more about it here: https://jupyterhub.readthedocs.io/en/latest/rbac/index.html
JupyterHub defines the following fundamental concepts:
Groups/Users are assigned Roles which are a collection of scopes.
The idea is to be able to manage roles and permissions from a central place, in this case Keycloak. An admin, or anyone who has permission to create a role in Keycloak, will create role(s) with scopes (permissions) assigned to them and attach them to user(s) or group(s). We define the following concepts (some of them already exist):
This represents the main services in Nebari. There is a Keycloak client created for most Nebari services. The idea here is that those services will call the Keycloak API (they already do at the moment for authentication) to fetch roles from a particular client for a user and, using the role's attributes, decide what permissions the user has on that service. Here are some of the main core services:
A service can have several components, and each component can require a user or group to have different levels of access. We call these components. For example, the JupyterHub service can have the following components:
This figure depicts how services, components and scopes are related. Note that the scopes for Grafana and Argo are only for demonstration purposes.
Role is a collection of scopes (permissions).
A scope is a permission to a resource in a component of a service. We're borrowing the syntax for defining scopes (or permissions) from JupyterHub's RBAC. See https://jupyterhub.readthedocs.io/en/latest/rbac/scopes.html#scope-conventions for reference.
In a nutshell, it looks something like this:
`<access-level>:<resource>:<subresource>!<object>=<objectname>`
- `<resource>:<subresource>` - vertical filtering
- `<resource>!<object>=<objectname>` - horizontal filtering
- If `<access-level>` is not provided, we assume maximum permissions
- `<subresource>` is optional
- `!<object>=<objectname>` is optional

This is an example of how users, groups and roles interact. In this example the group `gpu-access-conda-store-pycon-argo-viewer-group` has 3 roles, the group `gpu-users-group` has one role, and the group `pycon-tutorial-group` has one role attached to it. User alice has one role attached to them and the user john has no roles attached to them.
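As a purely illustrative sketch (not an implemented Nebari API), a scope string such as `read:shared!shared=pycon` could be parsed like this:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Scope:
    access_level: Optional[str]     # e.g. "read"; None means maximum permissions
    resource: str                   # e.g. "shared"
    subresource: Optional[str]      # vertical filtering, optional
    object_filter: Optional[tuple]  # horizontal filtering, e.g. ("shared", "pycon")


def parse_scope(scope: str) -> Scope:
    body, _, obj = scope.partition("!")
    object_filter = tuple(obj.split("=", 1)) if obj else None

    parts = body.split(":")
    if len(parts) == 3:
        access_level, resource, subresource = parts
    elif len(parts) == 2:
        access_level, resource, subresource = parts[0], parts[1], None
    else:
        access_level, resource, subresource = None, parts[0], None

    return Scope(access_level, resource, subresource, object_filter)


assert parse_scope("read:shared!shared=pycon") == Scope("read", "shared", None, ("shared", "pycon"))
```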
Say you want Nebari to create a shared directory (e.g. `/shared`) for a group. You'll go to the Keycloak client `jupyterhub` and create a role with a meaningful name, adding the following attributes to the role:
Role: create-shared-directory
Key | Value |
---|---|
component | shared-directory |
scopes | create:shared |
When you attach this role to a group, Nebari will make sure to create a shared directory for the group.
Next, we want to have two sets of permissions:
We will create two roles:
Role: read-access-to-pycon-shared-directory-role
This role will be attached to pycon-read-group
Key | Value |
---|---|
component | shared-directory |
scopes | read:shared!shared=pycon |
Role: write-access-to-pycon-shared-directory-role
This role will be attached to pycon-write-group
Key | Value |
---|---|
component | shared-directory |
scopes | write:shared!shared=pycon |
Note: The names of roles and groups are arbitrary and can be anything.
Role: read-access-conda-pycon-namespace-role
This role will be attached to read-conda-pycon-group
Key | Value |
---|---|
component | conda-store |
scopes | read:conda-store!namespace=pycon |
Role: write-access-conda-pycon-namespace-role
This role will be attached to write-conda-pycon-group
Key | Value |
---|---|
component | conda-store |
scopes | write:conda-store!namespace=pycon |
Create a role to allow everyone to share apps in a particular group.
Since we're using JupyterHub's RBAC, we can use scopes convention directly here.
Role: allow-app-sharing-role
This role will be attached to allow-app-sharing-group
Key | Value |
---|---|
component | jupyterhub |
scopes | shares!user,read:users:name,read:groups:name |
See https://jupyterhub.readthedocs.io/en/latest/reference/sharing.html#enable-sharing
By default, it will be disabled for everyone.
Since this is a complex piece of functionality, we would need to implement this in small modular steps. The steps are mentioned below:
- Fetch the user's roles and groups from Keycloak and expose them via the `/user` endpoint in the Hub API, under the following keys. These need to be synced with JupyterHub roles and groups so that the permissions are actually applied at the JupyterHub level:
  - `auth_state.oauth_user.roles`
  - `auth_state.oauth_user.groups`
- Update the `render_profiles` functionality in the JupyterHub config profiles so that it creates a shared directory only if the group has permission. This would also require us to implement a way to parse role scopes, e.g. parsing `read:shared!shared=pycon` for the shared directory, when the component is `shared-directory`.
- Scope parsing is only needed to make the `shared-directory` component work. For the `jupyterhub` component it would work without parsing, after the roles are synced from Keycloak into JupyterHub.

The goal of this proposal is to implement the following best practices:
The implementation would change the default permissions of a user, so it would affect users, but we can ship sensible defaults to reduce the impact.
Status | Draft 🚧 / Open for comments 💬/ Accepted ✅ /Implemented 🚀/ Obsolete 🗃 / Rejected ⛔️ |
---|---|
Author(s) | @viniciusdc |
Date Created | 02-02-2023 |
Date Last updated | -- |
Decision deadline | -- |
Currently, our integration tests are responsible for deploying a target version of Nebari (generally based on main/develop) to test stability and confirm that the code is deployable in all cloud providers. These tests can be divided into three categories: "Deploy", "User-Interaction," and "Teardown".
The user interaction is executed by using Cypress to mimic the steps a user would take to use the basic functionalities of Nebari.
The general gist of the workflow can be seen in the diagram above. Some providers like GCP have yet another intermediate job right after the deployment, where a small change is made to the `nebari-config.yaml` to assert that the inner actions (those that come with Nebari) are working as expected.
While the above does help when testing and asserting everything "looks" OK, we still need to double-check every release by doing yet another independent deployment to carefully test all features/services and ensure everything is working as expected. This seems like extra work that takes some time to complete (remember that a new deployment on each cloud provider takes around 15~20 min, plus any additional checks).
That said, there are still a lot of features that are part of the daily use of Nebari which we have to remember to test, and making sure all of them work on all providers is becoming impractical.
Below is what we could do to enhance our current testing suite. These are divided into three major updates:
Refactor the "deploy" phase of the workflow so instead of executing the full deployment in serial (aka. just run nebari deploy
), we could instead deploy each stage
of nebari in parts, and this would give us the freedom to do more testing around each new artifact/resource added in each stage. This can now be easily done due to the recent additions of a Nebari dev command in the CLI. A way to achieve this would be adding an extra dev
flag to the neabari deploy
command to stop at certain checkpoints (which in this case, are the beginning of a new stage)
nebari deploy -c .... --stop-at 1
. This would be responsible for deploying nebari until the first stage (generating the corresponding terraform state files for state tracking). The CI would then execute a specialized test suit (could be pytest
, python scripts
...) to assert that:
nebari deploy -c .... --stop-at 2
, which would refresh the previous resources and create the new ones. Then stop and run tests accordingly....
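A minimal sketch of how CI could drive this, assuming the proposed `--stop-at` flag and a hypothetical per-stage test layout:

```python
import subprocess
from pathlib import Path

NUM_STAGES = 8  # illustrative


def deploy_and_test_by_stage(config: str = "nebari-config.yaml") -> None:
    for stage in range(1, NUM_STAGES + 1):
        # deploy (or refresh) everything up to this checkpoint
        subprocess.run(
            ["nebari", "deploy", "-c", config, "--stop-at", str(stage)],
            check=True,
        )
        # then run only the assertions that belong to this stage,
        # e.g. tests/stages/test_stage_01.py, tests/stages/test_stage_02.py, ...
        stage_tests = Path("tests/stages") / f"test_stage_{stage:02d}.py"
        if stage_tests.exists():
            subprocess.run(["pytest", str(stage_tests), "-v"], check=True)


if __name__ == "__main__":
    deploy_and_test_by_stage()
```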
Now that the infrastructure exists and is working as planned, we can mimic user interaction by running a bigger testing suite for Cypress (we could also migrate to another tool for easier maintenance). Those tests would then be responsible for checking that Jupyter-related services work, as well as Dask and any extra services like Argo, kbatch, VS Code, Dashboards, conda-store...
Once all of this completes, we can then move on to destroying all the components. Right now there are no extra changes to this step, but something we could add that would be beneficial is this:
nebari destroy
The users, in this case, are the maintainers and developers of Nebari, who would be able to trust the integration tests more and retrieve more information on each run, greatly reducing the time spent testing all features while increasing confidence that all services and resources were tested and validated before release.
Status | Open for comments 💬 |
---|---|
Author(s) | @Adam-D-Lewis |
Date Created | 04-11-2024 |
Date Last updated | 04-11-2024 |
Decision deadline | TBD |
The average user doesn't know what options they can adjust in `nebari-config.yaml`. Those who create fresh deployments may run into situations where they need to adjust these immediately (see here), and it's not apparent how to do so; it's not straightforward at the moment.
Additionally, extension `InputSchema`s are not currently written to nebari-config.yaml, meaning users have to copy those in manually to manage them from nebari-config.yaml.
Users will better understand what they can configure from the nebari-config.yaml file. They will also easily see what they've set to custom values, since those will be the only values not commented out.
The following features are a part of this design proposal.
- Default config values are written to `nebari-config.yaml` as commented-out entries
- `nebari upgrade` should notice the updated defaults and update the corresponding commented-out default values in nebari-config.yaml

A sample nebari-config.yaml might look something like the one below under this design proposal (inspired by Grafana's defaults.ini):
```yaml
##################### Main #####################
# Whether to prevent deploying after upgrades due to potentially problematic changes
prevent_deploy: false
# Current nebari version used for deployment
nebari_version: 2024.3.3.dev138+g924659d5
# whether to use a cloud provider, local, or existing k8s cluster
provider: local
# k8s namespace to deploy into
namespace: dev
# project name for the deployment
project_name: scratch-2efc
##################### Bootstrap Stage #####################
# whether to use CI/CD for deployments
# ci_cd:
#   type: none
#   branch: main
#   commit_render: true
#   before_script: []
#   after_script: []
##################### Terraform State Stage #####################
# where to store the terraform state
terraform_state:
  type: remote
  backend:
    config: {}
##################### Infrastructure Stage #####################
# local deployment settings
local:
  kube_context:
  node_selectors:
    general:
      key: kubernetes.io/os
      value: linux
    user:
      key: kubernetes.io/os
      value: linux
    worker:
      key: kubernetes.io/os
      value: linux
##################### Kubernetes Initialize Stage #####################
# configuration for external container registry
external_container_reg:
  enabled: false
  access_key_id:
  secret_access_key:
  extcr_account:
  extcr_region:
##################### Kubernetes Ingress Stage #####################
# dns configuration
dns:
  provider:
  auto_provision: false
# ingress
ingress:
  terraform_overrides: {}
# certificate settings
certificate: self-signed
self_signed_certificate:
certificate:
  type: self-signed
domain: github-actions.nebari.dev
##################### Kubernetes Keycloak Stage #####################
security:
  authentication:
    type: password
  shared_users_group: true
  keycloak:
    initial_root_password: 9u9le3v5f3nb2kf98rh77xthjsl00v80
    overrides: {}
    realm_display_name: Nebari
##################### Kubernetes Services Stage #####################
# jhub apps settings
jhub_apps:
  enabled: false
# jupyterlab settings
jupyterlab:
  default_settings: {}
  idle_culler:
    terminal_cull_inactive_timeout: 15
    terminal_cull_interval: 5
    kernel_cull_idle_timeout: 15
    kernel_cull_interval: 5
    kernel_cull_connected: true
    kernel_cull_busy: false
    server_shutdown_no_activity_timeout: 15
  initial_repositories: []
  preferred_dir:
# jupyterhub settings
jupyterhub:
  overrides: {}
# jupyterlab telemetry settings
telemetry:
  jupyterlab_pioneer:
    enabled: false
    log_format:
# monitoring settings
monitoring:
  enabled: true
  overrides:
    loki: {}
    promtail: {}
    minio: {}
  minio_enabled: true
# argo workflows settings
argo_workflows:
  enabled: true
  overrides: {}
  nebari_workflow_controller:
    enabled: true
    image_tag: 2024.3.3
# conda_store settings
conda_store:
  extra_settings: {}
  extra_config: ''
  image: quansight/conda-store-server
  image_tag: 2024.3.1
  default_namespace: nebari-git
  object_storage: 200Gi
# default conda environments
environments:
  environment-dask.yaml:
    name: dask
    channels:
      - conda-forge
    dependencies:
      - python==3.11.6
      - ipykernel==6.26.0
      - ipywidgets==8.1.1
      - nebari-dask==2024.3.3
      - python-graphviz==0.20.1
      - pyarrow==14.0.1
      - s3fs==2023.10.0
      - gcsfs==2023.10.0
      - numpy=1.26.0
      - numba=0.58.1
      - pandas=2.1.3
      - xarray==2023.10.1
  environment-dashboard.yaml:
    name: dashboard
    channels:
      - conda-forge
    dependencies:
      - python==3.11.6
      - cufflinks-py==0.17.3
      - dash==2.14.1
      - geopandas==0.14.1
      - geopy==2.4.0
      - geoviews==1.11.0
      - gunicorn==21.2.0
      - holoviews==1.18.1
      - ipykernel==6.26.0
      - ipywidgets==8.1.1
      - jupyter==1.0.0
      - jupyter_bokeh==3.0.7
      - matplotlib==3.8.1
      - nebari-dask==2024.3.3
      - nodejs=20.8.1
      - numpy==1.26.0
      - openpyxl==3.1.2
      - pandas==2.1.3
      - panel==1.3.1
      - param==2.0.1
      - plotly==5.18.0
      - python-graphviz==0.20.1
      - rich==13.6.0
      - streamlit==1.28.1
      - sympy==1.12
      - voila==0.5.5
      - xarray==2023.10.1
      - pip==23.3.1
      - pip:
          - streamlit-image-comparison==0.0.4
          - noaa-coops==0.1.9
          - dash_core_components==2.0.0
          - dash_html_components==2.0.0
# jupyterlab profiles for users / dask workers
profiles:
  jupyterlab:
    - access: all
      display_name: Small Instance
      description: Stable environment with 2 cpu / 8 GB ram
      default: true
      users:
      groups:
      kubespawner_override:
        cpu_limit: 2.0
        cpu_guarantee: 1.5
        mem_limit: 8G
        mem_guarantee: 5G
    - access: all
      display_name: Medium Instance
      description: Stable environment with 4 cpu / 16 GB ram
      default: false
      users:
      groups:
      kubespawner_override:
        cpu_limit: 4.0
        cpu_guarantee: 3.0
        mem_limit: 16G
        mem_guarantee: 10G
  dask_worker:
    Small Worker:
      worker_cores_limit: 2.0
      worker_cores: 1.5
      worker_memory_limit: 8G
      worker_memory: 5G
      worker_threads: 2
    Medium Worker:
      worker_cores_limit: 4.0
      worker_cores: 3.0
      worker_memory_limit: 16G
      worker_memory: 10G
      worker_threads: 4
# Nebari theme settings
theme:
  jupyterhub:
    hub_title: Nebari - scratch-2efc
    hub_subtitle: Your open source data science platform, hosted
    welcome: Welcome! Learn about Nebari's features and configurations in <a href="https://www.nebari.dev/docs/welcome">the
      documentation</a>. If you have any questions or feedback, reach the team on
      <a href="https://www.nebari.dev/docs/community#getting-support">Nebari's support
      forums</a>.
    logo:
      https://raw.githubusercontent.com/nebari-dev/nebari-design/main/logo-mark/horizontal/Nebari-Logo-Horizontal-Lockup-White-text.svg
    favicon:
      https://raw.githubusercontent.com/nebari-dev/nebari-design/main/symbol/favicon.ico
    primary_color: '#4f4173'
    primary_color_dark: '#4f4173'
    secondary_color: '#957da6'
    secondary_color_dark: '#957da6'
    accent_color: '#32C574'
    accent_color_dark: '#32C574'
    text_color: '#111111'
    h1_color: '#652e8e'
    h2_color: '#652e8e'
    version: v2024.3.3.dev138+g924659d5
    navbar_color: '#1c1d26'
    navbar_text_color: '#f1f1f6'
    navbar_hover_color: '#db96f3'
    display_version: 'True'
# conda_store settings
storage:
  conda_store: 200Gi
  shared_filesystem: 200Gi
# docker images to use for deployment
default_images:
  jupyterhub: quay.io/nebari/nebari-jupyterhub:2024.3.3
  jupyterlab: quay.io/nebari/nebari-jupyterlab:2024.3.3
  dask_worker: quay.io/nebari/nebari-dask-worker:2024.3.3
##################### Nebari Terraform/Helm Extensions Stage #####################
# extensions to install
tf_extensions: []
helm_extensions: []
```
Not all Nebari config will be shown. Instead, just the config relevant to what they've chosen in the nebari init command will be shown.
E.g. only the aws provider section will be shown instead of all the cloud provider config sections.
We could allow more flexibility by allowing each stage to have its own write_config method which returns YAML, but I'm not sure we have a use case for that currently, and it's nice to enforce some stylistic uniformity between the sections.
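For reference, a minimal sketch of what such a per-stage write_config could do, emitting defaults as commented-out lines and user overrides normally. The helper name and section layout are assumptions, not an agreed interface:

```python
import yaml


def dump_section(header: str, defaults: dict, user_values: dict) -> str:
    """Render one config section; values still at their default are written as
    commented-out lines, user-overridden values are written normally."""
    lines = [f"##################### {header} #####################"]
    merged = {**defaults, **user_values}
    for key, value in merged.items():
        rendered = yaml.safe_dump({key: value}, default_flow_style=False, sort_keys=False).rstrip()
        if key in user_values and user_values[key] != defaults.get(key):
            lines.append(rendered)
        else:
            lines.extend(f"# {line}" for line in rendered.splitlines())
    return "\n".join(lines) + "\n"


class BootstrapStage:
    def write_config(self, config: dict) -> str:
        defaults = {"ci_cd": {"type": "none", "branch": "main", "commit_render": True}}
        overrides = {key: value for key, value in config.items() if key in defaults}
        return dump_section("Bootstrap Stage", defaults, overrides)
```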
Status | Open for comments 💬 |
---|---|
Author(s) | @pmeier |
Date Created | 07-04-2023 |
Date Last updated | 07-04-2023 |
Decision deadline | xx |
Make nebari internals aggressively private

Currently, all internals of nebari are public, and with that comes a set of expectations from users, the main one being backwards compatibility. Although there is no such thing as true private functionality in Python, it is the canonical understanding of the community that a leading underscore in a module / function / class name implies privacy and thus no BC guarantees.
AFAIK, nebari
does not have an API. Thus, I propose to "prefix" everything with a leading underscore to avoid needing to keep BC for that.
This proposal brings no benefit for the user, but rather for the developers. As explained above, having a fully public API brings BC guarantees with it, at least in users' expectations. With them in place it can be really hard to refactor or change internals later on, even though we never intended to provide such guarantees.
The canonical understanding for privacy in Python is that it is implied by a leading underscore somewhere in the "path". For example, `_foo`, `foo._bar`, `_foo.bar`, `foo._Bar.baz`, `foo.Bar._baz`, and `_foo.Bar.baz` are all considered private. This gives us multiple options to approach this:
1. Make every module inside the `nebari` package private, e.g. `nebari._schema` rather than `nebari.schema`. Since we aren't exposing anything from the main namespace this would effectively make everything private.
2. Create `nebari._internal` and move everything under that. This is what `pip` does.
3. Rename the package to `_nebari`, but still provide the script under the `nebari` name. This makes it a little awkward to invoke through Python, i.e. `python -m _nebari`. If this is something we want to support, we can also create a coexisting `nebari` package that does nothing else but import the CLI functionality from `_nebari`. This is what `pytest` does.

These are ordered in increasing order of my preference.
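For the last option, a minimal sketch of the `_nebari` / `nebari` split (the pytest/`_pytest` pattern); the module layout is illustrative only:

```python
# _nebari/cli.py - everything real lives under the private package
import typer

app = typer.Typer()


@app.command()
def deploy(config: str = typer.Option("nebari-config.yaml", "-c")):
    """Deploy a Nebari cluster from the given config file."""
    typer.echo(f"deploying from {config} ...")


def main() -> None:
    app()


# nebari/__init__.py - a thin public shim that only re-exports the CLI entry point:
#
#     from _nebari.cli import main  # noqa: F401
#
# pyproject.toml keeps the console script under the public name:
#
#     [project.scripts]
#     nebari = "_nebari.cli:main"
```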
Instead of fixing our code to be private, we could also put a disclaimer in the documentation that we consider all internals private and thus there are no BC guarantees. However, we need to be honest with ourselves here. Although this would suffice from a governance standpoint, we would be making it easier for users to shoot themselves in the foot. And that is rarely a good thing.
If we want to adopt this proposal or something similar, we need to do it sooner rather than later. Since this change is BC-breaking for everyone who is already importing our internals, we should do it while the user base is fairly small and thus even fewer people (hopefully none) are doing something we don't want for the future.
Depending on how much disruption we anticipate, we could also go through a deprecation cycle with the prompt to get in touch with us in case a user depends on our internals. Maybe there is actually a use case for a public API?
Status | Open for comments 💬 |
---|---|
Author(s) | @iameskild |
Date Created | 2023-01-15 |
Date Last updated | 2023-02-06 |
Decision deadline | 2023-02-13 |
See relevant discussion:
SOPS is a command-line tool for encrypting and decrypting secrets on your local machine.
In the context of Nebari, SOPS can potentially solve the following high-level issues:
Starting point: a Nebari admin has a new secret some of their users may need (such as credentials for external data source). They have the appropriate cloud credentials available.
Items 1. and 2. from the workflow outlined above can be performed directly using the cloud provider CLI (`aws kms create-key`) and the SOPS CLI (`sops --encrypt <file>`).
To make it easier for Nebari admins, I propose we add a new CLI command, nebari secret
to handle items 1. and 2. This might look something like:
# requires cloud credentials
nebari secret create-kms-key -c nebari-config.yaml --name <kms-name>
A `.sops.yaml` configuration file is also created to store the KMS key and `creation_rules`.
.# encrypt secrets stored as a file
nebari secret encrypt --name <secret-name> --file <path/to/file>
# or from a literal string
nebari secret encrypt --name <secret-name> --literal <tOkeN>
# a decrypt command can be included as well
nebari secret decrypt --name <secret-name>
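Under the hood, these proposed commands could be thin wrappers around the `sops` CLI. A rough sketch (command layout follows the examples above; the single `secrets.yaml` target and helper structure are assumptions):

```python
import subprocess
from pathlib import Path

import typer
import yaml

secret_app = typer.Typer(name="secret")
SECRETS_FILE = Path("secrets.yaml")


@secret_app.command()
def encrypt(name: str = typer.Option(...), literal: str = typer.Option(...)):
    """Add a secret and (re-)encrypt ./secrets.yaml."""
    if SECRETS_FILE.exists():
        # decrypt the current contents first so the new secret can be merged in
        current = subprocess.run(
            ["sops", "--decrypt", str(SECRETS_FILE)], check=True, capture_output=True, text=True
        ).stdout
        data = yaml.safe_load(current) or {}
    else:
        data = {}
    data[name] = literal
    SECRETS_FILE.write_text(yaml.safe_dump(data))
    # encrypt in place using the key and creation_rules from .sops.yaml
    subprocess.run(["sops", "--encrypt", "--in-place", str(SECRETS_FILE)], check=True)


@secret_app.command()
def decrypt(name: str = typer.Option(...)):
    """Decrypt ./secrets.yaml and print the requested secret to stdout."""
    completed = subprocess.run(
        ["sops", "--decrypt", str(SECRETS_FILE)], check=True, capture_output=True, text=True
    )
    typer.echo(yaml.safe_load(completed.stdout)[name])
```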
- The `encrypt` command encrypts the secret and stores the encrypted secret in the designated location in the repo (`./secrets.yaml`).
- The `decrypt` command decrypts the secret and prints it to stdout.

Items 3. and 4. from the workflow outlined above refer to how to get these secrets included in the Nebari cluster so that they can be used by those who need them.
There exists this SOPS terraform provider which can decrypt these encrypted secrets during the deployment. To grab these secrets and use them, we can create a secrets
module in stage/07-kubernetes-services
that returns the output (i.e. secret) that can be used to create kubernetes_secrets
as such:
Using the encrypted `secrets.yaml`:
```hcl
data "sops_file" "secrets" {
  source_file = "/path/to/secrets.yaml"
}

output "my-password" {
  value     = data.sops_file.secrets.data["password"]
  sensitive = true
}

resource "kubernetes_secret" "k8s-secret" {
  metadata {
    name = "sops-demo-secret"
  }
  data = {
    username = module.sops.my-password
  }
}
```
At this point, the kubernetes secrets exist (encoded, NOT encrypted) on the Nebari cluster.
Including secrets in the KubeSpawner's `c.extra_pod_config` (in `03-profiles.py`) will allow us to mount those secrets into the JupyterLab user pod, thereby making them usable by users.
```python
c.extra_pod_config = {
    # as environment variables
    "containers": [
        {"env": []},
    ],
    # to pull images from private registries
    "image_pull_secrets": [{}],
    # as mounted files
    "volumes": [
        {"secret": {}},
    ],
}
```
How these secrets are configured on the pod (as a file, env var, etc.), and which Keycloak groups have access to these secrets (if we want to add some basic "role-based access"), can be configured in the `nebari-config.yaml`.
Something like this:
```yaml
secrets:
  - name: <my-secret>
    type: file
    keycloak_group_access:
      - admin
  - name: <my-second_secret>
    type: image_pull_secret
  ...
```
To accomplish this, we will need to add another callable that is used in the `c.kube_spawner_overrides` in `03-profile.py:render_profiles`.
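A sketch of what that callable could look like, translating the `secrets` section above into KubeSpawner pod settings; the function name, mount path, and return shape are assumptions:

```python
def render_secret_overrides(secrets: list, user_groups: list) -> dict:
    """Build pod overrides for the secrets this user's groups are allowed to see."""
    volumes, volume_mounts, image_pull_secrets = [], [], []

    for secret in secrets:
        allowed_groups = secret.get("keycloak_group_access", [])
        if allowed_groups and not set(allowed_groups) & set(user_groups):
            continue  # user is not in any group granted access to this secret

        if secret["type"] == "file":
            volumes.append({"name": secret["name"], "secret": {"secretName": secret["name"]}})
            volume_mounts.append(
                {"name": secret["name"], "mountPath": f"/etc/secrets/{secret['name']}", "readOnly": True}
            )
        elif secret["type"] == "image_pull_secret":
            image_pull_secrets.append({"name": secret["name"]})

    return {
        "image_pull_secrets": image_pull_secrets,
        "volumes": volumes,
        "volume_mounts": volume_mounts,
    }
```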
There are many specifics that can be modified, such as how users are granted access or how the secrets are consumed by the deployment.
As for a different usage of SOPS, I can think of one more. That would be to create the kubernetes secret from the encrypted file directly and then have the users decrypt the secret in their JupyterLab pod. This would eliminate the need for the sops-terraform-provider
above.
It might be possible to create tiered secret files that are then associated with the Keycloak groups. This would introduce multiple KMS keys.
The question that's hard to answer then becomes how to safely and conveniently disperse the KMS key to those who need to access the secrets.
Access to secrets they may need in order to access job-specific resources.
Given that SOPS is a GitOps tool, it's important to ensure that admins don't accidentally commit plain text secret files in their repos. Adding more strict filters in the .gitignore
will help a little but there's always a chance for mistakes.
Status | Draft |
---|---|
Author(s) | @Adam-D-Lewis |
Date Created | 2023-09-21 |
Date Last updated | 2023-09-21 |
Decision deadline | ? |
For CICD, Nebari supports Github Actions. To support other git platforms (e.g. Gitlab Runners, Azure DevOps, BitBucket Pipelines etc.) we have to port the github action over to the format accepted by those other platforms. Additionally, for private clusters where the nodes themselves are not publicly accessible, Nebari does not have a CICD solution since Github Action runners would not be able to publicly access the k8s cluster directly.
Additionally, if an admin wants to view the state of Nebari, they have to have k8s credentials which requires some manual steps. With ArgoCD, we could solve all the above problems. In this proposal, we deploy ArgoCD alongside the other components of Nebari via nebari deploy
. Then when changes are made to the deployment repo, ArgoCD will discover the changes and update the cluster configuration. Because ArgoCD uses a pull based approach rather than a push based approach, a private cluster will still be able to be managed via a GitOps approach.
ArgoCD also provides a dashboard of the running resources where users could view pod logs, and we could even give admins/developers the ability to modify the k8s spec on the fly, as they sometimes do today while debugging in k9s, eliminating the manual steps required by k9s mentioned earlier. We can manage these permissions in Keycloak.
Using ArgoCD also allows us to scope permissions more precisely. Currently, Github Actions has permission to modify the entire K8s cluster. With ArgoCD, we can scope permissions so ArgoCD can modify only specific K8s namespaces. ArgoCD is commonly used to manage environment promotion (dev -> UAT -> prod) (reference1, reference2) and this could possibly be our solution to doing this in Nebari as well.
We could follow the format of this repo, which is described in a recent CNCF talk, though there are many articles about ways to integrate Terraform and ArgoCD.
Status | Declined ❌ |
---|---|
Author(s) | @costrouc |
Date Created | 12/08/2023 |
Date Last updated | 12/08/2023 |
Decision deadline | 30/08/2023 |
Support gitops staging/development/production deployments
This RFD is constructed from issue nebari-dev/nebari#924. We need the ability to easily deploy several Nebari clusters to represent dev/staging/production etc. within a GitOps model. Whatever solution we adopt should be backwards compatible and easy to adopt.
There are several benefits:
I propose using folders for the different nebari deployments. The current folder structure is:
.github/workflows/nebari-ops.yaml
stages/...
nebari-config.yaml
For backwards compatibility we keep this structure and add new namespaced ones based on the filename extension.
For example nebari-config.dev.yaml
would imply the following files are written
.github/workflows/nebari-ops.dev.yaml
dev/stages/...
The github/gitlab workflows will be templated to watch and trigger only on updates to the specific files for that environment. This approach is independent of git branching.
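A small sketch of how the environment could be derived from the config file name under the `nebari-config.<env>.yaml` convention proposed above:

```python
from pathlib import Path
from typing import Optional, Tuple


def environment_from_config(path: str) -> Optional[str]:
    """nebari-config.yaml -> None, nebari-config.dev.yaml -> "dev"."""
    parts = Path(path).name.split(".")
    # ["nebari-config", "yaml"] for the default, ["nebari-config", "dev", "yaml"] otherwise
    return parts[1] if len(parts) == 3 else None


def output_paths(config_file: str) -> Tuple[str, str]:
    env = environment_from_config(config_file)
    if env is None:
        return ".github/workflows/nebari-ops.yaml", "stages"
    return f".github/workflows/nebari-ops.{env}.yaml", f"{env}/stages"


assert output_paths("nebari-config.dev.yaml") == (".github/workflows/nebari-ops.dev.yaml", "dev/stages")
```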
dev -> staging -> prod
is not always how changes flow. It is also hard to compare production vs. dev side by side without diffs.
This would provide an easy way for users to have different deployments on the same git repository.
This change would not affect any existing nebari deployments as far as I am aware and would be backwards compatible.
GitLab doesn't support multiple files for CI; it wants a single entrypoint, `.gitlab-ci.yml`. Pipelines would allow us to do this, but then the separate stages would all have to write to the same `.gitlab-ci.yml` file. This is solvable.
Since we are moving to a more community-driven project, we should revisit the CoC.
Status | Open for comments 💬 |
---|---|
Author(s) | @pt247 |
Date Created | 20-04-2024 |
Date Last updated | 05-05-2024 |
Decision deadline | 22-05-2024 |
A design proposal for Backup and Restore service in Nebari.
As Nebari becomes more popular, it's essential to have dependable backup and restore capabilities. Automated scheduled backups are also necessary. Planning is vital since Nebari has several components, including conda-store environments, user profiles in KeyCloak, and user data in the NFS store.
We need to look at the development, maintenance, administration, and support requirements to decide on an appropriate strategy for this service. Following is a list of key criteria for the service:
This Request for Discussion (RFD) aims to establish a high-level strategy for backup and restoration. The goal is to reach a consensus on design choices, API, and a development plan for the backup and restoration of individual components. The implementation details of the identified design will be part of another RFD. The focus of this RFD is to develop a backup and restoration strategy for the following components:
The following Nebari components are not covered in this document.
You can find the existing docs for backup on this page.
There are several approaches to Nebari backup and restore. Some are closer to the current backup and restore, and some are entirely novel approaches. Each of these methods has its own set of advantages and disadvantages. In this section, we will summarise the various approaches suggested in the comments, outline the pros and cons, and briefly describe the implementation.
flowchart TD
Backup --> Storage
Nebari --> |1. config| Backup
Nebari --> |2. Keycloak | Backup
Nebari --> |3. Conda Store | Backup
Nebari --> |3. User Data | Backup
Storage --> Restore1
Restore1 --> |1. config| Nebari1
Restore1 --> |2. Keycloak | Nebari1
Restore1 --> |3. Conda Store | Nebari1
Restore1 --> |3. User Data | Nebari1
This approach aims to automate the current manual backup and restore process.
A typical Nebari deployment consists of several components like Keycloak, conda-store, user data and more.
flowchart TD
A1[CLI] --> B(Backup workflow)
A2[Nebari config.backup.schedule] --> B
A3[Argo workflows UI] --> B
B --> F(Backup Nebari config)
F --> D(Backup Keycloak)
D --> C(Backup NFS)
D --> E(Backup Conda Store)
C --> X(Backup Location)
D --> X
E --> X
F --> X
flowchart TD
A[Nebari Restore CLI - Specified backup] --> B(Backup workflow - latest backup)
A1[Argo Workflows UI] --> B
B --> B1(Restore workflow - Specified backup)
B1 --> F(Restore Nebari config)
F --> D(Restore Keycloak)
D --> C(Restore NFS)
D --> E(Restore Conda Store)
C --> Z(Validate restore completion)
D --> Z
E --> Z
Z --> |failure| X(Restore workflow - latest backup)
Z --> |success| Y(Stop)
X --> |success| Y
X --> |failure| Y
Note: Both of these workflows are examples and must be refined/refactored.
Let's look at the pros and cons of this approach:
Pros
Cons
We could look at nebari from the perspective of the user. Each user has some shared and dedicated state in each Nebari component.
Nebari | Shared | Dedicated |
---|---|---|
Keycloak | Groups, Roles, permissions | User profiles |
Conda store | Shared environments | User environments |
JupyterHub | Shared user data | Dedicated user data |
The solution recommends backing up or restoring shared resources first. We can then back up/restore users in parallel or in any order.
User migration workflow
flowchart LR
rc[Restore user] --> rs[Restore shared state] --> ru[Restore user]
s[Storage] -.-> rs
s -.-> ru
bc[Backup user] --> bs[Migrate shared state] --> bu[Migrate user]
bu -.-> Storage
bs -.-> Storage
Nebari migration overall
Backup flowchart
flowchart LR
nb[Nebari Backup] ==> rs[Backup shared state]
rs ==> bu1[Backup user A] & bu2[Backup user B] & bu3[Backup user C] -.-> Storage
rs --> | ... | Storage
rs --> |Backup user n| Storage
Restore flowchart
flowchart LR
nr[Nebari Restore] ==> rsr[Restore shared state]
Storage -.-> ru1[Backup user A] & ru2[Backup user B] & ru3[Backup user C] & ru4[...] & ru5[Backup user N]
rsr ==> ru1 & ru2 & ru3 & ru4 & ru5
Storage -.-> rsr
Let's look at the pros and cons of this approach:
Pros
Cons
The last two designs include the backup and restore functionality in Nebari. The central assumption was that Nebari should be able to back up and restore itself. However, thanks to helpful comments in this RFD, this design challenges this premise and proposes an alternative solution.
This design breaks the implementation into two: the interface and the strategy. It argues that Nebari should only provide the interface for importing/exporting data. The backup and restore strategy should be part of the client code. We can extend the interface by providing a Python library.
block-beta
columns 1
j["Client Script (Backup strategy maintained by Nebari Admin)"]
blockArrowId6<[" "]>(updown)
L["Nebari backup and restore library (Python package)"]
blockArrowId7<[" "]>(updown)
D["Nebari Backup and restore REST API"]
blockArrowId6<[" "]>(updown)
block:ID
A["Conda Store REST API"]
B["User DATA REST API"]
B2["Keycloak REST API"]
end
The idea is simple: instead of building a backup and restoring service, we could build a backup and restore interface. The only job of this interface will be to provide users' state and data to authenticated users outside Nebari. The entire backup and restore
logic can be built and maintained outside Nebari. This backup and restore client can then be run from anywhere, providing Admins with flexibility that other designs do not offer.
flowchart LR
subgraph Backup and restore library
Client
end
Client-->I
subgraph Nebari
I
I[Backup and restore interface API]-->K[Keycloak API]
I-->C[Conda store API]
I-->J[JupyterHub API]
I-->N[User data API]
end
An essential requirement for this design is to expose data and state. APIs like the Keycloak and conda-store APIs already provide the bulk of serializable state. However, not all state is serializable, e.g., user data and conda packages. In this case, the design recommends APIs that return download location URLs. APIs in Nebari could be completely stateless.
Let's look at a few transactions with this proposed API.
Serializable data
sequenceDiagram
Client->>API: GET /users
API-->>Keycloak: GET /admin/realms/{realm}/users/
Keycloak-->>API: [A, B, C]
API-->>Client: [A, B, C]
Non-Serializable data
sequenceDiagram
Client->>API: GET /users/A/environments
API-->>conda-store: GET /api/v1/environment/?namespace={..}
conda-store-->>API: [E1, E2, E3 ...]
API->>Client: [{envs:[E1, E2]}]
Client-->NFS: FTP/Rsync/Restic FETCH Artifact from E1, E2, E3 ...
Let's see the pros and cons of this design.
Pros
Cons
Each of the above-discussed designs has its pros and cons. We could also extend the designs. For example, we could extend Approach #2 and #1 via an API to provide simple interfaces like /users/{uid}/backup/keycloak.
Let's look at a few possible options we can vote on. More suggestions welcome.
Conda store is one of the more complicated pieces to replicate among the Nebari components. We will need to work with the conda-store team to come up with a detailed plan for backup-restore. But here is an initial analysis based on the conda-store docs.
The S3 server is used to store all build artifacts for example logs, docker layers,
and the Conda-Pack tarball. The PostgreSQL database is used for storing all states
on environments and builds along with powering the conda-store web server UI, REST
API, and Docker registry. Redis is used for keeping track of task state and results
along with enabling locks and realtime streaming of logs.
Backup the object storage and dump the database. Restore would be the reverse. We might have to ensure that database location entries for artifacts and Conda-Pack are pointing to the right location. This might involve simple find-and-replace operations on the SQL dump.
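A very rough sketch of that idea, shelling out to `pg_dump` and an S3-compatible CLI; service addresses, credentials handling, and bucket names are assumptions for illustration only:

```python
import datetime
import subprocess


def backup_conda_store(backup_dir: str = "conda-store-backup") -> None:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")

    # 1. dump the PostgreSQL database holding environment/build state
    subprocess.run(
        [
            "pg_dump",
            "--dbname", "postgresql://conda-store@conda-store-postgres/conda-store",
            "--file", f"{backup_dir}/conda-store-{stamp}.sql",
        ],
        check=True,
    )

    # 2. copy the build artifacts (logs, docker layers, conda-pack tarballs) out of S3/MinIO
    subprocess.run(
        ["aws", "s3", "sync", "s3://conda-store/", f"{backup_dir}/objects-{stamp}/"],
        check=True,
    )
    # Restore is the reverse: upload the objects, restore the SQL dump, and fix up any
    # artifact location entries in the dump if the storage endpoint changed.
```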
- An `environment` can be marked as deleted by setting the `deleted_on` field.
- To restore, un-delete the `environment`s and reset `deleted_on` to make them available.
- The relevant tables are related as: `environment` -> `build` -> `build_conda_package_build` -> `conda_package_build`
Please note:
Status | Draft 🚧 |
---|---|
Author(s) | Adam-D-Lewis |
Date Created | 02-03-2023 |
Date Last updated | 02-03-2023 |
Decision deadline | ? |
The current method of running Argo workflows from within JupyterLab is not particularly user-friendly. We'd like to have a beginner-friendly way of running simple Argo Workflows, even if this method has limitations that make it inappropriate for more complex/large workflows.
Many users have asked for ways to run/schedule workflows. This would fill many of those needs.
```python
from kubespawner import KubeSpawner
import json


class MySpawner(KubeSpawner):
    def pre_spawn_start(self, user, spawner_options):
        # Get the JWT token from the authentication server
        token = self.user_options.get('token', {}).get('id_token', '')
        # Decode the JWT token to obtain the OIDC claims
        decoded_token = json.loads(self.api.jwt.decode(token)['payload'])
        # Extract the OIDC groups from the claims
        groups = decoded_token.get('groups', [])
        # Modify the notebook server configuration based on the OIDC groups
        if 'group1' in groups:
            self.user_options['profile'] = 'group1_profile'
        # Call the parent pre_spawn_start method to perform any additional modifications
        super().pre_spawn_start(user, spawner_options)
```
```python
import nebari_workflows as wf
from nebari_workflows.hera import Task, Workflow, set_global_host, set_global_token, set_global_verify_ssl, GlobalConfig, get_global_verify_ssl

# maybe make a widget like the dask cluster one
wf.settings(
    conda_environment='',  # uses same as user submitting it by default
    instance_type='',  # uses same as user submitting it by default
)

with Workflow("two-tasks") as w:  # this uses a service with the global token and host
    # `p` is the user's task function defined elsewhere
    Task("a", p, [{"m": "hello"}], node_selectors={"beta.kubernetes.io/instance-type": "n1-standard-4"})
    Task("b", p, [{"m": "hello"}], node_selectors={"beta.kubernetes.io/instance-type": "n1-standard-8"})

wf.submit(w)
```
Here
Here's what I've done so far
- dev
- get_pod permissions
- conda run -n myEnv and all the user directory and shared directories

So deviations from that are still untested.
Status | Draft 🚧 |
---|---|
Author(s) | @viniciusdc |
Date Created | 08-12-2022 |
Date Last updated | 08-12-2022 |
Decision deadline | NA |
Considerations around Bitnami retention policies
Just a note regarding using Bitnami as the repo source for Helm charts: as happened in the past with MinIO, they have a 6-month retention policy for their repo index, which means that old versions will be dropped from the main index after that period. That is, in the future our deployments are bound to break if a pinned chart version is no longer found by Helm.
Nebari `v0.4.0` and `v0.4.1` are still (at this date) broken due to the fact that these versions have in their source code a pointer to a MinIO chart version that no longer exists in the main index.yaml (fixed in `v0.4.2`).

There are some ways to address this problem, each one with its pros and cons:
Update the team structure and verify permissions for each team:
Note: The following plan is tentative to get us started, and will be updated after further discussion.
nebari-dev/design
nebari-dev/nebari-doc
References:
cc @trallard
Status | Open for comments 💬 |
---|---|
Author(s) | @iameskild |
Date Created | 2022-11-28 |
Date Last updated | 2023-03-15 |
Decision deadline | --- |
Let me start by stating that Nebari is not your typical Python package. For Python packages that are intended to be installed alongside other packages, pinning all of your dependencies will likely cause version conflicts and result in failed environment builds.
Nebari on the other hand is a package that is responsible for managing your infrastructure and the last thing you want is for the packages that Nebari relies on to introduce breaking changes. This has happened now twice this week alone (the week of 2023-01-16, issue 1622 and issue 1623).
As part of this RFD, I propose pinning all packages that Nebari requires or uses. This includes the following:
- The Python dependencies in `pyproject.toml`, making sure the package can be built on `conda-forge`
- The Docker images (from the `nebari-docker-images` repo)

constants.py
In Nebari, the Python code is used primarily to pass the input variables to the Terraform scripts. As such, I propose that any of the pinned versions - be they pinned Terraform providers, image/tags combinations, etc. - used by Terraform be set somewhere in the Python code and then passed to Terraform.
As an example, I recently did this with the version of Traefik we use:
Which is then used as input for this Terraform variable:
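The referenced snippets are links in the original RFD; a minimal sketch of the pattern (file, variable, and version values here are hypothetical): pin the version once in Python and hand it to Terraform as an input variable.

```python
# nebari/constants.py (hypothetical location)
DEFAULT_TRAEFIK_IMAGE_TAG = "v2.9.1"  # single source of truth for the pinned version


# wherever the ingress stage builds its Terraform input variables
def ingress_input_vars() -> dict:
    return {
        "traefik-image-tag": DEFAULT_TRAEFIK_IMAGE_TAG,
    }


# The Terraform side then only declares `variable "traefik-image-tag" {}` and uses
# var.traefik-image-tag wherever the image tag is needed.
```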
Once packages start getting pinned, it's important to regularly review and upgrade these dependencies in order to keep up to date with upstream changes. We have already discussed the importance of testing these dependencies and I believe we should continue with that work (see issue 1339).
As part of this RFD, I propose we review, test and upgrade our dependencies once per quarter as part of our release process.
Although we may not need to update each dependency for every release, we might want to consider updating dependencies in a staggered fashion.
- Update the dependencies in `pyproject.toml` and ensure that the package is buildable on `conda-forge`.

We don't necessarily need to make the update process this rigid, but the idea is to update a few things at a time and ensure that nothing breaks. And if things do break, fix them promptly to avoid running into situations where we are forced to make last-minute fixes.
In my opinion, there are a few benefits to this approach:
The design proposal is fairly straightforward, simply move the pinned version of the Terraform provider or image-tag used to the constants.py
. This would likely require an additional input variable (as demonstrated by the Traefik example above).
We can be sure that when we perform our release testing and cut a release, that version will be stable from then on out. This is currently NOT the case.
This RFD is mostly concerned with the main nebari
package and doesn't really cover how we should handle:
I think these are less of a concern for us: once the `nebari-jupyterhub-theme` is included in the Nebari JupyterHub Docker image, and once the images are built, they don't change, so there is little chance that users will be negatively affected by dependency updates. The only exception would be if users pull the image tag `main`, which is updated with every new merge into `nebari-docker-images` - this does not follow best practices and we will continue to advise against it.
I still need to test if this is possible for the pinned version of a particular Terraform provider used, such as:
https://github.com/nebari-dev/nebari/blob/bd777e6448b5e2d6339bc3d9ef35672163ae1945/nebari/template/stages/04-kubernetes-ingress/versions.tf#L1-L13
These providers are pinned in the `required_provider` block (usually in the `versions.tf` file). A tool such as `tfupdate` could help automate these updates.

As we adopt a community-first approach to Nebari development, it will be nice to open our team syncs to everyone (which are currently internal to Quansight).
Proposal:
...
We want to foster an inclusive, supportive and safe environment for all our community members. We need to adopt a CoC with the following:
Status | Open for comments 💬 |
---|---|
Author(s) | @viniciusdc |
Date Created | 13-03-2023 |
Date Last updated | 13-03-2023 |
Decision deadline | --/--/-- |
Nebari heavily depends on Terraform to handle all of our IaC needs. While HCL (the `.tf` files) is a great language for describing infrastructure, it is not the best language for writing code where multiple ecosystems are involved. We can see such cases where adding a simple new feature sometimes requires us to re-write the same piece of code multiple times in HCL (e.g. the variables that are used across different modules).
Our main code that handles most of the execution of the Terraform binaries is already written in Python (a subprocess is responsible for running `terraform plan` and `terraform apply`), and almost all of our interactions with the already-deployed cluster during testing are also done in Python. Due to the complexity of our ecosystem, having situations where we need to write a lot of HCL code to handle our edge cases is not only time-consuming but also error-prone. In this RFD I would like to suggest moving our infrastructure code to Python using terraformpy to make it easier to maintain and extend.
There are multiple benefits to this change:
Right now to write a simple new variable, we need to do something like this:
```hcl
# in the variables.tf file in the main.tf root directory
variable "my_var" {
  type    = string
  default = "my_value"
}

# -----------------------
# in the main.tf file in the main.tf root directory
module "my_module" {
  source = "./my_module"
  my_var = var.my_var
}

# -----------------------
# in the main.tf file in the my_module directory
variable "my_var" {
  type = string
}

# in the variables.tf file in the my_module directory
variable "my_var" {
  type = string
}
```
And we also need to make sure we are passing it over to input_vars.py. This is a lot of code to write for a simple variable that we need to pass over to a module. (Imagine when we need to pass outputs to different stages.)
With Python we would instead have a function that receives the vars as input and passes them over to the correct module under the hood. This would make the code much easier to maintain and extend. For example:
```python
from terraformpy import Module
from .vars import my_var


def pass_vars_to_module(my_var):
    Module(
        source="./my_module",
        my_var=my_var,
    )
```
That's it. Of course, this example is very simple and does not take into consideration the full complexity of the codebase, but I think it is a good starting point to see how we can simplify it.
We have a new release process documented here: https://www.nebari.dev/docs/community/maintainers/release-process-branching-strategy
We can have the documentation page (at nebari.dev) as the source of truth and move undocumented details from the governance repo to the official docs. :)
...
We can document how to access Plausible analytics for Nebari.
Right now we are handling permissions by hand - it would be ideal to have a declarative way to handle this.
Proposal: adopt Peribolos
Status | Open for comments 💬 |
---|---|
Author(s) | costrouc |
Date Created | 14-02-2022 |
Date Last updated | 14-02-2022 |
Decision deadline | 14-02-2022 |
I have spent around 2 days familiarizing myself with Vault and trying it out using Hashicorp's managed Vault deployment. It has the features that we would need to allow:
There are two kinds of users I have in mind for this proposal: end users, e.g. regular users/developers on Nebari, and DevOps/IT sysadmins managing the deployment of Nebari. This proposal would satisfy both of these.
Implementation
Notice that the user does not have to store/remember any secrets!
How would we configure vault:
- Paths `users/<username>/*` and `group/<group>/*` for users and groups to write arbitrary secrets

```yaml
# patch-basic-annotations.yaml
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secrets/helloworld"
        vault.hashicorp.com/role: "myapp"
```
Kubernetes service accounts are at the heart of this. We would assign identities to users/services by attaching a service account, e.g. `<namespace>/service-<service-name>` or `<namespace>/user-<username>`.
There is currently a proposal for using SOPS for secret management #29.
Do we use separate namespaces for users?
Create guidelines on how we make decisions as a team, including:
TBD
Status | Open for comments 💬 |
---|---|
Author(s) | @iameskild |
Date Created | 2022-11-22 |
Date Last updated | -- |
Decision deadline | -- |
At present, this repo builds and pushes standard images for JupyterHub, JupyterLab and Dask-Workers. These images are the default used by all Nebari deployments.
However, many users have expressed an interest in adding custom packages (conda, apt or otherwise) to their default JupyterLab image, and doing so at the moment is not really feasible (at least not without a decent amount of extra legwork). To accommodate users, we have often simply resorted to adding their preferred package to these default images. This solution is not scalable.
By giving Nebari users the ability to customize these images, we greatly open up what is possible for them. This will give users further control over what packages get installed and how they use and interact with their Nebari cluster.
I have already heard from a decent number of users that this would be a much-appreciated feature.
Ultimately, we want to allow users to add whatever packages (and possibly other configuration changes) they want to their JupyterHub, JupyterLab, and Dask-Worker images. We also want to make this process as simple and straightforward as possible.
Users should NOT need to know:
In the nebari
code base we already have a way of generating gitops
and nebari-linter
workflows for GitHub-Actions and GitLab-CI (for clusters that leverage ci_cd
redeployments). We currently do this by building up these workflows from basic pydantic classes that were modeled off of the JSON schema for GitHub-Actions workflows
and GitLab-CI pipelines
respectively.
Why not do the same thing for building and pushing docker images?
With some additional work, we can render a build-push
workflow (or pipeline) that can do just that. This proposed build-push
workflow would look something like:
- Triggered when a user modifies an `environment.yaml`, `apt.txt`, etc. that resides in an `images` folder in their repo.
- Uses `docker/build-push-action` (or similar for GitLab-CI) to build and push images to GHCR (or similar for GitLab-CI); see the sketch after this list.
- This new workflow would live in the same repo that the deployment resides in so there is no need for managing multiple repos.
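A hedged sketch of extending the existing pydantic-based workflow generation to such a build-push workflow; the class and field names below are illustrative, not the existing nebari classes:

```python
from typing import Dict, List, Optional

from pydantic import BaseModel, Field


class JobStep(BaseModel):
    name: str
    uses: Optional[str] = None
    with_: Dict[str, str] = Field(default_factory=dict, alias="with")


class Job(BaseModel):
    runs_on: str = Field("ubuntu-latest", alias="runs-on")
    steps: List[JobStep] = Field(default_factory=list)


class Workflow(BaseModel):
    name: str
    on: Dict[str, dict] = Field(default_factory=dict)
    jobs: Dict[str, Job] = Field(default_factory=dict)


def build_push_workflow(registry: str = "ghcr.io") -> Workflow:
    """Workflow triggered by changes under images/, building and pushing the image."""
    return Workflow(
        name="build-push",
        on={"push": {"paths": ["images/**"]}},
        jobs={
            "build-push": Job(
                steps=[
                    JobStep(name="checkout", uses="actions/checkout@v4"),
                    JobStep(
                        name="build and push",
                        uses="docker/build-push-action@v5",
                        **{"with": {"context": "images", "push": "true",
                                    "tags": f"{registry}/my-org/jupyterlab:latest"}},
                    ),
                ]
            )
        },
    )
```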
As I currently see it, this would require:
nebari-config.yaml
(perhaps under the ci_cd
section) that can be used as a trigger to render this new workflow filebuild-push
workflow file
gitops
or nebari-linter
quay.io/nebari
images
) that contains an environment.yaml
, apt.txt
, etc.No user impact unless they decide to use this feature.
There are a few other enhancements that we could make to make:
Status | Accepted ✅ |
---|---|
Author(s) | @pavithraes |
Date Created | 13-Sep-2023 |
Date Last updated | 13-Sep-2023 |
Decision deadline | 19-Sep-2023 |
Create a new Nebari blog to have a space for community articles, something we don't currently have.
It'll be open to contributions from all community members, and the scope will be everything related to Nebari usage & development.
We can use the Blog functionality that Docusaurus provides: https://docusaurus.io/docs/2.2.0/blog
N/A
Since this will be on the documentation side, we'll have a PR that adds the blog section.
The first blog post can be about the new extension mechanism
N/A