nebari-dev / governance

✨ Governance-related work for Nebari-dev

License: BSD 3-Clause "New" or "Revised" License
Status | Open for comments 💬 |
---|---|
Author(s) | @iameskild |
Date Created | 2022-11-28 |
Date Last updated | 2023-03-15 |
Decision deadline | --- |
Let me start by stating that Nebari is not your typical Python package. For Python packages that are intended to be installed alongside other packages, pinning all of your dependencies will likely cause version conflicts and result in failed environment builds.
Nebari, on the other hand, is a package that is responsible for managing your infrastructure, and the last thing you want is for the packages that Nebari relies on to introduce breaking changes. This has now happened twice in one week alone (the week of 2023-01-16; see issue 1622 and issue 1623).
As part of this RFD, I propose pinning all packages that Nebari requires or uses. This includes the following:

- the dependencies in `pyproject.toml` (while making sure the package can still be built on conda-forge)
- the Docker images (in the `nebari-docker-images` repo)
- the versions set in `constants.py`
In Nebari, the Python code is used primarily to pass the input variables to the Terraform scripts. As such, I propose that any of the pinned versions used by Terraform - be they pinned Terraform providers, image/tag combinations, etc. - be set somewhere in the Python code and then passed to Terraform.
As an example, I recently did this with the version of Traefik we use:
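For illustration, a minimal sketch of what this can look like (the constant name here is an assumption, not necessarily the exact name used in `constants.py`):

```python
# constants.py (hypothetical constant name)
DEFAULT_TRAEFIK_IMAGE_TAG = "v2.9.1"
```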
Which is then used as input for this Terraform variable:
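A sketch of the receiving end (the variable name is illustrative; the real one lives in the ingress stage):

```hcl
# 04-kubernetes-ingress/variables.tf (illustrative)
variable "traefik-image-tag" {
  description = "Tag of the Traefik image passed in from the Python code"
  type        = string
}
```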
Once packages start getting pinned, it's important to regularly review and upgrade these dependencies in order to keep up-to-date with upstream changes. We have already discussed the importance of testing these dependencies, and I believe we should continue with that work (see issue 1339).
As part of this RFD, I propose we review, test and upgrade our dependencies once per quarter as part of our release process.
Although we may not need to update each dependency for every release, we might want to consider updating dependencies in a staggered fashion; for example, update the dependencies in `pyproject.toml` (and ensure that the package is still buildable on conda-forge) in one release, and other dependencies in the next.

We don't necessarily need to make the update process this rigid, but the idea is to update a few things at a time and ensure that nothing breaks. And if things do break, fix them promptly to avoid running into situations where we are forced to make last-minute fixes.
In my opinion, there are a few benefits to this approach:
The design proposal is fairly straightforward: simply move the pinned version of the Terraform provider or image/tag used into `constants.py`. This would likely require an additional Terraform input variable (as demonstrated by the Traefik example above).
We can be sure that when we perform our release testing and cut a release, that version will be stable from then on out. This is currently NOT the case.
This RFD is mostly concerned with the main `nebari` package and doesn't really cover how we should handle:

- `nebari-jupyterhub-theme`
- `nebari-docker-images`

I think these are less of a concern for us: once the `nebari-jupyterhub-theme` is included in the Nebari JupyterHub Docker image, and once the images are built, they don't change, so there is little chance that users will be negatively affected by dependency updates. The only exception would be if users pull the image tag `main`, which is updated with every new merge into `nebari-docker-images` - this does not follow best practices and we will continue to advise against it.
I still need to test if this is possible for the pinned version of a particular Terraform provider, such as the ones pinned here:

https://github.com/nebari-dev/nebari/blob/bd777e6448b5e2d6339bc3d9ef35672163ae1945/nebari/template/stages/04-kubernetes-ingress/versions.tf#L1-L13

These are set in the `required_providers` block (usually in the `versions.tf` file).
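For reference, a pinned `required_providers` block looks roughly like this (the provider and version shown are illustrative, not the exact pins in the file linked above):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.20.0" # an exact pin rather than a version range
    }
  }
}
```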
It may also be worth evaluating `tfupdate` for automating these provider version updates.

Status | Draft 🚧 / Open for comments 💬 / Accepted ✅ / Implemented 🚀 / Obsolete 🗃 / Rejected ⛔️ |
---|---|
Author(s) | @viniciusdc |
Date Created | 02-02-2023 |
Date Last updated | -- |
Decision deadline | -- |
Currently, our integration tests are responsible for deploying a target version of Nebari (generally based on main/develop) to test stability and confirm that the code is deployable in all cloud providers. These tests can be divided into three categories: "Deploy", "User-Interaction," and "Teardown".
The user interaction is executed using Cypress to mimic the steps a user would take when using the basic functionalities of Nebari.

The general gist of the workflow can be seen in the diagram above. Some providers, like GCP, have yet another intermediate job right after the deployment, where a small change is made to the `nebari-config.yaml` to assert that the inner actions (those that ship with Nebari) are working as expected.
While the above does help when testing and asserting everything "looks" OK, we still need to double-check every release by doing yet another independent deployment to carefully test all features/services and ensure everything is working as expected. This is extra work that takes some time to complete (remember that a new deployment on each cloud provider takes around 15-20 min, plus any additional checks).
That said, there are still a lot of functionalities that are part of the daily use of Nebari which we might forget to test, and making sure all of that works in all providers by hand would become impractical.

Below is what we could do to enhance our current test suite. These are divided into three major updates:
Refactor the "deploy" phase of the workflow so instead of executing the full deployment in serial (aka. just run nebari deploy
), we could instead deploy each stage
of nebari in parts, and this would give us the freedom to do more testing around each new artifact/resource added in each stage. This can now be easily done due to the recent additions of a Nebari dev command in the CLI. A way to achieve this would be adding an extra dev
flag to the neabari deploy
command to stop at certain checkpoints (which in this case, are the beginning of a new stage)
nebari deploy -c .... --stop-at 1
. This would be responsible for deploying nebari until the first stage (generating the corresponding terraform state files for state tracking). The CI would then execute a specialized test suit (could be pytest
, python scripts
...) to assert that:
nebari deploy -c .... --stop-at 2
, which would refresh the previous resources and create the new ones. Then stop and run tests accordingly....
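A minimal sketch of what a per-stage CI check could look like (the `--stop-at` flag and the state-file location are assumptions from this proposal, not an existing interface):

```python
# test_stage_01.py - hypothetical per-stage assertions run by the CI
import pathlib
import subprocess


def test_stage_01_deploys_cleanly():
    # Deploy only up to the first stage (proposed --stop-at flag)
    subprocess.run(
        ["nebari", "deploy", "-c", "nebari-config.yaml", "--stop-at", "1"],
        check=True,
    )
    # Assert that the Terraform state for the first stage was written
    assert pathlib.Path("stages/01-terraform-state/terraform.tfstate").exists()
```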
Now that the infrastructure exists and is working as planned, we can mimic the user interaction by running a bigger Cypress test suite (we could also migrate to another tool for easier maintenance). Those tests would then be responsible for checking that the Jupyter-related services work, as well as Dask and any extra services like Argo, kbatch, VS Code, Dashboards, conda-store...
Once all of this completes, we can then move on to destroying all the components. Right now there are no extra changes to this step, but something that would be beneficial to add is more checks around `nebari destroy` itself.
The user, in this case, would be the maintainers and developers of Nebari, who would be able to trust the integration tests more and retrieve more information from each run. This would reduce a lot of the time spent testing all features and increase confidence that all services and resources were tested and validated before release.
Status | Open for comments 💬 |
---|---|
Author(s) | @iameskild |
Date Created | 2022-11-22 |
Date Last updated | -- |
Decision deadline | -- |
At present, this repo builds and pushes standard images for JupyterHub, JupyterLab and Dask-Workers. These images are the default used by all Nebari deployments.
However, many users have expressed an interest in adding custom packages (`conda`, `apt` or otherwise) to their default JupyterLab image, and doing so at the moment is not really feasible (at least not without a decent amount of extra legwork). To accommodate users, we have often simply resorted to adding their preferred package to these default images. This solution is not scalable.
By giving Nebari users the ability to customize these images, we greatly open up what is possible for them. This will give users further control over what packages get installed and how they use and interact with their Nebari cluster.
I have already heard from a decent number of users that this would be a much-appreciated feature.
Ultimately, we want to allow users to add whatever packages (and possibly other configuration changes) they want to their JupyterHub, JupyterLab, and Dask-Worker images. We also want to make this process as simple and straightforward as possible.
Users should NOT need to know:
In the `nebari` code base we already have a way of generating `gitops` and `nebari-linter` workflows for GitHub-Actions and GitLab-CI (for clusters that leverage `ci_cd` redeployments). We currently do this by building up these workflows from basic pydantic classes that were modeled off of the JSON schema for GitHub-Actions workflows and GitLab-CI pipelines, respectively.
Why not do the same thing for building and pushing docker images?
With some additional work, we can render a `build-push` workflow (or pipeline) that can do just that. This proposed `build-push` workflow would look something like:

- watch for changes to an `environment.yaml`, `apt.txt`, etc. that resides in an `images` folder in the user's repo
- use `docker/build-push-action` (or similar for GitLab-CI) to build and push images to GHCR (or similar for GitLab-CI)
- This new workflow would live in the same repo that the deployment resides in so there is no need for managing multiple repos.
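As an illustration only, a rendered GitHub Actions flavor of this workflow could look roughly like the following (names, paths, and tags are all assumptions):

```yaml
name: build-push

on:
  push:
    paths:
      - "images/**"

jobs:
  build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          context: images/jupyterlab
          push: true
          tags: ghcr.io/${{ github.repository }}/jupyterlab:latest
```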
As I currently see it, this would require:

- a new field in the `nebari-config.yaml` (perhaps under the `ci_cd` section) that can be used as a trigger to render this new `build-push` workflow file (see the sketch after this list)
- rendering the `build-push` workflow file alongside the existing `gitops` or `nebari-linter` workflows
- default base images to build from (e.g. the `quay.io/nebari` images)
- a folder in the user's repo (e.g. `images`) that contains an `environment.yaml`, `apt.txt`, etc.
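For illustration, the trigger field might look something like this (the field name is hypothetical):

```yaml
ci_cd:
  type: github-actions
  build_push_images: true # hypothetical flag that triggers rendering the workflow
```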
No user impact unless they decide to use this feature.
There are a few other enhancements that we could make:
Create guidelines on how we make decisions as a team, including:
TBD
Status | Draft 🚧 |
---|---|
Author(s) | @viniciusdc |
Date Created | 08-12-2022 |
Date Last updated | 08-12-2022 |
Decision deadline | NA |
Considerations around Bitnami retention policies
Just a note regarding using Bitnami as the repo source for Helm charts: as happened in the past with MinIO, they have a 6-month retention policy for their repo index, which means that old versions will be dropped from the main index after that period. That is, our deployments are bound to break in the future if the pinned chart version can no longer be found by Helm.
For example, `v0.4.0` and `v0.4.1` are still (at this date) broken because these versions point to a MinIO chart version that no longer exists in the main `index.yaml` (this was fixed in `v0.4.2`).

There are some ways to address this problem, each with its own pros and cons:
Update the team structure and verify permissions for each team:
Note: The following plan is tentative to get us started, and will be updated after further discussion.
- `nebari-dev/design`
- `nebari-dev/nebari-doc`
References:
cc @trallard
As we adopt a community-first approach to Nebari development, it would be good to open our team syncs (which are currently internal to Quansight) to everyone.
Proposal:
...
We want to foster an inclusive, supportive, and safe environment for all our community members, so we need to adopt a CoC with the following:

Since we are moving to a more community-driven project, we should revisit the CoC.
Status | Open for comments 💬 |
---|---|
Author(s) | @pmeier |
Date Created | 07-04-2023 |
Date Last updated | 07-04-2023 |
Decision deadline | xx |
Make `nebari` internals aggressively private

Currently, all internals of `nebari` are public, and with that comes a set of expectations from users, the main one being backwards compatibility (BC). Although there is no such thing as truly private functionality in Python, it is the canonical understanding of the community that a leading underscore in a module / function / class name implies privacy and thus no BC guarantees.
AFAIK, `nebari` does not have a public API. Thus, I propose to "prefix" everything with a leading underscore to avoid needing to keep BC for it.
This proposal brings no benefit to the user, but rather to the developers. As explained above, having a fully public API brings BC guarantees with it - at least that is what users expect. With those in place, it can be really hard to refactor or change internals later on, even though we never intended to provide such guarantees.
The canonical understanding of privacy in Python is that it is implied by a leading underscore somewhere in the "path". For example,

- `_foo`
- `foo._bar`, `_foo.bar`
- `foo._Bar.baz`, `foo.Bar._baz`, `_foo.Bar.baz`

are all considered private. This gives us multiple options to approach this:
1. Make every module inside the `nebari` package private, e.g. `nebari._schema` rather than `nebari.schema`. Since we aren't exposing anything from the main namespace, this would effectively make everything private.
2. Create `nebari._internal` and move everything under that. This is what `pip` does.
3. Rename the package to `_nebari`, but still provide the script under the `nebari` name. This makes it a little awkward to invoke through Python, i.e. `python -m _nebari`. If this is something we want to support, we can also create a coexisting `nebari` package that does nothing else but import the CLI functionality from `_nebari`. This is what `pytest` does.

These are ordered in increasing order of my preference.
Instead of fixing our code to be private, we could also put a disclaimer in the documentation stating that we consider all internals private and thus give no BC guarantees. However, we need to be honest with ourselves here: although this would suffice from a governance standpoint, we would be making it easier for users to shoot themselves in the foot. And that is rarely a good thing.
If we want to adopt this proposal or something similar, we need to do it sooner rather than later. Since this change breaks BC for everyone who is already importing our internals, we should do it while the user base is fairly small and thus even fewer people (hopefully none) are doing something we don't want for the future.
Depending on how much disruption we anticipate, we could also go through a deprecation cycle with the prompt to get in touch with us in case a user depends on our internals. Maybe there is actually a use case for a public API?
Status | Draft 🚧 |
---|---|
Author(s) | Adam-D-Lewis |
Date Created | 03-31-2023 |
Date Last updated | 03-31-2023 |
Decision deadline | ? |
In Argo Workflows, users with permission to use Argo Workflows can mount any other user's home directory. This is not acceptable. I discuss some options to limit this behavior. Some options include:

- validating workflows (e.g. via an AdmissionController) against the `workflows.argoproj.io/creator=452fcf19-d3ca-4813-a250-2b2e1bb7bd9d` label set on the workflow (the Keycloak user ID)

I think the AdmissionController is the best way forward at the moment.
Status | Open for comments 💬 |
---|---|
Author(s) | costrouc |
Date Created | 14-02-2022 |
Date Last updated | 14-02-2022 |
Decision deadline | 14-02-2022 |
I have spent around 2 days familiarizing myself with Vault and trying it out via Hashicorp's managed Vault deployment. It has the features that we would need to allow:

There are two users I have in mind for this proposal: end users, e.g. regular users/developers on Nebari, and DevOps/IT sysadmins managing the deployment of Nebari. This proposal would satisfy both.
Implementation
Notice that the user does not have to store/remember any secrets!
How we would configure Vault:

- allow `users/<username>/*` and `group/<group>/*` paths so users and groups can write arbitrary secrets (a policy sketch follows below)
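A sketch of such a policy for a single user (the path layout is assumed from the bullet above; standard Vault policy syntax):

```hcl
# Hypothetical per-user policy: full control over the user's own secret path
path "users/alice/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

# Read-only access to secrets shared with one of alice's groups
path "group/developers/*" {
  capabilities = ["read", "list"]
}
```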
Secrets can then be injected into pods via the Vault agent injector annotations, e.g.:

```yaml
# patch-basic-annotations.yaml
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secrets/helloworld"
        vault.hashicorp.com/role: "myapp"
```
Kubernetes service accounts are at the heart of this. We would assign identities to users/services by attaching a service account, e.g. `<namespace>/service-<service-name>` or `<namespace>/user-<username>`.
There is currently a proposal for using SOPS for secret management #29.
Do we use separate namespaces for users?
In addition to the CoC, I'd like to adopt explicit inclusivity guidelines like the ones from Kubeflow.
We have a new release process documented here: https://www.nebari.dev/docs/community/maintainers/release-process-branching-strategy
We can have the documentation page (at nebari.dev) as the source of truth and move undocumented details from the governance repo to the official docs. :)
...
Status | Open for comments 💬 |
---|---|
Author(s) | @costrouc |
Date Created | 12/08/2023 |
Date Last updated | 12/08/2023 |
Decision deadline | 22/08/2023 |
Support gitops staging/development/production deployments
This RFD is constructed from issue nebari-dev/nebari#924. We need the ability to easily deploy several Nebari clusters to represent dev/staging/production etc. within a GitOps model. Whatever solution we adopt should be backwards compatible and easy to adopt.
There are several benefits:
I propose using folders for the different Nebari deployments. The current folder structure is:

- `.github/workflows/nebari-ops.yaml`
- `stages/...`
- `nebari-config.yaml`
For backwards compatibility we keep this structure and add new namespaced ones based on the filename extension. For example, `nebari-config.dev.yaml` would imply the following files are written:

- `.github/workflows/nebari-ops.dev.yaml`
- `dev/stages/...`
The github/gitlab workflows will be templated to watch and trigger only on updates to the specific files for that environment. This approach is independent of git branching.
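For the GitHub Actions case, the rendered trigger might look something like this (a sketch based on the layout above):

```yaml
# .github/workflows/nebari-ops.dev.yaml (hypothetical rendered trigger)
on:
  push:
    branches:
      - main
    paths:
      - nebari-config.dev.yaml
      - "dev/stages/**"
```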
`dev -> staging -> prod` is not always how changes flow. It is also hard to compare production vs. dev side by side without diffs.
This would provide an easy way for users to have different deployments on the same git repository.
This change would not affect any existing nebari deployments as far as I am aware and would be backwards compatible.
GitLab doesn't support multiple files for CI; it wants a single entrypoint, `.gitlab-ci.yml`. Pipelines would allow us to do this, but then the separate stages would all have to write to the same `.gitlab-ci.yml` file. This is solvable.
Status | Open for comments 💬 |
---|---|
Author(s) | @iameskild |
Date Created | 2023-01-15 |
Date Last updated | 2023-02-06 |
Decision deadline | 2023-02-13 |
See relevant discussion:
SOPS is a command-line tool for encrypting and decrypting secrets on your local machine.
In the context of Nebari, SOPS can potentially solve the following high-level issues:
Starting point: a Nebari admin has a new secret that some of their users may need (such as credentials for an external data source). They have the appropriate cloud credentials available.
Items 1. and 2. from the workflow outlined above can be performed directly using the cloud provider CLI (`aws kms create-key`) and the SOPS CLI (`sops --encrypt <file>`).
To make it easier for Nebari admins, I propose we add a new CLI command, `nebari secret`, to handle items 1. and 2. This might look something like:
```bash
# requires cloud credentials
nebari secret create-kms-key -c nebari-config.yaml --name <kms-name>
```
This command would also create or update the `.sops.yaml` configuration file to store the KMS key and `creation_rules`.
```bash
# encrypt secrets stored as a file
nebari secret encrypt --name <secret-name> --file <path/to/file>
# or from a literal string
nebari secret encrypt --name <secret-name> --literal <tOkeN>

# a decrypt command can be included as well
nebari secret decrypt --name <secret-name>
```
- The `encrypt` command encrypts the secret and stores the encrypted secret in the designated location in the repo (`./secrets.yaml`).
- The `decrypt` command decrypts the secret and prints it to stdout.

Items 3. and 4. from the workflow outlined above refer to how to get these secrets included in the Nebari cluster so that they can be used by those who need them.
There exists a SOPS Terraform provider which can decrypt these encrypted secrets during the deployment. To grab these secrets and use them, we can create a `secrets` module in `stage/07-kubernetes-services` that returns the output (i.e. secret), which can then be used to create `kubernetes_secret` resources, as such:
Given an encrypted `secrets.yaml`:

```hcl
data "sops_file" "secrets" {
  source_file = "/path/to/secrets.yaml"
}

output "my-password" {
  value     = data.sops_file.secrets.data["password"]
  sensitive = true
}

resource "kubernetes_secret" "k8s-secret" {
  metadata {
    name = "sops-demo-secret"
  }

  data = {
    password = module.sops.my-password
  }
}
```
At this point, the kubernetes secrets exist (encoded, NOT encrypted) on the Nebari cluster.
Including secrets in the KubeSpawner's `c.extra_pod_config` (in `03-profiles.py`) will allow us to mount those secrets on the JupyterLab user pod, thereby making them usable by the people who need them.
```python
c.extra_pod_config = {
    # as environment variables
    "containers": [
        {"env": []},
    ],
    # to pull images from private registries
    "image_pull_secrets": [],  # pod spec field (plural)
    # as mounted files
    "volumes": [
        {"secret": {}},
    ],
}
```
How these secrets are configured on the pod (as a file, env var, etc.), and which Keycloak groups have access to these secrets (if we want to add some basic "role-based access"), can be configured in the `nebari-config.yaml`.
Something like this:
```yaml
secrets:
  - name: <my-secret>
    type: file
    keycloak_group_access:
      - admin
  - name: <my-second_secret>
    type: image_pull_secret
  # ...
```
To accomplish this, we will need to add another callable that is used in the `c.kube_spawner_overrides` in `03-profiles.py:render_profiles`.
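A rough sketch of what that callable could do, assuming a hypothetical helper that receives the secrets declared in `nebari-config.yaml` and the user's Keycloak groups:

```python
# Hypothetical helper: build extra pod config from the declared secrets,
# filtered by the Keycloak groups the spawning user belongs to.
def secrets_for_user(declared_secrets, user_groups):
    pod_config = {"volumes": [], "image_pull_secrets": []}
    for secret in declared_secrets:
        allowed = secret.get("keycloak_group_access", [])
        if allowed and not set(allowed) & set(user_groups):
            continue  # user is not in any group granted access
        if secret["type"] == "file":
            pod_config["volumes"].append(
                {"name": secret["name"], "secret": {"secretName": secret["name"]}}
            )
        elif secret["type"] == "image_pull_secret":
            pod_config["image_pull_secrets"].append({"name": secret["name"]})
    return pod_config
```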
There are many specifics that can be modified, such as how users are granted access or how the secrets are consumed by the deployment.
As for a different usage of SOPS, I can think of one more: create the Kubernetes secret from the encrypted file directly and then have the users decrypt the secret in their JupyterLab pod. This would eliminate the need for the `sops-terraform-provider` above.
It might be possible to create tiered secret files that are then associated with Keycloak groups; this would introduce multiple KMS keys. The question that's hard to answer then becomes how to safely and conveniently distribute the KMS keys to those who need to access the secrets.
Users may also need access to secrets in order to reach job-specific resources.
Given that SOPS is a GitOps tool, it's important to ensure that admins don't accidentally commit plain-text secret files to their repos. Adding stricter filters to the `.gitignore` will help a little, but there's always a chance for mistakes.
Status | Open for comments 💬 |
---|---|
Author(s) | @viniciusdc |
Date Created | 13-03-2023 |
Date Last updated | 13-03-2023 |
Decision deadline | --/--/-- |
Nebari heavily depends on Terraform to handle all of our IaC needs. While HCL (the `.tf` files) is a great language for describing infrastructure, it is not the best language for writing code where multiple ecosystems are involved. We see cases where adding a simple new feature requires us to re-write the same piece of code multiple times in HCL (e.g. the variables that are used across different modules).
Our main code that handles most of the execution of the Terraform binaries is already written in Python (a subprocess is responsible for running `terraform plan` and `terraform apply`), and almost all of our interaction with the already-deployed cluster during testing is also done in Python. Due to the complexity of our ecosystem, having to write a lot of HCL code to handle our edge cases is not only time consuming but also error prone. In this RFD I would like to suggest moving our infrastructure code to Python using terraformpy to make it easier to maintain and extend.
There are multiple benefits to this change:
Right now, to declare a simple new variable, we need to do something like this:
```hcl
# in the variables.tf file in the root module directory
variable "my_var" {
  type    = string
  default = "my_value"
}
```

```hcl
# in the main.tf file in the root module directory
module "my_module" {
  source = "./my_module"
  my_var = var.my_var
}
```

```hcl
# in the variables.tf file in the my_module directory
variable "my_var" {
  type = string
}
```
And we also need to make sure we are passing it over in `input_vars.py`. This is a lot of code to write for a simple variable that we need to pass to a module (imagine when we need to pass outputs between different stages).
With Python we would instead have a function that receives the vars as input and passes them over to the correct module under the hood. This would make the code much easier to maintain and extend. For example:
```python
from terraformpy import Module


def pass_vars_to_module(my_var):
    # terraformpy renders this into the equivalent HCL module block
    Module(
        "my_module",
        source="./my_module",
        my_var=my_var,
    )
```
That's it. Of course, this example is very simple and does not take into consideration the full complexity of the codebase, but I think it is a good starting point for seeing how we can simplify things.
We can document how to access Plausible analytics for Nebari.
Status | Accepted ✅ |
---|---|
Author(s) | @costrouc |
Date Created | 03-28-2023 |
Date Last updated | 03-28-2023 |
Decision deadline | 04-15-2023 |
Extension Mechanism for Nebari
Over the past 3 years we have consistently run into the issue where extending and customizing Nebari has been a hard task. Several approaches have been added:
- `terraform_overrides` and `helm_overrides` keywords to allow for arbitrary overrides of Helm values
- `helm_extensions` in stage 8, which allow for the addition of arbitrary Helm charts
- `tf_extensions`, which integrate oauth2 and ingress to deploy a single Docker image

Despite these features, we still have needs from users that we are not addressing. Additionally, when we want to add a new service, it typically has to be added directly to the core of Nebari. We want to solve this by making extensions first class in Nebari.
I see quite a few benefits from this proposal:
Overall I propose we adopt pluggy. Pluggy has been adopted by many major projects, including: datasette, conda, (TODO list more). Pluggy would allow us to expose a plugin interface and "install" extensions via setuptools entry points, making extension installation as easy as `pip install ...`
Usage from a high-level user standpoint:

```bash
pip install nebari
pip install nebari-ext-clearml
pip install nebari-ext-helm
pip install nebari-ext-cost
```
Once a user installs the extensions, we can view the installed extensions via:

```bash
$ nebari extensions list
Name                 Description
---------------------------------------------------------------------------
nebari-ext-clearml   "ClearML integration into nebari"
nebari-ext-helm      "Helm extensions"
....
```
Within Nebari we will expose several plugins:

- A plugin interface for arbitrary additional `typer` commands (e.g. `nebari cost`). All commands will be passed the Nebari config along with all command line arguments specified by the user. Conda has a similar typer-based approach for their system.
- A `Stage` plugin interface for deployment stages:
```python
import pathlib
import typing

from nebari import schema  # assuming the schema module is importable from nebari


class Stage:
    name: str
    description: str
    priority: str  # defaults to value of name

    def validate(self, config: schema.NebariConfig):
        """Perform additional validation of the nebari configuration specific to this stage."""
        ...

    def render(self, config: schema.NebariConfig) -> typing.Union[typing.Dict[str, bytes], pathlib.Path]:
        """Given a configuration, render a set of files.

        Returns
        -------
        typing.Union[typing.Dict[str, bytes], pathlib.Path]
            Either a directory to copy files from or a dictionary mapping
            file names to file bytes.
        """
        ...

    def deploy(self, directory: pathlib.Path, stages: typing.Dict[str, typing.Any]) -> typing.Any:
        """Deploy all resources within the stage."""
        ...

    def destroy(self, directory: pathlib.Path):
        """Destroy all resources within the stage."""
        ...
```
Nebari will use pluggy within its core and separate each stage into a pluggy `Stage`. Each stage will keep its original name.
As far as plugin/extension systems go I am only aware of two major ones within the python ecosystem:
This will encourage the practice of extending nebari via extensions instead of direct PRs to the core.
It is possible to make this transition seamless to the user without changing behavior.
I feel confident in this approach since I have seen other projects use pluggy successfully for similar work.
Right now we are handling permissions by hand - it would be ideal to have a declarative way to handle this.
Proposal: adopt Peribolos
Status | Draft 🚧 |
---|---|
Author(s) | Adam-D-Lewis |
Date Created | 02-03-2023 |
Date Last updated | 02-03-2023 |
Decision deadline | ? |
The current method of running Argo Workflows from within JupyterLab is not particularly user friendly. We'd like to have a beginner-friendly way of running simple Argo Workflows, even if this method has limitations that make it inappropriate for more complex/large workflows.
Many users have asked for ways to run/schedule workflows. This would fill many of those needs.
```python
from kubespawner import KubeSpawner
import json


class MySpawner(KubeSpawner):
    def pre_spawn_start(self, user, spawner_options):
        # Get the JWT token from the authentication server
        token = self.user_options.get('token', {}).get('id_token', '')
        # Decode the JWT token to obtain the OIDC claims
        decoded_token = json.loads(self.api.jwt.decode(token)['payload'])
        # Extract the OIDC groups from the claims
        groups = decoded_token.get('groups', [])
        # Modify the notebook server configuration based on the OIDC groups
        if 'group1' in groups:
            self.user_options['profile'] = 'group1_profile'
        # Call the parent pre_spawn_start method to perform any additional modifications
        super().pre_spawn_start(user, spawner_options)
```
```python
import nebari_workflows as wf
from wf.hera import Task, Workflow, set_global_host, set_global_token, set_global_verify_ssl, GlobalConfig, get_global_verify_ssl

# maybe make a widget like the dask cluster one
wf.settings(
    conda_environment='',  # uses same as user submitting it by default
    instance_type='',      # uses same as user submitting it by default
)

# `p` below is assumed to be a previously defined function for the tasks to run
with Workflow("two-tasks") as w:  # this uses a service with the global token and host
    Task("a", p, [{"m": "hello"}], node_selectors={"beta.kubernetes.io/instance-type": "n1-standard-4"})
    Task("b", p, [{"m": "hello"}], node_selectors={"beta.kubernetes.io/instance-type": "n1-standard-8"})

wf.submit(w)
```
Here's what I've done so far:

- a `dev` setup with `get_pod` permissions
- workflows run with `conda run -n myEnv` and mount the user directory and the shared directories

So deviations from that are still untested.