
♾️ CML - Continuous Machine Learning | CI/CD for ML

Home Page: http://cml.dev

License: Apache License 2.0

JavaScript 97.69% Dockerfile 2.20% HCL 0.11%
machine-learning data-science cicd ci-cd github-actions gitlab-ci developer-tools continuous-integration continuous-delivery ci

cml's Introduction


What is CML? Continuous Machine Learning (CML) is an open-source CLI tool for implementing continuous integration & delivery (CI/CD) with a focus on MLOps. Use it to automate development workflows — including machine provisioning, model training and evaluation, comparing ML experiments across project history, and monitoring changing datasets.

CML can help train and evaluate models — and then generate a visual report with results and metrics — automatically on every pull request.

An example report for a neural style transfer model.

CML principles:

  • GitFlow for data science. Use GitLab or GitHub to manage ML experiments, track who trained ML models or modified data and when. Codify data and models with DVC instead of pushing to a Git repo.
  • Auto reports for ML experiments. Auto-generate reports with metrics and plots in each Git pull request. Rigorous engineering practices help your team make informed, data-driven decisions.
  • No additional services. Build your own ML platform using GitLab, Bitbucket, or GitHub. Optionally, use cloud storage as well as either self-hosted or cloud runners (such as AWS EC2 or Azure). No databases, services or complex setup needed.

❓ Need help? Just want to chat about continuous integration for ML? Visit our Discord channel!

⏯️ Check out our YouTube video series for hands-on MLOps tutorials using CML!

Table of Contents

  1. Setup (GitLab, GitHub, Bitbucket)
  2. Usage
  3. Getting started (tutorial)
  4. Using CML with DVC
  5. Advanced Setup (Self-hosted, local package)
  6. Example projects

Setup

You'll need a GitLab, GitHub, or Bitbucket account to begin. Users may wish to familiarize themselves with GitHub Actions or GitLab CI/CD. Here, we'll discuss the GitHub use case.

GitLab

Please see our docs on CML with GitLab CI/CD and in particular the personal access token requirement.

Bitbucket

Please see our docs on CML with Bitbucket Cloud.

GitHub

The key file in any CML project is .github/workflows/cml.yaml:

name: your-workflow-name
on: [push]
jobs:
  run:
    runs-on: ubuntu-latest
    # optionally use a convenient Ubuntu LTS + DVC + CML image
    # container: ghcr.io/iterative/cml:0-dvc2-base1
    steps:
      - uses: actions/checkout@v3
      # may need to setup NodeJS & Python3 on e.g. self-hosted
      # - uses: actions/setup-node@v3
      #   with:
      #     node-version: '16'
      # - uses: actions/setup-python@v4
      #   with:
      #     python-version: '3.x'
      - uses: iterative/setup-cml@v1
      - name: Train model
        run: |
          # Your ML workflow goes here
          pip install -r requirements.txt
          python train.py
      - name: Write CML report
        env:
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Post reports as comments in GitHub PRs
          cat results.txt >> report.md
          cml comment create report.md

Usage

We helpfully provide CML and other useful libraries pre-installed on our custom Docker images. In the example above, uncommenting the field container: ghcr.io/iterative/cml:0-dvc2-base1 will make the runner pull the CML Docker image. The image already has NodeJS, Python 3, DVC, and CML set up on an Ubuntu LTS base for convenience.

CML Functions

CML provides a number of functions to help package the outputs of ML workflows (including numeric data and visualizations about model performance) into a CML report.

Below are the CML functions for writing markdown reports and delivering those reports to your CI system:

  • cml runner launch: launch a runner locally or hosted by a cloud provider. Example inputs: see Arguments below.
  • cml comment create: post a CML report as a comment in your GitLab/GitHub workflow. Example inputs: <path to report> --head-sha <sha>
  • cml check create: post a CML report as a check in GitHub. Example inputs: <path to report> --head-sha <sha>
  • cml pr create: commit the given files to a new branch and create a pull request. Example inputs: <path>...
  • cml tensorboard connect: return a link to a Tensorboard.dev page. Example inputs: --logdir <path to logs> --title <experiment title> --md
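For instance, a hedged sketch of two of these commands in a CI step (file names are illustrative; GITHUB_SHA is the commit SHA variable GitHub Actions provides):

# post report.md as a comment on the triggering commit
cml comment create report.md --head-sha "$GITHUB_SHA"
# commit generated artifacts to a new branch and open a pull request
cml pr create metrics.json plot.png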

CML Reports

The cml comment create command can be used to post reports. CML reports are written in markdown (GitHub, GitLab, or Bitbucket flavors). That means they can contain images, tables, formatted text, HTML blocks, code snippets and more — really, what you put in a CML report is up to you. Some examples:

🗒️ Text Write to your report using whatever method you prefer. For example, copy the contents of a text file containing the results of ML model training:

cat results.txt >> report.md
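Since reports are plain markdown, you can also assemble richer elements by hand. A minimal sketch of appending a metrics table (the metric name and value are purely illustrative):

echo "| Metric | Value |" >> report.md
echo "| ------ | ----- |" >> report.md
echo "| Accuracy | 0.92 |" >> report.md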

🖼️ Images Display images using markdown or HTML. Note that if an image is an output of your ML workflow (i.e., it is produced by your workflow), it can be uploaded and automatically included in your CML report. For example, if graph.png is output by python train.py, run:

echo "![](./graph.png)" >> report.md
cml comment create report.md

Getting Started

  1. Fork our example project repository.

⚠️ Note that if you are using GitLab, you will need to create a Personal Access Token for this example to work.

⚠️ The following steps can all be done in the GitHub browser interface. However, to follow along with the commands, we recommend cloning your fork to your local workstation:

git clone https://github.com/<your-username>/example_cml
  2. To create a CML workflow, copy the following into a new file, .github/workflows/cml.yaml:
name: model-training
on: [push]
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
      - uses: iterative/setup-cml@v1
      - name: Train model
        env:
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          pip install -r requirements.txt
          python train.py

          cat metrics.txt >> report.md
          echo "![](./plot.png)" >> report.md
          cml comment create report.md
  3. In your text editor of choice, edit line 16 of train.py to depth = 5.

  4. Commit and push the changes:

git checkout -b experiment
git add . && git commit -m "modify forest depth"
git push origin experiment
  5. In GitHub, open up a pull request to compare the experiment branch to main.

Shortly, you should see a comment from github-actions appear in the pull request with your CML report. This is a result of the cml comment create command in your workflow.

This is the outline of the CML workflow:

  • you push changes to your GitHub repository,
  • the workflow in your .github/workflows/cml.yaml file gets run, and
  • a report is generated and posted to GitHub.

CML functions let you display relevant results from the workflow — such as model performance metrics and visualizations — in GitHub checks and comments. What kind of workflow you want to run, and what you want to put in your CML report, is up to you.

Using CML with DVC

In many ML projects, data isn't stored in a Git repository, but needs to be downloaded from external sources. DVC is a common way to bring data to your CML runner. DVC also lets you visualize how metrics differ between commits to make reports like this:

The .github/workflows/cml.yaml file used to create this report is:

name: model-training
on: [push]
jobs:
  run:
    runs-on: ubuntu-latest
    container: ghcr.io/iterative/cml:0-dvc2-base1
    steps:
      - uses: actions/checkout@v3
      - name: Train model
        env:
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          # Install requirements
          pip install -r requirements.txt

          # Pull data & run-cache from S3 and reproduce pipeline
          dvc pull data --run-cache
          dvc repro

          # Report metrics
          echo "## Metrics" >> report.md
          git fetch --prune
          dvc metrics diff main --show-md >> report.md

          # Publish confusion matrix diff
          echo "## Plots" >> report.md
          echo "### Class confusions" >> report.md
          dvc plots diff --target classes.csv --template confusion -x actual -y predicted --show-vega main > vega.json
          vl2png vega.json -s 1.5 > confusion_plot.png
          echo "![](./confusion_plot.png)" >> report.md

          # Publish regularization function diff
          echo "### Effects of regularization" >> report.md
          dvc plots diff --target estimators.csv -x Regularization --show-vega main > vega.json
          vl2png vega.json -s 1.5 > plot.png
          echo "![](./plot.png)" >> report.md

          cml comment create report.md

⚠️ If you're using DVC with cloud storage, take note of the environment variables for your storage provider.

Configuring Cloud Storage Providers

There are many supported cloud storage providers. Here are a few examples for some of the most frequently used ones:

S3 and S3-compatible storage (Minio, DigitalOcean Spaces, IBM Cloud Object Storage...)
# Github
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}

👉 AWS_SESSION_TOKEN is optional.

👉 AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY can also be used by cml runner to launch EC2 instances. See Environment Variables below.

Azure
env:
  AZURE_STORAGE_CONNECTION_STRING:
    ${{ secrets.AZURE_STORAGE_CONNECTION_STRING }}
  AZURE_STORAGE_CONTAINER_NAME: ${{ secrets.AZURE_STORAGE_CONTAINER_NAME }}
Aliyun
env:
  OSS_BUCKET: ${{ secrets.OSS_BUCKET }}
  OSS_ACCESS_KEY_ID: ${{ secrets.OSS_ACCESS_KEY_ID }}
  OSS_ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
  OSS_ENDPOINT: ${{ secrets.OSS_ENDPOINT }}
Google Storage

⚠️ Normally, GOOGLE_APPLICATION_CREDENTIALS is the path of the JSON file containing the credentials. However, in this action the secret variable should hold the contents of the file itself. Copy the JSON contents and add them as a secret.

env:
  GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
Google Drive

⚠️ After configuring your Google Drive credentials you will find a json file at your_project_path/.dvc/tmp/gdrive-user-credentials.json. Copy its contents and add it as a secret variable.

env:
  GDRIVE_CREDENTIALS_DATA: ${{ secrets.GDRIVE_CREDENTIALS_DATA }}
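As a convenience, here is a hedged sketch of adding that secret from the command line with the GitHub CLI (assumes gh is installed and authenticated; the path is the one from the note above):

gh secret set GDRIVE_CREDENTIALS_DATA < your_project_path/.dvc/tmp/gdrive-user-credentials.json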

Advanced Setup

Self-hosted (On-premise or Cloud) Runners

GitHub Actions are run on GitHub-hosted runners by default. However, there are many great reasons to use your own runners: to take advantage of GPUs, orchestrate your team's shared computing resources, or train in the cloud.

☝️ Tip! Check out the official GitHub documentation to get started setting up your own self-hosted runner.

Allocating Cloud Compute Resources with CML

When a workflow requires computational resources (such as GPUs), CML can automatically allocate cloud instances using cml runner. You can spin up instances on AWS, Azure, GCP, or Kubernetes.

For example, the following workflow deploys a g4dn.xlarge instance on AWS EC2 and trains a model on the instance. After the job runs, the instance automatically shuts down.

You might notice that this workflow is quite similar to the basic use case above. The only addition is cml runner and a few environment variables for passing your cloud service credentials to the workflow.

Note that cml runner will also automatically restart your jobs (whether from a GitHub Actions 35-day workflow timeout or an AWS EC2 spot instance interruption).

name: Train-in-the-cloud
on: [push]
jobs:
  deploy-runner:
    runs-on: ubuntu-latest
    steps:
      - uses: iterative/setup-cml@v1
      - uses: actions/checkout@v3
      - name: Deploy runner on EC2
        env:
          REPO_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          cml runner launch \
            --cloud=aws \
            --cloud-region=us-west \
            --cloud-type=g4dn.xlarge \
            --labels=cml-gpu
  train-model:
    needs: deploy-runner
    runs-on: [self-hosted, cml-gpu]
    timeout-minutes: 50400 # 35 days
    container:
      image: ghcr.io/iterative/cml:0-dvc2-base1-gpu
      options: --gpus all
    steps:
      - uses: actions/checkout@v3
      - name: Train model
        env:
          REPO_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
        run: |
          pip install -r requirements.txt
          python train.py

          cat metrics.txt > report.md
          cml comment create report.md

In the workflow above, the deploy-runner step launches an EC2 g4dn.xlarge instance in the us-west region. The train-model step then runs on the newly launched instance. See Environment Variables below for details on the secrets required.

🎉 Note that jobs can use any Docker container! To use functions such as cml comment create from a job, the only requirement is to have CML installed.

Docker Images

The CML Docker image (ghcr.io/iterative/cml or iterativeai/cml) comes loaded with Python, CUDA, git, node and other essentials for full-stack data science. Different versions of these essentials are available from different image tags. The tag convention is {CML_VER}-dvc{DVC_VER}-base{BASE_VER}{-gpu}:

  • {BASE_VER} 0: Ubuntu 18.04, Python 2.7 (with -gpu: CUDA 10.1, CuDNN 7)
  • {BASE_VER} 1: Ubuntu 20.04, Python 3.8 (with -gpu: CUDA 11.2, CuDNN 8)

For example, iterativeai/cml:0-dvc2-base1-gpu, or ghcr.io/iterative/cml:0-dvc2-base1.
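To try an image locally, you can pull it and check the bundled tools. A minimal sketch, assuming Docker is installed and that the image exposes the cml binary on its PATH:

docker pull ghcr.io/iterative/cml:0-dvc2-base1
# print the bundled CML version (assumes cml is on the image's PATH)
docker run --rm ghcr.io/iterative/cml:0-dvc2-base1 cml --version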

Arguments

The cml runner launch function accepts the following arguments:

  --labels                                  One or more user-defined labels for
                                            this runner (delimited with commas)
                                                       [string] [default: "cml"]
  --idle-timeout                            Time to wait for jobs before
                                            shutting down (e.g. "5min"). Use
                                            "never" to disable
                                                 [string] [default: "5 minutes"]
  --name                                    Name displayed in the repository
                                            once registered
                                                    [string] [default: cml-{ID}]
  --no-retry                                Do not restart workflow terminated
                                            due to instance disposal or GitHub
                                            Actions timeout            [boolean]
  --single                                  Exit after running a single job
                                                                       [boolean]
  --reuse                                   Don't launch a new runner if an
                                            existing one has the same name or
                                            overlapping labels         [boolean]
  --reuse-idle                              Creates a new runner only if the
                                            matching labels don't exist or are
                                            already busy               [boolean]
  --docker-volumes                          Docker volumes, only supported in
                                            GitLab         [array] [default: []]
  --cloud                                   Cloud to deploy the runner
                         [string] [choices: "aws", "azure", "gcp", "kubernetes"]
  --cloud-region                            Region where the instance is
                                            deployed. Choices: [us-east,
                                            us-west, eu-west, eu-north]. Also
                                            accepts native cloud regions
                                                   [string] [default: "us-west"]
  --cloud-type                              Instance type. Choices: [m, l, xl].
                                            Also supports native types, e.g.
                                            t2.micro                    [string]
  --cloud-permission-set                    Specifies the instance profile in
                                            AWS or instance service account in
                                            GCP           [string] [default: ""]
  --cloud-metadata                          Key Value pairs to associate
                                            cml-runner instance on the provider
                                            i.e. tags/labels "key=value"
                                                           [array] [default: []]
  --cloud-gpu                               GPU type. Choices: k80, v100, or
                                            native types e.g. nvidia-tesla-t4
                                                                        [string]
  --cloud-hdd-size                          HDD size in GB              [number]
  --cloud-ssh-private                       Custom private RSA SSH key. If not
                                            provided an automatically generated
                                            throwaway key will be used  [string]
  --cloud-spot                              Request a spot instance    [boolean]
  --cloud-spot-price                        Maximum spot instance bidding price
                                            in USD. Defaults to the current spot
                                            bidding price [number] [default: -1]
  --cloud-startup-script                    Run the provided Base64-encoded
                                            Linux shell script during the
                                            instance initialization     [string]
  --cloud-aws-security-group                Specifies the security group in AWS
                                                          [string] [default: ""]
  --cloud-aws-subnet,                       Specifies the subnet to use within
  --cloud-aws-subnet-id                     AWS           [string] [default: ""]
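Putting a few of these flags together, a hedged example invocation (all values are illustrative, not recommendations):

cml runner launch \
  --cloud=aws \
  --cloud-region=us-east \
  --cloud-type=t2.micro \
  --cloud-spot \
  --labels=cml,cpu \
  --idle-timeout=30min \
  --reuse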

Environment Variables

⚠️ You will need to create a personal access token (PAT) with repository read/write access and workflow privileges. In the example workflow, this token is stored as PERSONAL_ACCESS_TOKEN.

ℹ️ If using the --cloud option, you will also need to provide access credentials for your cloud compute resources as secrets. In the above example, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (with privileges to create & destroy EC2 instances) are required.

For AWS, the same credentials can also be used for configuring cloud storage.

Proxy support

CML supports proxies via the standard http_proxy and https_proxy environment variables.
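For example (the proxy address is illustrative):

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
cml comment create report.md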

On-premise (Local) Runners

You can also use on-premise machines as self-hosted runners; the cml runner launch function sets one up. On a local machine or on-premise GPU cluster, install CML as a package and then run:

cml runner launch \
  --repo=$your_project_repository_url \
  --token=$PERSONAL_ACCESS_TOKEN \
  --labels="local,runner" \
  --idle-timeout=180

The machine will listen for workflows from your project repository.

Local Package

In the examples above, CML is installed by the setup-cml action, or comes pre-installed in a custom Docker image pulled by a CI runner. You can also install CML as a package:

npm install --location=global @dvcorg/cml

You can use cml without node by downloading the correct standalone binary for your system from the assets section of the releases page.
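For example, on Linux (the asset name below is an assumption; check the releases page for the exact file):

# asset name assumed; verify on the releases page
curl -fL -o cml https://github.com/iterative/cml/releases/latest/download/cml-linux
chmod +x cml && sudo mv cml /usr/local/bin/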

You may need to install additional dependencies to use DVC plots and Vega-Lite CLI commands:

sudo apt-get install -y libcairo2-dev libpango1.0-dev libjpeg-dev libgif-dev \
                        librsvg2-dev libfontconfig-dev
npm install -g vega-cli vega-lite

CML and Vega-Lite package installation require the NodeJS package manager (npm), which ships with NodeJS. Installation instructions are below.

Install NodeJS

  • GitHub: This is probably not necessary when using GitHub's default containers or one of CML's Docker containers. Self-hosted runners may need to use a setup action to install NodeJS:

- uses: actions/setup-node@v3
  with:
    node-version: '16'
  • GitLab: Requires direct installation.
curl -sL https://deb.nodesource.com/setup_16.x | bash
apt-get update
apt-get install -y nodejs

See Also

These are some example projects using CML. 🔑 indicates a project that needs a PAT.

⚠️ Maintenance ⚠️

  • ~2023-07: NVIDIA has dropped CUDA container images for 10.x/cudnn7 and 11.2.1; CML images will be updated accordingly

cml's People

Contributors

0x2b3bfa0, casperdcl, courentin, dacbd, davidgortega, deepyaman, dependabot[bot], dmpetrov, duijf, elleobrien, francesco086, github-actions[bot], h2oa, iterative-olivaw, jamesmowatt, jorgeorpinel, josemaia, ludelafo, magdapoppins, nipierre, omesser, pandyaved98, restyled-commits, restyled-io[bot], samknightgit, shcheklein, skn0tt, snyk-bot, tasdomas, vincent-leonardo


cml's Issues

dvc pull: cannot specify a target

It supports only true/false, while users might need to pull a particular data file, e.g.:

$ dvc pull images 

or

$ dvc pull users/ cities/ companies.csv

No tags by default

Users can set tag prefixes to get an inter-branch experiments list and reports in GitLab; however, none of this happens if the user does not set up the prefix accordingly.

  • Create one more workflow parameter/env variable, tag_prefix, for tagging the commits that we create. By default it is empty, which means no tags.
  • Mention in the GitLab documentation that tag_prefix has to be defined.
  • Mention in the GitHub docs that you can add tag_prefix.

Better logging

Use a proper logging library

  • where/what/how raising errors & messages #606
  • change every console.log into proper logger calls

@0x2b3bfa0 moved the rest of the items to a separate issue, as @casperdcl suggested in the past weekly meeting.


nice to have

(put in separate issue)

  • wrap specific CI vendor capabilities
  • heartbeats (in openmetrics format?)
  • file configurable
  • integration with studio

Edit:
Coming back to this: with the 0.3.0 release, we should now tackle this.
The proposal is to use winston and a configurable file. We can also collect runner heartbeats in OpenMetrics format.

[ci skip] flag shows up in commit message when it wasn't called

I replicated the experiment in the Wiki (GitHub version). When I made my first commit, the commit messages showed up as dvc repro [ci skip]. I wouldn't expect to see [ci skip] if I didn't include a flag for that, and in fact, I'm sure the CI ran! It might help avoid confusion if that flag were only printed in commit messages when explicitly requested by the user.


Better Report

Metrics are more important than file data; the order has to be changed.

Additional issues could be:

  • Last experiments as a list
  • Metrics could also be collapsible to reduce space
  • Warn if the current branch is being compared with itself (branch == rev)

e2e testing

To avoid manual testing we should have e2e testing. Some ideas:

  • create a repo during CI testing
  • use external repos and checks to make this one fail on build

DVC Report --> CML Report

Since we are deciding to call this tool CML, and not DVC CML, should we rename the reports CML Reports?

Unable to suppress dvc pull

I've tried DVC_PULL: false and DVC_PULL: "". In both cases, it tries to pull data. What should I put?

      - name: dvc_action_run
        env:
          ....
          DVC_PULL: ""

Support remote SSH, HTTP and HDFS

SSH is out of the supported remotes, since dvc's ssh is actually backed by SFTP via paramiko.

The strategy of adding the PEM key was wrong, since it's actually handled and located by DVC in the config file.
Open questions are:

  • Should we allow multiple SSH remotes?
  • Since all the remotes work with ENV variables... should DVC support SSH, HTTP, and HDFS credentials via ENV variables as well?

Settings file

Encapsulate all the settings inside a file.

const DVC_TITLE = 'DVC Report';
const DVC_TAG_PREFIX = 'dvc_';
const MAX_CHARS = 65000;
const METRICS_FORMAT = '0[.][0000000]';

// defaults currently read ad hoc from the environment
const {
  baseline = 'origin/master',
  metrics_format = '0[.][0000000]',
  dvc_pull = true
} = process.env;
const repro_targets = getInputArray('repro_targets', ['Dvcfile']);
const metrics_diff_targets = getInputArray('metrics_diff_targets');

// ...and more defaults scattered across the yargs CLI setup
  .demandOption('output')
  .alias('o', 'output')
  .default('diff_target', '')
  .default('metrics_diff_targets', '')
  .array('metrics_diff_targets')
  .default('a_rev', 'HEAD~1')
  .default('b_rev', 'HEAD')
  .help('h')
  .alias('h', 'help').argv;

GitLab CI Tags

I'm trying out the system on GitLab CI now and it all works very easily, except getting tags to generate after each run. I had to create an environment variable, tag_prefix.

tag_prefix doesn't seem like an ideal mechanism for encoding whether or not the user wants DVC reports generated; at least, the variable name wouldn't signal to me that I need to set it to enable tags. Is there a better way that I'm missing? I would think that we'd want tags enabled by default.

Preserve branch gh-runner

Via #55 our GH runner is removed; to be able to finish the issue we need a separate repo and Docker Hub repos.

@dmpetrov, might it work to leave the branch in the repo and use a tag in Docker Hub?

  • dvcorg/dvc-cml:custom-runner-latest
  • dvcorg/dvc-cml-gpu:custom-runner-latest

test remotes

It's fundamental to test all the available remotes.
If a remote is out of scope, throw "not implemented".

Locating DVC Report

I've been able to make changes to my code, and then git commit & git push to initiate model retraining in the mnist example from the Wiki. For some reason, though, no reports are visible.

Here's a screenshot from a case where I made a new branch, mybranch, changed the learning rate, and pushed. The CI ran, but there's no sign of a report. Any ideas?

(screenshot attached)

Extract self-hosted gpu tags code into a separate repo and docker files

The approach with unified GH-GL tags looks really appealing from the GPU-optimization point of view, but it complicates the basic solution a lot. The code with the customized gh-runner needs to be extracted into a separate project and Docker files.

Also, the readme file needs to be changed correspondingly.

Create cml NPM package

It might not be easy for users to customize our Docker image, which contains the CML code. It is more flexible to have our code as an NPM package.

Generate report as an artefact

In GH, reports are accessible through checks and/or releases, but in GL, if tags are not generated, GL does not have any reports.
Including the output of the DVC report as an artefact would mitigate this issue.

gdrive stuck on runner

I am using GDrive for remote storage. It works great on my local machine, but when I go to the runner, it seems to be stuck forever "Pulling from DVC remote".

I don't have proof, but this seems like an authentication issue. My bet is that on my local machine, the first time I try to access GDrive as a remote I am given a link to visit in my browser, and then I get a validation code that I copy and paste back into the CLI. I'm guessing we are simply not getting past this authentication stage.

I have followed the instructions in the README for cml for GDrive (to my understanding); I copied and pasted the contents of .dvc/tmp/gdrive-user-credentials.json into the value field for the secret GDRIVE_USER_CREDENTIALS_DATA.

The repo is here if needed: https://github.com/iterative/mnist_classifier

@DavidGOrtega what do you think?

add .dockerignore

Add a .dockerignore so that node_modules is not sent to the Docker build context; this may reduce the image size.
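A minimal sketch of such a file (these entries are the obvious candidates, not an exhaustive list):

# .dockerignore
node_modules
.git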

Parse default remote properly

Currently, the remote type is checked in an inconsistent way, by finding string patterns like s3:// in the dvc remote list output. That might be a problem when multiple remotes are defined.

In fact, dvc remote default returns the default remote name, which can be properly resolved to a URL in the dvc remote list output by a simple pattern.

Also, it makes sense to throw an error and exit with a proper message if dvc pull is required but the corresponding settings are not present.

promisify exec sometimes rejects

Sometimes exec rejects with the error instead of returning it, so throw_err is useless.
Refactoring this may also be a good chance to review easy-git.

Metrics of experiments with different tech implementation

This is a discussion point, not really an issue. I'm thinking about how metrics are displayed:

(screenshot attached)

I definitely want to know that I'm comparing two experiments in which hyperparameters of my model (here, the maximum depth of a random forest classifier max_depth) changed. But, whereas it makes sense to have a "diff" presented for the accuracy metric, I'm not so sure it matters to have a diff present for the hyperparameters. It's not a number we're trying to optimize (unlike accuracy diffs) and visually, it makes the display more cluttered.

I might suggest having a separate table for comparing hyperparameters that doesn't present diffs, just a side-by-side comparison. And then a table for comparing the output metrics, where I do care about the diff. Would this be challenging to implement? Maybe, for each distinct metric file, its own table? And then somewhere in project preferences a user could specify if we want diffs.

Another way of thinking about this is that if I had a spreadsheet of experiments I was trying to compare, I would lay it out this way:

experiment id  parameterA  parameterB  parameterC  accuracy
1bac226        24          5           140         0.899
f90k153        24          2           140         0.9111

And then perhaps highlight the row containing the best experiment (assuming that we can specify somewhere whether + or - is better for the metric). If you want the diff explicitly calculated, maybe put it in its own field below the table.

Metrics not available


Update the MNIST demo to output floats instead of strings, which are not processed by DVC.

be able to skip push

Add skip_push, defaulting to true. It skips the whole push process, including DVC and Git.
The report still has to happen.

public access to a dvc-cml project?

@dmpetrov and I have been talking about how we'll build tutorials for dvc-cml. One idea, which I've been building in a repo, is a project where anyone can make a fork and then submit a PR to see the workflow in action.

However, I've found this note on the Settings/Secrets page:

Secrets are not passed to workflows that are triggered by a pull request from a fork. Learn more.

If I understand correctly, this means that if someone from the public/outside DVC forked our repo and attempted to make a PR, dvc repro might be triggered BUT the runner would not be able to access credentials, such as the Google Drive credentials needed to push/pull project artifacts. Does this sound correct?

If it's an issue, it seems like we could simply put the credentials in a config file in the repo; I think, with GDrive, this is often alright?

Double jobs

Two jobs are triggered because of:

   on: [push, pull_request]
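A hedged sketch of one common mitigation (the branch name is illustrative): restrict the push trigger to the default branch, so feature-branch activity is covered by the pull_request event alone:

on:
  push:
    branches: [main]
  pull_request: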
