mlflow_bike_sharing

Directory Structure

.
├── README.md
├── config  <- any configuration files
├── data
│   ├── dc_volumes <- Docker Compose volumes
│   ├── experiments <- experiments config yaml files
│   ├── processed <- data after all preprocessing has been done
│   └── raw <- original unmodified data acting as source of truth and provenance
├── docker <- docker image(s) for running project inside container(s)
├── models  <- trained model .pkl and other model files
└── src
    ├── config <- configuration for the pipelines
    ├── dataset <- data preparation and/or preprocessing
    ├── feauture <- important feature visualization
    ├── mlflow_client <- set up/run the MLflow client
    ├── pipelines <- pipeline scripts
    ├── test <- smoke tests for mlflow, mlflow_api and fast_api
    ├── train <- model training stage code
    └── utils <- auxiliary functions and utilities

Run demo

Run all steps in one go: prepare the data, get the model, and start the API endpoints.

make venv
source ./venv/bin/activate
make demo

1. Get data

Download Bike Sharing Dataset

make dataset

2. Build and run docker containers

Build and run containers from docker-compose.yml

make start
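
The actual service definitions live in docker-compose.yml in the repo; as an illustration, a fragment might look roughly like this (service names and build contexts are assumptions; the host ports are the ones used by the REST API checks below):

```yaml
# Hypothetical fragment -- see docker-compose.yml for the real definitions
services:
  fast_api:
    build: ./docker
    ports:
      - "5005:5005"        # serves /invocations
    volumes:
      - ./data/dc_volumes:/data
  mlflow_api:
    build: ./docker
    ports:
      - "5001:5001"        # serves /prediction
```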

3. Run DVC pipeline

Run the DVC pipeline with all stages from dvc.yaml

make dvcrun

4. Automate pipelines (DAG) execution

1) Prepare configs

Run stage:

dvc stage add -n prepare_configs \
        -d src/pipelines/prepare_configs.py \
        -d config/config.yml \
        -o data/experiments/base_config.yml \
        -o data/experiments/model_select_config.yml \
        -o data/experiments/prepare_dataset_config.yml \
        -o data/experiments/split_train_test_config.yml \
        -o data/experiments/train_config.yml \
        python src/pipelines/prepare_configs.py --config=config/config.yml

Reproduce stage: dvc repro prepare_configs
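
`dvc stage add` records the stage in dvc.yaml, so the command above produces an entry roughly like this (formatting may differ slightly between DVC versions):

```yaml
stages:
  prepare_configs:
    cmd: python src/pipelines/prepare_configs.py --config=config/config.yml
    deps:
      - src/pipelines/prepare_configs.py
      - config/config.yml
    outs:
      - data/experiments/base_config.yml
      - data/experiments/model_select_config.yml
      - data/experiments/prepare_dataset_config.yml
      - data/experiments/split_train_test_config.yml
      - data/experiments/train_config.yml
```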

2) Prepare dataset

Run stage:

dvc stage add -n prepare_dataset \
        -d src/pipelines/prepare_dataset.py \
        -d data/experiments/prepare_dataset_config.yml \
        -d data/raw/hour.csv \
        -o data/processed/hour.csv \
        python src/pipelines/prepare_dataset.py --config=data/experiments/prepare_dataset_config.yml

Reproduce stage: dvc repro prepare_dataset
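
The real preprocessing lives in src/pipelines/prepare_dataset.py and is driven by prepare_dataset_config.yml; a hypothetical sketch of what it plausibly does, assuming the raw UCI column names are mapped to the feature names that appear in the REST API payloads below:

```python
# Hypothetical sketch of the prepare_dataset stage (not the repo's actual code).
import pandas as pd

# Assumed mapping from raw UCI hour.csv columns to the API feature names.
COLUMN_RENAMES = {
    "yr": "year",
    "mnth": "month",
    "hr": "hour_of_day",
    "holiday": "is_holiday",
    "workingday": "is_workingday",
    "weathersit": "weather_situation",
    "temp": "temperature",
    "atemp": "feels_like_temperature",
    "hum": "humidity",
}

def prepare_dataset(raw: pd.DataFrame) -> pd.DataFrame:
    """Rename raw columns and drop identifiers/leaky columns."""
    df = raw.rename(columns=COLUMN_RENAMES)
    # 'casual' + 'registered' sum to the target 'cnt', so they cannot be features
    return df.drop(columns=["instant", "dteday", "casual", "registered"],
                   errors="ignore")
```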

3) Split train/test datasets

Run stage:

dvc stage add -n split_dataset \
    -d src/pipelines/split_train_test.py \
    -d data/experiments/split_train_test_config.yml \
    -d data/processed/hour.csv \
    -o data/processed/x_train_bike.csv \
    -o data/processed/x_test_bike.csv \
    -o data/processed/y_train_bike.csv \
    -o data/processed/y_test_bike.csv \
    python src/pipelines/split_train_test.py \
    --config=data/experiments/split_train_test_config.yml

This stage creates x_train_bike.csv, x_test_bike.csv, y_train_bike.csv, and y_test_bike.csv in data/processed.

Reproduce stage: dvc repro split_dataset
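
A minimal sketch of what src/pipelines/split_train_test.py plausibly does (target column, test size, and seed are assumptions; the real values come from split_train_test_config.yml):

```python
# Hypothetical sketch of the split_dataset stage (not the repo's actual code).
import pandas as pd
from sklearn.model_selection import train_test_split

def split(df: pd.DataFrame, target: str = "cnt",
          test_size: float = 0.2, seed: int = 42):
    """Split features/target into the four frames the stage writes as CSVs."""
    x = df.drop(columns=[target])
    y = df[target]
    return train_test_split(x, y, test_size=test_size, random_state=seed)
```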

4) Train model

Run stage:

dvc stage add -n train \
    -d src/pipelines/train.py \
    -d data/experiments/train_config.yml \
    -d data/processed/x_train_bike.csv \
    -d data/processed/x_test_bike.csv \
    -d data/processed/y_train_bike.csv \
    -d data/processed/y_test_bike.csv \
    python src/pipelines/train.py --config=data/experiments/train_config.yml --base_config=config/config.yml

This stage trains the model and saves it.

Reproduce stage: dvc repro train
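
A simplified sketch of the train stage; the model type and metric here are placeholders (the real choices come from train_config.yml), and the MLflow logging is indicated only as a comment:

```python
# Hypothetical sketch of the train stage (not the repo's actual code).
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def train(x_train, y_train, x_test, y_test):
    """Fit a regressor and evaluate it on the held-out split."""
    model = LinearRegression().fit(x_train, y_train)
    mse = mean_squared_error(y_test, model.predict(x_test))
    # In the real stage the run would also be logged to the MLflow tracking
    # server, e.g. mlflow.log_metric("mse", mse) and
    # mlflow.sklearn.log_model(model, "model").
    return model, mse
```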

5) Get best model

Run stage:

dvc stage add -n model_select \
    -d src/pipelines/model_select.py \
    -d data/experiments/model_select_config.yml \
    -d config/config.yml \
    -o models/model.pkl \
    python src/pipelines/model_select.py --config=data/experiments/model_select_config.yml --base_config=config/config.yml

This stage selects the best of the trained models and saves it as models/model.pkl.

Reproduce stage: dvc repro model_select
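
In the real stage the candidate runs and their metrics would come from the MLflow tracking server; as a simplified illustration, selection over an in-memory dict of candidates looks like this:

```python
# Hypothetical sketch of the model_select stage (not the repo's actual code).
import pickle

def select_best(candidates: dict, out_path: str):
    """Pick the candidate with the lowest error metric and pickle it."""
    best_name = min(candidates, key=lambda name: candidates[name]["mse"])
    with open(out_path, "wb") as f:
        pickle.dump(candidates[best_name]["model"], f)
    return best_name
```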

REST API check

  1. Check the FastAPI endpoint
curl --silent --show-error 'http://0.0.0.0:5005/invocations' -H 'Content-Type: application/json' -d '{
    "columns": ["season", "year", "month", "hour_of_day", "is_holiday", "weekday", "is_workingday", "weather_situation", "temperature", "feels_like_temperature", "humidity", "windspeed"],
    "data": [[1, 0, 1, 0, 1, 6, 0, 1, 0.24, 0.2671, 0.81, 0.0000]]
}'
  2. Check the MLflow API endpoint
curl --silent --show-error 'http://localhost:5001/prediction' -H 'Content-Type: application/json' -d '[{
  "season": 1, "year": 0, "month": 1, "hour_of_day": 0, "is_holiday": 1, "weekday": 0,
  "is_workingday": 0, "weather_situation": 1, "temperature": 0.24,
  "feels_like_temperature": 0.2671, "humidity": 0.81, "windspeed": 0.0000
}]'
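
The same checks can be scripted with only the standard library; the endpoints and payload shapes below are taken directly from the curl examples above:

```python
# Script the two REST checks above; payload shapes match the curl examples.
import json
import urllib.request

FEATURES = ["season", "year", "month", "hour_of_day", "is_holiday", "weekday",
            "is_workingday", "weather_situation", "temperature",
            "feels_like_temperature", "humidity", "windspeed"]

def invocations_payload(row):
    """Columns/data payload for the port-5005 /invocations endpoint."""
    return json.dumps({"columns": FEATURES, "data": [row]})

def prediction_payload(row):
    """List-of-records payload for the port-5001 /prediction endpoint."""
    return json.dumps([dict(zip(FEATURES, row))])

def post(url, body):
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        url, data=body.encode(), headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```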
