
A toy PyTorch benchmark serving as an example of a project started from the CLAIRE Python ML research template.

Home Page: https://github.com/CLAIRE-Labo/python-ml-research-template

License: MIT License



A Toy PyTorch Benchmark

Overview

This is a toy PyTorch benchmark project created using CLAIRE's Python Machine Learning Project Template.

Figure: runtime with different devices (cuda > mps > cpu) when training a CNN on MNIST. See reproducibility-scripts/runtime-benchmark.yaml to reproduce, and the curves on Weights & Biases.
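The priority order above (cuda > mps > cpu) is the usual PyTorch fallback chain. As a minimal sketch (not necessarily the exact logic used in this project), selecting the fastest available device could look like:

import torch

def pick_device() -> torch.device:
    # Prefer CUDA (NVIDIA GPUs), then MPS (Apple Silicon), then fall back to CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(10, 1).to(device)  # any module can be moved this way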

It is meant to show what a project started from the template looks like. Most of the following README, as well as the instructions to set up the environment, were generated by the template.

Getting Started

Code and development environment

We support the following methods and platforms for installing the project dependencies and running the code.

  • Docker/OCI container for AMD64 machines (+ NVIDIA GPUs): This option works for machines with AMD64 CPUs and NVIDIA GPUs, e.g., Linux machines (EPFL HaaS servers, VMs on cloud providers), Windows machines with WSL, and clusters running OCI-compliant containers, such as the EPFL Run:ai (Kubernetes) clusters.

    Follow the instructions in installation/docker-amd64-cuda/README.md to install the environment, then come back here for the rest of the instructions to run the experiments.

    We ran our experiments on an 80GB NVIDIA A100 GPU and AMD EPYC 7543 CPUs.

  • Conda for osx-arm64: This option works for macOS machines with Apple Silicon and can leverage MPS acceleration.

    Follow the instructions in installation/conda-osx-arm64-mps/README.md to install the environment, then come back here for the rest of the instructions to run the experiments.

    We ran our experiments on an Apple M2 MacBook Air with 10 GPU cores and 24GB of memory.

Data

Refer to data/README.md.

Logging and tracking experiments

We use Weights & Biases to log and track our experiments. If you're logged in, your default entity will be used (a fixed entity is not set in the config), and you can set another entity with the WANDB_ENTITY environment variable. Otherwise, the runs will be anonymous (you don't need to be logged in).

Reproduction and Experimentation

Reproducing our results

We provide scripts to reproduce our work in the reproducibility-scripts/ directory. It has a README at its root describing which scripts reproduce which experiments.

Experiment with different configurations

The default configuration for each script is stored in the configs/ directory and managed by Hydra. You can experiment with different configurations by passing the relevant arguments on the command line; examples are given in the reproducibility-scripts/ directory.
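For reference, a minimal Hydra entry point looks like the sketch below (a generic illustration, not this project's exact code; the config name and fields are assumptions). Overrides are passed as key=value arguments:

import hydra
from omegaconf import DictConfig

# Illustrative sketch: reads configs/mnist.yaml (assumed name) and applies
# command-line overrides, e.g.: python mnist.py optimizer.lr=0.01 device=cpu
@hydra.main(config_path="configs", config_name="mnist", version_base=None)
def main(cfg: DictConfig) -> None:
    print(cfg)  # the resolved configuration after overrides

if __name__ == "__main__":
    main()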

Using trained models and experimenting with results

We share our Weights and Biases runs in this W&B project.

Moreover, we make our trained models available. You can follow the instructions in outputs/README.md to download and use them.

Repository structure

Below, we give a description of the main files and directories in this repository.

 └── src/                          # Source code.
    └── pytoych_benchmark/         # Our package.
        ├── configs/               # Hydra configuration files.
        ├── models/mnist.py        # A CNN model for MNIST (see the sketch below).
        ├── mnist.py               # Trains and evaluates a model on MNIST.
        └── template_experiment.py # A template experiment.
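For orientation, a typical small CNN for 1x28x28 MNIST images might look like the sketch below (illustrative only; the actual architecture in models/mnist.py may differ):

import torch
import torch.nn as nn

class MnistCnn(nn.Module):
    # Illustrative MNIST CNN, not necessarily the project's actual model.
    def __init__(self, num_classes: int = 10) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 1x28x28 -> 32x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # -> 64x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 64x7x7
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = MnistCnn()(torch.randn(8, 1, 28, 28))  # shape (8, 10)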

Contributing

We use pre-commit hooks to ensure high-quality code. Make sure pre-commit is installed on the system where you're developing: it is in the project's dependencies, but you may be editing the code from outside the development environment. (If you have conda, you can install it in your base environment; otherwise, you can install it with brew.) Install the pre-commit hooks with

# When in the PROJECT_ROOT.
pre-commit install --install-hooks

Then every time you commit, the pre-commit hooks will be triggered. You can also trigger them manually with:

pre-commit run --all-files
