idris-hackathon's People

Contributors

eiffl, kimchitsigai

idris-hackathon's Issues

Testing, Benchmarking, and Optimizing Horovod collectives on Jean-Zay

This issue is to track the developments needed to finalize and validate the modified version of Horovod we developed. This overarching goal will encapsulate several smaller issues.

Goal

By the end of the hackweek, have tested code and an associated Pull Request to https://github.com/horovod/horovod that fully supports our needs for Mesh TensorFlow.

Context

Together with @kimchitsigai and @mypey, we worked on modifications to Horovod that add support for multiple communicators. A description of what we did can be found here: DifferentiableUniverseInitiative/horovod#2
In parallel, a different proposal for supporting multiple groups of devices was made in horovod/horovod#2839.
In the end, probably only one of these two implementations will be merged, so we should try to determine which one works best for our purposes.
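To illustrate why multiple communicators matter for Mesh TensorFlow, here is a pure-Python sketch (no MPI or Horovod required) of how ranks on a 2D process mesh would be grouped into row and column communicators, mimicking the semantics of MPI_Comm_split. The helper name `comm_split` is hypothetical, not an actual Horovod or MPI binding:

```python
def comm_split(world, color_of):
    """Group `world` ranks by color; each group plays the role of one
    sub-communicator (mimics MPI_Comm_split(color, key))."""
    groups = {}
    for rank in world:
        groups.setdefault(color_of(rank), []).append(rank)
    return {color: sorted(g) for color, g in groups.items()}

# A 2x4 process mesh, matching the benchmark's --mesh_shape="b1:2,b2:4".
rows, cols = 2, 4
world = list(range(rows * cols))

row_comms = comm_split(world, lambda r: r // cols)  # ranks sharing a mesh row
col_comms = comm_split(world, lambda r: r % cols)   # ranks sharing a mesh column

print(row_comms)  # {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}
print(col_comms)  # {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

Collectives such as all-reduce or all-to-all then run independently within each group, which is exactly what Mesh TensorFlow needs when a tensor dimension is split over only one mesh axis.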

Participants

The main participants in this task are:

Tasks

  - [ ] Run profiling on 3D demo FFT #2
  - [ ] Identify and resolve the origin of deadlocks in our modified Horovod: DifferentiableUniverseInitiative/horovod#5
  - [ ] Test the alternative implementation proposed in horovod/horovod#2839
  - [ ] Test scaling of distributed operations for large tensors
  - [ ] Depending on the results above, possibly implement additional specialized collectives for expensive Mesh TensorFlow bottlenecks

Benchmarking 3D FFTs with NVIDIA Nsight Systems

We have a small benchmark script that should allow us to test the scaling of distributed 3D FFTs with Mesh TensorFlow. However, together with @kimchitsigai we have so far been running into issues when trying to get a profiler trace with nsys profile.

The script itself is located here and we try to run it with the following SLURM job:

#!/bin/bash
#SBATCH --job-name=fft_benchmark     # job name
##SBATCH --partition=gpu_p2          # uncomment for the gpu_p2 partition
#SBATCH --ntasks=8                   # total number of MPI tasks (= total number of GPUs)
#SBATCH --ntasks-per-node=4          # number of MPI tasks per node (= number of GPUs per node)
#SBATCH --gres=gpu:4                 # number of GPUs per node (max 8 with gpu_p2)
#SBATCH --cpus-per-task=10           # number of CPU cores per task (a quarter of the node here)
##SBATCH --cpus-per-task=3           # number of CPU cores per task (for gpu_p2: 1/8 of the node)
# Note: "multithread" refers to hyperthreading in Slurm terminology
#SBATCH --hint=nomultithread         # hyperthreading disabled
#SBATCH --time=00:10:00              # maximum requested run time (HH:MM:SS)
#SBATCH --output=fft_benchmark%j.out # name of the output file
#SBATCH --error=fft_benchmark%j.out  # name of the error file (here shared with the output)
#SBATCH -A ftb@gpu                   # specify the project
#SBATCH --qos=qos_gpu-dev            # use the dev queue, as this is only for profiling

# clean out modules loaded interactively and inherited by default
module purge

# load modules
module load tensorflow-gpu/py3/2.4.1+nccl-2.8.3-1

# echo launched commands
set -x

# run the code with GPU binding via bind_gpu.sh: 1 GPU per MPI task
srun --unbuffered --mpi=pmi2 -o fft_%t.log nsys profile --stats=true -t nvtx,cuda,mpi -o result-%q{SLURM_TASK_PID} python -u fft_benchmark.py --mesh_shape="b1:2,b2:4" --layout="nx:b1,tny:b1,ny:b2,tnz:b2"

Unfortunately, this crashes before it can return the full trace, and we are not sure why.
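For context on what the benchmark measures: a distributed 3D FFT is typically computed as local FFTs on each rank's slab of the volume plus a global transpose (the all-to-all that dominates at scale). A NumPy-only sketch of this slab decomposition, which is an illustration and not the code in fft_benchmark.py:

```python
import numpy as np

def distributed_fft3d(x, nproc):
    """Slab-decomposed 3D FFT sketch: each 'process' owns a slab along the
    first axis. Local 2D FFTs are taken on the slab, then a global transpose
    (the expensive all-to-all, emulated here by concatenation) makes the
    remaining axis local so the final 1D FFTs can be applied."""
    n = x.shape[0]
    assert n % nproc == 0, "grid size must divide evenly over processes"
    slabs = np.split(x, nproc, axis=0)                    # domain decomposition
    slabs = [np.fft.fftn(s, axes=(1, 2)) for s in slabs]  # local y-z FFTs
    full = np.concatenate(slabs, axis=0)                  # stands in for the all-to-all
    return np.fft.fft(full, axis=0)                       # final FFT along x

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8, 8))
# The decomposed result matches the single-process 3D FFT.
assert np.allclose(distributed_fft3d(a, nproc=4), np.fft.fftn(a))
```

Because the FFT is separable, the decomposed computation is exact; what the benchmark probes is how the communication step scales as the tensor grows.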

Implementation of Horovod backend in Mesh TensorFlow

This issue is to track the developments needed to finalize and validate the Mesh TensorFlow implementation relying on horovod for the backend. This overarching goal will encapsulate several smaller issues.

Goal

By the end of the hackweek, submit a Pull Request to https://github.com/tensorflow/mesh with our new implementation for GPU clusters.
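The backend is driven by mesh and layout specifications passed as strings, as in the benchmark's --mesh_shape="b1:2,b2:4" and --layout="nx:b1,..." flags. A minimal parser sketch of that string convention, for illustration only and not Mesh TensorFlow's actual parsing code:

```python
def parse_mesh_shape(spec):
    """Parse a mesh-shape string like 'b1:2,b2:4' into a dict mapping
    mesh-dimension name -> size."""
    return {name: int(size)
            for name, size in (item.split(":") for item in spec.split(","))}

def parse_layout(spec):
    """Parse a layout string like 'nx:b1,ny:b2' into a dict mapping
    tensor-dimension name -> mesh-dimension name."""
    return dict(item.split(":") for item in spec.split(","))

mesh = parse_mesh_shape("b1:2,b2:4")
layout = parse_layout("nx:b1,tny:b1,ny:b2,tnz:b2")

print(mesh)  # {'b1': 2, 'b2': 4}
# Total number of processes = product of the mesh-dimension sizes (2 * 4 = 8),
# matching --ntasks=8 in the SLURM job above.
```

The layout assigns each split tensor dimension to exactly one mesh dimension, which determines over which group of ranks each collective runs.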

Participants

The main participants in this task are:

Tasks

Progress made on these subtasks can be reported here.

Improving support for distributed operations in FlowPM

This issue is to track the developments needed on FlowPM to make the best use of the new Horovod-based Mesh TensorFlow implementation. This overarching goal will encapsulate several smaller issues.

Goal

By the end of the hackweek, submit a Pull Request to https://github.com/DifferentiableUniverseInitiative/flowpm with tested and benchmarked implementations of N-body and ray-tracing simulations.
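One core operation in a particle-mesh N-body code like FlowPM is painting particles onto a density grid with cloud-in-cell (CIC) interpolation. A minimal 1D NumPy sketch of CIC painting, purely illustrative and not FlowPM's actual API:

```python
import numpy as np

def cic_paint_1d(positions, ngrid):
    """1D cloud-in-cell painting: each particle's unit mass is shared
    between its two nearest grid cells, weighted by distance, on a
    periodic grid."""
    field = np.zeros(ngrid)
    left = np.floor(positions).astype(int)   # index of the cell to the left
    frac = positions - left                  # fractional offset within the cell
    np.add.at(field, left % ngrid, 1.0 - frac)   # weight to the left cell
    np.add.at(field, (left + 1) % ngrid, frac)   # weight to the right cell
    return field

field = cic_paint_1d(np.array([0.25, 2.0, 3.75]), ngrid=4)
# CIC conserves total mass: three unit-mass particles sum to 3.
assert np.isclose(field.sum(), 3.0)
```

In the distributed setting, particles near slab boundaries deposit mass into cells owned by neighboring ranks, which is precisely where the new halo-exchange-style collectives are needed.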

Participants

The main participants in this task are:

Tasks
