differentiableuniverseinitiative / idris-hackathon
Repository for hosting material and discussions for the 2021 IDRIS GPU hackathon
License: MIT License
This issue is to track the developments needed to finalize and validate the modified version of Horovod we developed. This overarching goal will encapsulate several smaller issues.
Goal: by the end of the hackweek, have tested code and an associated Pull Request to https://github.com/horovod/horovod that fully supports our needs for Mesh TensorFlow.
With @kimchitsigai and @mypey we worked on some modifications to Horovod that can support multiple communicators. A description of what we did can be found here: DifferentiableUniverseInitiative/horovod#2
In parallel, a different proposal for supporting multiple groups of devices was proposed here horovod/horovod#2839
In the end, probably only one of these two implementations will be merged, but we can try to find out which one works best for our purposes.
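Whichever implementation wins, the primitive both proposals expose is the same: a collective restricted to a subset of ranks. Here is a toy pure-Python sketch of that semantics, with no Horovod involved; the `GroupComm` class and its API are made up for illustration and are not Horovod's actual interface.

```python
class GroupComm:
    """Toy stand-in for a Horovod communicator / process set: an
    allreduce that only involves the listed ranks. Illustrative only."""

    def __init__(self, ranks):
        self.ranks = set(ranks)

    def allreduce(self, values_by_rank):
        """values_by_rank: {rank: value}. Ranks in this group get the
        group sum; ranks outside the group keep their own value."""
        total = sum(v for r, v in values_by_rank.items() if r in self.ranks)
        return {r: (total if r in self.ranks else v)
                for r, v in values_by_rank.items()}


# Two disjoint groups over 4 ranks, mirroring a 2x2 mesh split:
row0 = GroupComm([0, 1])
row1 = GroupComm([2, 3])
vals = {0: 1.0, 1: 2.0, 2: 10.0, 3: 20.0}
after = row1.allreduce(row0.allreduce(vals))
# -> {0: 3.0, 1: 3.0, 2: 30.0, 3: 30.0}
```

The point of multiple communicators is exactly this: each reduction touches only its own group, so independent mesh axes can reduce concurrently instead of serializing through one global communicator.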
The main participants in this task are:
I am starting to add scripts and examples, and to document the procedure for getting set up on Jean-Zay, in this file: https://github.com/DifferentiableUniverseInitiative/IDRIS-hackathon/blob/main/GETTING_STARTED.md
@kimchitsigai feel free to add or suggest anything else that would be useful to document there, to help people get started on the machine and/or with Horovod.
We have a small benchmark script that should let us test the scaling of distributed 3D FFTs with Mesh TensorFlow, but so far @kimchitsigai and I are running into issues when trying to get a profiler trace with nsys profile.
The script itself is located here, and we try to run it with the following SLURM job:
```bash
#!/bin/bash
#SBATCH --job-name=fft_benchmark     # job name
##SBATCH --partition=gpu_p2          # uncomment for the gpu_p2 partition
#SBATCH --ntasks=8                   # total number of MPI tasks (= total number of GPUs)
#SBATCH --ntasks-per-node=4          # number of MPI tasks per node (= number of GPUs per node)
#SBATCH --gres=gpu:4                 # number of GPUs per node (max 8 with gpu_p2)
#SBATCH --cpus-per-task=10           # number of CPU cores per task (a quarter of the node here)
##SBATCH --cpus-per-task=3           # number of CPU cores per task (for gpu_p2: 1/8 of the node)
# /!\ Note: "multithread" refers to hyperthreading in Slurm terminology
#SBATCH --hint=nomultithread         # hyperthreading disabled
#SBATCH --time=00:10:00              # maximum requested run time (HH:MM:SS)
#SBATCH --output=fft_benchmark%j.out # output file name
#SBATCH --error=fft_benchmark%j.out  # error file name (here shared with the output)
#SBATCH -A ftb@gpu                   # specify the project
#SBATCH --qos=qos_gpu-dev            # use the dev queue, as this is only for profiling

# purge modules loaded interactively and inherited by default
module purge

# load the required modules
module load tensorflow-gpu/py3/2.4.1+nccl-2.8.3-1

# echo the commands being run
set -x

# run the code with GPU binding via bind_gpu.sh: 1 GPU per MPI task
srun --unbuffered --mpi=pmi2 -o fft_%t.log nsys profile --stats=true -t nvtx,cuda,mpi -o result-%q{SLURM_TASK_PID} python -u fft_benchmark.py --mesh_shape="b1:2,b2:4" --layout="nx:b1,tny:b1,ny:b2,tnz:b2"
```
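As a sanity check on the arguments above: the product of the `mesh_shape` dimension sizes must equal the number of MPI tasks (`--ntasks=8` here, factored as b1:2 × b2:4). A small helper, written for this note and not part of Mesh TensorFlow, makes the check explicit:

```python
def mesh_size(mesh_shape):
    """Product of dimension sizes in a Mesh TensorFlow mesh_shape
    string such as "b1:2,b2:4". Hypothetical helper for sanity checks;
    not part of Mesh TensorFlow itself."""
    size = 1
    for dim in mesh_shape.split(","):
        _, n = dim.split(":")  # each entry is "name:size"
        size *= int(n)
    return size


assert mesh_size("b1:2,b2:4") == 8  # must match #SBATCH --ntasks=8
```

If the product and the task count disagree, ranks will either be left idle or the mesh will fail to map, so it is worth asserting this before submitting the job.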
Unfortunately, this crashes for some reason before returning the full trace, and we are not sure why.
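One thing worth trying (untested on Jean-Zay; flags as documented for the Nsight Systems CLI) is to shrink the trace so the profiler has less state to collect and flush: cap the collection window, disable CPU sampling, drop the MPI tracing domain, and skip the post-processing stats.

```shell
# Hypothetical lighter variant of the srun line above, for debugging only:
#   --duration caps collection (in seconds)
#   -s none disables CPU sampling
#   dropping 'mpi' from -t removes the MPI tracing layer as a suspect
#   --stats=false skips post-processing, -f true overwrites old reports
srun --unbuffered --mpi=pmi2 -o fft_%t.log \
    nsys profile --stats=false -t nvtx,cuda -s none --duration=60 \
    -f true -o result-%q{SLURM_TASK_PID} \
    python -u fft_benchmark.py --mesh_shape="b1:2,b2:4" --layout="nx:b1,tny:b1,ny:b2,tnz:b2"
```

If the short, lighter trace completes, the trace length or one of the disabled domains is the likely culprit; if it still crashes, the problem is probably elsewhere (e.g. in the job teardown).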
This issue is to track the developments needed to finalize and validate the Mesh TensorFlow implementation relying on Horovod as the backend. This overarching goal will encapsulate several smaller issues.
By the end of the hackweek, submit a Pull Request to https://github.com/tensorflow/mesh with our new implementation for GPU clusters.
The main participants in this task are:
Progress made on these subtasks can be reported here.
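For intuition on what the distributed 3D FFT underlying this implementation does: in a slab decomposition, each rank transforms its locally owned axes, then a global transpose (an all-to-all in practice) exposes the remaining axis for a second local transform. A 2-D toy version in pure Python sketches this; there is no MPI here, the `nranks` slabs are simulated in one process, and square grids are assumed for brevity.

```python
import cmath


def dft(xs):
    """Naive 1-D DFT of a sequence of numbers."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, x in enumerate(xs))
            for k in range(n)]


def transpose(grid):
    return [list(col) for col in zip(*grid)]


def dft2_reference(grid):
    """Single-process 2-D DFT: transform rows, then columns."""
    rows_done = [dft(row) for row in grid]
    return transpose([dft(row) for row in transpose(rows_done)])


def dft2_slabs(grid, nranks=2):
    """Slab-decomposed 2-D DFT: each simulated 'rank' transforms only
    its own rows; the transpose plays the role of the global all-to-all
    that a real distributed FFT performs between the two passes."""
    chunk = len(grid) // nranks

    def local_pass(g):
        slabs = [g[r * chunk:(r + 1) * chunk] for r in range(nranks)]
        return [dft(row) for slab in slabs for row in slab]

    return transpose(local_pass(transpose(local_pass(grid))))
```

Because each pass only touches locally owned rows, the expensive part of scaling this up is not the FFTs themselves but the all-to-all communication between passes, which is exactly what the benchmark above measures.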
This issue is to track the developments needed on FlowPM to make the best use of the new Horovod mesh implementation. This overarching goal will encapsulate several smaller issues.
By the end of the hackweek, submit a Pull Request to https://github.com/DifferentiableUniverseInitiative/flowpm with tested and benchmarked implementations of N-body and ray-tracing simulations.
The main participants in this task are: