Comments (5)
Does it hang in the middle of a run, or right away?
My guess is that it's an issue with the size of the GP object, which grows as more points are added to the training set. I set `--mem-per-cpu=5000` in all my OTF runs and haven't seen any issues yet, but most of my jobs for extended systems have at most ~500 atomic environments in the training set.
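To get a rough sense of how the GP object scales, here is a back-of-envelope estimate of the dense kernel matrix size. This is only a sketch under the assumption that a force-force kernel stores three components per environment as double-precision floats; the real flare GP object also holds structures, gradients, and other arrays, so actual memory use is larger.

```python
def kernel_matrix_mb(n_envs, components=3, bytes_per_float=8):
    """Rough size in MB of a dense kernel matrix over n_envs atomic
    environments, assuming `components` force components each."""
    dim = n_envs * components
    return dim * dim * bytes_per_float / 1024 ** 2

# ~500 environments gives a kernel matrix on the order of tens of MB,
# so the hangs discussed below likely involve more than the matrix alone.
```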
from flare.
It's not a problem with the GP dataset. The job hangs after adding the first batch of atoms (~5-10) to the training set.
I've started running into this and am working on isolating the cause. It appears the job hangs the first time `predict_on_structure_par` is called in `otf.py`, suggesting that `concurrent.futures.ProcessPoolExecutor()` might be failing.
The issue also shows up for small OTF jobs. One way to reproduce it is to run `test_otf_h2` with `otf.par=True`. On my machine, the test hangs after the first round of hyperparameter optimization.
Curiously, this wasn't an issue in previous versions of the code. If you return to the July 26 version (the last time I ran a large batch of parallel OTF jobs) with the command
`git checkout d3fa524`
and then run `test_otf_h2` with `otf.par=True`, the test works fine. My current suspicion is that having multiple files open is somehow interfering with `concurrent.futures`, but that's just a guess at the moment.
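One way to isolate this is to check whether `ProcessPoolExecutor` itself hangs in the same environment, independent of flare. The sketch below is not flare's code; `fake_prediction` is a hypothetical stand-in for the per-atom GP prediction, made deliberately cheap so that any hang points at the executor rather than the workload.

```python
import concurrent.futures

def fake_prediction(i):
    # Stand-in for a per-atom GP prediction; cheap on purpose so any
    # hang implicates the process pool, not the computation.
    return i * i

def run_pool(n_atoms, max_workers=2):
    # Submit one task per "atom"; the context manager closes the pool.
    with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(fake_prediction, range(n_atoms)))

if __name__ == "__main__":
    print(run_pool(8))
```

If this script hangs under the same conditions (same node, same open files), the problem is environmental rather than flare-specific.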
I experienced some severe holdups thanks to this. The problem was ultimately solved after some laborious, patient debugging with @nw13slx. What finally fixed it for me was adding both
`#SBATCH --mem-per-cpu=6000`
`ulimit -s unlimited`
to the header of my batch scripts, as Lixin noticed at the top of this thread; `ulimit -s unlimited` alone was not sufficient.
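For concreteness, here is where those two lines sit in a batch script. This is only a sketch: the job name, core count, and driver script (`run_otf.py`) are placeholders, not taken from the thread; only the `--mem-per-cpu=6000` and `ulimit -s unlimited` lines are the actual fix.

```shell
#!/bin/bash
#SBATCH --job-name=otf_run       # placeholder job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # placeholder core count
#SBATCH --mem-per-cpu=6000       # the memory fix from this comment

ulimit -s unlimited              # the stack-size fix from this comment

python run_otf.py                # placeholder for the actual OTF driver
```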
This bug was rather pesky and resulted in indefinite hangs when using the worker pool.
It appears that for training sets of a certain size, without specifying the amount of memory to be allocated to each CPU, the worker pools hang.
For instance, with a smaller training set (~100 atoms), the worker pool would open and close successfully, but at ~400 it would never complete a single worker's function call.
Along the way, @nw13slx also caught a bug related to not closing the worker pool properly, which is fixed in the PR for #103 . This further motivates more memory and time profiling within the code at some point.
Possibly helpful Odyssey tip for checking memory usage from the quick start guide:
You can view the runtime and memory usage for a past job with
`sacct -j JOBID --format=JobID,JobName,ReqMem,MaxRSS,Elapsed`
where JOBID is the numeric job ID of a past job, for example:
`sacct -j 531306 --format=JobID,JobName,ReqMem,MaxRSS,Elapsed`

JobID         JobName   ReqMem    MaxRSS    Elapsed
------------  --------  --------  --------  --------
531306        sbatch                        00:02:03
531306.batch  batch     750000K   513564K   00:02:03
531306.0      true                916K      00:00:00
The .batch portion of the job is usually what you're looking for, but the output may vary. This job had a maximum memory footprint of about 500MB, and took a little over two minutes to run.
Large GPs probably require more than 100 MB, which is the default memory per CPU.
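When comparing a job's MaxRSS against a `--mem-per-cpu` request, it helps to put the `sacct` fields in common units. The helper below is not part of flare or SLURM, just a small sketch assuming the usual binary K/M/G/T suffixes that `sacct` prints.

```python
def rss_to_mb(field):
    """Convert a sacct memory field like '513564K' or '2G' to MB."""
    field = field.strip()
    if not field:
        return 0.0
    units = {"K": 1 / 1024, "M": 1.0, "G": 1024.0, "T": 1024.0 ** 2}
    suffix = field[-1].upper()
    if suffix in units:
        return float(field[:-1]) * units[suffix]
    return float(field) / 1024 ** 2  # assume a plain byte count

# The example job above: '513564K' is roughly 500 MB.
```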