
Comments (5)

guipenedo commented on September 25, 2024

Hi, I'm not sure this would be trivial: you'd also have to re-check what was written to each output file and when the buffer was last flushed, and it could break processing steps that actually need to see the entire data when resuming a failed job (for example, computing the signatures of a file for deduplication).
Can you give some more details on your particular slurm limitations? Is there a limit on the size of a job array, on the number of jobs running simultaneously, or on the actual total number of jobs (including those waiting) on the cluster? One possible workaround would be to make each slurm job in the array run multiple datatrove tasks. That way you could still have many total tasks (and thus a small amount of data per task, which means better resuming) without increasing the total number of slurm jobs.
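For context, a rough sketch of the current behavior, where the size of the slurm job array equals the number of datatrove tasks (the pipeline, partition, and paths below are just placeholders):

```python
from datatrove.executor.slurm import SlurmPipelineExecutor
from datatrove.pipeline.readers import JsonlReader

# 1000 tasks currently submit as a slurm job array of 1000 jobs,
# which can exceed per-user or per-cluster job limits
executor = SlurmPipelineExecutor(
    pipeline=[JsonlReader("s3://my-bucket/raw-data/")],  # placeholder pipeline
    tasks=1000,
    time="24:00:00",
    partition="cpu",             # placeholder partition
    logging_dir="/logs/my-run",  # placeholder path
)
executor.run()
```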

marianna13 commented on September 25, 2024

Hey Guilherme,

Yes, on my cluster there is a limit on how many jobs I can have in total, so I run into this issue when processing large datasets. How can I make each slurm job run multiple datatrove tasks? I tried specifying ntasks-per-node in the sbatch args, but I don't think that's the right approach (datatrove checks the slurm ARRAY variables and doesn't care about the number of tasks per node).
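For reference, this is roughly what I tried (a sketch; the pipeline and paths are placeholders):

```python
from datatrove.executor.slurm import SlurmPipelineExecutor
from datatrove.pipeline.readers import JsonlReader

executor = SlurmPipelineExecutor(
    pipeline=[JsonlReader("s3://my-bucket/raw-data/")],  # placeholder pipeline
    tasks=1000,
    time="24:00:00",
    partition="cpu",             # placeholder partition
    logging_dir="/logs/my-run",  # placeholder path
    # extra #SBATCH flags; this changes the slurm allocation, but datatrove
    # still maps one task to one array job via the SLURM_ARRAY_* variables
    sbatch_args={"ntasks-per-node": "10"},
)
executor.run()
```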

guipenedo commented on September 25, 2024

This option isn't present in the current code. I've added support for it (untested) here: #153. Let me know if it works for you / solves your problem.
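A rough usage sketch (untested, like the PR itself; pipeline and paths are placeholders):

```python
from datatrove.executor.slurm import SlurmPipelineExecutor
from datatrove.pipeline.readers import JsonlReader

executor = SlurmPipelineExecutor(
    pipeline=[JsonlReader("s3://my-bucket/raw-data/")],  # placeholder pipeline
    tasks=1000,
    tasks_per_job=10,  # run 10 datatrove tasks per slurm job -> a 100-job array
    time="24:00:00",
    partition="cpu",             # placeholder partition
    logging_dir="/logs/my-run",  # placeholder path
)
executor.run()
```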

marianna13 commented on September 25, 2024

Hey Guilherme,
I tried tasks_per_job=10 (i.e. 1 node with 10 parallel datatrove tasks), but it makes the whole thing very slow, way slower than 1 task per job. Is there any reason why this might be the case? I would expect it to be about 10x slower, since we allocate 10x fewer resources per task, but it's more like 100x slower.

shizhediao commented on September 25, 2024

Hi, I am using tasks_per_job=10 but I find that the tasks are executed sequentially, which is too slow. I expected them to run in parallel.

Are there any suggestions? @guipenedo
Thanks!
