
Comments (4)

skhanna03 commented on August 18, 2024

Hi, did you find a solution to your issue yet? If so, can you please explain?

from brainmagick.

NeuSpeech commented on August 18, 2024

Hi, I didn't solve it. Did you evaluate the model successfully? If so, can you please help me with it?


Prateek-VIT commented on August 18, 2024

Hi, I managed to run this and figured I'd share my findings here.
I don't know whether this is the intended or correct way on a non-cluster PC, but here's what I found.

TL;DR:

Here's my grid1.py:

from ._explorers import ClipExplorer


# The decorator class defines which metrics to track and other metadata.
@ClipExplorer
def explorer(launcher):
    # Methods `slurm_` and `bind_` are in-place.
    # launcher.slurm_(gpus=4)  # All XPs scheduled with `launcher` would use 4 GPUs.
    launcher.bind_({
        'norm.max_scale': 20,
        'dset.n_recordings': 4,
        'model': 'clip_conv',
        'optim.batch_size': 16,
    })  # Set common params.

    more_subjects = {'dset.n_recordings': 8}
    selections = ['brennan2019']
    with launcher.job_array():
        for selection in selections:
            # The `bind()` method returns a sub-launcher with different params.
            # This won't affect the original launcher.
            sub = launcher.bind({'dset.selections': [selection]})
            # You schedule experiments by calling the (sub-)launcher.
            sub()
            sub({'optim.batch_size': 4})  # Experiment 1: does batch size influence CLIP?
            # The following XP is just to get a noise-level baseline.
            sub({'optim.max_batches': 1, 'optim.epochs': 1, 'test.wer_random': True})
            # Variations with different input speech-related representations.
            sub({'dset.features': ['MelSpectrum']})
            sub({'dset.features': ['MelSpectrum'], 'feature_model': 'deep_mel'})  # DeepMel
            # Then we train a regression model.
            ssub = sub.bind({'optim.loss': 'mse', 'dset.features': ['MelSpectrum']})
            ssub()
            sub(more_subjects)  # What if we trained on some more subjects instead?

Then the commands I ran:

dora grid grid1 --dry_run --init
dora run -f fef36047
dora run -f 73b7576e
dora run -f 05b89043
dora run -f 07765053
dora run -f 6e5515ef
dora run -f 186cff1a
dora grid grid1
dora grid grid1 --dry_run --init
python -m scripts.run_eval_probs grid_name="grid1"

Reason and explanation:

So once you run dora grid grid1 --dry_run --init, it will output a table:
(screenshot: my initial grid table)
I then ran the signatures using the dora run -f <sig> command, where you replace <sig> with the signature you see in your grid table.

Something to note about grid1.py: the sub() calls basically pass extra arguments that override whatever default args you set. The defaults are defined in /bm/conf/config.yaml, which is overridden by the launcher.bind_() call in grid1.py, which is in turn overridden by the sub() calls.
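To make that override order concrete, here is a minimal Python sketch of how the three levels appear to compose. This is an illustration of dict-style merging, not Dora's actual internals, and the default values below are made up for the example:

```python
# Lowest to highest priority, as observed in grid1.py.
defaults = {                    # assumed values from bm/conf/config.yaml
    'norm.max_scale': 10,
    'dset.n_recordings': 2,
    'optim.batch_size': 256,
}
grid_binds = {                  # launcher.bind_() in grid1.py
    'norm.max_scale': 20,
    'dset.n_recordings': 4,
    'optim.batch_size': 16,
}
call_overrides = {'optim.batch_size': 4}   # passed to sub()

# Later dicts win on key collisions, mirroring the override chain.
effective = {**defaults, **grid_binds, **call_overrides}
print(effective['optim.batch_size'])  # 4: the sub() call wins
print(effective['norm.max_scale'])    # 20: taken from launcher.bind_()
```

Each distinct combination of effective params gets its own signature, which is why every sub() call in the loop shows up as a separate row in the grid table.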

This should fill up your grid table to look like this:
(screenshot: the filled grid table, obtained by running dora grid grid1 --dry_run --init)

Note: here is the bug

So now, what I've noticed is that when you run dora grid grid1 with the --dry_run argument, it doesn't pull the checkpoints.th file from (what I presume is) the /outputs/xps/<sig> folder for the particular experiment into the folder it is searching in (/outputs/grid/grid1/).

The solution I found is to run once without the --dry_run argument; it will fail and ask you whether you're running on a SLURM cluster.
(screenshot: the SLURM cluster error)
You can ignore this error when running an evaluation.

More importantly, what happened now was that it pulled the checkpoints.th file from the xps folder into the grids folder, as it should have in the first place, and now you can run python -m scripts.run_eval_probs grid_name="grid1".
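If you'd rather not trigger the failing run at all, the same effect can presumably be achieved by copying the checkpoints over yourself. The sketch below is hypothetical: the pull_checkpoints helper and the exact folder layout (outputs/xps/<sig>/checkpoints.th into a grid folder) are assumptions based on the behaviour observed above, not a documented Dora API:

```python
# Hypothetical helper: copy each experiment's checkpoints.th from the xps
# folder into the grid folder, mimicking what `dora grid grid1` (without
# --dry_run) appeared to do. Folder layout is an assumption.
import shutil
from pathlib import Path


def pull_checkpoints(xps_dir: Path, grid_dir: Path, sigs: list) -> list:
    """Copy checkpoints.th for each signature; return the copied paths."""
    grid_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for sig in sigs:
        src = xps_dir / sig / "checkpoints.th"
        if src.exists():  # skip signatures that were never trained
            dst = grid_dir / sig / "checkpoints.th"
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            copied.append(dst)
    return copied
```

For example, pull_checkpoints(Path("outputs/xps"), Path("outputs/grids/grid1"), ["fef36047"]) would copy just that one signature's checkpoint, if it exists.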

Do note: if a signature has already been evaluated once, i.e. it already exists in the outputs/eval/ folder, then the eval will not be run for it again. (You can still get the results for it, though.)


kingjr commented on August 18, 2024

The present repo is designed to be run on a slurm cluster.

I would redirect you to https://github.com/facebookresearch/dora for running the dora API locally.

