
Comments (3)

yoshitomo-matsubara commented on May 24, 2024

Hi @AhmedHussKhalifa
Thank you for the kind words!

It's a good question; Ray Tune looks like a good option for hyperparameter tuning in general, but I feel it is difficult for torchdistill to officially support the package (or integrate it into the existing example code) at least right now, because torchdistill would first need to support Ray.
On top of that, I'd like to keep the mapping of 1 yaml config file -> 1 trial (fixed hyperparameters), which enables others to reproduce the reported results quickly and keeps the training log file compact. So at this time, my recommendation is to create multiple yaml config files (one set of hyperparameters -> 1 yaml config file) and run them with a shell script, or distribute the jobs to different nodes if you are using HPC.
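For concreteness, here is a minimal sketch of that workflow; the example script path examples/torchvision/image_classification.py and the --config/--log options are assumptions for illustration, so substitute the entry point and options you actually use:

```python
import subprocess
from pathlib import Path

# One yaml config file per trial, each with one fixed set of hyperparameters.
config_dir = Path('configs/my_experiment')   # hypothetical directory of per-trial configs
log_dir = Path('logs')
log_dir.mkdir(exist_ok=True)

for config_path in sorted(config_dir.glob('*.yaml')):
    log_path = log_dir / (config_path.stem + '.log')
    # Launch one training run per config file; the script name and flags are illustrative.
    subprocess.run(
        ['python', 'examples/torchvision/image_classification.py',
         '--config', str(config_path), '--log', str(log_path)],
        check=True,
    )
```

On an HPC cluster, the same loop can instead submit one job per config file to the scheduler.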


AhmedHussKhalifa commented on May 24, 2024

Hey,
Thank you for your reply.
I have a question: if I want to increase the batch size, one way to achieve that is to decrease the complexity of how the teacher is run, right?

I was thinking of generating the logits from the teacher model and saving them. I know the training process needs random sampling, so I would save them as one pickle file, load these vectors with a customized dataloader, and have another dataloader responsible for loading the images.

I would like to have your input on this modification.


yoshitomo-matsubara commented on May 24, 2024

Hi @AhmedHussKhalifa

I have a question: if I want to increase the batch size, one way to achieve that is to decrease the complexity of how the teacher is run, right?

Unless you hit the limit of your computing resources (e.g., GPU memory, RAM), it is not always necessary to reduce the complexity of extracting the teacher's output(s).
One way to cut the per-iteration cost without decreasing the effective batch size is to increase grad_accum_step.
e.g., grad_accum_step: 2 means gradients will be accumulated for 2 iterations, and then the optimizer will update the parameters.
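To illustrate the idea, here is a generic PyTorch sketch of gradient accumulation using a toy model and random data; it shows the mechanism, not torchdistill's training loop:

```python
import torch
from torch import nn

# Toy stand-ins for a real model, loss, optimizer, and data loader.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
batches = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(8)]

grad_accum_step = 2  # accumulate gradients over 2 iterations before each update

optimizer.zero_grad()
for i, (x, y) in enumerate(batches):
    loss = criterion(model(x), y)
    # Scale the loss so the accumulated gradient matches one larger batch.
    (loss / grad_accum_step).backward()
    if (i + 1) % grad_accum_step == 0:
        optimizer.step()       # parameters are updated once every grad_accum_step iterations
        optimizer.zero_grad()
```

With grad_accum_step: 2, each update effectively sees twice the per-iteration batch size without the corresponding memory cost.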

I was thinking of generating the logits from the teacher model and saving them. I know the training process needs random sampling, so I would save them as one pickle file, load these vectors with a customized dataloader, and have another dataloader responsible for loading the images.

Does that mean saving the teacher's output for a given input? If so, it is already implemented in torchdistill.
By specifying a cache directory in your yaml file like cache_dir: './cache/' (or another directory path), the outputs from the teacher model will be saved during the first epoch.
From the second epoch onward, the saved outputs will be loaded instead of running the teacher model. Note that this approach is not effective when you use data augmentation (e.g., random crop, horizontal flip) when transforming inputs, as the saved outputs are associated with the corresponding input indices defined in the Dataset module.
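Conceptually, the cache behaves like the sketch below; this is a simplified illustration keyed by dataset index, and the function name and file layout are made up for clarity rather than taken from torchdistill's actual implementation:

```python
import os
import torch

def get_teacher_output(teacher, index, x, cache_dir='./cache/'):
    """Return the teacher output for the sample at `index`, reusing a cached
    copy if it was computed in an earlier epoch."""
    os.makedirs(cache_dir, exist_ok=True)
    cache_path = os.path.join(cache_dir, f'{index}.pt')
    if os.path.isfile(cache_path):
        # Second epoch onward: load the saved output instead of running the teacher.
        return torch.load(cache_path)
    with torch.no_grad():
        output = teacher(x.unsqueeze(0)).squeeze(0)  # add/remove the batch dimension
    torch.save(output, cache_path)  # first epoch: compute and save
    return output
```

Because the cache is keyed by the sample index, a randomly augmented input (e.g., a random crop or flip) would still map to the output saved for a differently transformed version of that sample, which is why caching and random augmentation do not mix well.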

FYI, torchdistill's default pipeline will apply torch.no_grad to the teacher model unless it has any updatable parameters during training.
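In plain PyTorch terms, that corresponds to something like the following sketch, using a frozen stand-in teacher rather than torchdistill's code:

```python
import torch
from torch import nn

teacher = nn.Linear(10, 2)           # stand-in for a real teacher model
for p in teacher.parameters():       # typical distillation setup: the teacher is frozen
    p.requires_grad_(False)
x = torch.randn(4, 10)

if any(p.requires_grad for p in teacher.parameters()):
    teacher_output = teacher(x)      # teacher has updatable parameters: keep the graph
else:
    with torch.no_grad():            # frozen teacher: skip gradient tracking to save memory
        teacher_output = teacher(x)
```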

