Comments (3)
Hi @AhmedHussKhalifa
Thank you for the kind words!
It's a good question; Ray Tune looks like a good option for hyperparameter tuning in general, but I feel it is difficult for torchdistill to officially support the package (or integrate it into the existing example code) at least right now, because torchdistill would first need to support Ray.
On top of that, I'd like to keep the mapping of 1 yaml config file -> 1 trial (fixed hyperparameters), which lets others reproduce the reported results quickly and keeps each training log file compact. So at this time, my recommendation is to create multiple yaml config files (one set of hyperparameters -> 1 yaml config file) and run them with a shell script, or distribute the jobs across different nodes if you are using an HPC cluster.
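As a rough illustration of the one-config-per-trial workflow, here is a minimal sketch that writes one yaml file per hyperparameter set. The keys kd_temperature and kd_alpha and the file layout are made-up names for illustration, not torchdistill's actual config schema:

```python
# Hypothetical sketch: one yaml config file per hyperparameter set.
# Key names and paths are assumptions, not torchdistill's schema.
import itertools
import os


def write_trial_configs(out_dir, temperatures, alphas):
    """Write one yaml file per (temperature, alpha) pair and return the paths."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for t, a in itertools.product(temperatures, alphas):
        path = os.path.join(out_dir, f'kd_temp{t}_alpha{a}.yaml')
        with open(path, 'w') as f:
            f.write(f'kd_temperature: {t}\nkd_alpha: {a}\n')
        paths.append(path)
    return paths
```

Each generated file can then be passed to the training script in a shell loop, or submitted as a separate HPC job.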
from torchdistill.
Hey,
Thank you for your reply.
I have a question: if I want to increase the batch size, one way to achieve it is to decrease the cost of running the teacher, right?
I was thinking of generating the logits from the teacher model and saving them. I know the training process needs random sampling, so I would save the logits as one pickle file, have a customized dataloader load these vectors, and another one responsible for loading the images.
I would like to have your input on this modification.
I have a question: if I want to increase the batch size, one way to achieve it is to decrease the cost of running the teacher, right?
Unless you hit the limit of your computing resources (e.g., GPU memory, RAM), it is not always necessary to reduce the cost of extracting the teacher's output(s).
One way to reduce the cost without decreasing the batch size is to increase grad_accum_step, e.g., grad_accum_step: 2 means gradients will be accumulated for 2 iterations, and then the optimizer will update its parameters once.
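To make the accumulation behavior concrete, here is a pure-Python stand-in (not torchdistill's implementation) showing that with grad_accum_step = 2 the parameter is updated only every second iteration:

```python
# Minimal sketch of gradient accumulation for a single scalar parameter.
# This is a pure-Python illustration, not torchdistill's actual training loop.

def train_steps(grads, grad_accum_step, lr=0.1):
    """Return the parameter value after each iteration."""
    param = 0.0
    accum = 0.0
    history = []
    for i, g in enumerate(grads, start=1):
        accum += g                       # accumulate this iteration's gradient
        if i % grad_accum_step == 0:     # update only every grad_accum_step iterations
            param -= lr * accum
            accum = 0.0
        history.append(param)
    return history
```

With four iterations and grad_accum_step=2, the parameter stays unchanged on odd iterations and is updated on even ones, which halves the per-iteration memory needed for a given effective batch size.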
I know the training process needs random sampling, so I will save the logits as one pickle file, have a customized dataloader load these vectors, and another one responsible for loading the images.
Does that mean saving the teacher's output for a given input? If so, it's already implemented in torchdistill.
By specifying a cache directory in your yaml file, e.g., cache_dir: './cache/' (or another directory path), the outputs from the teacher model will be saved during the first epoch.
From the second epoch on, the saved outputs will be loaded instead of running the teacher model. Note that this approach is not effective when you use data augmentation (e.g., random crop, horizontal flip) when transforming inputs, as the saved outputs are associated with the corresponding input indices defined in the Dataset module.
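The caching idea can be sketched in plain Python. This is a hypothetical stand-in keyed by sample index, not torchdistill's actual code:

```python
# Hypothetical sketch of index-keyed teacher-output caching
# (illustrative only; torchdistill's real implementation differs).
import os
import pickle


def teacher_output_cached(cache_dir, index, teacher_fn, x):
    """Run teacher_fn(x) once per sample index; reuse the pickled result afterwards."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, f'{index}.pkl')
    if os.path.exists(path):              # epoch >= 2: load the saved output
        with open(path, 'rb') as f:
            return pickle.load(f)
    output = teacher_fn(x)                # epoch 1: run the teacher and cache
    with open(path, 'wb') as f:
        pickle.dump(output, f)
    return output
```

On the first call for a given index the teacher runs and its output is pickled; subsequent calls load the pickle instead. This is also why random augmentation breaks the scheme: the cached output no longer matches the (newly transformed) input at that index.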
FYI, torchdistill's default pipeline applies torch.no_grad to the teacher model unless it has any updatable parameters during training.
Related Issues (20)
- Affinity Loss usage HOT 2
- It seems some bug in `split_dataset` HOT 1
- Distilling Knowledge from a image classification model with sigmoid function and binary cross entropy HOT 3
- Bug. Bad implement. HOT 2
- Combine two distillation losses HOT 9
- Similarity Preserving KD HOT 2
- How to train my own COCO dataset for object detection? HOT 1
- Why using `log_softmax` instead of `softmax`? HOT 1
- ValueError: batchmean is not a valid value for reduction HOT 1
- Disagreement betweeen the log and configuration of kd-resnet18_from_resnet34 HOT 1
- Use different models as Teacher/Student HOT 1
- Custom Data HOT 1
- Where is trained model? HOT 1
- Not a bug but a discrepency between the log and config file for kd-resnet18_from_resnet34 HOT 1
- How should I use Torchdistill? HOT 1
- [BUG] Not supported to Nvidia 4090 HOT 1
- I tried with this script also, only single nproc seems to be working. Do i need to define any additional enviornment variables like RANK or LocaL HOST HOT 1
- [BUG] fp16 causes AssertionError: No inf checks were recorded for this optimizer HOT 4
- [BUG] Missing Link in Readme HOT 1
- [BUG]ImportError: cannot import name 'import_dependencies' from 'torchdistill.common.main_util' HOT 2