
Comments (3)

GeorgeCazenavette avatar GeorgeCazenavette commented on August 19, 2024

Hello :)

So this tiny ConvNet has been the standard for dataset distillation works for some time now, precisely because of how tiny it is.

Our method (and most others) involves a bi-level optimization that is very costly in terms of both time and memory.

We simply do not have the computing capabilities (nor the time) to use a large model like WideResNet50 as a backbone.

Hopefully someone will come up with a method that isn't so costly so we can then use realistic models to distill our datasets :)
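To make the cost concrete, here is a minimal numpy sketch (all names hypothetical, not from the mtt-distillation codebase) of why unrolled bi-level optimization is so memory-hungry: backpropagating through the inner training loop requires keeping every intermediate weight state, so memory grows linearly with both the number of inner steps and the model size, which is what rules out large backbones.

```python
import numpy as np

def unrolled_inner_loop(w0, synthetic_X, synthetic_y, lr, steps):
    """Run `steps` SGD updates on a tiny synthetic dataset, keeping every
    intermediate weight vector (needed to differentiate through the unroll).
    Memory grows linearly with `steps` and with the size of `w0`."""
    trajectory = [w0]
    w = w0
    for _ in range(steps):
        # least-squares gradient stands in for a real network's backward pass
        grad = synthetic_X.T @ (synthetic_X @ w - synthetic_y) / len(synthetic_y)
        w = w - lr * grad
        trajectory.append(w)
    return w, trajectory

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 5))   # toy "synthetic dataset"
y = rng.standard_normal(10)
w_final, traj = unrolled_inner_loop(np.zeros(5), X, y, lr=0.1, steps=20)
# all 21 weight states (init + one per step) must stay live at once
```

Swapping the toy least-squares model for even a ResNet multiplies the per-step memory by the full parameter count, which is why the small ConvNet remains the practical choice.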

Naively, I would guess that distilling on a larger model would distill better data for that larger model, which would potentially be more useful overall than data distilled for a small model.

It's hard to say whether or not this data would transfer better to other models as well; cross-architecture generalization is still a major shortcoming of most distillation methods.

TL;DR: Hard to say, since we literally can't test it, but I'm looking forward to someone coming up with a more efficient method that allows us to try :)

from mtt-distillation.

imesu2378 avatar imesu2378 commented on August 19, 2024

Thanks for the detailed response! I too hope for more efficient methods in the future, as I'm also working with 3090 GPUs...

Just another question about the Figure 8 ablation study regarding the use of ZCA. You mentioned in your paper that expert models trained without ZCA normalization take significantly longer to converge and thus need a larger value of T+.
So if someone used your method on an expert model trained for 200 epochs, would it still work? And if so, would the max start epoch also have to scale to about 160? And would 20 synthetic steps still be enough?

Thank you for your generous replies.


GeorgeCazenavette avatar GeorgeCazenavette commented on August 19, 2024

The point of the T+ parameter is to ensure that we only look at large enough meaningful updates.

It's almost just a heuristic. We chose an initial value by observing when the performance of the expert models stops increasing by large amounts.

As long as the updates (changes in the weights) are large enough and meaningful (i.e., not just jittering around the final resting point), then the method should still work.
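A minimal sketch of that heuristic (parameter names hypothetical, not the repo's actual API): each distillation step picks a random expert checkpoint no later than T+, so the expert segment being matched still contains meaningful weight movement rather than end-of-training jitter.

```python
import random

def sample_start_epoch(max_start_epoch, num_expert_epochs, expert_span):
    """Choose a starting checkpoint t uniformly from [0, T+], where
    T+ (max_start_epoch) caps t so that the expert segment
    t .. t+expert_span still contains large, meaningful updates."""
    assert max_start_epoch + expert_span <= num_expert_epochs
    t = random.randint(0, max_start_epoch)  # inclusive on both ends
    return t, t + expert_span

random.seed(0)
t, t_target = sample_start_epoch(max_start_epoch=40,
                                 num_expert_epochs=50,
                                 expert_span=2)
```

Under this reading, a 200-epoch expert would indeed just shift `max_start_epoch` to wherever its updates stop being meaningful.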

As for the number of synthetic steps, this is largely decoupled from T+. It just describes how many updates we should make on the synthetic data to equate to M epochs of training on the real data. Because of our learnable learning rate, our method is somewhat robust to different values of N and M (Figure 5). Without this adaptive learning rate, matching N synthetic steps to M real epochs would be a near-impossible task.
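The decoupling above can be sketched in numpy (names hypothetical; this is an illustrative toy, not the paper's implementation): take N synthetic steps from an expert checkpoint, then compare the result to the expert's weights M real epochs later, normalizing by how far the expert itself moved. The synthetic learning rate appearing in the inner steps is the scalar that the method learns alongside the data.

```python
import numpy as np

def trajectory_matching_loss(w_start, w_target, synthetic_grads, syn_lr):
    """Take N synthetic SGD steps from expert checkpoint `w_start`, then
    measure squared distance to expert checkpoint `w_target` (M real
    epochs later), normalized by the expert's own displacement.
    `syn_lr` is the learnable synthetic learning rate."""
    w_student = w_start.copy()
    for g in synthetic_grads:          # N gradients from the synthetic data
        w_student = w_student - syn_lr * g
    num = np.sum((w_student - w_target) ** 2)
    den = np.sum((w_start - w_target) ** 2)
    return num / den

# toy example: two synthetic steps that exactly close the expert's gap
w_start = np.ones(3)
w_target = np.zeros(3)
grads = [0.5 * np.ones(3), 0.5 * np.ones(3)]
loss = trajectory_matching_loss(w_start, w_target, grads, syn_lr=1.0)
```

Because `syn_lr` multiplies every inner step, gradients of this loss with respect to it let the optimizer rescale N synthetic steps to span M real epochs, which is the robustness the comment describes.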

Hope this helps!

