samsungsailmontreal / ghn3

Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" [ICML 2023]

Home Page: https://arxiv.org/abs/2303.04143

License: MIT License

hypernetworks imagenet large-scale transformers computational-graphs graphs deep-learning pytorch

ghn3's Introduction

Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?

ICML 2023: links to the poster, slides, video presentation, and tweet

Boris Knyazev, Doha Hwang, Simon Lacoste-Julien

Paper: https://arxiv.org/abs/2303.04143, https://openreview.net/forum?id=7UXf8dAz5T

Introduction

Updates

  • [July 11, 2023] Synced changes with the recent PPUDA/GHN-2 updates.
  • [June 26, 2023] Added support for training and evaluation on CIFAR-10 and evaluation on the DeepNets-1M splits.
  • [June 22, 2023] Major code refactoring.
    • Added distributed (DDP) training and evaluation scripts (see Experiments below).
  • [June 6, 2023] Improved the GHN-3 code (see ghn3/nn.py) and added more examples (see ghn_single_model.py).
  • [Apr 11, 2023] Cleaned up graph construction; added a sanity check for all PyTorch models.
  • [Apr 4, 2023] Slightly updated graph construction for ViT to be consistent with our paper.
    • Made four variants of our GHN-3 available: ghn3tm8, ghn3sm8, ghn3lm8, ghn3xlm16 (see the updated ghn_all_pytorch.ipynb). ghn3tm8 takes just 27MB, so it is efficient in low-memory settings.

This work extends the previous work Parameter Prediction for Unseen Deep Architectures, which introduced improved Graph HyperNetworks (GHN-2). Here, we scale up GHN-2 and release our best-performing model, GHN-3-XL/m16, as well as smaller variants. Our GHN-3 can serve as a good initialization for many large ImageNet models.

See our paper for figures showcasing our results.

Our code has only a few dependencies and is easy to use, as the PyTorch examples below show.

Please feel free to open a GitHub issue to ask questions or report bugs. Pull requests are also welcome.

Installation

# If you didn't install ppuda before:
pip install git+https://github.com/facebookresearch/ppuda.git

# If you had ppuda installed before, you need to reinstall it, because it was updated recently:
pip install git+https://github.com/facebookresearch/ppuda.git --no-deps --upgrade --force-reinstall

pip install 'torch>=1.12.1' 'torchvision>=0.13.1'  # [optional] update torch in case it's old (quoting keeps the shell from treating >= as redirection)

pip install huggingface_hub joblib  # to load pretrained GHNs

pip install timm  # [optional] to use fancy optimizers like LAMB
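
As an optional sanity check (assuming the installs above succeeded), you can verify that the key packages import:

python -c "import torch, torchvision, ppuda, huggingface_hub; print(torch.__version__)"  # should print your torch version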

ImageNet

For training and evaluation on ImageNet, one needs to set up ImageNet as in scripts/imagenet_setup.sh.

Training and evaluation on CIFAR-10 are now also supported; however, CIFAR-10 results were not presented in the paper.

Usage

import torchvision
from ghn3 import from_pretrained

ghn = from_pretrained()  # default is 'ghn3xlm16.pt', other variants are: 'ghn3tm8.pt', 'ghn3sm8.pt', 'ghn3lm8.pt'

model = torchvision.models.resnet50()  # can be any torchvision model
model = ghn(model)

# That's it, the ResNet-50 is initialized with our GHN-3.

The GHN-3 checkpoints are stored on Hugging Face at https://huggingface.co/SamsungSAILMontreal/ghn3/tree/main. Since the largest model (ghn3xlm16.pt) is about 2.5GB, the first call of ghn = from_pretrained() takes a while to download it.
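
If download time or memory is a concern, a smaller variant can be loaded instead. A minimal sketch, assuming from_pretrained accepts the checkpoint filename as its first argument (as the default 'ghn3xlm16.pt' in the example above suggests):

from ghn3 import from_pretrained

ghn = from_pretrained('ghn3tm8.pt')  # smallest variant (~27MB); filename assumed from the variant list above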

See ghn_single_model.py for examples of fine-tuning the predicted model or the GHN; a minimal sketch also follows below. Also see ghn_all_pytorch.ipynb, where we show how to predict parameters for all PyTorch models.
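
A minimal sketch of fine-tuning a GHN-3-initialized model: the data loader and hyperparameters below are placeholders, and ghn_single_model.py remains the authoritative example.

import torch
import torchvision
from ghn3 import from_pretrained

# Initialize a ResNet-50 with parameters predicted by GHN-3 (as in Usage above).
ghn = from_pretrained()
model = ghn(torchvision.models.resnet50())

# Minimal fine-tuning loop; train_loader is a placeholder for your own DataLoader,
# and the optimizer hyperparameters are illustrative, not the paper's settings.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()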

Experiments

These scripts allow training and evaluating PyTorch models and GHN-3 and comparing them to the baselines in our paper. Training automatically runs with DistributedDataParallel on all available GPUs when a script is launched with torchrun. See the command examples in train_ghn_ddp.py and train_ddp.py, and the illustrative launch after the list below.

  • For training GHN-3 see commands in train_ghn_ddp.py.
  • For evaluating GHN-3 see commands in eval_ghn.py (make sure to use --split torch for evaluation on PyTorch).
  • For training PyTorch models with and without GHN-3 initialization see commands in train_ddp.py.
  • For evaluating PyTorch models see commands in eval.py.
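
For illustration, a single-node DDP launch might look like the following; torchrun and --nproc_per_node are standard PyTorch, while the training script's own arguments are omitted here because they are documented in the command examples inside train_ghn_ddp.py.

torchrun --standalone --nproc_per_node=4 train_ghn_ddp.py  # 4 GPUs on one node; add the script arguments from train_ghn_ddp.py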

License

This code is licensed under the MIT license and is based on https://github.com/facebookresearch/ppuda, which is also MIT-licensed.

Citation

@inproceedings{knyazev2023canwescale,
  title={Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?},
  author={Knyazev, Boris and Hwang, Doha and Lacoste-Julien, Simon},
  booktitle={International Conference on Machine Learning},
  year={2023}
}

ghn3's People

Contributors

bknyaz


ghn3's Issues

Need additional detail regarding few-shot learning experiment

Can you explain the few-shot setting whose results are reported in Table 7 ("Transfer learning from ImageNet to few-shot CIFAR-10 and CIFAR-100 with 1000 training labels") with three networks: ResNet-50 (R-50), ConvNeXt-B (C-B), and Swin-T (S-T)?

How many shots are in the training set, and how many in the test (query) set?

Questions about the paper

Thank you for your interesting research. I have some questions regarding the paper:

  1. I'm curious about the adaptability of GHNs to other standard-sized datasets, particularly in different tasks such as image segmentation. The Penn-Fudan dataset discussed in your paper seems relatively small. Could you share your thoughts on this?
  2. Do you recall the accuracies the DeepNets-1M models achieved during GHN training? How do they compare to the accuracies of models with predicted parameters, with and without fine-tuning?

Thank you.
