Comments (4)
Hi, there are existing methods which show that you do not need to retrain an equivariant version of ResNet (or other large pretrained models) from scratch to obtain a pretrained equivariant ResNet; instead, you can "adapt" a pretrained ResNet to be equivariant to a given group with architecture-agnostic equivariance methods such as canonicalization.
Please feel free to check out: Equivariant Adaptation of Large Pretrained Models, NeurIPS 2023. It shows strong results for discrete groups in the image domain and for continuous groups on point clouds and other tasks, but a few challenges remain in adapting pretrained image models to continuous groups, which is work in progress.
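For intuition, here is a minimal, self-contained sketch of the canonicalization idea in plain PyTorch, using a simple argmax-over-rotated-copies variant for the discrete group C4. The small `score_net` and the selection logic are illustrative stand-ins, not the paper's actual implementation (the paper predicts the group element with an equivariant network):

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet50

# Illustrative stand-in for a canonicalization network: scores each rotated copy.
score_net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=7, stride=4),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 1),
)
backbone = resnet50(weights="IMAGENET1K_V2").eval()  # frozen pretrained model

def canonicalize_c4(x):
    """Rotate x by 0/90/180/270 degrees and keep the highest-scoring copy.

    Rotating the input only permutes the set of four copies, so (up to ties)
    the same copy is selected and the overall pipeline becomes C4-invariant.
    """
    copies = torch.stack([TF.rotate(x, 90.0 * k) for k in range(4)], dim=1)  # (B,4,C,H,W)
    b, g, c, h, w = copies.shape
    scores = score_net(copies.reshape(b * g, c, h, w)).reshape(b, g)
    best = scores.argmax(dim=1)                       # predicted group element
    return copies[torch.arange(b), best]

x = torch.randn(2, 3, 224, 224)
logits = backbone(canonicalize_c4(x))  # invariant to 90-degree rotations of x
```

The pretrained backbone is untouched, which is what makes the approach architecture-agnostic: only the small canonicalization module is group-aware.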
Hi Siba,
Thank you for your answer. That's fascinating work!
From what I understood, instead of training an equivariant ResNet from scratch, you preceded a ResNet-50 with your canonicalization network and then fine-tuned the ResNet while training that canonicalization module. Is that correct?
Do you have code examples of how you made that work? I'm particularly interested in the network you implemented with the escnn library.
Is it similar to the CNN in this notebook? How many layers, and in general which hyperparameters, suited you well?
Hi, thank you for taking a look at the paper!
Yes, indeed. Note that you don't need to fine-tune per se: as we show in the case of the Segment Anything Model, you can train only the equivariant canonicalization network to learn the identity orientation with prior regularization. You do need a regularization loss to align the outputs of the canonicalization network with the (pretrained) dataset orientation.
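To make that concrete, here is a minimal sketch of what such a prior-regularization term could look like, assuming the canonicalization network outputs scores over the N elements of a discrete rotation group with the identity at index 0; the paper's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def canonicalization_prior_loss(group_logits):
    """Push the predicted group element toward the identity (index 0).

    group_logits: (B, N) unnormalized scores over the N group elements, as
    produced by a canonicalization network on *untransformed* training images.
    Penalizing deviation from the identity encourages the canonical
    orientation to match the (pretrained) dataset orientation.
    Sketch of the idea only, not the paper's exact loss.
    """
    identity = torch.zeros(group_logits.size(0), dtype=torch.long,
                           device=group_logits.device)
    return F.cross_entropy(group_logits, identity)

# usage: total_loss = task_loss + lam * canonicalization_prior_loss(logits)
```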
We are planning to release our user-friendly library before the end of February, and we are adding examples and tutorials to help people get started with canonicalization. I will let you know once we release it. A schematic of the pipeline is shown in Figure 2.
Yes, the canonicalization networks are similar to the one in the notebook you linked. We give some details of hyperparameter tuning in Appendix Section B. We tune the number of layers, kernel sizes, dropout (switching off dropout generally helped), and learning rates. In any case, the canonicalization networks are very small compared to the pretrained model under consideration, which makes the approach attractive (some parameter counts are highlighted in Table 3).
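As an unofficial sketch of such a small canonicalization network in escnn (layer count, channel width, and kernel sizes here are illustrative, not the tuned values from Appendix B; dropout is omitted since switching it off generally helped):

```python
import torch
from escnn import gspaces
from escnn import nn as enn

gspace = gspaces.rot2dOnR2(N=8)  # C8: rotations by multiples of 45 degrees

in_type = enn.FieldType(gspace, 3 * [gspace.trivial_repr])    # RGB input
hid_type = enn.FieldType(gspace, 16 * [gspace.regular_repr])
out_type = enn.FieldType(gspace, 1 * [gspace.regular_repr])   # one regular field

# Small equivariant CNN: the regular output field holds one value per group element.
canonicalizer = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=5, padding=2),
    enn.InnerBatchNorm(hid_type),
    enn.ReLU(hid_type),
    enn.R2Conv(hid_type, out_type, kernel_size=5, padding=2),
)

x = enn.GeometricTensor(torch.randn(1, 3, 64, 64), in_type)
y = canonicalizer(x)                    # fields transform equivariantly under C8
scores = y.tensor.mean(dim=(2, 3))      # (1, 8): spatial average -> score per rotation
group_element = scores.argmax(dim=1)    # rotating the input permutes these scores
```

Spatially averaging a regular field yields one score per group element, and rotating the input cyclically permutes those scores, so the argmax shifts consistently; that is what makes the predicted orientation usable for canonicalization.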
I pretrained some equivariant ResNets on ImageNet-1k. The models and weights can be found here.
The canonicalization approach is appealing since it can be applied to any pretrained model. I haven't had a chance to compare against it yet, but I'm curious whether there is any performance gap.
Related Issues (20)
- wide-resnet N=6 not equivariant
- Instance Norm as normalization?
- What is the intuition behind conditioning the kernel size on the number of rotations in the example script?
- check_equivariance test failed
- escnn's conv, BN, relu is not equivariant?
- Utility functions to save and load instances of Group and Representations
- Missing indexing dimensions in GeometricTensors
- Batched equivariant maps basis expansion (?)
- Improved invariant feature extraction - Improved group pooling
- Migrate unit tests to pytest?
- question about the 3D rotation order and the Fourier transformation matrix
- Information bottlenecks without warnings.
- Add LayerNorm layer
- Add Unitary Group
- Usage equivariant MLP
- Multi-gpu training degrades performance
- Can escnn be used to process one-dimensional data?
- Passive Rotation
- Planar symmetries with staggered grids