cemanil / lnets
Lipschitz Neural Networks described in "Sorting Out Lipschitz Function Approximation" (ICML 2019).
When I use the hardcoded LeNet structure and train it with the provided config file (lenet_bjorck), I am not achieving a Lipschitz constant of 1, even after orthonormal finetuning (some of the singular values are a little over 1).
Should I do anything different?
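For reference, a fully connected layer is 1-Lipschitz in the 2-norm exactly when its largest singular value is at most 1, so the check the issue describes can be done with a plain SVD. A minimal numpy sketch, independent of the lnets code:

```python
import numpy as np

# The largest singular value of a weight matrix W is its spectral norm,
# i.e. the Lipschitz constant (in L2) of the linear layer x -> W x.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))

sigma_max = np.linalg.svd(W, compute_uv=False)[0]

# Project W onto the set of matrices with spectral norm <= 1
# by dividing by its largest singular value (if it exceeds 1).
W_unit = W / max(sigma_max, 1.0)

print(float(np.linalg.svd(W_unit, compute_uv=False)[0]))  # <= 1.0
```

Singular values "a little over 1" after finetuning are often just numerical tolerance in the orthonormalization; checking them against a small epsilon rather than exactly 1.0 avoids false alarms.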
When I try to use the fully_convolutional_2d architecture, what should "l_correction_constant" be in order to compute "l_constant_per_layer"?
Can somebody please explain how to run the code, and what the config argument in train_classifier() is?
Given an input of size (B, d), the behavior of GroupSort(d//2) and MaxMin(d//2) should be the same. However, a simple test will show that they do not produce the same results. In fact, in the code for MaxMin, the forward function simply concatenates the maxes and mins, thereby not having the interleaved max-min-max-... structure that is given in GroupSort. A bit confused because I'm expecting them to have the same behavior (with GroupSort being the correct implementation).
EDIT: Perhaps it might be okay, since the output of MaxMin is simply a permutation of the output of GroupSort for a group size of 2. If a linear layer follows, this essentially just permutes the rows of its weight matrix. Could someone confirm this?
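A small numpy sketch supports this reading, assuming both activations pair the same coordinates (here, adjacent pairs; the actual lnets implementations may pair coordinates differently, e.g. MaxMin splitting the feature vector into halves):

```python
import numpy as np

def groupsort2(x):
    # GroupSort with group size 2: sort each adjacent pair descending,
    # producing the interleaved max-min-max-min-... layout.
    b, d = x.shape
    pairs = x.reshape(b, d // 2, 2)
    return -np.sort(-pairs, axis=-1).reshape(b, d)

def maxmin(x):
    # MaxMin over the same adjacent pairs: concatenate all the pair
    # maxes first, then all the pair mins.
    pairs = x.reshape(x.shape[0], -1, 2)
    return np.concatenate([pairs.max(-1), pairs.min(-1)], axis=-1)

x = np.random.default_rng(0).standard_normal((3, 8))
gs, mm = groupsort2(x), maxmin(x)

# The outputs differ element-wise, but MaxMin is a fixed permutation of
# GroupSort: the maxes sit at the even positions, the mins at the odd ones.
perm = np.concatenate([np.arange(0, 8, 2), np.arange(1, 8, 2)])
print(np.allclose(mm, gs[:, perm]))  # True
```

Since a subsequent linear layer can absorb any fixed permutation into its weight matrix, the two activations would then be equivalent up to reparametrization.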
Hi,
Using your implementation of GroupSort, I cannot get the check_group_sorted function to return 1.
I'm providing a tensor of shape (N, C, H, W) with GroupSort(2, axis=1).
Stepping through the implementation with a toy problem, I can see that the channel axis gets sorted correctly, but the `np.diff()` call in `check_group_sorted` seems to diff along the wrong axis.
Steps to reproduce:

```python
import torch
from activations import GroupSort

tensor = torch.arange(16).roll(5).reshape(1, 4, 2, 2)
gs = GroupSort(2, axis=1)
gs(tensor)
```
If I select axis + 1 in the diff, it diffs the sorted values as expected.
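To illustrate the axis question, here is a standalone numpy sketch (a hypothetical re-implementation for illustration, not the lnets check_group_sorted): after sorting groups of 2 channels of an (N, C, H, W) array, `np.diff` has to run along the within-group axis of the grouped view for the differences to reflect sortedness:

```python
import numpy as np

# Group-sort a (N, C, H, W) array along the channel axis with group size 2.
x = np.arange(16).reshape(1, 4, 2, 2).astype(float)

n, c, h, w = x.shape
groups = x.reshape(n, c // 2, 2, h, w)      # axis 1: group index, axis 2: within-group
sorted_groups = np.sort(groups, axis=2)     # ascending within each pair
gs = sorted_groups.reshape(n, c, h, w)

# The diff must be taken along the axis that indexes *within* a group
# (axis=2 of the grouped view), not along the group axis itself.
diffs = np.diff(sorted_groups, axis=2)
print(bool((diffs >= 0).all()))  # True: every pair is ascending
```

Diffing along the group axis (axis=1 of the grouped view) instead compares values across different groups, which is not guaranteed to be monotone even when every group is correctly sorted.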
Hi again,
I was able to reconstruct Figure 9 and the corresponding lines in Figure 8 (not exactly, but closely) - MaxMin Hinge 0.1, MaxMin Hinge 0.3, ReLU Hinge 0.1 and ReLU Hinge 0.3 - by using fc_classification_l_inf_margin.json and changing the margin and activation appropriately.
I am currently trying to figure out how to reproduce the rest. Should 'ReLU Standard' and 'MaxMin Standard' be computed by using fc_classification_l_inf.json and setting the correct activation function?
I assume the two lines 'PGD eps 0.1' and 'PGD eps 0.15' (mentioned in "We also compared to PGD training") are computed by train_pgd.py? Which config file should be used for this experiment?
Best regards,
Verena
Hi again,
I am currently trying to run python ./lnets/tasks/dualnets/mains/train_dual.py ./lnets/tasks/dualnets/configs/high_dimensional_cone_experiment.json. The only things I changed in the config are "save_best": true, "cuda": true and "visualize": true.
Since this did not create any visualizations I looked into the code and was confused by this function:
```python
def save_1_or_2_dim_dualnet_visualizations(model, dim, figures_dir, config, epoch=None, loss=None,
                                           after_training=False):
    if not after_training:
        if dim == 2:
            save_2d_dualnet_visualizations(model, figures_dir, config, epoch, loss)
        if dim == 1:
            save_1d_dualnet_visualizations(model, figures_dir, config, epoch, loss)
    else:
        if config.distrib1.dim == 2:
            save_2d_dualnet_visualizations(model, figures_dir, config, after_training=True)
        if config.distrib1.dim == 1:
            save_1d_dualnet_visualizations(model, figures_dir, config, after_training=True)
```
There is both `dim` and `config.distrib1.dim`. At first I thought that the call to `save_1_or_2_dim_dualnet_visualizations` in train_dual.py was wrong and one should pass 1 or 2 instead of `config.distrib1.dim`. After trying this, I realized that `dim` and `config.distrib1.dim` should actually always be the same, so the value is passed redundantly.
I assume plotting is only possible for 1D and 2D distributions (I think this was mentioned somewhere in the paper, but I don't remember where)? To clarify this possibly naive confusion: I thought there were two types of plots - 1D and 2D - to visualize higher-dimensional distributions or some of their metrics.
If I am correct that `dim` is always the same as `config.distrib1.dim`, and since `config` is passed anyway, I can fix this and create a pull request.
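If that is the case, a minimal sketch of the deduplicated version (the helper names follow the function quoted above; the stand-in bodies below are hypothetical, just to keep the sketch self-contained and runnable):

```python
from types import SimpleNamespace

# Stand-ins for the real plotting helpers, recording how they were called
# (the real ones live in lnets and write figures to disk).
calls = []

def save_1d_dualnet_visualizations(model, figures_dir, config, epoch=None, loss=None,
                                   after_training=False):
    calls.append(("1d", epoch, loss, after_training))

def save_2d_dualnet_visualizations(model, figures_dir, config, epoch=None, loss=None,
                                   after_training=False):
    calls.append(("2d", epoch, loss, after_training))

def save_1_or_2_dim_dualnet_visualizations(model, figures_dir, config, epoch=None, loss=None,
                                           after_training=False):
    # Read the dimensionality from the config instead of a redundant `dim` argument.
    dim = config.distrib1.dim
    if after_training:
        kwargs = dict(after_training=True)
    else:
        kwargs = dict(epoch=epoch, loss=loss)
    if dim == 2:
        save_2d_dualnet_visualizations(model, figures_dir, config, **kwargs)
    elif dim == 1:
        save_1d_dualnet_visualizations(model, figures_dir, config, **kwargs)

config = SimpleNamespace(distrib1=SimpleNamespace(dim=2))
save_1_or_2_dim_dualnet_visualizations(None, "figures/", config, epoch=3, loss=0.5)
save_1_or_2_dim_dualnet_visualizations(None, "figures/", config, after_training=True)
```

This keeps the during-training and after-training branches identical apart from the keyword arguments they forward, and plotting silently does nothing for dimensions above 2, matching the original behavior.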
Best regards
Verena
Hi,
your README states "All the experiments were performed using PyTorch version 0.4", which made me think you used 0.4.0, but I think it should be 0.4.1, since, for example, `from torch.utils.data import Subset, DataLoader` causes an error with torch==0.4.0: `torch.utils.data.dataset.Subset` was moved to `torch.utils.data.Subset` in v0.4.1. It's working for me now, but it might save others time if you could clarify this.
Thanks and best regards,
Verena