Work in progress.
EMMN is a new stochastic gradient-based optimization algorithm designed to provide the speed of adaptive methods while maintaining the generalization ability of non-adaptive methods.
In this algorithm, we first decompose the gradient into two parts: a momentum component and a residual component (i.e. gradient - momentum). The residual component is then normalized by a moving estimate of its standard deviation. Finally, the two components are added back together and used to update the parameters. This is illustrated below, and a code sketch follows the figure.
Figure 1: Visualization of the EMMN algorithm.
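The sketch below shows what one EMMN update step might look like in PyTorch, following the decomposition described above. The function name, state layout, and the exact second-moment estimator are our assumptions for illustration, not the actual implementation in this repository.

```python
import torch

def emmn_step(param, grad, state, lr=3.0, beta=0.95, beta2=0.999, eps=1e-8):
    """Hypothetical sketch of one EMMN update; names and the exact
    variance estimator are assumptions, not this repository's code."""
    m, v = state["m"], state["v"]  # tensors shaped like param, initialized to zero
    # Momentum component: exponential moving average of the gradient.
    m.mul_(beta).add_(grad, alpha=1 - beta)
    # Residual component: gradient minus momentum.
    residual = grad - m
    # Moving estimate of the residual's second moment (std is v.sqrt()).
    v.mul_(beta2).addcmul_(residual, residual, value=1 - beta2)
    # Normalize only the residual; the momentum term passes through
    # unscaled, so the momentum direction is invariant to normalization.
    update = m + residual / (v.sqrt() + eps)
    param.add_(update, alpha=-lr)
```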
Since this algorithm normalizes the gradient as if the estimated moving average (i.e. momentum) were the center (i.e. zero), momentum is not affected by the normalization. Hence, momentum is invariant. This explicitly mitigates the diagonal correlated trajectory problem, a latent property of adaptive optimization algorithms that is identified and explained in the paper. Since the gradient is divided by the running estimate of the standard deviation at each time step, the parameter fluctuations have a similar radius around the estimated center for each element. This enables soft-homoscedastic gradient sampling, which yields a more informative momentum estimate.
In this study, we theoretically derived the following two conjectured properties of adaptive optimization algorithms:
- Diagonal correlated trajectory
- Soft-homoscedastic gradient sampling
We empirically showed the existence of these two properties: see Figure 2 for the diagonal correlated trajectory, and Figure 5 and Table 2 for soft-homoscedastic gradient sampling.
Our study suggests that the diagonal correlated trajectory causes high variance, and thus lower generalization ability, as shown in Figure 4. We theoretically show the advantage of soft-homoscedastic gradient sampling in the paper; please refer to the paper for details.
Our motivation is to cancel the diagonal correlation of the parameter trajectories while keeping soft-homoscedastic gradient sampling.
SLSBoost is a sub-algorithm for gradient-based optimization algorithms that speeds up the learning process.
Based on our studies of the learning parameters' trajectories, we hypothesize that high diagonal correlation of the trajectories leads to higher straight-line stability, and that high straight-line stability is a direct cause of improved learning speed.
Our experiments have shown that by increasing straight-line stability without increasing diagonal correlation, it is possible to improve training speed while minimizing degradation of generalization performance.
The SLSBoost algorithm boosts the straight-line stability of the parameter trajectories without increasing their diagonal correlation. It is implemented experimentally in this PyTorch implementation of EMMN.
This section is a work in progress.
EMMN can easily replace SGD with momentum by converting the SGD learning rate to an EMMN learning rate using the equation below:

α = αsgd / (1 - βsgd)

where α is the EMMN learning rate, αsgd is the SGD learning rate, and βsgd is the momentum factor of SGD. For example, if βsgd = 0.95 and αsgd = 0.15, the learning rate of EMMN is α = 0.15 / (1 - 0.95) = 3.
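For convenience, the conversion can be written as a small helper. This is a minimal sketch; the function name is ours and it assumes the α = αsgd / (1 - βsgd) relation above.

```python
def sgd_to_emmn_lr(alpha_sgd: float, beta_sgd: float) -> float:
    # Convert an SGD-with-momentum learning rate to an EMMN learning rate,
    # assuming alpha_emmn = alpha_sgd / (1 - beta_sgd) as described above.
    return alpha_sgd / (1.0 - beta_sgd)

print(sgd_to_emmn_lr(0.15, 0.95))  # ~3.0, matching the example above
```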
We recommend setting β of EMMN to 0.95 at first, then adjusting it between 0.9 and 0.99 if necessary.
If you need to speed up the learning process further, try SLSBoost. We haven't fully experimented with it yet, but for now we recommend something around sls_boost=(0.05, 0.995), (0.3, 0.95), or values in between.
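Putting this together, a usage sketch might look as follows. The import path and the exact constructor signature (lr, beta, sls_boost) are assumptions based on the parameters discussed in this README; check the implementation for the actual API.

```python
import torch
from emmn import EMMN  # hypothetical import path; adjust to this repo's layout

model = torch.nn.Linear(10, 2)

# SGD lr 0.15 with momentum 0.95 converts to an EMMN lr of about 3 (see above).
optimizer = EMMN(model.parameters(), lr=3.0, beta=0.95,
                 sls_boost=(0.05, 0.995))  # optional SLSBoost, as recommended above

x, y = torch.randn(8, 10), torch.randn(8, 2)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```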