A Python package of accelerated first-order algorithms for solving relatively smooth convex optimization problems
minimize { f(x) + P(x) : x in C }
with a reference function h(x), where C is a closed convex set and
- h(x) is convex and essentially smooth on C;
- f(x) is convex and differentiable, and L-smooth relative to h(x), that is, L*h(x) - f(x) is convex;
- P(x) is convex and closed (lower semicontinuous).
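For intuition, relative smoothness can be checked numerically on a simple pair (this example pair is an illustration, not part of the package): f(x) = x^4/4 is 1-smooth relative to h(x) = x^4/4 + x^2/2, since L*h(x) - f(x) = x^2/2 is convex. A minimal sketch verifying this via second differences:

```python
# Hypothetical illustration (not part of accbpg): check that f is L-smooth
# relative to h by testing convexity of g = L*h - f with second differences.

def f(x):
    return x**4 / 4

def h(x):
    return x**4 / 4 + x**2 / 2

L = 1.0

def g(x):
    # Here g(x) = x**2 / 2, which is convex.
    return L * h(x) - f(x)

def second_difference(func, x, d=1e-3):
    # Nonnegative second differences indicate convexity along the line.
    return func(x + d) - 2 * func(x) + func(x - d)

ok = all(second_difference(g, x) >= 0 for x in [-2.0, -0.5, 0.0, 0.5, 2.0])
```

Note that f alone is not L-smooth in the classical (Lipschitz-gradient) sense on the whole line, which is exactly the situation these Bregman methods are designed for.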
Algorithms implemented from HRX2018:
- BPG (Bregman proximal gradient) method with line search option
- ABPG (Accelerated BPG) method
- ABPG-expo (ABPG with exponent adaptation)
- ABPG-gain (ABPG with gain adaptation)
- ABDA (Accelerated Bregman dual averaging) method
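To sketch what a single Bregman proximal gradient step looks like (a toy example, not the package's implementation): with h chosen as the negative entropy and C the probability simplex, the Bregman proximal step has a closed-form multiplicative update. Minimizing a small least-squares objective over the simplex:

```python
import math

# Hypothetical sketch (not accbpg's API): BPG with h = negative entropy
# on the probability simplex, minimizing f(x) = 0.5*||A x - b||^2.
A = [[1.0, 2.0, 0.5],
     [0.0, 1.0, 1.5]]
b = [1.0, 0.5]
n = 3

def f(x):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
    return 0.5 * sum(ri * ri for ri in r)

def grad_f(x):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
    return [sum(A[i][j] * r[i] for i in range(len(b))) for j in range(n)]

# Conservative relative-smoothness constant: trace(A^T A) >= lambda_max(A^T A),
# and the entropy Hessian diag(1/x_i) dominates the identity on the simplex.
L = sum(A[i][j] ** 2 for i in range(len(b)) for j in range(n))

def bpg_entropy_step(x, L):
    """One Bregman proximal step; with entropy h the argmin is the
    multiplicative (mirror-descent) update followed by normalization."""
    g = grad_f(x)
    y = [xi * math.exp(-gi / L) for xi, gi in zip(x, g)]
    s = sum(y)
    return [yi / s for yi in y]

x = [1.0 / n] * n
vals = [f(x)]
for _ in range(200):
    x = bpg_entropy_step(x, L)
    vals.append(f(x))
```

With a valid relative-smoothness constant, each step decreases the objective monotonically; the accelerated variants above improve the convergence rate beyond this basic scheme.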
Additional algorithms for solving D-Optimal Experiment Design problems:
- D_opt_FW (basic Frank-Wolfe method)
- D_opt_FW_away (Frank-Wolfe method with away steps)
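The basic Frank-Wolfe scheme these variants build on can be sketched as follows (a toy example, not the package's D_opt_FW): over the probability simplex, the linear subproblem is solved by picking the vertex whose gradient coordinate is smallest, then stepping toward it.

```python
# Hypothetical sketch (not accbpg's D_opt_FW): basic Frank-Wolfe on the
# probability simplex for f(x) = 0.5*||A x - b||^2.
A = [[1.0, 2.0, 0.5],
     [0.0, 1.0, 1.5]]
b = [1.0, 0.5]
n = 3

def f(x):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
    return 0.5 * sum(ri * ri for ri in r)

def grad_f(x):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
    return [sum(A[i][j] * r[i] for i in range(len(b))) for j in range(n)]

x = [1.0 / n] * n
f0 = f(x)
for k in range(300):
    g = grad_f(x)
    # Linear minimization over the simplex: the minimizing vertex is the
    # coordinate with the smallest partial derivative.
    j = min(range(n), key=lambda i: g[i])
    gamma = 2.0 / (k + 2)  # standard open-loop step size
    x = [(1 - gamma) * xi for xi in x]
    x[j] += gamma
fK = f(x)
```

Away steps (as in D_opt_FW_away) additionally allow moving weight off a previously selected vertex, which avoids the slow zig-zagging of the basic method near the boundary.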
Clone or fork the repository from GitHub, or install from PyPI:
pip install accbpg
Example: generate a random instance of the D-optimal design problem and solve it using three different methods.
import accbpg
# generate a random instance of the D-optimal design problem of size 80 by 200
f, h, L, x0 = accbpg.D_opt_design(80, 200)
# solve the problem instance using BPG with line search
x1, F1, G1, T1 = accbpg.BPG(f, h, L, x0, maxitrs=1000, verbskip=100)
# solve it again using ABPG with gamma=2
x2, F2, G2, T2 = accbpg.ABPG(f, h, L, x0, gamma=2, maxitrs=1000, verbskip=100)
# solve it again using the gain-adaptation variant of ABPG with gamma=2
x3, F3, G3, _, _, T3 = accbpg.ABPG_gain(f, h, L, x0, gamma=2, maxitrs=1000, verbskip=100)
D-optimal experiment design problems can be constructed directly from files (in LIBSVM format) using
f, h, L, X0 = accbpg.D_opt_libsvm(filename)
All algorithms can work with customized functions f(x) and h(x), and an example is given in this Python file.

A complete example with visualization is given in this Jupyter Notebook.

All examples in HRX2018 can be found in the ipynb directory.

Comparisons with the Frank-Wolfe method can be found in ipynb/ABPGvsFW.