---
title: 🏡 Home
permalink: /
nav_order: 1
---
The course is a summary of state-of-the-art results and approaches to solving applied optimization problems. Despite the focus on applications, the course contains the necessary theoretical foundations to understand why and how the presented methods work.

Classes are held online twice a week for an hour and a half. The lecture session gives a brief theoretical introduction to the topic; in the practical interactive session, students solve problems on the topic on their own, with Q&A.
Introductory session. 📝 Notes. 📼 Video
📦 Lecture | 📚 Seminar |
---|---|
Brief recap of matrix calculus. 📊 presentation 📝 notes 📼 video | Examples of matrix and vector derivatives. 📼 video 💻 code |
Idea of automatic differentiation. 📊 presentation 📝 notes 💻 code 📼 video | Working with automatic differentiation libraries: jax, pytorch, autograd. 📼 video 💻 code |
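The core idea behind the automatic differentiation libraries above (jax, pytorch, autograd) can be illustrated with a minimal forward-mode sketch using dual numbers. The `Dual` class below is a toy illustration, not any library's actual API:

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Each Dual carries a value and the derivative of that value w.r.t. the input.
import math


class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val  # function value
        self.dot = dot  # derivative value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__


def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)


def grad(f):
    # Derivative of a scalar function of one scalar variable.
    return lambda x: f(Dual(x, 1.0)).dot


f = lambda x: x * x + sin(x)   # f(x) = x^2 + sin(x)
df = grad(f)                   # f'(x) = 2x + cos(x)
print(df(0.0))                 # 2*0 + cos(0) = 1.0
```

Reverse-mode tools like pytorch and jax propagate derivatives in the opposite direction, but the bookkeeping idea is the same.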
📦 Lecture | 📚 Seminar |
---|---|
Markowitz portfolio theory. 📊 presentation 📝 notes 📼 video 💻 code | Building a portfolio based on real-world data. 📼 video 💻 code |
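The minimum-variance Markowitz portfolio has a closed-form solution, which makes a quick sanity-check sketch possible. The covariance matrix below is made-up illustration data, not the seminar's real-world dataset:

```python
# Minimum-variance Markowitz portfolio: minimize w^T Sigma w  s.t.  sum(w) = 1.
# Closed-form solution: w = Sigma^{-1} 1 / (1^T Sigma^{-1} 1).
import numpy as np

# Toy covariance matrix of three assets (illustrative numbers only).
sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])

ones = np.ones(3)
w = np.linalg.solve(sigma, ones)
w /= w.sum()                 # normalize so the weights sum to one

print(w, w @ sigma @ w)      # weights and resulting portfolio variance
```

Adding a target-return constraint or a no-shorting constraint turns this into a quadratic program, which is how it is usually solved in practice.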
📦 Lecture | 📚 Seminar |
---|---|
Applications of linear programming. 📊 presentation 📝 notes 📼 video 💻 code | LP application exercises: selecting TED talks as an LP, production planning. 📼 video 💻 code |
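A production-planning LP of the kind solved in the seminar can be sketched with `scipy.optimize.linprog`. The products and coefficients below are made up for illustration:

```python
# Toy production planning LP: maximize profit 3*x1 + 5*x2 subject to
# capacity constraints on three plants. linprog minimizes, so negate the profit.
import numpy as np
from scipy.optimize import linprog

c = [-3.0, -5.0]                      # negated profits per unit
A_ub = [[1.0, 0.0],                   # plant 1: x1 <= 4
        [0.0, 2.0],                   # plant 2: 2*x2 <= 12
        [3.0, 2.0]]                   # plant 3: 3*x1 + 2*x2 <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                # optimal plan (2, 6) with profit 36
```

The TED-talk selection exercise has the same shape: a linear objective over selection variables with linear budget constraints.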
📦 Lecture | 📚 Seminar |
---|---|
Zero-order methods: simulated annealing, evolutionary algorithms, genetic algorithms. Idea of the Nelder-Mead algorithm. 💻 code Hyperparameter search for ML models with nevergrad 💻 code and optuna 💻 code. 📊 presentation 📝 notes 📼 video | Hyperparameter search for ML models with optuna and keras. 📼 video 💻 code |
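The zero-order idea, using only function values, no gradients, can be sketched with a bare-bones simulated annealing loop. The test function, step size, and cooling schedule below are arbitrary choices for illustration:

```python
# Bare-bones simulated annealing: only function values are used (zero-order);
# worse moves are accepted with probability exp(-delta / T), which decays as
# the temperature T is cooled.
import math
import random

def simulated_annealing(f, x0, n_iters=2000, step=1.5, t0=1.0, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, n_iters + 1):
        t = t0 / k                         # simple cooling schedule
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # Accept improvements always; accept worse points with Boltzmann prob.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Minimize a simple non-convex function f(x) = x^2 + sin(5x), starting far away.
x_star, f_star = simulated_annealing(lambda x: x * x + math.sin(5 * x), x0=5.0)
print(x_star, f_star)
```

Libraries like nevergrad and optuna wrap much smarter versions of this loop (population-based and Bayesian strategies) behind a similar "ask for a point, report a value" interface.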
📦 Lecture | 📚 Seminar |
---|---|
Newton method. 💻 code Quasi-Newton methods. 💻 code 📊 presentation 📝 notes 📼 video | Implementation of the damped Newton method. Finding the analytical center of a set. Convergence study and comparison with other methods. Benchmarking of quasi-Newton methods. 📼 video 💻 code |
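The damped Newton method from the seminar can be sketched in a few lines: take the Newton step but scale it by a backtracking step size so that early, far-from-optimum iterations stay stable. The 1-D test function here is a made-up example, not the analytical-center problem itself:

```python
# Damped Newton method: x_{k+1} = x_k - alpha * f''(x_k)^{-1} f'(x_k),
# with alpha chosen by backtracking until the gradient magnitude shrinks.
import math

def damped_newton(grad, hess, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        g, h = grad(x), hess(x)
        step = g / h                       # Newton direction (1-D case)
        alpha = 1.0
        # Backtrack until |f'| decreases by a sufficient factor.
        while abs(grad(x - alpha * step)) > (1 - 0.5 * alpha) * abs(g):
            alpha /= 2
            if alpha < 1e-12:
                break
        x -= alpha * step
        if abs(grad(x)) < tol:
            break
    return x

# Minimize f(x) = x^2 + e^x: f'(x) = 2x + e^x, f''(x) = 2 + e^x.
x_star = damped_newton(lambda x: 2 * x + math.exp(x),
                       lambda x: 2 + math.exp(x), x0=5.0)
print(x_star)                              # stationary point near -0.35
```

Quasi-Newton methods (BFGS and friends) keep the same damped-step structure but replace the exact Hessian with an approximation built from gradient differences.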
📦 Lecture | 📚 Seminar |
---|---|
Stochastic gradient descent. Batches, epochs, schedulers. Polyak and Nesterov momentum. Accelerated gradient method. Adaptive stochastic methods: Adam, RMSProp, AdaDelta. 💻 code 📊 presentation 📝 notes 📼 video | A convergence study of SGD. Hyperparameter tuning. Convergence studies of accelerated and adaptive methods in neural network training. 📼 video 💻 code |
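Minibatch SGD with Polyak (heavy-ball) momentum can be sketched on a least-squares problem; the synthetic data, learning rate, and momentum coefficient below are illustrative choices:

```python
# Minibatch SGD with Polyak (heavy-ball) momentum on least squares:
# v_{k+1} = beta * v_k + grad,  w_{k+1} = w_k - lr * v_{k+1}.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true                        # noiseless targets for a clean check

w = np.zeros(d)
v = np.zeros(d)
lr, beta, batch = 0.01, 0.9, 32

for epoch in range(100):
    idx = rng.permutation(n)          # reshuffle each epoch
    for start in range(0, n, batch):  # one pass over the data = one epoch
        i = idx[start:start + batch]
        grad = A[i].T @ (A[i] @ w - b[i]) / len(i)
        v = beta * v + grad           # momentum accumulation
        w = w - lr * v

loss = 0.5 * np.mean((A @ w - b) ** 2)
print(loss)                           # should be close to zero
```

Swapping the update rule for Adam or RMSProp changes only the last two lines of the inner loop, which makes this a convenient skeleton for the seminar's convergence comparisons.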
📦 Lecture |
---|
The landscape of the loss function of a neural network. Neural network fine-tuning, a.k.a. transfer learning. Neural style transfer. 💻 code Using GANs to learn a density distribution on the plane. Generating new Pokémon with deep neural networks. 💻 code Visualizing the projection of the loss function of a neural network onto a line and a plane. 💻 code 📼 video |
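The line-projection visualization can be sketched without any deep-learning framework: pick a parameter point θ, a direction d, and evaluate L(θ + αd) on a grid of α values. The tiny quadratic "loss" below stands in for a real network loss, which would be evaluated on a data batch instead:

```python
# Visualizing a 1-D slice of a loss surface: evaluate L(theta + alpha * d)
# on a grid of alpha values along a normalized random direction d.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "loss": a quadratic centered at theta_star (illustration only).
theta_star = rng.normal(size=10)
H = np.diag(rng.uniform(0.5, 3.0, size=10))    # positive-definite curvature
loss = lambda th: 0.5 * (th - theta_star) @ H @ (th - theta_star)

theta = theta_star.copy()                      # slice through the minimizer
d = rng.normal(size=10)
d /= np.linalg.norm(d)                         # normalize the direction

alphas = np.linspace(-1.0, 1.0, 101)
slice_vals = np.array([loss(theta + a * d) for a in alphas])
print(alphas[slice_vals.argmin()])             # slice is minimized at alpha = 0
```

A plane projection works the same way with two directions and a 2-D grid, L(θ + αd₁ + βd₂), which is the picture usually shown for neural network loss landscapes.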