Exercises of the Laboratory of Computational Physics Mod. B course, A.Y. 2021/2022.
Group: Amjadi Bahador, Attar Aidin, Joulaei Vijouyeh Roya, and Roshana Mojtaba
Prof. Marco Baiesi
Optimization techniques for Machine Learning:
- Vanilla Gradient Descent (GD);
- Gradient Descent with Momentum (MGD);
- Nesterov Accelerated Gradient (NAG);
- RMSProp;
- Adam.
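The optimizers above can be sketched as plain update rules. Below is a minimal, self-contained illustration on a toy quadratic loss f(x) = x²; the learning rates, decay constants, and step counts are illustrative choices, not the ones used in the course notebooks.

```python
import math

def grad(x):
    # Gradient of the toy loss f(x) = x**2 (minimum at x = 0)
    return 2.0 * x

def gd(x, lr=0.1, steps=200):
    # Vanilla gradient descent: step along the negative gradient
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def mgd(x, lr=0.1, beta=0.9, steps=200):
    # Gradient descent with momentum: accumulate a velocity term
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x += v
    return x

def nag(x, lr=0.1, beta=0.9, steps=200):
    # Nesterov: evaluate the gradient at the look-ahead point x + beta*v
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x + beta * v)
        x += v
    return x

def rmsprop(x, lr=0.05, rho=0.9, eps=1e-8, steps=200):
    # RMSProp: scale each step by a running RMS of past gradients
    s = 0.0
    for _ in range(steps):
        g = grad(x)
        s = rho * s + (1.0 - rho) * g * g
        x -= lr * g / (math.sqrt(s) + eps)
    return x

def adam(x, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, steps=200):
    # Adam: bias-corrected first- and second-moment estimates
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1.0 - b1) * g
        v = b2 * v + (1.0 - b2) * g * g
        x -= lr * (m / (1.0 - b1**t)) / (math.sqrt(v / (1.0 - b2**t)) + eps)
    return x

for opt in (gd, mgd, nag, rmsprop, adam):
    print(f"{opt.__name__:8s} final x = {opt(5.0): .4f}")
```

All five routines drive x from 5.0 towards the minimum at 0; the adaptive methods (RMSProp, Adam) hover near it with a residual oscillation set by the learning rate.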
Deep Neural Networks
Use of DNNs for binary classification in 2D with Keras:
- Behaviour of the DNN training and validation curves as a function of the number of samples used;
- Estimation of the best hyper-parameters using a GridSearchCV algorithm;
- Behaviour of the DNN training and validation curves as a function of the rescaling and the initial weights;
- Repeating the procedure with a more complex distribution.
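The hyper-parameter search above follows a standard GridSearchCV pattern. The course notebooks use Keras; the sketch below uses scikit-learn's MLPClassifier as a stand-in (a Keras model wrapped for scikit-learn would plug into GridSearchCV the same way). The dataset, grid values, and network sizes are illustrative, not the ones from the exercise.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy 2D binary-classification dataset (stand-in for the course's distributions)
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Rescaling + a small fully-connected network, as studied in the exercise
pipe = make_pipeline(
    StandardScaler(),
    MLPClassifier(max_iter=2000, random_state=0),
)

# Cross-validated scan over a small, illustrative hyper-parameter grid
grid = GridSearchCV(
    pipe,
    param_grid={
        "mlpclassifier__hidden_layer_sizes": [(8,), (16, 16)],
        "mlpclassifier__alpha": [1e-4, 1e-2],
    },
    cv=3,
)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_test, y_test))
```

`grid.best_params_` then holds the combination with the best cross-validated score, which is refit on the full training set.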
Convolutional Neural Networks
Use of CNNs for pattern recognition with Keras:
- Behaviour of the CNN training and validation curves as a function of the parameters such as the amplitude A;
- Behaviour of the curves when varying the regularization factor and function;
- Finding the best model for our task.
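A small Keras CNN with a tunable regularization factor, as varied in the exercise, could look like the sketch below. The layer sizes, the 1D input shape, and the choice of L2 as the regularizer are assumptions for illustration only.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_cnn(l2_factor=1e-3):
    # Small CNN for 1D pattern recognition; l2_factor (and the choice of
    # regularizer function) is the knob varied in the exercise.
    reg = regularizers.l2(l2_factor)
    model = keras.Sequential([
        layers.Input(shape=(64, 1)),
        layers.Conv1D(16, 3, activation="relu", kernel_regularizer=reg),
        layers.MaxPooling1D(2),
        layers.Conv1D(32, 3, activation="relu", kernel_regularizer=reg),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1, activation="sigmoid"),  # binary output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn(l2_factor=1e-2)
model.summary()
```

Rebuilding the model with different `l2_factor` values (or a different regularizer, e.g. `regularizers.l1`) and comparing the training and validation curves reproduces the kind of scan described above.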
XGBoost trees
Use of XGBoost trees for pattern recognition and binary classification:
- Comparison with the results of CNN;
- Feature extraction using 'tsfresh';
- Combination of the feature extraction with a CNN;
- Behaviour of the accuracy as a function of the parameters used;
- Study of the best simple XGBoost model keeping a good accuracy.
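The "best simple model" study boils down to scanning the boosting hyper-parameters against held-out accuracy. The sketch below uses scikit-learn's GradientBoostingClassifier as a stand-in so it needs only scikit-learn; xgboost's scikit-learn wrapper (`XGBClassifier`) follows the same fit/score pattern with analogous `n_estimators`, `max_depth`, and `learning_rate` arguments. The synthetic dataset and grid values are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the extracted pattern-recognition features
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scan tree count and depth: small models trade size against accuracy
for n_estimators, max_depth in [(20, 2), (100, 3)]:
    model = GradientBoostingClassifier(
        n_estimators=n_estimators,
        max_depth=max_depth,
        learning_rate=0.1,
        random_state=0,
    )
    model.fit(X_train, y_train)
    print(n_estimators, max_depth, model.score(X_test, y_test))
```

Plotting these scores against the hyper-parameter values shows where accuracy saturates, i.e. the smallest model that is still good enough.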
DBSCAN and t-SNE
Use of DBSCAN and t-SNE for visualization and clustering:
- The role of dimensions;
- The role of perplexity;
- Tuning of the parameters of the DBSCAN algorithm for clustering;
- t-SNE for clustering.
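The t-SNE-then-DBSCAN pipeline can be sketched in a few lines with scikit-learn. The toy blob data, the perplexity value, and the `eps`/`min_samples` settings below are illustrative; tuning exactly these knobs is the point of the exercise.

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# Toy high-dimensional data: 3 blobs in 10 dimensions
X, y = make_blobs(n_samples=300, n_features=10, centers=3, random_state=0)
X = StandardScaler().fit_transform(X)

# t-SNE embeds to 2D; perplexity balances local vs global structure
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# DBSCAN on the embedding: eps and min_samples are the two knobs to tune
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(emb)
n_clusters = len(set(labels) - {-1})  # -1 marks noise points
print("clusters found:", n_clusters)
```

Varying the perplexity changes how the embedding groups points, which in turn changes what `eps` DBSCAN needs, so the two tunings interact.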