Tuning Neural Networks - Recap
Key Takeaways
The key takeaways from this section include:
Tuning Neural Networks
- A validation set guides iterative model building, while a held-out test set provides a final, unbiased estimate of performance on deep neural networks
- As with traditional machine learning models, we need to watch out for the bias-variance trade-off when building deep learning models
- Alternatives to plain gradient descent include RMSprop, Adam, and gradient descent with momentum
- Hyperparameter tuning is of crucial importance when working with deep learning models, as setting the hyperparameters right can substantially improve model performance
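To make the optimizer point above concrete, here is a minimal NumPy sketch of gradient descent with momentum on a simple one-dimensional quadratic. The function name, learning rate, and momentum coefficient are illustrative choices, not values from the lesson:

```python
import numpy as np

def gd_momentum(grad_fn, w0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with momentum: the velocity term accumulates an
    exponentially weighted average of past gradients, smoothing the updates."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad_fn(w)  # accumulate gradient history
        w = w - lr * v             # step along the smoothed direction
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3); minimum at w = 3
w_star = gd_momentum(lambda w: 2 * (w - 3), w0=[0.0])
```

RMSprop and Adam build on the same idea but additionally rescale each step by a running estimate of the gradient's magnitude.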
Regularization
- Several regularization techniques can help us limit overfitting: L1 Regularization, L2 Regularization, Dropout Regularization, etc.
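As a sketch of two of the techniques listed above, the snippet below adds an L2 penalty to a mean-squared-error loss and applies inverted dropout to an activation array. The function names and the penalty strength are illustrative assumptions, not part of the lesson:

```python
import numpy as np

def mse_with_l2(w, X, y, lam=0.01):
    """Mean squared error plus an L2 penalty that shrinks large weights."""
    mse = np.mean((X @ w - y) ** 2)
    return mse + lam * np.sum(w ** 2)  # penalty grows with weight magnitude

def inverted_dropout(a, rate=0.5, rng=None):
    """Randomly zero a fraction `rate` of activations, rescaling the survivors
    so the expected activation is unchanged (inverted dropout)."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)
```

L1 regularization works the same way as L2 but penalizes `lam * np.sum(np.abs(w))` instead, which tends to drive some weights exactly to zero.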
Normalization
- Training of deep neural networks can be sped up by using normalized inputs
- Normalized inputs can also help mitigate the common issue of vanishing or exploding gradients
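Input normalization typically means scaling each feature to zero mean and unit variance. A minimal sketch (the function name is an illustrative choice):

```python
import numpy as np

def normalize_inputs(X, eps=1e-8):
    """Scale each feature to zero mean and unit variance, using statistics
    computed on the training data; reuse mu and sigma on validation/test sets."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    Xn = (X - mu) / (sigma + eps)  # eps guards against division by zero
    return Xn, mu, sigma
```

Applying the training-set `mu` and `sigma` to validation and test data keeps all splits on the same scale.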
TensorBoard
- We can use TensorBoard to help us with the evaluation of models
- We can use TensorBoard to experiment with different neural network architectures and hyperparameters, and to compare runs against each other
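As a hedged sketch of how TensorBoard is typically hooked into Keras training, the snippet below attaches the `TensorBoard` callback to a tiny placeholder model; the model, log directory, and data names are illustrative assumptions:

```python
import tensorflow as tf

# Hypothetical tiny model, purely for illustration
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Log training metrics and weight histograms to ./logs
tb = tf.keras.callbacks.TensorBoard(log_dir="logs", histogram_freq=1)
# model.fit(X_train, y_train, validation_data=(X_val, y_val), callbacks=[tb])
```

After (or during) training, the dashboard is launched from the command line with `tensorboard --logdir logs`.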