Copyright notice: This is an original article by the blogger; reproduction without permission is prohibited. https://blog.csdn.net/you1314520me/article/details/80687250
Note
This is my personal summary after studying the course Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization, which is part of the Deep Learning Specialization. The copyright belongs to deeplearning.ai.
My personal notes
week 1: practical-aspects-of-deep-learning
- 01_setting-up-your-machine-learning-application
- 02_regularizing-your-neural-network
- 03_setting-up-your-optimization-problem
week 2: optimization-algorithms
- 01_mini-batch-gradient-descent
- 02_understanding-mini-batch-gradient-descent
- 03_exponentially-weighted-averages
- 04_understanding-exponentially-weighted-averages
- 05_bias-correction-in-exponentially-weighted-averages
- 06_gradient-descent-with-momentum
- 07_rmsprop
- 08_adam-optimization-algorithm
- 09_learning-rate-decay
- 10_the-problem-of-local-optima
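Most of the week 2 optimizers listed above (momentum, RMSprop, Adam) are built on the exponentially weighted average with bias correction covered in topics 03–05. As a minimal sketch of that building block (the function name `ewa` and the 1-D sequence setup are my own, not from the course materials):

```python
def ewa(values, beta=0.9, bias_correction=True):
    """Exponentially weighted average of a sequence, one estimate per step."""
    v = 0.0
    averages = []
    for t, x in enumerate(values, start=1):
        # v_t = beta * v_{t-1} + (1 - beta) * x_t
        v = beta * v + (1 - beta) * x
        # Early estimates are biased toward 0; dividing by (1 - beta^t)
        # corrects this, which matters most for the first ~1/(1-beta) steps.
        v_hat = v / (1 - beta ** t) if bias_correction else v
        averages.append(v_hat)
    return averages
```

With bias correction, a constant input sequence is recovered exactly from the very first step; without it, the average warms up from zero over roughly `1 / (1 - beta)` steps.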
week 3: hyperparameter-tuning-batch-normalization-and-programming-frameworks
- 01_hyperparameter-tuning
- 02_batch-normalization
- 03_multi-class-classification
- 04_introduction-to-programming-frameworks
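Among the week 3 topics, batch normalization normalizes each layer's pre-activations over the mini-batch, then rescales with learnable parameters. A minimal NumPy sketch of the forward pass (the function name `batchnorm_forward` and the array shapes are my own assumptions, not the course's assignment code):

```python
import numpy as np

def batchnorm_forward(z, gamma, beta, eps=1e-8):
    """Normalize z (shape: batch x features) per feature, then scale and shift.

    gamma and beta are learnable vectors of length `features`; eps avoids
    division by zero when a feature has near-zero variance.
    """
    mu = z.mean(axis=0)            # per-feature mean over the mini-batch
    var = z.var(axis=0)            # per-feature variance over the mini-batch
    z_norm = (z - mu) / np.sqrt(var + eps)
    return gamma * z_norm + beta
```

With `gamma = 1` and `beta = 0` this yields zero-mean, unit-variance features; at test time the course replaces the batch statistics with running averages accumulated during training.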
My personal programming assignments
week 1: practical-aspects-of-deep-learning
week 2: optimization-algorithms
week 3: hyperparameter-tuning-batch-normalization-and-programming-frameworks