Catalog of Andrew Ng Deep Learning Course Notes

In the first course, you will learn how to build neural networks, including deep neural networks, and how to train them on data. At the end of this course, you will use a deep neural network to recognize cats.
Then the second course takes three weeks. You'll get hands-on with deep learning: you'll learn how to rigorously build a neural network and actually make it perform well, covering hyperparameter tuning, regularization, diagnosing bias and variance, and advanced optimization algorithms such as Momentum and Adam. This part of building your network can feel like black magic, and the course demystifies it.
In the third course, which takes two weeks, you will learn how to structure your machine learning project. It turns out that the strategies for building machine learning systems change in the era of deep learning. The material in the third course is relatively unique; it draws on experience building and refining many deep learning systems. Most universities today do not teach this material in their deep learning classes, but I think it will help you make your deep learning systems work better.
In the fourth course, we will cover convolutional neural networks (CNNs), which are often used in the image domain, and you will learn how to build such a model.
Finally, in the fifth course, you will learn about sequence models and how to apply them to natural language processing and other problems. Sequence models include recurrent neural networks (RNNs) and long short-term memory networks (LSTMs). In the fifth course you will learn about these models and be able to apply them to natural language processing (NLP) problems.

Contents by course and week:
Course 1, Week 1
1.1 What is a neural network
1.2 Supervised learning with neural networks
1.3 Why deep learning is taking off
Course 1, Week 2
1 Binary Classification
2 Logistic Regression
3 Logistic Regression Loss Function
4 Gradient Descent Method
5 Derivatives
6 Computation Graphs
7 Computing Derivatives with Computation Graphs
8 Gradient Descent Method in Logistic Regression
9 Gradient Descent Method for m Samples
10 Vectorization
11 More Vectorization Examples
12 Vectorized Logistic Regression (see the NumPy sketch after this list)
13 Gradient Output of Vectorized Logistic Regression
14 Broadcasting in Python
15 (Optional) Explanation of the Logistic Regression Cost Function
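As a taste of items 10–13 above, here is a minimal NumPy sketch of the vectorized forward and backward pass for logistic regression; the `propagate` helper and the toy data are illustrative, not taken from the course assignments.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    """Forward and backward pass for logistic regression over m examples.
    X: (n_x, m) inputs, Y: (1, m) labels, w: (n_x, 1), b: scalar."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)                 # broadcasting adds b to every column
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = X @ (A - Y).T / m                   # gradient w.r.t. w, shape (n_x, 1)
    db = np.sum(A - Y) / m                   # gradient w.r.t. b
    return dw, db, cost

# toy usage with random data
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))
Y = (rng.random((1, 5)) > 0.5).astype(float)
dw, db, cost = propagate(np.zeros((3, 1)), 0.0, X, Y)
print(cost, dw.shape, db)
```

Note how broadcasting (item 14) silently adds the scalar b to every column of `w.T @ X`.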
Course 1, Week 3
1 Overview of Neural Networks
2 Representation of Neural Networks
3 Computing the Output of Neural Networks
4 Vectorization of Multiple Samples
5 Explanation of Vectorization Implementation
6 Activation Functions
7 Why Nonlinear Activation Functions
8 Derivatives of Activation Functions
9 Gradient Descent for Neural Networks
10 (optional) Intuitive understanding of backpropagation
11 Random Initialization (a minimal forward-pass sketch follows this list)
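The week-3 topics combine into a small two-layer network. Below is a minimal sketch, assuming a tanh hidden layer and a sigmoid output as in the lectures; the `init_params` and `forward` helper names are illustrative, not from the course.

```python
import numpy as np

def init_params(n_x, n_h, n_y, scale=0.01):
    """Small random weights break symmetry between hidden units;
    zero biases are fine to initialize."""
    rng = np.random.default_rng(1)
    return {"W1": rng.standard_normal((n_h, n_x)) * scale, "b1": np.zeros((n_h, 1)),
            "W2": rng.standard_normal((n_y, n_h)) * scale, "b2": np.zeros((n_y, 1))}

def forward(X, p):
    """Forward pass for all m examples at once; X has shape (n_x, m)."""
    A1 = np.tanh(p["W1"] @ X + p["b1"])               # hidden layer, tanh activation
    A2 = 1 / (1 + np.exp(-(p["W2"] @ A1 + p["b2"])))  # sigmoid output
    return A2

params = init_params(n_x=2, n_h=4, n_y=1)
print(forward(np.random.default_rng(2).standard_normal((2, 8)), params).shape)  # (1, 8)
```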
Course 1, Week 4
1 Deep Neural Networks
2 Forward and Backpropagation
3 Forward Propagation in Deep Networks
4 Checking Matrix Dimensions
5 Why Use Deep Representations
6 Building Deep Neural Network Blocks
7 Parameters vs Hyperparameters
8 What does this have to do with the brain
Course 2, Weeks 1–3
1 Train/Dev/Test Sets
2 Bias and Variance
3 Machine Learning Basics
4 Regularization
5 Why Regularization Reduces Overfitting
6 Dropout Regularization
7 Understanding Dropout
8 Other Regularization Methods
9 Normalizing Inputs
10 Vanishing and Exploding Gradients
11 Weight Initialization for Neural Networks
12 Numerical Approximation of Gradients
13 Gradient Checking (see the sketch after this list)
14 Notes on Implementing Gradient Checking
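Items 12–14 describe gradient checking: comparing the analytic gradient against a two-sided finite-difference estimate. A minimal sketch, where the cost function `f` and gradient `grad` are hypothetical placeholders for your own implementation:

```python
import numpy as np

def grad_check(f, grad, theta, eps=1e-7):
    """Compare an analytic gradient with a two-sided difference estimate."""
    numeric = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus.flat[i] += eps
        minus.flat[i] -= eps
        numeric.flat[i] = (f(plus) - f(minus)) / (2 * eps)
    analytic = grad(theta)
    return (np.linalg.norm(analytic - numeric)
            / (np.linalg.norm(analytic) + np.linalg.norm(numeric)))

# toy check: J(theta) = sum(theta^2) has gradient 2*theta
theta = np.array([1.0, -2.0, 3.0])
print(grad_check(lambda t: np.sum(t ** 2), lambda t: 2 * t, theta))  # tiny, e.g. ~1e-10
```

In the lectures, a relative difference on the order of 1e-7 or smaller suggests the backpropagation implementation is probably correct.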
2.1 Mini-batch gradient descent method
2.2 Understanding Mini-batch gradient descent method
2.3 Exponentially Weighted Averages
2.4 Understanding Exponentially Weighted Averages
2.5 Bias Correction in Exponentially Weighted Averages
2.6 Gradient Descent with Momentum
2.7 RMSprop
2.8 Adam Optimization Algorithm (see the update sketch after this list)
2.9 Learning Rate Decay
2.10 Local Optimum Problem
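Topics 2.6–2.8 build up to Adam, which combines Momentum's moving average of the gradient with RMSprop's moving average of its square, plus bias correction. A minimal sketch of a single update step, assuming the commonly used default hyperparameters; `adam_step` is an illustrative name:

```python
import numpy as np

def adam_step(w, dw, v, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a parameter w with gradient dw (t counts from 1)."""
    v = beta1 * v + (1 - beta1) * dw          # first moment, as in Momentum
    s = beta2 * s + (1 - beta2) * dw ** 2     # second moment, as in RMSprop
    v_hat = v / (1 - beta1 ** t)              # bias correction for early steps
    s_hat = s / (1 - beta2 ** t)
    w = w - lr * v_hat / (np.sqrt(s_hat) + eps)
    return w, v, s

# toy usage: one step on a two-element parameter vector
w, v, s = np.array([1.0, 2.0]), np.zeros(2), np.zeros(2)
w, v, s = adam_step(w, np.array([0.1, -0.3]), v, s, t=1)
print(w)
```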
3.1 Tuning Process
3.2 Selecting the Appropriate Range for Hyperparameters
3.3 Hyperparameter Tuning in Practice: Pandas vs. Caviar
3.4 Normalizing Activations in a Network
3.5 Fitting Batch Norm into a Neural Network (see the forward-pass sketch after this list)
3.6 Why Batch Norm Works
3.7 Batch Norm at Test Time
3.8 Softmax Regression
3.9 Training a Softmax Classifier
3.10 Deep Learning Framework
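For 3.4–3.5, here is a minimal sketch of the Batch Norm forward computation at training time; `batchnorm_forward` is an illustrative name, with gamma and beta as the learnable scale and shift from the lectures.

```python
import numpy as np

def batchnorm_forward(Z, gamma, beta, eps=1e-8):
    """Normalize pre-activations across the mini-batch, then rescale.
    Z: (n_units, m); gamma, beta: (n_units, 1) learnable scale and shift."""
    mu = Z.mean(axis=1, keepdims=True)
    var = Z.var(axis=1, keepdims=True)
    Z_norm = (Z - mu) / np.sqrt(var + eps)    # zero mean, unit variance per unit
    return gamma * Z_norm + beta              # broadcast over the m examples
```

At test time (topic 3.7), the per-batch mean and variance are replaced by running averages accumulated during training.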
Course 3, Weeks 1–2
1 Why ML Strategy
2 Orthogonalization
3 Single-Number Evaluation Metric
4 Satisficing and Optimizing Metrics
5 Train/Dev/Test Set Distributions
6 Size of the Dev and Test Sets
7 When to Change Dev/Test Sets and Metrics
8 Why Human-Level Performance
9 Avoidable Bias
10 Understanding Human-Level Performance
11 Surpassing Human-Level Performance
12 Improving Your Model's Performance
2.1 Carrying Out Error Analysis
2.2 Cleaning Up Incorrectly Labeled Data
2.3 Build Your First System Quickly, Then Iterate
2.4 Training and Testing on Different Distributions
2.5 Bias and Variance with Mismatched Data Distributions
2.6 Addressing Data Mismatch
2.7 Transfer Learning
2.8 Multi-Task Learning
2.9 What Is End-to-End Deep Learning
2.10 Whether to Use End-to-End Deep Learning
Course 4, Week 1
1 Introduction
2 The Convolution Operation (see the sketch after this list)
3 More Edge Detection
4 Padding
5 Strided Convolutions
6 Convolutions over Volumes
7 One Layer of a Convolutional Network
8 A Simple Convolutional Network Example
9 Pooling Layers
10 A Convolutional Neural Network Example
11 Why Convolutions
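To make the convolution operation concrete, here is a minimal single-channel sketch with "valid" padding and a configurable stride; as in the lectures, the kernel is not flipped, so this is technically cross-correlation. The `conv2d_single` helper is illustrative.

```python
import numpy as np

def conv2d_single(image, kernel, stride=1):
    """'Valid' convolution of one channel; the kernel is applied as-is,
    without flipping."""
    H, W = image.shape
    f = kernel.shape[0]
    out_h = (H - f) // stride + 1
    out_w = (W - f) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+f, j*stride:j*stride+f]
            out[i, j] = np.sum(patch * kernel)
    return out

# the vertical edge detector used in the lectures, on a half-bright image
vertical_edge = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)
print(conv2d_single(np.tile([10., 10., 10., 0., 0., 0.], (6, 1)), vertical_edge))
```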
Course 4, Week 2
1 Why Look at Case Studies
2 Classic Networks
3 Residual Networks (ResNets)
4 Why ResNets Work
5 Network in Network and 1×1 Convolutions
6 Introduction to Google's Inception Network
7 The Inception Network
8 Using Open-Source Implementations
9 Transfer Learning
10 Data Augmentation
11 The State of Computer Vision
Course 4, Week 3
1 Object Localization
2 Landmark Detection
3 Object Detection
4 Convolutional Implementation of Sliding Windows
5 Bounding Box Predictions
6 Intersection over Union (see the sketch after this list)
7 Non-Max Suppression
8 Anchor Boxes
9 The YOLO Algorithm
10 Region Proposals (Optional)
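Intersection over Union (topic 6) has a compact definition worth writing out: the overlap area of two boxes divided by the area of their union. A minimal sketch, assuming boxes in corner format (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2) corners."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if the boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.143
```

The lectures use an IoU of at least 0.5 as the conventional threshold for a correct detection, and non-max suppression (topic 7) keeps only the highest-scoring box among heavily overlapping candidates.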
Course 4, Week 4
1 What Is Face Recognition
2 One-Shot Learning
3 Siamese Networks
4 Triplet Loss
5 Face Verification and Binary Classification
6 What Is Neural Style Transfer
7 What Is a Deep Convolutional Network
8 Cost Function
9 Content Cost Function
10 Style Cost Function
11 Generalizing from 1D to 3D
Course 5, Week 1
1.1 Why Sequence Models
1.2 Notation
1.3 Recurrent Neural Networks (see the cell sketch after this list)
1.4 Backpropagation Through Time
1.5 Different Types of RNNs
1.6 Language Models and Sequence Generation
1.7 Sampling Novel Sequences
1.8 Vanishing Gradients with RNNs
1.9 GRU Units
1.10 Long Short-Term Memory (LSTM)
1.11 Bidirectional RNNs
1.12 Deep RNNs
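For 1.3, here is a minimal sketch of one time step of a basic RNN cell, using the lecture notation Waa, Wax, Wya, ba, by; the softmax output layer is an assumption for a classification-style model:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=0, keepdims=True)

def rnn_cell_step(x_t, a_prev, Waa, Wax, Wya, ba, by):
    """One time step of a basic RNN cell, in the lecture notation."""
    a_t = np.tanh(Waa @ a_prev + Wax @ x_t + ba)  # new hidden state
    y_t = softmax(Wya @ a_t + by)                 # output distribution
    return a_t, y_t

# toy usage with random parameters and a zero initial state
n_a, n_x, n_y = 4, 3, 2
rng = np.random.default_rng(3)
a, y = rnn_cell_step(rng.standard_normal((n_x, 1)), np.zeros((n_a, 1)),
                     rng.standard_normal((n_a, n_a)), rng.standard_normal((n_a, n_x)),
                     rng.standard_normal((n_y, n_a)), np.zeros((n_a, 1)), np.zeros((n_y, 1)))
print(a.shape, y.shape)  # (4, 1) (2, 1)
```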
Course 5, Week 2
2.1 Word Representation
2.2 Using Word Embeddings
2.3 Properties of Word Embeddings (see the cosine-similarity sketch after this list)
2.4 The Embedding Matrix
2.5 Learning Word Embeddings
2.6 Word2Vec
2.7 Negative Sampling
2.8 GloVe Word Vectors
2.9 Sentiment Classification
2.10 Debiasing Word Embeddings
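Topic 2.3 tests analogy properties of embeddings with cosine similarity ("man is to woman as king is to queen"). A minimal sketch with hypothetical 3-dimensional toy vectors; real embeddings from Word2Vec or GloVe would have 50–300 dimensions:

```python
import numpy as np

def cosine_similarity(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# hypothetical toy embeddings, hand-picked so the analogy works
emb = {"king":  np.array([0.9, 0.8, 0.1]),
       "queen": np.array([0.9, 0.1, 0.8]),
       "man":   np.array([0.5, 0.9, 0.0]),
       "woman": np.array([0.5, 0.1, 0.9])}

# "man is to woman as king is to ?": king - man + woman should land near queen
target = emb["king"] - emb["man"] + emb["woman"]
print(cosine_similarity(target, emb["queen"]))  # close to 1 for these toy vectors
```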

Source: https://blog.csdn.net/m0_52118763/article/details/128448812