Andrew Ng's "Deep Learning" specialization - Course 2 (Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization) - Week 1 - Practical aspects of deep learning

Week 1 Quiz - Practical aspects of deep learning

1. If you have 10,000,000 examples, how would you split the train / dev / test set?

[ ] 98% train, 1% dev, 1% test

answer

Correct
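With this many examples, the dev and test sets only need to be large enough to give reliable estimates, so 1% each (100,000 examples) is plenty. A minimal NumPy sketch of such a split (the function name, the rows-as-examples layout, and the fractions are illustrative assumptions, not the course's code):

```python
import numpy as np

def split_dataset(X, Y, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle examples (rows of X) and split into train / dev / test."""
    m = X.shape[0]
    idx = np.random.default_rng(seed).permutation(m)
    n_dev, n_test = int(m * dev_frac), int(m * test_frac)
    dev_idx = idx[:n_dev]
    test_idx = idx[n_dev:n_dev + n_test]
    train_idx = idx[n_dev + n_test:]
    return (X[train_idx], Y[train_idx]), (X[dev_idx], Y[dev_idx]), (X[test_idx], Y[test_idx])
```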

 

2. The dev and test set should:

[ ] Come from the same distribution

answer

Correct

 

3. If your neural network model seems to have high variance, which of the following would be promising things to try?

[ ] Add regularization

[ ] Get more training data

answer

All correct

 

4. You are working on an automated check-out kiosk for a supermarket, and are building a classifier for apples, bananas and oranges. Suppose your classifier obtains a training set error of 0.5%, and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.)

[ ] Increase the regularization parameter lambda

[ ] Get more training data

answer

All correct

 

5. What is weight decay?

[ ] A regularization technique (such as L2 regularization) that results in gradient descent shrinking the weights on every iteration.

answer

Correct
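The name comes from what the L2 term does to the gradient-descent update: every step multiplies the weights by a factor slightly below 1 before applying the data gradient. A minimal sketch for one weight matrix (variable names such as `lambd`, `dW` and `m` follow the course notation but are assumptions here):

```python
import numpy as np

def update_with_weight_decay(W, dW, learning_rate=0.1, lambd=0.7, m=1000):
    """Gradient-descent step with L2 regularization ("weight decay").

    The regularization term (lambd / (2*m)) * ||W||_F^2 adds (lambd / m) * W
    to the gradient dW, so each step is equivalent to
    W = (1 - learning_rate * lambd / m) * W - learning_rate * dW,
    i.e. the weights shrink a little on every iteration.
    """
    return W - learning_rate * (dW + (lambd / m) * W)
```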

 

6. What happens when you increase the regularization hyperparameter lambda?

[ ] Weights are pushed toward becoming smaller (closer to 0)

answer

Correct
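One way to see the effect of lambda is with L2-regularized least squares (ridge regression), where the solution has a closed form; as lambda grows, the norm of the learned weights shrinks toward 0. An illustrative sketch on synthetic data (not from the course assignments):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([3.0, -2.0, 1.0, 0.5, 4.0]) + 0.1 * rng.normal(size=200)

for lambd in [0.0, 1.0, 10.0, 100.0, 1000.0]:
    # Ridge closed form: w = (X^T X + lambda * I)^(-1) X^T y
    w = np.linalg.solve(X.T @ X + lambd * np.eye(5), X.T @ y)
    print(f"lambda={lambd:7.1f}  ||w|| = {np.linalg.norm(w):.3f}")  # norm shrinks as lambda grows
```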

 

7. With the inverted dropout technique, at test time:

[ ] You do not apply dropout (do not randomly eliminate units) and do not keep the 1/keep_prob factor in the calculations used in training

answer

Correct
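In other words, the 1/keep_prob scaling is applied during training so that the expected activations already match the full network, leaving the test-time forward pass untouched. A minimal sketch for one layer's activations (function and variable names are assumptions):

```python
import numpy as np

def dropout_forward(A, keep_prob=0.8, training=True, seed=1):
    """Inverted dropout applied to an activation matrix A."""
    if not training:
        return A  # test time: no units dropped, no 1/keep_prob scaling
    rng = np.random.default_rng(seed)
    D = rng.uniform(size=A.shape) < keep_prob   # keep each unit with probability keep_prob
    return (A * D) / keep_prob                  # scale up so E[output] matches the full network
```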

 

8. Increasing the parameter keep_prob from (say) 0.5 to 0.6 will likely cause the following: (Check the two that apply.)

[ ] Reducing the regularization effect

[ ] Causing the neural network to end up with a lower training set error

answer

All correct

 

9. Which of these techniques are useful for reducing variance (reducing overfitting)? (Check all that apply.)

[ ] Dropout

[ ] L2 regularization

[ ] Data augmentation

answer

All correct

 

10. Why do we normalize the inputs x?

[ ] It makes the cost function faster to optimize

answer

Correct
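Normalization puts all input features on a comparable scale, which makes the cost surface more symmetric and lets gradient descent take larger steps. A minimal sketch that reuses the training-set statistics for the test data (array and function names are illustrative):

```python
import numpy as np

def normalize(X_train, X_test):
    """Zero-mean, unit-variance normalization using training-set statistics only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0) + 1e-8   # small epsilon avoids division by zero
    return (X_train - mu) / sigma, (X_test - mu) / sigma
```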

 

 



Week 1 Code Assignments:

✧ Course 2 - Improving Deep Neural Networks - Week 1 - Practical aspects of deep learning

assignment1_1: Initialization

https://github.com/phoenixash520/CS230-Code-assignments

assignment1_2: Regularization

https://github.com/phoenixash520/CS230-Code-assignments

assignment1_3: Gradient Checking

https://github.com/phoenixash520/CS230-Code-assignments
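For reference, gradient checking compares the backprop gradient with a two-sided finite-difference approximation of the cost; a small relative difference suggests the analytic gradient is correct. A minimal sketch of the idea (not the assignment's exact code; `J` is any cost function of a flat parameter vector):

```python
import numpy as np

def gradient_check(J, theta, grad, eps=1e-7):
    """Compare the analytic gradient `grad` of cost J at `theta` with a numerical estimate."""
    num_grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        num_grad[i] = (J(plus) - J(minus)) / (2 * eps)   # two-sided difference
    diff = np.linalg.norm(num_grad - grad) / (np.linalg.norm(num_grad) + np.linalg.norm(grad))
    return diff   # roughly < 1e-7 means backprop is likely correct
```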


Origin www.cnblogs.com/phoenixash/p/12092355.html