Machine Learning Review 5

1 - Below is the code you saw in the course. In which case would you use the binary cross entropy loss function?

model.compile(loss=BinaryCrossentropy())

A. Regression tasks (tasks that predict a number)
B. BinaryCrossentropy() should not be used for any tasks
C. Classification tasks with 3 or more classes (categories)
D. Binary classification (classification tasks with exactly 2 classes)

Answer: D
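As a quick illustration, here is a minimal sketch (layer sizes are illustrative, not from the course) of a binary classifier compiled with this loss:

import tensorflow as tf

# Binary classification: a single sigmoid output unit paired with the
# BinaryCrossentropy loss (illustrative layer sizes).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=25, activation='sigmoid'),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy())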

2 - Which line in the code below performs the parameter update?

model = Sequential([
    Dense(units=25, activation='sigmoid'),
    Dense(units=15, activation='sigmoid'),
    Dense(units=1, activation='sigmoid')
])

model.compile(loss=BinaryCrossentropy())

model.fit(X,y,epochs=100)

A. model.fit(X,y,epochs=100)
B. None of these lines performs the parameter update
C. model = Sequential([...])
D. model.compile(loss=BinaryCrossentropy())

Answer: A
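For intuition, here is a rough sketch of what one training step inside model.fit amounts to (Keras also handles batching, shuffling, and metrics; the names below are illustrative): compute the loss, take gradients, and apply them to update the parameters.

import tensorflow as tf

loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.SGD()

def train_step(model, X_batch, y_batch):
    with tf.GradientTape() as tape:
        y_pred = model(X_batch, training=True)   # forward pass
        loss = loss_fn(y_batch, y_pred)          # loss set in compile()
    grads = tape.gradient(loss, model.trainable_variables)
    # this is the parameter update that fit() performs on every step
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss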

3 - Which of the following activation functions is most commonly used for the hidden layer of a neural network?

A. Most hidden layers do not use activation functions
B. Linear
C. ReLU
D. Sigmoid

Answer: C
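A minimal sketch of the common pattern (layer sizes illustrative): ReLU in the hidden layers, with the output activation chosen by the task.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=25, activation='relu'),   # hidden layers: ReLU
    tf.keras.layers.Dense(units=15, activation='relu'),
    tf.keras.layers.Dense(units=1, activation='sigmoid')  # output: task-dependent
])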

4 - For the house price prediction task, which activation functions would you choose? (Choose two.)

A. ReLU
B. Sigmoid
C. Linear

Answer: A, C
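A sketch of why both fit this regression task (layer sizes illustrative): a linear output can produce any real value, and ReLU also works because prices are non-negative; sigmoid would squash the output into (0, 1).

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=25, activation='relu'),
    tf.keras.layers.Dense(units=1, activation='linear')   # or activation='relu'
])
model.compile(loss=tf.keras.losses.MeanSquaredError())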

5 - A neural network with many layers but no activation function (in the hidden layers) is ineffective; that's why we should use a linear activation function in each hidden layer.

A. False
B. True

Answer: A
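The statement is false because a linear activation is the same as having no activation: stacked linear layers collapse into a single linear map, so the extra layers add nothing. A quick numerical check (shapes and values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=(3,))
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=(2,))

two_linear_layers = W2 @ (W1 @ x + b1) + b2           # "deep" network, no activation
one_linear_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)     # equivalent single layer
print(np.allclose(two_linear_layers, one_linear_layer))  # True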

6 - For a multiclass classification task with 4 possible outputs, the sum of all the activations adds up to 1. For a multiclass classification task with 3 possible outputs, the sum of all the activations should add up to...?

A. 1
B. Less than 1
C. Greater than 1
D. Will vary with the value of output x

Answer: A
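The softmax activations are \(a_j = \frac{e^{z_j}}{\sum_k e^{z_k}}\), so they sum to 1 no matter how many classes there are. A small check (logits are illustrative):

import numpy as np

z = np.array([2.0, -1.0, 0.5])       # 3 possible outputs
a = np.exp(z) / np.sum(np.exp(z))    # softmax
print(a.sum())                       # 1.0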

7 - For multiclass classification, the cross entropy loss is used to train the model. If there are 4 possible output classes and, for a particular training example, the true class of that example is class 3 (y = 3), what does the cross entropy loss simplify to?

A. \(-\log \left(a_{3}\right)\)

B. \(\frac{-\log \left(a_{1}\right)-\log \left(a_{2}\right)-\log \left(a_{3}\right)-\log \left(a_{4}\right)}{4}\)

C. \(z_3\)

D. \(\frac{z_3}{z_1 + z_2 + z_4}\)

Answer: A
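With softmax outputs \(a_1,\dots,a_4\), the loss on one example is \(-\sum_{j=1}^{4} 1\{y=j\}\log\left(a_{j}\right)\); since \(y=3\) here, every term except the one for class 3 drops out, leaving \(-\log\left(a_{3}\right)\).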

8 - For multiclass classification, the recommended way to implement softmax regression is to set from_logits=True in the loss function and to give the model's output layer...?

A. A linear activation
B. A softmax activation

Answer: A
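A sketch of this recommended setup (layer sizes illustrative): the output layer emits raw logits through a linear activation, from_logits=True lets the loss apply softmax in a numerically stable way, and softmax is applied afterwards only when probabilities are needed.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=25, activation='relu'),
    tf.keras.layers.Dense(units=4, activation='linear')   # raw logits, 4 classes
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# to recover probabilities later: probs = tf.nn.softmax(model(X_new))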

9 - The Adam optimizer is the recommended optimizer for finding the best parameters of the model. How do you use the Adam optimizer in TensorFlow?

A. The call to model.compile() automatically selects the best optimizer, whether that is gradient descent, Adam, or something else, so there is no need to choose an optimizer manually

B. The Adam optimizer only works with a softmax output, so if a neural network has a softmax output layer, TensorFlow automatically selects the Adam optimizer

C. When calling model.compile, set optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3)

D. The call to model.compile() uses the Adam optimizer by default

Answer: C
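A sketch of option C (the placeholder model and loss are illustrative):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, activation='sigmoid')])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # explicitly choose Adam
    loss=tf.keras.losses.BinaryCrossentropy()
)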

10 - The course lectures covered a different type of layer, in which each individual neuron does not look at all of the values of the input vector fed into that layer. What is the name of this layer type discussed in the lectures?

A. Convolution layer
B. Fully connected layer
C. Image layer
D. One-dimensional or two-dimensional layer (according to output dimension)

Answer: A
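For reference, a minimal sketch of a convolutional layer in Keras (shapes are illustrative): each filter looks only at a small window of its input, unlike a Dense (fully connected) layer, which sees every input value.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 1)),                  # illustrative input shape
    tf.keras.layers.Conv1D(filters=8, kernel_size=3, activation='relu'),  # 3-wide windows
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])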

Source: blog.csdn.net/cfy2401926342/article/details/131468271