CNN and RNN

CNN

1. Activation Functions

Operation                                        Description

tf.nn.relu(features, name=None)                  ReLU activation function

tf.nn.elu(features, name=None)                   ELU activation function

tf.nn.dropout(x, keep_prob, noise_shape=None)    Computes dropout; keep_prob is the probability that each element is kept

tf.sigmoid(x, name=None)                         y = 1 / (1 + exp(-x))

tf.tanh(x, name=None)                            Hyperbolic tangent activation function
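A minimal sketch of these activations in use (assuming TensorFlow 1.x, matching the tf.contrib-era API used later in this post; the input values are made up for illustration):

import tensorflow as tf

# A made-up batch: 2 examples, 3 features each.
x = tf.constant([[-1.0, 0.0, 2.0],
                 [3.0, -2.0, 0.5]])

relu_out = tf.nn.relu(x)     # max(x, 0), element-wise
elu_out = tf.nn.elu(x)       # x if x > 0, else exp(x) - 1
sig_out = tf.sigmoid(x)      # 1 / (1 + exp(-x))
tanh_out = tf.tanh(x)        # hyperbolic tangent
drop_out = tf.nn.dropout(x, keep_prob=0.5)  # zeros ~half the elements, scales the rest by 1/keep_prob

with tf.Session() as sess:
    print(sess.run([relu_out, elu_out, sig_out, tanh_out]))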

2. Convolution Functions

Operation                                        Description

tf.nn.conv2d(input, filter, strides, padding)    Computes a 2-D convolution given 4-D input and filter tensors; input has shape [batch, height, width, in_channels]
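To make the shape conventions concrete, here is a small sketch (TensorFlow 1.x assumed; all sizes are hypothetical):

import tensorflow as tf

# input:  [batch, height, width, in_channels] -- one 28x28 image, 3 channels.
image = tf.random_normal([1, 28, 28, 3])
# filter: [filter_height, filter_width, in_channels, out_channels].
kernel = tf.random_normal([5, 5, 3, 16])

# strides has one entry per input dimension; [1, 1, 1, 1] slides the
# filter one pixel at a time. padding='SAME' keeps the spatial size
# (28x28); 'VALID' would shrink it to 24x24.
conv = tf.nn.conv2d(image, kernel, strides=[1, 1, 1, 1], padding='SAME')
print(conv.shape)  # (1, 28, 28, 16)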

3. Pooling Functions

Operation                                        Description

tf.nn.max_pool(value, ksize, strides, padding)   Max pooling

tf.nn.avg_pool(value, ksize, strides, padding)   Average pooling
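A short sketch (TensorFlow 1.x assumed), continuing the hypothetical feature map from the convolution example above:

import tensorflow as tf

feature_map = tf.random_normal([1, 28, 28, 16])

# ksize and strides use the same [batch, height, width, channels]
# layout as the input; a 2x2 window with stride 2 halves each
# spatial dimension.
max_pooled = tf.nn.max_pool(feature_map, ksize=[1, 2, 2, 1],
                            strides=[1, 2, 2, 1], padding='SAME')
avg_pooled = tf.nn.avg_pool(feature_map, ksize=[1, 2, 2, 1],
                            strides=[1, 2, 2, 1], padding='SAME')
print(max_pooled.shape)  # (1, 14, 14, 16)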

4. Loss Functions

Operation                                        Description

tf.nn.l2_loss(t, name=None)                      output = sum(t ** 2) / 2
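The formula is easy to check by hand; a tiny sketch (TensorFlow 1.x assumed):

import tensorflow as tf

t = tf.constant([1.0, -2.0, 3.0])
loss = tf.nn.l2_loss(t)  # (1 + 4 + 9) / 2 = 7.0

with tf.Session() as sess:
    print(sess.run(loss))  # 7.0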

5. Classification Functions

Operation                                                                           Description

tf.nn.sigmoid_cross_entropy_with_logits(logits, targets, name=None)                 Computes sigmoid cross entropy between logits and targets

tf.nn.softmax(logits, name=None)                                                    Computes softmax: softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))

tf.nn.log_softmax(logits, name=None)                                                log_softmax[i, j] = logits[i, j] - log(sum_j(exp(logits[i, j])))

tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None)                  Computes softmax cross entropy between logits and labels; logits and labels must have the same shape and dtype

tf.nn.weighted_cross_entropy_with_logits(logits, targets, pos_weight, name=None)    Like sigmoid_cross_entropy_with_logits(), but weights the loss on positive targets by pos_weight
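A minimal sketch tying these together (TensorFlow 1.x assumed; the logits and labels are made up). Note that later TF 1.x releases name the parameter labels rather than targets, and that the cross-entropy ops expect raw logits, not probabilities, since they apply sigmoid/softmax internally:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])  # one-hot; same shape and dtype as logits

probs = tf.nn.softmax(logits)
log_probs = tf.nn.log_softmax(logits)

# Softmax cross entropy: one loss value per example
# (for mutually exclusive classes).
softmax_xent = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
# Sigmoid cross entropy: one loss value per element
# (for independent, possibly multi-label targets).
sigmoid_xent = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)

with tf.Session() as sess:
    print(sess.run(probs))         # approx. [[0.659 0.242 0.099]]
    print(sess.run(softmax_xent))  # approx. [0.417] == -log(0.659)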

RNN

tf.contrib.rnn.MultiRNNCell

Stacks RNN cells in sequence. Takes a list of RNN cells as its argument.

In the code below, rnn_size is the number of hidden units in each RNN cell, and num_layers is the number of stacked cells.

tf.nn.dynamic_rnn

Builds an RNN that accepts dynamically sized input sequences. Returns the RNN outputs and the final state as tensors.

The difference between dynamic_rnn and rnn is that dynamic_rnn can accept a different sequence_length for each batch. For example, the first batch can be [batch_size, 10] and the second [batch_size, 20], whereas rnn only accepts a fixed sequence_length.

def get_lstm_cell(rnn_size):
    lstm_cell = tf.contrib.rnn.LSTMCell(
        rnn_size,
        initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
    return lstm_cell

# A stack of num_layers LSTM cells.
cell = tf.contrib.rnn.MultiRNNCell(
    [get_lstm_cell(rnn_size) for _ in range(num_layers)])

# encoder_embed_input shape: [batch_size, sequence_length, embedding_size]
encoder_output, encoder_state = tf.nn.dynamic_rnn(
    cell, encoder_embed_input,
    sequence_length=source_sequence_length, dtype=tf.float32)


Reposted from www.cnblogs.com/yongfuxue/p/10095869.html