Paddle (1)

I. Overview

PaddlePaddle is a machine learning framework: it provides the main neural network functionality, activation functions, and the other building blocks that deep learning needs.

Basic concepts

Program

A Program describes a model and is executed by an executor; by default, fluid.default_startup_program() is run first. Everything the user describes about the computation is written into a Program. In Fluid, the Program replaces the traditional concept of a "model" in a framework: it supports three execution structures, sequential execution, conditional selection, and loops, so it can describe arbitrarily complex models.

import paddle.fluid as fluid
import numpy as np

data = fluid.layers.data(name="input8", shape=[-1, 32, 32], dtype="float32")
label = fluid.layers.data(name="label8", shape=[1], dtype="int64")
fc_out = fluid.layers.fc(input=data, size=2)
predict = fluid.layers.softmax(input=fc_out)
result = fluid.layers.auc(input=predict, label=label)

place = fluid.CPUPlace()
exe = fluid.Executor(place)

exe.run(fluid.default_startup_program())
x = np.random.rand(3,32,32).astype("float32")
y = np.array([[1], [0], [1]]).astype("int64")
output = exe.run(feed={"input8": x, "label8": y},
                 fetch_list=[result[0]])
print(output)

Block is conceptually the variable scope of a high-level language: in a programming language, a Block is a pair of braces containing local variable definitions and a sequence of instructions or operators. The control-flow constructs of programming languages have deep learning equivalents: if-else corresponds to the IfElse op, and for/while loops correspond to the While op (see Control Flow below).

As noted above, a Block in Fluid describes a set of Operators executed sequentially, conditionally, or in a loop, together with the objects the Operators operate on: Tensors.
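A minimal sketch of that relationship (the variable names here are illustrative, not from the original post): a Program holds Blocks, and each Block records the variables and Operators defined inside it.

import paddle.fluid as fluid

prog = fluid.Program()
with fluid.program_guard(prog):
    x = fluid.layers.data(name='x', shape=[1], dtype='float32')
    y = fluid.layers.fc(input=x, size=1)

# Block 0 is the Program's global block; it records the variables and ops above.
print(prog.num_blocks)                        # 1
print([op.type for op in prog.block(0).ops])  # e.g. ['mul', 'elementwise_add']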

Operator defines the operations: mathematical operations, neural network operations, tensor operations, and so on, packaged in paddle.fluid.layers and paddle.fluid.nets.

ParamAttr is used to configure the parameters (name, initializer, learning rate, trainability, and so on) of an op.

import paddle.fluid as fluid
import numpy as np

x = fluid.layers.data(name='x', shape=[1], dtype='int64', lod_level=1)
emb = fluid.layers.embedding(input=x, size=(128, 100))  # embedding_0.w_0
emb = fluid.layers.Print(emb) # Tensor[embedding_0.tmp_0]

# default name
fc_none = fluid.layers.fc(input=emb, size=1)  # fc_0.w_0, fc_0.b_0
fc_none = fluid.layers.Print(fc_none)  # Tensor[fc_0.tmp_1]

fc_none1 = fluid.layers.fc(input=emb, size=1)  # fc_1.w_0, fc_1.b_0
fc_none1 = fluid.layers.Print(fc_none1)  # Tensor[fc_1.tmp_1]

# name in ParamAttr
w_param_attrs = fluid.ParamAttr(name="fc_weight", learning_rate=0.5, trainable=True)
print(w_param_attrs.name)  # fc_weight

# name == 'my_fc'
my_fc1 = fluid.layers.fc(input=emb, size=1, name='my_fc', param_attr=w_param_attrs) # fc_weight, my_fc.b_0
my_fc1 = fluid.layers.Print(my_fc1)  # Tensor[my_fc.tmp_1]

my_fc2 = fluid.layers.fc(input=emb, size=1, name='my_fc', param_attr=w_param_attrs) # fc_weight, my_fc.b_1
my_fc2 = fluid.layers.Print(my_fc2)  # Tensor[my_fc.tmp_3]

place = fluid.CPUPlace()
x_data = np.array([[1],[2],[3]]).astype("int64")
x_lodTensor = fluid.create_lod_tensor(x_data, [[1, 2]], place)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
ret = exe.run(feed={'x': x_lodTensor}, fetch_list=[fc_none, fc_none1, my_fc1, my_fc2], return_numpy=False)

II. Neural networks

Convolution layers: conv2d, conv3d

Parameters: a convolution is determined by the sliding stride (stride), the padding length (padding), the kernel window size (filter size), the number of groups (groups), and the dilation rate (dilation rate). Groups were first introduced in AlexNet; a grouped convolution can be understood as splitting the original convolution into several groups that are each convolved independently.

import paddle.fluid as fluid
import numpy as np
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
param_attr = fluid.ParamAttr(name='conv2d.weight', initializer=fluid.initializer.Xavier(uniform=False), learning_rate=0.001)
res = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu", param_attr=param_attr)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
x = np.random.rand(1, 3, 32, 32).astype("float32")
output = exe.run(feed={"data": x}, fetch_list=[res])
print(output)
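To illustrate the groups parameter mentioned above, a minimal sketch (the layer name data_g is an assumption for this example): with groups=2, the 4 input channels are split into two groups that are convolved independently, and num_filters must be divisible by groups.

import paddle.fluid as fluid
import numpy as np

data = fluid.layers.data(name='data_g', shape=[4, 32, 32], dtype='float32')
# groups=2 splits the 4 input channels into two independent convolutions
res = fluid.layers.conv2d(input=data, num_filters=4, filter_size=3, groups=2)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
x = np.random.rand(1, 4, 32, 32).astype("float32")
output = exe.run(feed={"data_g": x}, fetch_list=[res])
print(output[0].shape)  # (1, 4, 30, 30)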

Pooling layers: pool2d, pool3d

Pooling downsamples the input feature map and reduces overfitting. The reduction in overfitting comes from shrinking the output size, which also reduces the number of parameters in subsequent layers.

A pooling layer generally needs only the feature map of the previous layer as input; additional parameters determine the specific pooling operation. In PaddlePaddle we likewise choose the specific behaviour by setting the pool size, the pooling method, the stride, whether to pool globally, whether to use cuDNN, and whether to use the ceil function when computing the output. PaddlePaddle provides 2-D pooling (pool2d) for fixed-size image features, 3-D pooling (pool3d), RoI pooling (roi_pool), and sequence pooling (sequence_pool) for sequences, together with the backward computation of each.
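A minimal pool2d sketch showing the parameters listed above (the layer name pool_in is an assumption for this example):

import paddle.fluid as fluid
import numpy as np

data = fluid.layers.data(name='pool_in', shape=[3, 32, 32], dtype='float32')
# 2x2 max pooling with stride 2 halves each spatial dimension
pooled = fluid.layers.pool2d(input=data, pool_size=2, pool_type='max',
                             pool_stride=2, global_pooling=False)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
x = np.random.rand(1, 3, 32, 32).astype("float32")
output = exe.run(feed={"pool_in": x}, fetch_list=[pooled])
print(output[0].shape)  # (1, 3, 16, 16)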

Mathematical operations: exp, tanh, sqrt, abs, ceil, floor, sin, cos, square, reduce, matmul, less_than, sum, equal
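A short sketch of a few of these ops (the layer names a and b are assumptions for this example):

import paddle.fluid as fluid
import numpy as np

a = fluid.layers.data(name='a', shape=[2, 2], dtype='float32', append_batch_size=False)
b = fluid.layers.data(name='b', shape=[2, 2], dtype='float32', append_batch_size=False)
s = fluid.layers.sqrt(fluid.layers.abs(a))  # element-wise abs, then sqrt
m = fluid.layers.matmul(a, b)               # matrix multiplication
lt = fluid.layers.less_than(a, b)           # element-wise comparison, bool output
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
a_d = np.array([[4., -9.], [1., 0.]]).astype("float32")
b_d = np.ones((2, 2)).astype("float32")
print(exe.run(feed={'a': a_d, 'b': b_d}, fetch_list=[s, m, lt]))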

Activation functions

Activation functions introduce nonlinearity into the neural network.

relu, tanh, sigmoid, elu, relu6, pow, stanh, hard_sigmoid, swish, prelu, brelu, leaky_relu, soft_relu, thresholded_relu, maxout, logsigmoid, hard_shrink, softsign, softplus, tanh_shrink, softshrink, exp.
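A short sketch of a few of these, applied as standalone layers (the layer name act_in is an assumption for this example); most are also available through the act argument of layers such as fc and conv2d:

import paddle.fluid as fluid
import numpy as np

x = fluid.layers.data(name='act_in', shape=[3], dtype='float32')
r = fluid.layers.relu(x)      # max(0, x)
s = fluid.layers.sigmoid(x)   # 1 / (1 + exp(-x))
t = fluid.layers.tanh(x)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
d = np.array([[-1., 0., 2.]]).astype("float32")
print(exe.run(feed={'act_in': d}, fetch_list=[r, s, t]))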

Loss functions

square_error_cost, cross_entropy, softmax_with_cross_entropy, sigmoid_cross_entropy_with_logits, nce, hsigmoid, rank_loss, and margin_rank_loss.
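A minimal sketch of two of these losses (the layer names pred, lbl, and tgt are assumptions for this example). cross_entropy expects a probability distribution, so it is usually paired with softmax; square_error_cost is the element-wise squared difference:

import paddle.fluid as fluid
import numpy as np

pred = fluid.layers.data(name='pred', shape=[2], dtype='float32')
lbl = fluid.layers.data(name='lbl', shape=[1], dtype='int64')
tgt = fluid.layers.data(name='tgt', shape=[2], dtype='float32')
ce = fluid.layers.cross_entropy(input=fluid.layers.softmax(pred), label=lbl)
se = fluid.layers.square_error_cost(input=pred, label=tgt)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
p = np.array([[0.2, 0.8]]).astype("float32")
l = np.array([[1]]).astype("int64")
t = np.array([[0., 1.]]).astype("float32")
print(exe.run(feed={'pred': p, 'lbl': l, 'tgt': t}, fetch_list=[ce, se]))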

Input and output data

fluid.layers.data builds the network's input layer, and data is read in by passing it to executor.run(feed=...). Data reading/processing and model training/prediction are carried out synchronously.

Users can fetch the output variables they need via executor.run(fetch_list=[...], return_numpy=...); the return_numpy parameter sets whether the output data is converted to numpy arrays. If return_numpy is False, the returned data has type LoDTensor.
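A short sketch of the two return modes (the layer name x_io is an assumption for this example):

import paddle.fluid as fluid
import numpy as np

x = fluid.layers.data(name='x_io', shape=[1], dtype='float32')
y = fluid.layers.fc(input=x, size=1)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
d = np.random.rand(2, 1).astype("float32")
as_np = exe.run(feed={'x_io': d}, fetch_list=[y])                       # numpy arrays
as_lod = exe.run(feed={'x_io': d}, fetch_list=[y], return_numpy=False)  # LoDTensors
print(type(as_np[0]), type(as_lod[0]))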

Control Flow

These ops control the execution flow of the neural network:

IfElse, While, Switch, DynamicRNN, StaticRNN

import numpy as np
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4, 1], dtype='float32', append_batch_size=False)
y = fluid.layers.data(name='y', shape=[4, 1], dtype='float32', append_batch_size=False)

x_d = np.array([[3], [1], [-2], [-3]]).astype(np.float32)
y_d = np.zeros((4, 1)).astype(np.float32)

# Compare the elements of x and y; the output cond is a 2-D bool tensor of shape [4, 1].
# From the input data x_d and y_d we can infer that cond is [[True], [True], [False], [False]].
cond = fluid.layers.greater_than(x, y)
# Unlike ordinary OPs, the return value of the IfElse OP is an object, ie.
ie = fluid.layers.IfElse(cond)

with ie.true_block():
    # In this block, take the slices of x where cond is True and subtract 10.
    out_1 = ie.input(x)
    out_1 = out_1 - 10
    ie.output(out_1)
with ie.false_block():
    # In this block, take the slices of x where cond is False and add 10.
    out_1 = ie.input(x)
    out_1 = out_1 + 10
    ie.output(out_1)

# Merge the data processed in the two cond branches. output is a List whose
# elements are Variables.
output = ie()  # [array([[-7.], [-9.], [ 8.], [ 7.]], dtype=float32)]

# Take the first Variable in the output List and sum all of its elements.
out = fluid.layers.reduce_sum(output[0])

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

res = exe.run(fluid.default_main_program(), feed={"x":x_d, "y":y_d}, fetch_list=[out])
print(res)
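While follows a similar pattern to IfElse; a minimal counter-loop sketch (assuming the same Fluid 1.x API as the rest of this post):

import paddle.fluid as fluid

# i counts up from 0; the body runs while i < 10
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
cond = fluid.layers.less_than(x=i, y=limit)
while_op = fluid.layers.While(cond=cond)
with while_op.block():
    i = fluid.layers.increment(x=i, value=1, in_place=True)
    fluid.layers.less_than(x=i, y=limit, cond=cond)  # update the loop condition

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
res = exe.run(fluid.default_main_program(), fetch_list=[i])
print(res)  # [array([10])]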

Tensor

assign, cast, concat, sums, argsort, argmax, argmin, ones, zeros, reverse
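A short sketch of a few of these tensor utilities (the layer name t_in is an assumption for this example):

import paddle.fluid as fluid
import numpy as np

x = fluid.layers.data(name='t_in', shape=[3], dtype='float32')
ones = fluid.layers.ones(shape=[1, 3], dtype='float32')
cat = fluid.layers.concat([x, ones], axis=0)  # concatenate along dim 0
amax = fluid.layers.argmax(x, axis=1)         # index of the largest element per row
casted = fluid.layers.cast(x, dtype='int32')  # float32 -> int32
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
d = np.array([[1., 5., 2.]]).astype("float32")
print(exe.run(feed={'t_in': d}, fetch_list=[cat, amax, casted]))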

 
