TensorFlow: Custom layers


Custom layers

The full list of pre-existing layers can be seen in the documentation. It includes

  • Dense (a fully-connected layer)

  • Conv2D

  • LSTM

  • BatchNormalization

  • Dropout

  • and many others.

  • This post mainly covers custom layers and layer-like things built by composing existing layers (for example, each residual block in a ResNet is a composition of convolutions, batch normalizations, and a shortcut).

  • Note the use of __init__, build, and call.

See the code below for details.

Code1

import tensorflow as tf 
tfe = tf.contrib.eager 

tf.enable_eager_execution()


layer1 = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to 
# specify it manually, which is useful in some complex models.
layer1 = tf.keras.layers.Dense(10, input_shape=(None, 5))


# To use a layer, simply call it (this invokes __call__).
layer1(tf.zeros([10, 5]))  # first dimension is the batch size, second matches input_shape

# Layers have many useful methods. For example, you can inspect all
# variables in a layer by calling layer.variables. In this case a
# fully-connected layer will have variables for the weights and biases.
print(layer1.variables)


# The variables are also accessible through nice accessors
print(layer1.kernel,'\n\n',layer1.bias)
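
To make the lazy shape inference above concrete, here is a minimal standalone sketch (same TF 1.x eager setup; lazy_layer is just an illustrative name, not from the original post) showing that a layer created without an input shape has no variables until it is first called:

import tensorflow as tf
tf.enable_eager_execution()

# A Dense layer created without input_shape has no variables yet.
lazy_layer = tf.keras.layers.Dense(4)
print(lazy_layer.variables)   # [] -- nothing has been built so far

# The variables are created on the first call, from the input's last dimension.
lazy_layer(tf.zeros([2, 7]))
print([v.shape for v in lazy_layer.variables])   # kernel (7, 4) and bias (4,)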


Code2

import tensorflow as tf 
tf.enable_eager_execution()

tfe = tf.contrib.eager

# The best way to implement your own layer is extending the
# tf.keras.layers.Layer class and implementing:
# * __init__, where you can do all input-independent initialization
# * build, where you know the shapes of the input tensors and
#   can do the rest of the initialization
# * call, where you do the forward computation

'''
Note that you don't have to wait until build is called to create
your variables; you can also create them in __init__.
However, the advantage of creating them in build is that it
enables late variable creation based on the shape of the inputs
the layer will operate on.
On the other hand, creating variables in __init__ means that the
shapes required to create the variables must be specified explicitly.
'''

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = self.add_variable("kernel",
            shape=[input_shape[-1].value,self.num_outputs])

    def call(self, input):
        return tf.matmul(input, self.kernel)


layer1 = MyDenseLayer(10) 
# the shapes of the layer's variables are inferred automatically once it sees its first input
print(layer1(tf.zeros([10,5])))
print(layer1.variables)
# Of course this is not required; you can also fix the variable shapes directly in __init__ (see the sketch below)

'''
Overall code is easier to read and maintain if it uses
standard layers whenever possible, as other readers 
will be familiar with the behavior of standard layers.
If you want to use a layer which is not present in 
tf.keras.layers or tf.contrib.layers, consider filing
a github issue or, even better, sending us a pull 
request!
'''
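
As the comment above notes, you can also create the variables directly in __init__ if you are willing to pass the input dimension explicitly. A minimal sketch of that variant (the class name MyDenseLayerFixedInput is hypothetical, not part of the original tutorial):

import tensorflow as tf
tf.enable_eager_execution()

class MyDenseLayerFixedInput(tf.keras.layers.Layer):
    def __init__(self, num_inputs, num_outputs):
        super(MyDenseLayerFixedInput, self).__init__()
        # The kernel is created here, so num_inputs must be given explicitly
        # instead of being inferred from the input shape in build().
        self.kernel = self.add_variable("kernel",
            shape=[num_inputs, num_outputs])

    def call(self, input):
        return tf.matmul(input, self.kernel)

layer2 = MyDenseLayerFixedInput(5, 10)
print(layer2(tf.zeros([10, 5])).shape)   # (10, 10)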

Code3

'''
Many interesting layer-like things in machine learning models
are implemented by composing existing layers. For example, each
residual block in a resnet is a composition of convolutions, 
batch normalizations, and a shortcut.

The main class used when creating a layer-like thing which contains
other layers is tf.keras.Model. Implementing one is done by
inheriting from tf.keras.Model.
'''
import tensorflow as tf 
tf.enable_eager_execution()

tfe = tf.contrib.eager

class ResnetIdentityBlock(tf.keras.Model):
    def __init__(self, kernel_size, filters):   
        super(ResnetIdentityBlock, self).__init__(name='')
        filters1 , filters2, filters3 = filters

        self.conv2a = tf.keras.layers.Conv2D(filters1, (1,1))
        self.bn2a = tf.keras.layers.BatchNormalization()

        self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
        self.bn2b = tf.keras.layers.BatchNormalization()

        self.conv2c = tf.keras.layers.Conv2D(filters3, (1,1))
        self.bn2c = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = self.conv2a(input_tensor)
        print(x.numpy(), end='\n\n')
        x = self.bn2a(x, training=training)
        x = tf.nn.relu(x)

        x = self.conv2b(x)
        print(x.numpy(), end='\n\n')
        x = self.bn2b(x, training=training)
        x = tf.nn.relu(x)

        x = self.conv2c(x)
        print(x.numpy(), end='\n\n')
        x = self.bn2c(x, training=training)

        x += input_tensor  # identity shortcut: same shape, so it can be added directly
        return tf.nn.relu(x)

# the last conv must produce 3 output channels so that the shortcut
# addition matches the [1, 2, 3, 3] input
block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))

print([x.name for x in block.variables])
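
Because the sub-layers are assigned as attributes of the Model, their variables are tracked by the block automatically; a quick sanity check (the count assumes the three Conv2D and three BatchNormalization layers defined above):

# 3 Conv2D layers contribute kernel + bias, and 3 BatchNormalization layers
# contribute gamma, beta, moving_mean and moving_variance.
print(len(block.variables))   # 18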


'''
Much of the time, however, models which compose many 
layers simply call one layer after the other. This can 
be done in very little code using tf.keras.Sequential
'''
my_seq = tf.keras.Sequential([
    tf.keras.layers.Conv2D(1,(1,1)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(2,1,padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(3,(1,1)),
    tf.keras.layers.BatchNormalization()
])

my_seq(tf.zeros([1,2,3,3]))
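
As a small follow-up (reusing my_seq from above), the Sequential version simply chains the layers one after another; there is no shortcut addition, and the output shape just follows the last Conv2D:

# A 1x1 convolution with 3 filters keeps the spatial dimensions, so the
# output shape matches the [1, 2, 3, 3] input here.
print(my_seq(tf.zeros([1, 2, 3, 3])).shape)   # (1, 2, 3, 3)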
