Interpreting the TensorFlow layers source code: convolution layers

    Layers are high-level APIs encapsulated by TensorFlow: each layer creates its own variables and runs its own computation automatically. Compared with the lower-level tf.nn functions, layers are more convenient and practical, and can be used directly as a black box.

    Layer is the base class of all layers. The base class keeps a `built` flag that records whether the layer's variables have been created; it starts out False. Two important methods are build() and call(): build() creates the variables this layer needs to initialize, and call() holds the layer's computation logic. Both methods are meant to be overridden by subclasses. The base class implements __call__: when __call__ is invoked, it first checks the `built` flag; if it is False, it executes build() to create the variables and sets `built` to True, so no variables are created again on later calls to __call__. Once the variables exist, __call__ goes on to execute call().
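The build/call dispatch described above can be sketched in plain Python. This is an illustrative mock of the mechanism, not TensorFlow's actual implementation; the `Scale` subclass and its list-based "variable" are invented here purely for demonstration:

```python
class Layer:
    """Minimal mock of the tf.layers base-class mechanism described above."""

    def __init__(self):
        self.built = False  # initially False: variables not yet created

    def build(self, input_dim):
        raise NotImplementedError  # subclasses create their variables here

    def call(self, inputs):
        raise NotImplementedError  # subclasses put their computation logic here

    def __call__(self, inputs):
        # On the first call only: create variables, then flip the flag so
        # build() is never run again on subsequent calls.
        if not self.built:
            self.build(len(inputs))
            self.built = True
        return self.call(inputs)


class Scale(Layer):
    """Toy subclass: multiplies inputs element-wise by a per-element weight."""

    def build(self, input_dim):
        self.kernel = [1.0] * input_dim  # "variable" created exactly once

    def call(self, inputs):
        return [w * x for w, x in zip(self.kernel, inputs)]
```

Calling a `Scale` instance twice creates its variables only on the first call, exactly as the `built` check dictates.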

    Above is the class diagram of the Conv module in layers. The parameters of each class are described as follows:

tf.layers.Conv1D()

    Parameters:
        filters: number of convolution kernels
        kernel_size: length of the one-dimensional convolution kernel
        strides: stride of the one-dimensional convolution
        padding: 'valid' or 'same'
        data_format: 'channels_last' or 'channels_first'; 'channels_first' input is first converted to 'channels_last'
        dilation_rate: dilation rate for dilated convolution, default 1
        activation: activation function; None means no activation is applied
        use_bias: whether to use a bias, default True
        kernel_initializer: initialization method for the kernel variable
        bias_initializer: initialization method for the bias, default all zeros (these initializers are also used in transfer learning)
        kernel_regularizer: regularizer applied to the kernel
        bias_regularizer: regularizer applied to the bias
        trainable: whether kernel and bias are trainable
    Method:
        __call__(inputs): inputs [bs, length, in_channel], returns [bs, length after convolution, filters]
    Variables:
        self.kernel [kernel_width, in_channel, filters]
        self.bias [filters]
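The "length after convolution" in the return shape follows standard convolution arithmetic for the two padding modes. The helper below is my own illustration of that arithmetic, not part of the layers API:

```python
import math

def conv1d_output_length(length, kernel_size, stride, padding):
    """Output length of a 1-D convolution under 'valid' or 'same' padding."""
    if padding == "valid":
        # no padding: the kernel must fit entirely inside the input
        return math.ceil((length - kernel_size + 1) / stride)
    elif padding == "same":
        # padded so that stride alone determines the output length
        return math.ceil(length / stride)
    raise ValueError("padding must be 'valid' or 'same'")

# For inputs [bs, 100, in_channel] with kernel_size=5, strides=2:
# 'valid' gives length 48, 'same' gives length 50.
```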

    That is, each convolution kernel shares one bias.

tf.layers.Conv2D()

    Parameters:
        filters: number of convolution kernels
        kernel_size: (kernel height, kernel width)
        strides: (stride height, stride width)
    Method:
        __call__(inputs): inputs [bs, height, width, in_channel], returns [bs, height after convolution, width after convolution, filters]
    Variables:
        self.kernel [kernel_height, kernel_width, in_channel, filters]
        self.bias [filters]
    Again, each convolution kernel shares one bias.

tf.layers.Conv3D()

    Parameters:
        filters: number of convolution kernels
        kernel_size: (number of kernel frames, kernel height, kernel width)
        strides: (stride in frames, stride height, stride width)
    Method:
        __call__(inputs): inputs [bs, depth, height, width, in_channel], returns [bs, frames after convolution, height after convolution, width after convolution, filters]
    Variables:
        self.kernel [kernel_depth, kernel_height, kernel_width, in_channel, filters]
        self.bias [filters]
    That is, each convolution kernel shares one bias.
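Given the kernel and bias shapes listed for Conv1D/2D/3D, the number of trainable parameters follows directly: the product of the kernel dimensions times in_channel times filters, plus one bias per filter. The helper below is an assumed illustration, not part of the layers API:

```python
def conv_param_count(kernel_dims, in_channels, filters, use_bias=True):
    """Trainable parameters of a ConvND layer:
    prod(kernel dims) * in_channels * filters, plus `filters` biases
    (one shared bias per convolution kernel)."""
    n = in_channels * filters
    for d in kernel_dims:
        n *= d
    if use_bias:
        n += filters  # one bias shared by each kernel
    return n

# Conv2D with a 3x3 kernel, 64 input channels, 128 filters:
# 3*3*64*128 + 128 = 73856 parameters.
```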
