A Keras Version of WELDON Pooling

While reading up on multiple-instance learning, I noticed that some models use a pooling layer called the WELDON pooling layer. In short, it is max pooling with min pooling added on top, i.e. MaxMin pooling: it selects the K largest and the K smallest feature values from the feature map, where K is a hyperparameter.
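
As a quick illustration (a minimal NumPy sketch I wrote, not part of the original code), take K=2 and the toy vector below: the pooled output keeps the two largest and the two smallest values.

import numpy as np

x = np.sort(np.array([5., -3., 0., 7., -1., 2.]))  # ascending: [-3., -1., 0., 2., 5., 7.]
k = 2
max_k = x[-k:][::-1]                   # two largest, descending: [7., 5.]
min_k = x[:k]                          # two smallest, ascending: [-3., -1.]
print(np.concatenate([max_k, min_k]))  # [ 7.  5. -3. -1.]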

Source Code

Below is a Keras reimplementation of WELDON pooling, taken from an open-source project on GitHub.
Project: https://github.com/trislaz/AutomaticWSI/tree/peter/python/nn
I extracted the WELDON pooling code from it as a study note. The logic is not hard to follow: WELDON pooling is built by subclassing Keras's Layer class, together with a few TensorFlow tensor operators.

# Imports added here so the snippet runs standalone (tf.keras assumed).
import tensorflow as tf
from tensorflow.keras.layers import Layer, InputSpec, Flatten, Concatenate


def WeldonPooling(x_i, k):
    """Performs a Weldon pooling, that is: selects the k highest and the k lowest activations.

    Parameters
    ----------
    x_i : tensor
        3D tensor of activations to pool, shape (batch, n, channels).
    k : int
        number of highest and lowest statistics to pool.

    Returns
    -------
    tensor
        Pooled summary statistics: the k highest and the k lowest values
        per channel, concatenated (2 * k * channels values in total).
    """
    max_x_i = KMaxPooling(k=k)(x_i)    # k largest activations per channel

    neg_x_i = KMinPooling(k=k)(x_i)    # k smallest activations per channel

    x_i = Concatenate(axis=-1)([max_x_i, neg_x_i])

    return x_i
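
For context, here is a minimal usage sketch (not from the original project; the input shape of 100 tiles with 64 features each is made up, and the two pooling classes are defined below):

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

inputs = Input(shape=(100, 64))        # (batch, n_tiles, n_features), hypothetical sizes
pooled = WeldonPooling(inputs, k=5)    # (batch, 2 * 5 * 64) = (batch, 640)
outputs = Dense(1, activation="sigmoid")(pooled)  # e.g. a bag-level binary label
model = Model(inputs, outputs)
model.summary()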

The logic of KMaxPooling is straightforward; it relies mainly on TensorFlow's tf.nn.top_k function.

class KMaxPooling(Layer):
    """
    K-max pooling layer that extracts the k-highest activations from a sequence (2nd dimension).
    TensorFlow backend.
    """
    def __init__(self, k=1, **kwargs):
        super().__init__(**kwargs)
        self.input_spec = InputSpec(ndim=3)
        self.k = k

    def compute_output_shape(self, input_shape):
        return (input_shape[0], (input_shape[2] * self.k))

    def call(self, inputs):
        
        # swap last two dimensions since top_k will be applied along the last dimension
        shifted_input = tf.transpose(inputs, [0, 2, 1])
        
        # extract the k largest values along the last dimension (indices are discarded)
        top_k, _ = tf.nn.top_k(shifted_input, k=self.k, sorted=True)
        
        # return flattened output
        return Flatten()(top_k)
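
A quick sanity check in eager mode (a toy tensor I made up, assuming the imports from the first snippet):

x = tf.constant([[[1., 10.],
                  [4., 40.],
                  [2., 20.],
                  [3., 30.]]])  # shape (1, 4, 2): batch of 1, sequence of 4, 2 channels
print(KMaxPooling(k=2)(x))
# the 2 largest values per channel, flattened:
# tf.Tensor([[ 4.  3. 40. 30.]], shape=(1, 4), dtype=float32)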

KMinPooling is just as simple: multiply the feature values by -1, use TensorFlow's tf.nn.top_k to take the K largest values (which, after negation, are really the K smallest), and finally multiply by -1 again to restore the original signs.

class KMinPooling(Layer):
    """
    K-min pooling layer that extracts the k-lowest activations from a sequence (2nd dimension).
    TensorFlow backend.
    
    """
    def __init__(self, k=1, **kwargs):
        super().__init__(**kwargs)
        self.input_spec = InputSpec(ndim=3)
        self.k = k

    def compute_output_shape(self, input_shape):
        return (input_shape[0], (input_shape[2] * self.k))

    def call(self, inputs):
        
        # swap last two dimensions since top_k will be applied along the last dimension
        shifted_input = tf.transpose(inputs, [0, 2, 1])
        # negate so that the largest values of the negated input are the smallest of the original
        neg_shifted_input = tf.scalar_mul(tf.constant(-1, dtype="float32"), shifted_input)

        # extract the k largest values of the negated input, then negate back
        top_k = tf.nn.top_k(neg_shifted_input, k=self.k, sorted=True)[0]
        top_k = tf.scalar_mul(tf.constant(-1, dtype="float32"), top_k)
        
        # return flattened output
        return Flatten()(top_k)
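
Putting the pieces together on the same kind of toy input (again just a sketch):

x = tf.constant([[[1., 10.],
                  [4., 40.],
                  [2., 20.],
                  [3., 30.]]])  # shape (1, 4, 2)
print(WeldonPooling(x, k=1))
# max per channel [4., 40.] followed by min per channel [1., 10.]:
# tf.Tensor([[ 4. 40.  1. 10.]], shape=(1, 4), dtype=float32)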

Reposted from blog.csdn.net/weixin_41693877/article/details/105990195