Getting Started with GCN

Reference link: https://www.zhihu.com/question/54504471/answer/611222866

1 The Laplacian matrix

  1. Reference link: http://bbs.cvmart.net/articles/281/cong-cnn-dao-gcn-de-lian-xi-yu-qu-bie-gcn-cong-ru-men-dao-jing-fang-tong-qi
  2. L = D - A, where A is the adjacency matrix, D is the diagonal matrix of vertex degrees, and L is the Laplacian matrix
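As a concrete check of the definition L = D - A, here is a minimal numpy sketch; the 4-vertex toy graph is an illustrative assumption, not from the source:

```python
import numpy as np

# Toy undirected graph on 4 vertices with edges 0-1, 0-2, 1-2, 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))  # degree matrix: each vertex's degree on the diagonal
L = D - A                   # combinatorial Laplacian L = D - A

print(L)
```

A quick sanity check: L is symmetric and every row sums to zero.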

   

Figure 1.1 The Laplacian matrix

3 Graph convolution parameters

   

  1. Reference link: http://bbs.cvmart.net/articles/281/cong-cnn-dao-gcn-de-lian-xi-yu-qu-bie-gcn-cong-ru-men-dao-jing-fang-tong-qi
  2. Section 1 of that reference derives the graph convolution formula; the trainable convolution parameters appear as a diagonal matrix in it. GCN convolution comes in two generations: the first is simple in concept but is no longer used because of its drawbacks, and the second generation is the one mostly used now

       

Figure 3.1 First-generation graph convolution

    1. The parameter values are placed directly on the diagonal of a diagonal matrix: the first-generation convolution is y = σ(U g_θ Uᵀ x) with g_θ = diag(θ₁, …, θₙ), where U is the eigenvector matrix of the Laplacian
    2. σ represents the activation function
    3. x represents the input vector
  1. Advantages of the second-generation convolution (Figure 3.2 below), which is why it replaced the first:
    1. The middle part of the formula, U g_θ Uᵀ, is found to expand into a polynomial in the Laplacian matrix, so the eigendecomposition is no longer needed in the computation; the Laplacian matrix can be used directly, reducing the amount of calculation
    2. It has fewer parameters
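The first-generation convolution under Figure 3.1 can be sketched as follows; the toy graph, random parameters θ, and ReLU activation are my own illustrative assumptions. Note the explicit eigendecomposition of L, the expensive step that the later formulation removes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph and its Laplacian.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# First generation: y = sigma(U g_theta U^T x) with g_theta = diag(theta).
eigvals, U = np.linalg.eigh(L)       # costly eigendecomposition of L
theta = rng.standard_normal(len(A))  # one free parameter per eigenvalue
x = rng.standard_normal(len(A))      # one scalar input feature per vertex

y = np.maximum(0.0, U @ np.diag(theta) @ U.T @ x)  # ReLU as the activation
print(y)
```

This version needs one θ per vertex, which is why the polynomial version with only K coefficients counts "fewer parameters" as an advantage.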

   

Figure 3.2 Second-generation graph convolution

    1. The formula is converted so that the parameters θ_k in it, i.e. the edge weights, are the only quantities that need to be initialized and trained
    2. With the help of the Laplacian matrix L and the features x, the formula simplifies to y = σ(Σ_{k=0}^{K-1} θ_k L^k x), where:
        1. L represents the Laplacian matrix
        2. K represents the order of the polynomial; L^k reaches the k-hop neighbors of a vertex
        3. x represents the input features; to understand it, take a grayscale image as an example:
          1. The image is a grid-structured graph that requires no manual design, because a picture is already arranged this way; each pixel box is a vertex of the graph
          2. The pixel value is the feature of each vertex; in a general graph the feature of a vertex is a vector, here it is a single scalar
          3. θ represents the edge weights, which are also the network parameters that need to be trained and optimized
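Under the same illustrative assumptions (toy graph, random coefficients θ_k, ReLU), the second-generation filter y = σ(Σ_k θ_k L^k x) can be sketched with repeated matrix-vector products, so no eigendecomposition is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # Laplacian of the toy graph

K = 3                            # polynomial order: mixes up to 2-hop neighbors
theta = rng.standard_normal(K)   # only K trainable coefficients, one per power of L
x = rng.standard_normal(len(A))  # scalar input feature per vertex (e.g. a pixel value)

# y = sigma(sum_{k=0}^{K-1} theta_k L^k x), accumulating L^k x iteratively.
y = np.zeros_like(x)
Lk_x = x                         # L^0 x = x
for k in range(K):
    y += theta[k] * Lk_x
    Lk_x = L @ Lk_x              # advance to L^(k+1) x without forming L^k
y = np.maximum(0.0, y)           # ReLU
print(y)
```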

   

   

3.3 Graph convolution illustrated

   

  1. Each convolution in a GCN performs the illustrated operation on all vertices at once (the step-by-step illustration figures are omitted here)

4 GCN classification results

   

  1. The graph has the structure shown below (figure omitted)
  2. The inputs are the PageID, IP, UA, DeviceID, and UserID vertices; convolution produces the features of the intermediate nodes, which give the classification result
  3. Compared with GBDT, the results are better

5 Topology of a graph convolutional network

  1. Reference link: https://mp.weixin.qq.com/s/356WvVn1Tz0axsKd8LJW4Q
  2. Topology

    1. Like a CNN, the layers are stacked on top of one another; the result of each convolution is passed through an activation function (ReLU, Sigmoid, etc.) to the next layer
    2. The new feature of a vertex is not recomputed by the convolution from every vertex, but from the vertices selected close to the center vertex
    3. What the figure shows is neither the graph structure nor the edge weights, but the feature corresponding to each vertex of the image; each vertex corresponds to a scalar, i.e. its pixel value
    4. The graph-convolution parameter formula mentioned in Section 3: when the second-generation formula is used in practice, it is turned into the clearer layer-wise representation
      1. H^(l+1) = σ(D̂^(-1/2) Â D̂^(-1/2) H^(l) W^(l)), where Â = A + I is the adjacency matrix with self-loops and D̂ is its degree matrix

        1. D̂^(-1/2) Â D̂^(-1/2) represents the normalization factor
        2. H represents the feature vectors of one layer's vertices, with dimension N x F, where N denotes the number of vertices and F the dimension of a feature vector
        3. W represents the edge weights
        4. A qualitative understanding of the formula
          1. Select a vertex v and determine its neighborhood; if the number of neighbors is smaller than the fixed neighborhood size, pad with dummy vertices, and if it is larger, delete vertices
          2. Take the feature vectors of the neighborhood vertices, multiply each by the weight of its edge to v, and sum them to obtain a new feature vector
          3. Normalize the result to prevent the scale of the features from changing

           

    5. Characteristics of graph convolution (figure omitted)
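The layer-wise rule from item 4 above (the standard GCN propagation rule) can be sketched as follows; the toy graph, feature dimensions, and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

N, F_in, F_out = len(A), 5, 2
H = rng.standard_normal((N, F_in))      # layer features: N x F
W = rng.standard_normal((F_in, F_out))  # trainable weight matrix

A_hat = A + np.eye(N)                   # self-loops keep each vertex's own feature
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalization factor D^-1/2 A_hat D^-1/2

# H^(l+1) = ReLU(D^-1/2 A_hat D^-1/2 H^(l) W^(l))
H_next = np.maximum(0.0, A_norm @ H @ W)
print(H_next.shape)
```

The symmetric normalization is what "prevents the scale of the features from changing" as layers stack.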

Origin: www.cnblogs.com/megachen/p/11492647.html