Laminar flow prediction around random 2D shapes! A graph neural network implementation based on PaddlePaddle

Background of the project

In recent years, fast flow-field prediction has been dominated by pixel-based Convolutional Neural Networks (CNNs). To couple a CFD solver with a CNN model, data from the simulation mesh must first be interpolated onto a uniform Cartesian grid, and the predictions must then be projected back onto the mesh. However, uniform Cartesian grids represent geometry poorly, the interpolation adds considerable computational cost, and the approach is therefore ill-suited to fast flow-field prediction. Unlike CNNs, a Graph Convolutional Neural Network (GCNN) can operate directly on body-fitted triangular meshes, so it couples naturally with a CFD solver and avoids the problems above. This project reproduces the GCNN-based paper "Graph neural networks for laminar flow prediction around random two-dimensional shapes" to verify that the PaddlePaddle framework can realize laminar flow prediction around 2D obstacles with a GCNN model.

Original paper

https://hal.archives-ouvertes.fr/hal-03432662/document

Source code

https://github.com/cfl-minds/gnn_laminar_flow

Development environment and implementation process

Development environment

This reproduction relies on version 2.4 of the PaddlePaddle framework to implement the graph convolutional neural network for laminar flow prediction around 2D obstacles. You can complete the installation by following the installation documentation on the PaddlePaddle official website. For details, see the link: https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/conda/macos-conda.html

Implementation process

The basic building block of the graph convolutional neural network is the convolutional block. A convolutional block consists of a two-step graph convolution layer and a two-step smoothing layer. In the paper, the feature matrices on nodes and edges are denoted by $X_V \in \mathbb{R}^{N_V \times d_V}$ and $X_E \in \mathbb{R}^{N_E \times d_E}$, respectively, where $N_V$ and $N_E$ are the numbers of nodes and edges, and $d_V$ and $d_E$ are the dimensions of the feature vectors on nodes and edges. A convolution layer propagates node-level messages to the edges, then aggregates the new edge features and updates the node features according to the following rules:

$$e' = f_e\left(\tfrac{1}{2}(x_{v_1} + x_{v_2}),\; \tfrac{1}{2}\,\lvert x_{v_1} - x_{v_2} \rvert,\; e\right), \qquad x_v' = f_v\Big(x_v,\; \sum_{e \in N(v)} e'\Big)$$

where $v_1$ and $v_2$ are the two nodes connected by edge $e$, and $N(v)$ is the set of adjacent edges surrounding node $v$. The convolution kernels $f_e$ and $f_v$ are fully connected neural networks with a single hidden layer, and the number of neurons in the hidden layer is set to 128.
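As a concrete illustration, such a single-hidden-layer kernel can be built in PaddlePaddle as below. This is a sketch of our reading of the paper's setup, not repository code; the helper name make_kernel and the ReLU activation are our own choices.

import paddle
import paddle.nn as nn

def make_kernel(in_dim, out_dim, hidden_dim=128):
    # Fully connected network with a single hidden layer of 128 neurons,
    # used as a convolution kernel (f_e or f_v).
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, out_dim),
    )

# Illustrative widths: f_e sees the two symmetric endpoint combinations plus
# the edge features; f_v sees the node features plus the aggregated messages.
# f_e = make_kernel(2 * d_V + d_E, F)
# f_v = make_kernel(d_V + F, F)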
The node feature update code is as follows:

def update_node_features(node_features, grad_P1, num_filters, initializer, message_fn):
    # Node convolution step: concatenate each node's features with the edge
    # messages aggregated around it (grad_P1), then apply the node kernel f_v
    # (message_fn). num_filters and initializer are kept for interface
    # compatibility and are not used here.
    message_input = paddle.concat([node_features, grad_P1], axis=1)
    updated = message_fn(message_input)
    return updated

The edge feature update code is as follows:

def update_symmetry_edge_features(node_features, edges, edge_features, edge_feat_dim, initializer, message_fn):
    # Edge convolution step. edges has shape [n_edges, 2] and holds the
    # indices of the two endpoint nodes of each edge.
    n_nodes = node_features.shape[0]
    n_features = node_features.shape[1]
    # Gather endpoint features: [n_edges, 2, n_features] -> [n_edges, 2 * n_features].
    reshaped = paddle.reshape(node_features[edges], shape=[-1, 2 * n_features])
    # Symmetric functions of the endpoint features (half-sum and
    # half-absolute-difference) make the edge message invariant to the
    # order of the two endpoints.
    symmetric = 0.5 * paddle.add(reshaped[:, 0:n_features], reshaped[:, n_features:2 * n_features])
    asymmetric = 0.5 * paddle.abs(paddle.subtract(reshaped[:, 0:n_features],
                                                  reshaped[:, n_features:2 * n_features]))
    inputs = paddle.concat([symmetric, asymmetric, edge_features], axis=1)
    messages = message_fn(inputs)  # [n_edges, n_output]
    # Sum the messages of all edges incident to each node. The buffers are
    # allocated with n_edges rows and then sliced back to n_nodes, which is
    # valid because a triangular mesh always has more edges than nodes.
    n_edges = edges.shape[0]
    updates = paddle.slice(
        paddle.add(paddle.index_add_(paddle.zeros([n_edges, messages.shape[1]]), edges[:, 0], 0, messages),
                   paddle.index_add_(paddle.zeros([n_edges, messages.shape[1]]), edges[:, 1], 0, messages)),
        axes=[0, 1], starts=[0, 0], ends=[n_nodes, messages.shape[1]])
    return messages, updates
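The InvariantEdgeConv layer used by the full model below simply chains these two updates: the edge kernel produces new edge features, and their per-node sums feed the node kernel. The following is a minimal sketch of such a wrapper, our own reconstruction assuming the make_kernel helper above; the repository's layer may differ in details such as when the kernels are built.

class InvariantEdgeConv(nn.Layer):
    def __init__(self, edge_feat_dim, num_filters, initializer=None):
        super(InvariantEdgeConv, self).__init__()
        # initializer is accepted for interface compatibility and unused here.
        self.edge_feat_dim = edge_feat_dim
        self.num_filters = num_filters
        self.edge_fn = None  # built on first call, once the node width is known
        self.node_fn = None

    def forward(self, node_features, edge_features, edges):
        if self.edge_fn is None:
            d_v = node_features.shape[1]
            # f_e: [symmetric | asymmetric | edge features] -> edge messages
            self.edge_fn = make_kernel(2 * d_v + self.edge_feat_dim, self.num_filters)
            # f_v: [node features | aggregated messages] -> new node features
            self.node_fn = make_kernel(d_v + self.num_filters, self.num_filters)
        new_edge_features, aggregated = update_symmetry_edge_features(
            node_features, edges, edge_features,
            self.edge_feat_dim, None, self.edge_fn)
        new_node_features = update_node_features(
            node_features, aggregated, self.num_filters, None, self.node_fn)
        return new_node_features, new_edge_features

Building the kernels lazily keeps the three-argument constructor used by the model code below; in a training script, the model should be run once on sample data before the optimizer is created so that all parameters are registered.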

In the edge convolution step, symmetric functions of the endpoint features (the half-sum and the half-absolute-difference) are used instead of $x_{v_1}$ and $x_{v_2}$ directly, so that the edge message is invariant to the order of the two endpoints. The summation in the node convolution step is likewise invariant to the ordering of the adjacent edges. A smoothing layer then performs an averaging operation on the output graph. The averaging kernel on a triangular mesh works in two steps: node features are first averaged onto each edge, and each node then averages the values of its adjacent edges.

The motivation for adding this layer is not message propagation but reducing the spatial variability of the node features: it smooths the feature maps produced by the convolutional layers by blending the features of adjacent nodes. The smoothing layer code is as follows:

class EdgeSmoothing(nn.Layer):
    def __init__(self):
        super(EdgeSmoothing, self).__init__()

    def forward(self, to_concat, node_features, edges, count):
        n_nodes = node_features.shape[0]
        # Step 1: average the two endpoint features onto each edge.
        flow_on_edge = paddle.mean(node_features[edges], axis=1)
        # Step 2: sum the edge values back onto their endpoint nodes...
        aggre_flow = paddle.add(
            paddle.index_add_(paddle.zeros([edges.shape[0], flow_on_edge.shape[1]]), edges[:, 0], 0, flow_on_edge),
            paddle.index_add_(paddle.zeros([edges.shape[0], flow_on_edge.shape[1]]), edges[:, 1], 0, flow_on_edge))
        # ...and divide by the number of incident edges to get the average.
        # to_concat carries the node coordinates (the skip connection).
        return paddle.concat([to_concat, paddle.divide(aggre_flow[:n_nodes, :], count)], axis=1)
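The count argument is the number of edges incident to each node, used to turn the summed edge values into an average. It depends only on the mesh, so it can be precomputed once; a possible helper is sketched below (our assumption, the repository may compute it differently):

def incidence_count(edges, n_nodes):
    # Count, for every node, how many edges touch it. Returned with shape
    # [n_nodes, 1] so it broadcasts against the aggregated node features.
    ones = paddle.ones([edges.shape[0], 1])
    count = paddle.zeros([n_nodes, 1])
    count = paddle.index_add_(count, edges[:, 0], 0, ones)
    count = paddle.index_add_(count, edges[:, 1], 0, ones)
    return count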

The following figure shows the network architecture used in the paper. Eight convolutional blocks, each a graph convolution layer followed by a smoothing layer, are stacked to form the graph convolutional neural network, and a 1×1 convolution (a per-node linear layer) produces the three output fields. An important part of the architecture is the skip connections from the input graph to the convolutional blocks: after each smoothing layer, the node coordinates are concatenated back onto the node features. These skip connections provide spatial information for the edge convolution step in the equation above.

Figure 1 Network architecture diagram

The network architecture code is as follows:

class InvariantEdgeModel(nn.Layer):
    def __init__(self, edge_feature_dims, num_filters, initializer):
        super(InvariantEdgeModel, self).__init__()
        self.edge_feat_dims = edge_feature_dims
        self.num_filters = num_filters
        self.initializer = initializer
        # Eight stacked convolutional blocks.
        self.layer0 = InvariantEdgeConv(self.edge_feat_dims[0], self.num_filters[0], self.initializer)
        self.layer1 = InvariantEdgeConv(self.edge_feat_dims[1], self.num_filters[1], self.initializer)
        self.layer2 = InvariantEdgeConv(self.edge_feat_dims[2], self.num_filters[2], self.initializer)
        self.layer3 = InvariantEdgeConv(self.edge_feat_dims[3], self.num_filters[3], self.initializer)
        self.layer4 = InvariantEdgeConv(self.edge_feat_dims[4], self.num_filters[4], self.initializer)
        self.layer5 = InvariantEdgeConv(self.edge_feat_dims[5], self.num_filters[5], self.initializer)
        self.layer6 = InvariantEdgeConv(self.edge_feat_dims[6], self.num_filters[6], self.initializer)
        self.layer7 = InvariantEdgeConv(self.edge_feat_dims[7], self.num_filters[7], self.initializer)
        # Per-node linear output layer (the 1x1 convolution): the final
        # smoothed features (2 coordinates + 8 filters) map to 3 output fields.
        self.layer8 = nn.Linear(10, 3)
        self.smoothLayer = EdgeSmoothing()

    def forward(self, node_input, edges, edge_input, smoothing_weights):
        # Each block applies a graph convolution followed by smoothing; the
        # smoothing layer concatenates the node coordinates node_input[:, 0:2]
        # back onto the features (the skip connections in Figure 1).
        new_node_features_0, new_edge_features_0 = self.layer0(node_input, edge_input, edges)
        smoothed_0 = self.smoothLayer(node_input[:, 0:2], new_node_features_0, edges, smoothing_weights)
        new_node_features_1, new_edge_features_1 = self.layer1(smoothed_0, new_edge_features_0, edges)
        smoothed_1 = self.smoothLayer(node_input[:, 0:2], new_node_features_1, edges, smoothing_weights)
        new_node_features_2, new_edge_features_2 = self.layer2(smoothed_1, new_edge_features_1, edges)
        smoothed_2 = self.smoothLayer(node_input[:, 0:2], new_node_features_2, edges, smoothing_weights)
        new_node_features_3, new_edge_features_3 = self.layer3(smoothed_2, new_edge_features_2, edges)
        smoothed_3 = self.smoothLayer(node_input[:, 0:2], new_node_features_3, edges, smoothing_weights)
        new_node_features_4, new_edge_features_4 = self.layer4(smoothed_3, new_edge_features_3, edges)
        smoothed_4 = self.smoothLayer(node_input[:, 0:2], new_node_features_4, edges, smoothing_weights)
        new_node_features_5, new_edge_features_5 = self.layer5(smoothed_4, new_edge_features_4, edges)
        smoothed_5 = self.smoothLayer(node_input[:, 0:2], new_node_features_5, edges, smoothing_weights)
        new_node_features_6, new_edge_features_6 = self.layer6(smoothed_5, new_edge_features_5, edges)
        smoothed_6 = self.smoothLayer(node_input[:, 0:2], new_node_features_6, edges, smoothing_weights)
        new_node_features_7, new_edge_features_7 = self.layer7(smoothed_6, new_edge_features_6, edges)
        smoothed_7 = self.smoothLayer(node_input[:, 0:2], new_node_features_7, edges, smoothing_weights)
        node_outputs = self.layer8(smoothed_7)
        return node_outputs
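To make the shapes concrete, here is a toy forward pass on a single triangle. The dimension lists are illustrative placeholders, not the trained configuration; the only constraint is that the last filter width must be 8 so that coordinates plus features match the Linear(10, 3) output layer.

model = InvariantEdgeModel(edge_feature_dims=[8] * 8,  # illustrative widths
                           num_filters=[8] * 8,
                           initializer=None)

# Tiny mesh: 3 nodes, 3 edges.
node_input = paddle.rand([3, 5])  # x, y coordinates plus extra input fields
edges = paddle.to_tensor([[0, 1], [1, 2], [2, 0]], dtype='int64')
edge_input = paddle.rand([3, 8])
count = incidence_count(edges, 3)  # helper sketched above

pred = model(node_input, edges, edge_input, count)
print(pred.shape)  # [3, 3]: one (u, v, p) prediction per node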

project results


To demonstrate the quality of the reproduction, we used the reproduced model to predict the flow field around a cylinder; the results are as follows:

Figure 2 Comparison of predicted and reference flow fields

The left side shows the reference flow field used in the original paper, and the right side shows the flow field predicted by our reproduced model. The prediction (right) is essentially consistent with the reference (left), and the model accuracy is very good. The MAE of our reproduced model is 0.0046, very close to the 0.0043 reported in the original paper, which verifies that the PaddlePaddle framework can realize laminar flow prediction around 2D obstacles with this model.
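For reference, the MAE here is taken as the plain mean absolute error over all nodes and output channels; under that assumption it can be computed along these lines (variable names are illustrative):

# pred and truth: [n_nodes, 3] tensors of (u, v, p) on the mesh nodes.
mae = paddle.mean(paddle.abs(pred - truth)).item()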

experience

Baidu PaddlePaddle's paper reproduction challenge provided our team with a valuable opportunity to learn and grow. The competition not only gave us an in-depth understanding of the field of flow-field prediction, but also exercised our teamwork and problem-solving skills. Looking back on the competition, many things deserve praise.

First, PaddlePaddle's strong event organization made the competition standardized and orderly. From pre-project publicity, team registration, and the pre-competition briefing to in-competition Q&A and result submission, everything was arranged so that each team knew what to do at every stage.

Second, during the competition, engineers from the PaddlePaddle scientific computing team provided detailed answers to our questions. The competition required us to read the paper carefully and reproduce it with PaddlePaddle from the reference code the paper provides. This demands both a deep understanding of the deep learning model and familiarity with the PaddlePaddle framework. As newcomers, we inevitably ran into all kinds of technical problems, and every time we reached out to a PaddlePaddle engineer we received patient and meticulous answers. The organizers also tracked the reproduction progress regularly and resolved problems for the participants promptly.

Third, the reproduction challenge opened up a wider field of vision for us. Through it we came into contact with many excellent papers in the field of AI for Science; in reproducing one of them, we studied the methods and techniques of these papers in depth, deepened our understanding of the field, and learned about the latest developments and applications in academia.

Finally, we would like to sincerely thank all the organizers and staff of the Baidu PaddlePaddle team; their hard work and professional support made this event possible. Special thanks to Lu Lin, Wang Lu, and Kong Detian, who participated in the competition together, and to every member of our team for their hard work and dedication. Going forward, we will keep learning, continue to explore and innovate, and strive to contribute to the development of this field.
