[Report] When "wireless communication" meets "graph neural network": a simple introduction

Please indicate the source when reprinting: Senior Xiaofeng's Big Bang Life [xfxuezhang.cn]

Course report, made casually, for reference only~

Bilibili video: https://www.bilibili.com/video/BV1tM4y1v7t4/

Format note: each slide image comes first, followed by its explanation.

        The previous presenters introduced how traditional methods and classic deep learning methods are applied in wireless communication. Here, finally, I will introduce a relatively new concept: the graph neural network (GNN). In the following, I will briefly explain what graph neural networks are and how they can be combined with wireless communication.

        The presentation is organized around the following four aspects.

        Due to the complexity of the wireless communication environment, such as random channel fading and interference, and the nonlinearity caused by unavoidable hardware impairments, the mathematical model of a wireless communication system sometimes cannot accurately reflect the actual situation. In addition, with the popularity of large-scale wireless communication schemes (such as massive MIMO), the corresponding mathematical models become more complex, and the computational complexity of the related optimization algorithms grows accordingly.

        In addition, thanks to the powerful representation ability and low inference complexity of various neural network models, machine learning (ML) techniques, and deep learning (DL) in particular, have achieved great success in other fields such as natural language processing (NLP) and computer vision (CV).

        Based on these two points, in recent years more and more researchers have begun to use machine learning methods to study challenging problems in wireless communication, such as radio resource management, channel estimation, and joint source-channel coding, and have achieved notable performance improvements.

        With the growing demand for higher quality of service (QoS) and the explosive growth of big data, wireless technology based on machine learning and deep learning is gradually becoming mainstream for sixth-generation (6G) wireless communication and is increasingly surpassing the traditional model-based design paradigm.

        There are two different approaches to developing deep learning based methods.

        The first is a data-driven approach that directly learns the optimal input-output mapping for a problem by using neural networks instead of traditional building blocks.

        The second is a model-driven approach, where neural networks are used to replace some of the strategies in classical algorithms.

        For both paradigms, a fundamental design component is the underlying neural architecture, which governs training and generalization performance.

        Early attempts took a plug-and-play approach, employing neural architectures inherited from other applications such as computer vision, for example multilayer perceptrons (MLPs) or convolutional neural networks (CNNs).

        Although these classic architectures achieve near-optimal performance and fast execution on small-scale wireless networks, their performance degrades severely once the number of users becomes large.

        More specifically, there are several reasons.

        First, neural networks usually require a large number of training samples, which are generally difficult to collect in wireless communication systems. At the same time, a very large training set increases the memory and time consumption of the training process.

        Second, for most existing works, the structure of the neural network model depends heavily on the system size, such as the number of antennas or users. Moreover, 5G-and-beyond networks are usually characterized by densely deployed access points, hundreds of clients, and dynamically changing numbers of clients and signal-to-noise ratios, which makes MLP-based or CNN-based methods ineffective.

        Finally, most existing machine learning-based wireless algorithms are centralized, which may suffer from high signaling costs, low scalability and flexibility, computational limitations, and single points of failure.

        Therefore, these neural network models lack generalization ability and cannot be used in situations where the system scale varies. We need to find some new ways to deal with these problems.

        To improve scalability and generalization, a promising direction is to design neural network architectures specialized for wireless networks. A recent attempt is to use graph neural networks, which can exploit graph topology information of wireless networks. GNN-based methods have achieved good results in applications such as resource management, end-to-end communication, and MIMO detection.

        For example, for the beamforming problem, a GNN trained on a network of 50 users achieves near-optimal performance on a larger network of 1000 users.

        Because they can be executed in parallel, GNNs are computationally efficient and are so far the only method that can find a near-optimal beamformer for thousands of users in milliseconds.

        In addition, GNN utilizes graph information more effectively than other neural network models, especially graph topology information, which can reduce the number of training samples required and improve model performance.

        Second, GNNs can handle input graphs of different sizes. In addition, the operation of GNNs is naturally decentralized, which is attractive for large-scale wireless communication systems.

        Moreover, GNNs are the most powerful class of distributed message passing (DMP) algorithms; that is, as long as appropriate learnable weights are chosen, a GNN can represent any DMP algorithm. This also justifies, from an algorithmic perspective, the use of GNNs in wireless networks.

        Therefore, applying GNN to wireless communication technology is a very promising direction.

        Since graphs are so powerful, what exactly is a graph?

        A graph is a structure used to represent relationships between points. It is a classic data structure with a long history. Graphs can be used to model many kinds of data, such as molecular structures, text, and social networks, and they also appear in tasks such as graph generation.

        Taking an image as an example: a regular image is composed of neatly arranged pixels. If some key points are extracted as nodes, a more abstract graph is formed while the topological relationships between the nodes are preserved.

        A graph is represented by a collection of nodes and a collection of edges.

        In the context of wireless communication, a node may represent a user, an AP, an antenna, a base station, an edge device, etc., and a node feature is an attribute of the node (e.g., the importance of the node).

        An edge can represent a communication link, an interference link, or some other connection pattern, and an edge feature is an attribute of the edge (e.g., channel state information).
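
        To make these definitions concrete, here is a minimal sketch (my own illustrative example, not from the slides) of how a small wireless network could be encoded as a graph with NumPy: nodes are an AP and a few users with scalar "importance" features, and edges are links carrying a CSI-like complex feature.

```python
import numpy as np

# Hypothetical toy topology: 1 access point (node 0) and 3 users (nodes 1-3).
num_nodes = 4

# Node features, e.g. a per-node "importance"/priority value (values assumed).
node_feat = np.array([[1.0],   # AP
                      [0.3],   # user 1
                      [0.5],   # user 2
                      [0.2]])  # user 3

# Edges: AP-user communication links plus one interference link between users 1 and 2.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]

# Adjacency matrix of the undirected graph.
adj = np.zeros((num_nodes, num_nodes))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

# Edge features, e.g. a complex channel coefficient (CSI) per link, drawn randomly here.
rng = np.random.default_rng(0)
edge_feat = {e: (rng.normal() + 1j * rng.normal()) / np.sqrt(2) for e in edges}

print(adj)
print(edge_feat[(0, 1)])
```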

        A graph neural network (GNN) is a machine learning model for processing graph data. Built on top of traditional neural networks, it models and analyzes graph data by taking the connections between nodes into account. Its basic idea is to update node representations iteratively: aggregate the information of a node's neighbors and then use the aggregation result as the node's new representation. In this way, node representations gradually incorporate broader contextual information, improving the ability to understand and represent each node.

        A GNN model consists of one or more layers, and each layer mainly performs an aggregation operation and a combination operation. In the aggregation step, an aggregation function gathers, for each target vertex, the feature vectors of its neighboring vertices produced by the previous GNN layer. In the combination step, a combination function transforms the aggregated feature vector of each vertex with neural network operations. The final embedding vectors can be fed into other machine learning techniques (e.g., multi-layer perceptrons or reinforcement learning) for further node-level analysis, such as predicting whether a device is on or off, or for edge-level analysis, such as predicting the congestion state of a link. In addition, the embedding vectors of all nodes can be summarized into a graph-level embedding vector for graph-level tasks, such as predicting the throughput of the whole network.
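
        As a rough sketch of the aggregation and combination steps described above, the following NumPy snippet implements one generic mean-aggregation message-passing layer (a simplified illustration, not the exact layer used in any particular paper).

```python
import numpy as np

def gnn_layer(adj, h, w_self, w_neigh):
    """One generic GNN layer: aggregate neighbor embeddings, then combine.

    adj:     (N, N) adjacency matrix
    h:       (N, d_in) node embeddings from the previous layer
    w_self:  (d_in, d_out) weight applied to each node's own embedding
    w_neigh: (d_in, d_out) weight applied to the aggregated neighbor embedding
    """
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # avoid division by zero
    agg = adj @ h / deg                                    # aggregate: mean over neighbors
    return np.maximum(0.0, h @ w_self + agg @ w_neigh)     # combine: linear map + ReLU

# Toy usage with random weights (all sizes are assumptions).
rng = np.random.default_rng(1)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
h0 = rng.normal(size=(3, 4))
h1 = gnn_layer(adj, h0, rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(h1.shape)  # (3, 8): one 8-dimensional embedding per node after one layer
```

        Stacking several such layers and then pooling the node embeddings (e.g., by averaging) gives the graph-level embedding mentioned above.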

        According to a rough count, the number of GNN papers in Google Scholar has grown exponentially in recent years and is estimated to reach about 3636 in 2023.

        A few related relatively new works are introduced below.

        The first is the use of graph neural networks for channel tracking in massive MIMO networks.

        For massive multiple-input multiple-output (MIMO) networks in high-mobility scenarios, a GNN-based online CSI prediction scheme for time-varying massive MIMO channels is proposed. An initial channel estimate is first obtained with a small number of pilots. The resulting channel data are then represented as a graph, with the spatial correlation of the channels described by the weights on the graph's edges. On top of this, a GNN-based channel tracking framework is designed, consisting of an encoder, a core network, and a decoder.
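
        A very rough PyTorch-style skeleton of such an encoder / core / decoder structure is sketched below; the layer types, sizes, and number of message-passing steps are my own assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ChannelTracker(nn.Module):
    """Illustrative encoder -> GNN core -> decoder skeleton (all sizes assumed)."""

    def __init__(self, in_dim=2, hid_dim=32, out_dim=2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)     # lift raw per-node channel features
        self.core_self = nn.Linear(hid_dim, hid_dim)  # combine: node's own state
        self.core_neigh = nn.Linear(hid_dim, hid_dim) # combine: aggregated neighbor state
        self.decoder = nn.Linear(hid_dim, out_dim)    # map back to the predicted channel

    def forward(self, adj, x, steps=3):
        h = torch.relu(self.encoder(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        for _ in range(steps):                        # iterate the GNN core
            agg = adj @ h / deg                       # aggregate over weighted edges
            h = torch.relu(self.core_self(h) + self.core_neigh(agg))
        return self.decoder(h)

# Toy usage: 32 antenna nodes, 2 features per node (real/imaginary part of the estimate).
adj = (torch.rand(32, 32) > 0.7).float()
adj = ((adj + adj.T) > 0).float()                     # symmetric toy graph
x = torch.randn(32, 2)
print(ChannelTracker()(adj, x).shape)                 # torch.Size([32, 2])
```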

        The proposed scheme is then evaluated in simulation. In addition to the GNN parameters listed in the table, the number of antennas at the BS is set to 32, the channel gains follow a complex Gaussian distribution, the number of paths is 20, the directions of arrival are uniformly distributed on [−π, π], the sampling interval is 0.02 ms, the carrier frequency is 3 GHz, and the antenna spacing is λ/2.

        The experiments show that the GNN-based method achieves the best results among all compared schemes, especially in the high-SNR region, and it consistently maintains an advantage in the mobility scenario.

        The next work uses graph neural networks for user scheduling.

        A reconfigurable intelligent surface (RIS) can intelligently manipulate the phase of incident electromagnetic waves to improve the wireless propagation environment between base stations (BSs) and users. The study focuses on the scheduling aspect of RIS-assisted multi-user downlink networks. Graph neural networks with permutation invariance and permutation equivariance properties can be used to appropriately schedule users and design the RIS configuration, achieving high overall throughput while accounting for fairness among users.
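
        Permutation equivariance here means that relabeling the users merely permutes the GNN's outputs in the same way, so the network does not depend on any particular user ordering. The small NumPy check below (illustrative only) verifies this property for a basic mean-aggregation layer.

```python
import numpy as np

def mean_agg_layer(adj, h, w):
    """A simple permutation-equivariant layer: mean over neighbors, then a shared linear map."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    return np.maximum(0.0, (adj @ h / deg) @ w)

rng = np.random.default_rng(0)
n, d = 5, 3
adj = (rng.random((n, n)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                      # make the toy graph undirected
h = rng.normal(size=(n, d))
w = rng.normal(size=(d, d))

perm = rng.permutation(n)
P = np.eye(n)[perm]                               # permutation matrix

out_then_perm = P @ mean_agg_layer(adj, h, w)               # permute the outputs
perm_then_out = mean_agg_layer(P @ adj @ P.T, P @ h, w)     # permute the inputs
print(np.allclose(out_then_perm, perm_then_out))  # True: f(permuted graph) = permuted f(graph)
```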

        The study proposes a three-stage GNN framework.

        In the first stage, a GNN applied to all potential users, using only a very short pilot, produces an optimized schedule while taking user priorities into account.

        In the second stage, a second GNN is applied only to the scheduled users to generate an optimized RIS configuration.

        Finally, the overall low-dimensional effective channel under the optimized RIS configuration is re-estimated to design the BS beamformer.

        In the experimental section, an RIS-assisted multi-user MISO network is considered, with 8 BS antennas, 128 RIS reflecting elements, and 32 users. The link channel follows Rayleigh fading, and the reflection channel follows Rician fading.

        The proposed data-driven method is compared with a traditional channel-estimation-based baseline: given the estimated channels, a greedy scheduling method is used, and the beamforming matrix and reflection coefficients are optimized through block coordinate descent for weighted sum-rate maximization.

        Simulation results show that, compared with the traditional channel-estimation-based approach, the proposed algorithm achieves better performance with significantly reduced pilot overhead, maximizes the network utility, and removes the need for explicit channel estimation.

        GNNs can also be conveniently combined with other deep learning methods. In this work, graph neural networks are combined with deep reinforcement learning for resource allocation in 5G networks.

        The increasing complexity of mobile networks in 5G and beyond, as well as the large number of devices and new use cases these networks need to support, make the already difficult problem of resource allocation in wireless networks one of the greatest challenges. Combining an extension of the deep Q-network (DQN) with a graph neural network makes it possible to model the expected reward function better; here, the goal of the GNN is to learn how to best approximate the Q-function in reinforcement learning.

        The study mainly focuses on the user association (UA) problem, that is, deciding which connectivity provider (e.g., a base station) a user should connect to in order to maximize the utility function of the global system.
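
        As a hedged sketch of this idea, the snippet below lets a small GNN score each candidate user-to-BS association as a Q-value; the state encoding, layer sizes, and graph construction are my own simplifications and not the paper's actual design.

```python
import torch
import torch.nn as nn

class GraphQNet(nn.Module):
    """Illustrative GNN that outputs a Q-value for each candidate (user, BS) association."""

    def __init__(self, node_dim=4, hid_dim=32):
        super().__init__()
        self.embed = nn.Linear(node_dim, hid_dim)
        self.msg = nn.Linear(hid_dim, hid_dim)
        self.q_head = nn.Sequential(                  # scores a (user, BS) embedding pair
            nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, adj, x, user_idx, bs_idx):
        h = torch.relu(self.embed(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(h + self.msg(adj @ h / deg))   # one round of message passing
        pairs = torch.cat([h[user_idx], h[bs_idx]], dim=-1)
        return self.q_head(pairs).squeeze(-1)         # Q(state, associate user with BS)

# Toy usage: 3 BSs (nodes 0-2) and 4 users (nodes 3-6); score user 3 against every BS.
n = 7
adj = torch.ones(n, n) - torch.eye(n)                 # fully connected toy graph
x = torch.randn(n, 4)                                 # per-node state features (load, SNR, ...)
user = torch.tensor([3, 3, 3])
bs = torch.tensor([0, 1, 2])
q = GraphQNet()(adj, x, user, bs)
print(q, q.argmax())                                  # greedy action: connect user 3 to bs[q.argmax()]
```

        In a full DQN loop, these Q-values would be trained with the usual temporal-difference target; the GNN simply replaces the fully connected Q-network, so the same weights apply regardless of how many users and base stations are present.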

        Through simulation experiments, it can be seen that the proposed method outperforms both the baseline method and the Q-learning method, and provides higher system utility and lower user rejection rate.

        Since GNNs themselves have only been around for a few years, GNN-based research in every field is still relatively new, so there are many future directions, both for GNNs themselves and for their use in wireless communication scenarios. A few simple examples are given here.

        The first is new wireless applications of GNNs. GNN4Com work so far focuses on wireless resource allocation, but the potential of GNNs in other wireless applications is still under-explored, especially in physical-layer communication. Since traditional methods struggle to scale CSI feedback to massive MIMO systems, GNNs, whose operation is independent of the number of antennas, can be an excellent choice. In addition, while GNN4Com usually focuses on a specific network layer, cross-layer optimization is crucial to improving the performance of the overall communication system, so it is worth investigating how to incorporate GNNs into cross-layer optimization. How to optimize communication to better accelerate GNNs, for example handling communication losses and retransmission mechanisms, also deserves study.

        The second is effective GNN training strategies. The training phase of a neural network usually requires a lot of time and resources. Although most current GNN4Com work trains GNNs offline, training still has a cost, which may become a bottleneck for practical applications. Possible solutions include exploiting multi-device computing resources and adopting federated learning and edge learning to train the GNN model collaboratively, as well as using techniques such as data augmentation and contrastive learning to reduce the amount of training data GNNs require. Improving the training efficiency of GNNs in wireless networks is an important future direction.

        The third is wireless mechanisms or protocols for GNN inference. Due to factors such as wireless channel fading and noise, the accuracy of distributed GNN inference will inevitably be affected. To deploy GNNs, robust wireless mechanisms and protocols need to be developed, such as power control and adaptive modulation and coding. Furthermore, sharing hidden-layer embedding vectors among neighboring nodes requires controlling synchronization and transmission delays, so developing efficient synchronization protocols and resource allocation is also important for realizing GNNs in wireless communication.

        The fourth is privacy protection during GNN training and inference. Privacy protection is one of the core issues in wireless communication, and both the training and the inference of GNNs may leak private information. During training, the training samples come from system log files or users' local data, which can easily expose private user information; one can train with noisy samples or adopt a federated learning scheme to preserve user privacy. In the inference stage, decentralized GNN inference relies on information exchange, which also raises privacy issues, and privacy-preserving communication technologies can be developed for this. In addition, decentralized control of GNNs needs to be achieved while protecting user privacy.

        Here are the relevant references.

        Thank you all!

Original article: https://blog.csdn.net/sxf1061700625/article/details/131059107