Miscellaneous Notes on Network Communication in Online Game Development

Overview

At present, most mainstream game server frameworks use TCP for communication. As RUDP (reliable UDP) solutions mature, more and more games use UDP connections between the client and the server. This article introduces several communication schemes and optimizations used on the server side, moving from basic to advanced.

The article is roughly divided into the following sections:

  • Using and tuning TCP on the server
  • Using UDP-based protocols to reduce game latency, mainly covering KCP and the tuning of UDP parameters
  • Some personal experience with network connections accumulated during development
  • Digression: the rise of KCP in the open-source world

The more common background knowledge is already covered in many articles online, so due to space limitations this article will not repeat it.

Using and tuning TCP on the server

TCP is the most common transport for game servers, and its stability and reliability are beyond doubt. From early LAN games such as Red Alert and Counter-Strike to the countless web games of today, it has been indispensable. Many developers assume that once they have learned basic socket operations and combined them with the popular epoll or IOCP models, they can claim to have built a server framework, but the details of using TCP well go far beyond that.

From the three-way handshake, through congestion control during the life of the connection, to the four-way close, almost every state transition in TCP has parameters developers can tune (this article only covers Linux). In practice, optimization of connection establishment and of the data-transfer phase is mostly done by adjusting kernel parameters. Below are the TCP parameters that can be adjusted on Debian.

TCP-related kernel parameters on Linux

For the basic, general tuning parameters it is worth remembering what they do: for example, tcp_rmem/tcp_wmem adjust the default TCP read and write buffers, and tcp_tw_reuse allows reuse of TIME_WAIT connections. For the more obscure ones you can search online or read the Linux TCP source. Run ls /proc/sys/net/ipv4/tcp_* in a shell and you will see:
[Figure: listing of the tcp_* parameters under /proc/sys/net/ipv4/]
There are several dozen parameters, and readers who have never touched this area may find the list overwhelming. In fact I don't know what most of them do either, but when problems come up while using TCP, you need to be able to work out which parameters relate to the problem and dig deeper from there.

Fortunately, most of the default parameters can already meet our various needs and do not need to be adjusted.

For online products the general strategy is: if nothing is wrong, don't change anything, and never change a parameter whose purpose you don't understand. The only change I recommend enabling is BBR:

net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

Google's BBR algorithm can be enabled on the server side alone; it replaces the kernel's native TCP congestion strategy and helps improve the efficiency of TCP communication.

Changing kernel parameters has system-wide effects. Compare and observe on a test machine first, then A/B test in production; an operational accident is a lot of trouble.

As mentioned earlier, apart from the disconnection phase, most optimization can be done by adjusting kernel parameters. Disconnecting looks like a trivial operation, but there is actually a lot to it, so let's expand on it a bit.

The disconnection problem

First, a question:

The server calls send(socket, msg) and then close(socket) on the socket of some client. What happens? Will that client receive msg? Does this behavior affect the server?

(If you can already answer these questions, feel free to skip this section.)

Developers usually close a socket with close. Its default behavior is to attempt a graceful close in the background, but that behavior is not controllable: although the socket descriptor is released, the program loses control of the connection, and the rest of its life cycle is left to the kernel. If the network is badly congested, the system may reclaim the connection at any time.

See the description at the following link:

https://docs.microsoft.com/en-us/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2

So after close we have no way of knowing whether or when msg reached the other end. close essentially tells the kernel: I'm done with this socket, you handle whatever is left, I don't care.

But as server framework developers we should keep this process firmly in our own hands: even though I'm closing the connection, I will do my best to deliver the unsent data; and if I really can't finish sending it, I will actively do the cleanup and release the connection myself.

TCP provides the SO_LINGER option to let developers take over this close behavior. It takes a timeout: the system will try to send the remaining data within that time, and once the timeout expires it cleans up the socket. Some descriptions circulating online say the send buffer is simply discarded, but in my tests under WSL that is not what happened: the connection was taken over by the kernel, kept trying to send the remaining packets, and then went into TIME_WAIT. Either way, once the timeout fires the behavior is still not fully deterministic, and the setting has a side effect: close becomes a blocking call (regardless of whether the socket itself is blocking or non-blocking), so it will block the calling thread.
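A minimal sketch of setting SO_LINGER; the 5-second timeout is just an example value:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Take over the close behavior: wait up to `seconds` for unsent data.
 * Setting l_linger to 0 instead makes close() send an RST immediately
 * and discard any queued data -- use that variant with caution. */
static int close_with_linger(int fd, int seconds)
{
    struct linger lg;
    lg.l_onoff  = 1;        /* enable SO_LINGER                            */
    lg.l_linger = seconds;  /* how long close() may block while draining   */
    if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) < 0)
        return -1;
    /* Note: with SO_LINGER enabled, close() blocks even on a
     * non-blocking socket until the data is sent or the timeout hits. */
    return close(fd);
}
```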

So is there a better way to control this shutdown behavior?

The following describes a graceful shutdown process whose behavior is controllable; it is also covered in the graceful-shutdown link above:

The server calls shutdown (write side) instead of close, then waits for its read to terminate, and finally calls close. Closing this way guarantees that the peer receives all the data the server sent before it sees the connection-close request (FIN).
[Figure: graceful shutdown sequence: shutdown(SHUT_WR) → FIN/ACK exchange → peer closes → final ACK]

As the figure shows, this process goes through the full TCP four-way close. Note that after the last ACK is sent, the connection enters the TIME_WAIT state.

When the program calls shutdown it takes over the subsequent state transitions itself: as long as the process does nothing further, the shutting-down end will remain stuck in the FIN_WAIT states.
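A minimal sketch of this graceful-close sequence on the server side (buffer size and the timeout handling discussed below are omitted for brevity):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Graceful close: stop sending, then wait for the peer to finish. */
static void graceful_close(int fd)
{
    char buf[4096];
    ssize_t n;

    shutdown(fd, SHUT_WR);              /* send FIN, keep the read side open */

    /* Drain (and discard) whatever the peer still sends; read() returning 0
     * means the peer has closed its side, i.e. its FIN has arrived. */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        ;                                /* in real code, apply a timeout here */

    close(fd);                           /* now release the descriptor */
}
```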

Of course, the actively closing end must handle exceptions properly, especially when that end is the server. The network environment is not under our control: every step of the shutdown sequence can be interrupted by lost packets, leaving the connection in an intermediate state; the client may deliberately sit there without exiting, never calling close or shutdown, or it may simply crash early. For these cases we need a timeout cleanup that force-closes the socket some time after shutdown(SHUT_WR) was issued.

Note that close at this point only releases the descriptor; it does not mean the connection no longer exists. After a timeout the connection may be sitting in TIME_WAIT, FIN_WAIT_1 or FIN_WAIT_2, and the Linux kernel has tunable parameters for each of these states (search for the details). We can also set a longer timeout after which an active reset (RST) is triggered to tear the connection down (this is how nginx handles it); the active side then keeps no connection state at all, which is the cleanest form of cleanup.

However, a reset triggered by a timeout can let data from an old connection bleed into a new one. The picture below shows why this cross-connection contamination can happen.
[Figure: a delayed segment (seq=10) from the old connection arrives after a new connection reuses the same ip1:port1 / ip2:port2 four-tuple]
Suppose the peer's msg has not yet reached the active side when the active side resets due to timeout, and the peer then establishes a new connection that happens to reuse the same four-tuple ip1:port1 / ip2:port2. The delayed segment with seq=10 arrives after the new connection is up. If the active side's receive buffer has only seen the peer's data up to seq=3, it may buffer the seq=10 segment and treat it as a legitimate packet, and the data on this TCP connection is now corrupted. This is one reason the TIME_WAIT state exists.

In theory, the longer the timeout we set, the safer it is; once the timeout reaches 2*MSL it essentially degenerates into a normal TIME_WAIT period, because the conditions for contamination are already very harsh: within that window the peer must happen to reuse port2 to establish a new connection, and the delayed segment's seq must happen to fall inside the active side's acceptable range. So we can treat it as a low-probability event. And if that low-probability event does happen, what are the consequences? The server parses garbage, the connection is dropped, and if the peer has a disconnection handler it will try to reconnect. If we can accept that outcome, then sending the reset makes sense: the server connection is reclaimed as early as possible with negligible impact on the client.

We can also combine close and reset to handle timeouts: use close by default, and only switch on the reset path when TIME_WAIT connections pile up abnormally on the server, so the system can clean out bad state in time.

Note: a reset can be triggered actively by enabling TCP's SO_LINGER option with the time set to 0. Use with caution.

Server TIME_WAIT accumulation

As we have seen, when the server actively closes or shuts down a connection it ends up with a connection in TIME_WAIT. Continuous accumulation of TIME_WAIT is not a healthy state: too many of them eat into the number of usable TCP connections on the machine. Since connection resources are finite, when they become a bottleneck we should either avoid creating TIME_WAIT on the server or release those connections faster.

Typical server behaviors that produce TIME_WAIT include:

1. The server kicks a user and force-closes the socket
2. The server is forced to close the connection for other reasons
3. The server forcibly disconnects a client that stops responding to ping/pong
4. Other similar behaviors

We can reduce the server's active closes with the following steps:

1. Whenever possible, notify the client and let it close the connection actively. The TIME_WAIT then lands on the client, which (for a long-connection game) never holds many connections, so accumulation there does not matter.

2. If the client still has not closed the connection after some time, the server falls back to either a graceful shutdown or a reset, i.e. the two forms of handling TIME_WAIT discussed above.

When our server talks to middleware services, such as PHP connecting to MySQL or Redis, frequent connect/disconnect cycles also cause TIME_WAIT accumulation. Unlike the cases above, this TIME_WAIT sits on the "client" side of those connections.

In this case there are usually two solutions: use long-lived connections, or enable tcp_tw_reuse in the kernel to reduce the impact of TIME_WAIT; see https://vincent.bernat.ch/en/blog/2014-tcp-time-wait-state-linux#netipv4tcptwreuse for a detailed explanation.

If the service scales horizontally and TIME_WAIT does not interfere with normal business, adding machines is also a valid way out.

Application of UDP in game development

As players demand a better gaming experience, the UDP protocol has become indispensable in competitive games. Compared with TCP's conservative strategies such as congestion control and slow start, reliable protocols built on UDP implement these functions far more aggressively, and some simply drop certain features altogether to maximize transfer efficiency.

Therefore, developers usually pick a mature RUDP (reliable UDP) protocol to replace TCP for client communication and reduce latency.

Common RUDP protocols include ENet, KCP, Google's QUIC, UDT and so on. KCP in particular performs very well; see the benchmark linked from its homepage (https://github.com/libinzhangyuan/reliable_udp_bench_mark/blob/master/bench_mark.md). Besides performance, KCP is concise, easy to use and highly configurable, and since it was written by a Chinese developer it has seen very wide adoption in domestic games in recent years; many NetEase titles have integrated it one after another. Next we look at the specifics of a KCP-based UDP communication scheme.

Code walkthroughs and parameter tuning for KCP are described in detail on its GitHub page, so I won't repeat them. Instead, here are some network tricks used alongside KCP in practice (no concrete parameter values are given; tune them for your own needs):

  1. Turn on KCP's nodelay mode, set a reasonable fast-retransmit threshold (resend), and disable its flow control. For KCP's own parameters, compare against the official issues and source code while tuning; a sketch of a typical setup follows this list.

  2. Optimizations on the underlying UDP socket.
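As referenced in item 1, here is a minimal sketch of a typical low-latency KCP setup using the public ikcp.h API; the conv value, window sizes and MTU are placeholders you would choose for your own game, and the update/input loop is omitted:

```c
#include <sys/socket.h>
#include "ikcp.h"   /* https://github.com/skywind3000/kcp */

/* Callback KCP uses to hand finished segments to our UDP socket. */
static int udp_output(const char *buf, int len, ikcpcb *kcp, void *user)
{
    (void)kcp;
    int fd = *(int *)user;
    return (int)send(fd, buf, len, 0);   /* connected UDP socket, see below */
}

static ikcpcb *setup_kcp(int *udp_fd)
{
    ikcpcb *kcp = ikcp_create(0x11223344 /* conv, must match the peer */, udp_fd);
    kcp->output = udp_output;

    /* nodelay = 1, internal update interval = 10 ms,
     * resend = 2 (fast retransmit after 2 duplicate ACKs),
     * nc = 1 (disable KCP's own flow/congestion control). */
    ikcp_nodelay(kcp, 1, 10, 2, 1);

    ikcp_wndsize(kcp, 256, 256);   /* send / receive window, in segments */
    ikcp_setmtu(kcp, 1400);        /* keep segments under the path MTU   */
    return kcp;
}
```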

Optimizing TOS and socket priority

Tuning the TOS field is what I call "metaphysical tuning", because its effect depends heavily on how the operating system and the routers and gateways along the path behave, so there is no guarantee it helps. The settings listed here are for reference only.

Per socket you can set a packet priority (SO_PRIORITY) and the IP type-of-service field (IP_TOS). The priority ranges from 0 to 6, with 6 the highest: if your phone has two connections, one with priority 0 and one with priority 6, the kernel will dequeue the priority-6 data first.

The service types are divided into 4 categories:

  • IPTOS_LOWDELAY: minimize delay for interactive traffic
  • IPTOS_THROUGHPUT: optimize for throughput
  • IPTOS_RELIABILITY: optimize for reliability
  • IPTOS_MINCOST: for "filler data" where slow transmission doesn't matter

We can set the first two (or the first three) of these flags and then apply them with setsockopt.
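A minimal sketch of applying these options to a UDP socket; whether intermediate routers honor them is, as noted above, not guaranteed:

```c
#include <netinet/in.h>
#include <netinet/ip.h>     /* IPTOS_LOWDELAY, IPTOS_THROUGHPUT */
#include <sys/socket.h>

static int tune_udp_socket(int fd)
{
    /* Ask for low delay (optionally OR in IPTOS_THROUGHPUT as well). */
    int tos = IPTOS_LOWDELAY;
    if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
        return -1;

    /* Linux-specific queueing priority, 0..6 for unprivileged processes. */
    int prio = 6;
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
        return -1;

    return 0;
}
```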

Connecting the UDP socket

A UDP socket can be "semi-connected" to a peer with the connect call; afterwards data can be sent and received with plain send and recv. On the client this shaves a little off the per-packet cost (every little bit helps); see http://www.masterraghu.com/subjects/np/introduction/unixnetworkprogramming_v1.3/ch08lev1sec11.html for details.

In addition, if a connected UDP socket sends to an unreachable peer address, the returned ICMP error is delivered back to that socket and can be picked up and handled directly through recv (it surfaces as an error such as ECONNREFUSED).
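A minimal sketch of a connected UDP client socket; the address and port are placeholders:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(9000);                        /* placeholder port */
    inet_pton(AF_INET, "203.0.113.10", &peer.sin_addr);   /* placeholder IP   */

    /* "Connect" the UDP socket: fixes the peer address so we can use
     * send/recv instead of sendto/recvfrom, and lets ICMP errors
     * (e.g. port unreachable) surface as ECONNREFUSED on this socket. */
    connect(fd, (struct sockaddr *)&peer, sizeof(peer));

    send(fd, "ping", 4, 0);

    char buf[1500];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n < 0 && errno == ECONNREFUSED)
        fprintf(stderr, "peer unreachable\n");

    close(fd);
    return 0;
}
```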

The framing layers around KCP

A typical KCP send/receive stack looks roughly like this:
[Figure: three framing layers: upper-layer header → KCP segment → lower-layer header]

• Upper layer: header + timeout-check ping + other game packet types + payload

The upper-layer header mainly carries game-logic packets such as RPCs and other synchronization requests. Early in development it is worth reserving type bits for different request kinds to make later extension easy. Having an explicit type field reduces coupling: when some behavior has to differ from ordinary send/receive, you add a new type instead of entangling the logic. For example, when we need periodic ping/pong with the client to synchronize clocks and measure heartbeat latency, that can be its own type. Ping/pong usually wants the most timely feedback, so while other packet types wait their turn in the send queue, ping packets can bypass the queue and go out immediately, effectively at a higher priority. Likewise, inter-system status reports can be split out from the user-facing packet types and handled separately.

A special case: many real-time competitive games broadcast player state snapshots at a fixed rate. Such packets do not actually need to go through the reliable layer (i.e. through KCP), because a newer snapshot is always coming: even if an earlier packet is lost, the following ones still bring the client up to the latest state. This kind of data can therefore skip the middle layer entirely and go straight to the lower-layer packing in step three.
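A minimal sketch of what such an upper-layer header might look like; the field names, widths and type values are illustrative, not a fixed format:

```c
#include <stdint.h>

/* Illustrative upper-layer packet types. */
enum pkt_type {
    PKT_RPC        = 1,  /* ordinary game-logic request, goes through KCP   */
    PKT_PING       = 2,  /* heartbeat / clock sync, bypasses the send queue */
    PKT_SYS_REPORT = 3,  /* inter-system status reports                     */
    PKT_STATE_SNAP = 4,  /* state snapshots, may skip KCP entirely          */
};

#pragma pack(push, 1)
struct upper_header {
    uint8_t  type;     /* one of pkt_type; reserve values for future use */
    uint8_t  flags;    /* e.g. compressed / needs-ack                    */
    uint16_t length;   /* payload length that follows                    */
    uint32_t seq;      /* application-level sequence number              */
};
#pragma pack(pop)
```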

• Middle layer: the KCP segment

The KCP segment format itself needs no explanation here; see the official documentation.

• Lower layer: header + ping/handshake (used for NAT keep-alive when P2P is required) + FEC error correction + unreliable-transport flag + packet body + encryption (e.g. XOR)

The data at this layer has almost nothing to do with the upper-level business, but a lot can be done here.

For a hole-punched P2P connection there must be periodic ping/pong traffic to keep the NAT mapping alive, otherwise the NAT device will reclaim your channel. This keep-alive can simply be an independent packet type at this layer, pushed periodically from above.

To maintain a UDP session we also have to implement our own "three-way handshake", "four-way close" and connection management, so those control packets also need to be distinguished as independent types.

To reduce the impact of packet loss on latency, we can add forward error correction (FEC) at this layer (mentioned on the KCP homepage and used by many KCP-based projects; look up the details if interested): by spending some extra bandwidth on redundant packets we indirectly lower the effective loss rate. An FEC scheme requires grouping outgoing packets in order and giving every member of a group a contiguous, auto-incrementing id, so that id also has to appear in this header.
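A minimal sketch of the extra fields an FEC group might add to the lower-layer header; the field widths and the "data shards + parity shards" split are assumptions, not a standard layout:

```c
#include <stdint.h>

#pragma pack(push, 1)
struct fec_header {
    uint32_t group_id;      /* which FEC group this packet belongs to         */
    uint8_t  index;         /* contiguous, auto-incrementing id inside group  */
    uint8_t  data_shards;   /* e.g. 10 original packets per group             */
    uint8_t  parity_shards; /* e.g. 3 redundant packets per group             */
    uint8_t  flags;         /* 0 = data shard, 1 = parity shard               */
};
#pragma pack(pop)

/* A group is recoverable as long as any `data_shards` of its
 * (data_shards + parity_shards) packets arrive at the peer. */
```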

TCP handshakes and closing exchanges are normally unencrypted; with tcpdump or wireshark anyone can see exactly what the two ends exchanged during the handshake. With UDP the whole stream, handshake included, can be encrypted, because the handshake is entirely user-defined. Even a simple XOR applied at both ends will scramble the whole packet, as long as both ends agree on the XOR key.
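A tiny sketch of the kind of XOR scrambling meant here (this is obfuscation, not real cryptography; use a proper cipher if you need actual security):

```c
#include <stddef.h>
#include <stdint.h>

/* XOR the buffer in place with a repeating key shared by both ends. */
static void xor_buf(uint8_t *buf, size_t len, const uint8_t *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}
```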

Usually UDP is only an auxiliary connection in a game: there is already an encrypted TCP long connection or HTTPS short connection to the server. So before the UDP connection is established, the XOR key and other attributes can be delivered over that channel to both client and server in advance, which makes creating the UDP connection painless.

If you want the upper layer to do fragmentation, KCP can be switched to stream mode (leaving it off also works), and the upper layer splits the data so that each piece handed to KCP is smaller than the MSS. With upper-layer fragmentation, the frg field can be dropped from the KCP header, and with an upper-layer handshake the conv field can be dropped too, shaving 5 bytes off every KCP header.

When is upper-layer fragmentation needed? Usually when the data we send can exceed KCP's limit, it has to be fragmented. If you are not sure whether it will, either log and handle send failures carefully, or just do the fragmentation. We can also raise KCP's own send limit; the default cap should be around 128 * MSS bytes.
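A minimal sketch of upper-layer fragmentation in front of KCP, assuming the ikcp.h API and a chunk size chosen below the MSS:

```c
#include "ikcp.h"

#define CHUNK 1200   /* keep each piece comfortably under the KCP MSS */

/* Split a large payload into MSS-sized pieces before handing it to KCP.
 * A real implementation would prepend an upper-layer header carrying
 * "message id + fragment index" so the peer can reassemble. */
static int send_fragmented(ikcpcb *kcp, const char *data, int len)
{
    int off = 0;
    while (off < len) {
        int piece = (len - off > CHUNK) ? CHUNK : (len - off);
        if (ikcp_send(kcp, data + off, piece) < 0)
            return -1;               /* log and handle the failure */
        off += piece;
    }
    return 0;
}
```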

Encapsulating connections in the game

With the decline of web games on PC, mobile has become the mainstream platform, and a reconnection mechanism is standard in any long-connection game server; without it the experience inevitably suffers. Using a raw TCP connection directly to represent the relationship with the client is not ideal; instead we can wrap it in an abstract Connection layer.

A Connection represents a link from end A to end B and internally holds a TCP connection (or a UDP one, or one of each). It does not expose the underlying connections. When network jitter drops the TCP connection, the Connection reconnects internally; we can also enable TCP Fast Open to speed this up.

For details see http://abcdxyzk.github.io/blog/2018/01/25/kernel-net-fastopen/; for UDP you can implement an equivalent mechanism yourself.

The upper layer exposes only standard interfaces, such as connect, disconnect and listen, plus callbacks for connected, disconnected and internal-link-lost events. (The last one matters because the server may need to react in real time when a client's underlying link drops.)

The Connection also implements a send queue internally. The queue guarantees packet ordering and doubles as a send buffer while the underlying connection is disconnected.

A few examples:

The Connection implements ping/pong internally, continuously monitoring heartbeat and clock synchronization, with a configurable timeout; the connection is considered dead when the timeout expires or when it is closed deliberately.

The Connection also exposes a configurable high-watermark for the send queue: when unsent data piles up past the threshold, the connection is dropped, since that usually means something is wrong with the link and resources should be reclaimed promptly.
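A rough sketch of what such a Connection abstraction might look like in C; the names, callback signatures and thresholds are all illustrative:

```c
#include <stddef.h>
#include <stdint.h>

struct packet {
    struct packet *next;
    size_t         len;
    uint8_t        data[];
};

struct connection {
    int      tcp_fd;            /* underlying TCP socket (-1 while down)    */
    int      udp_fd;            /* optional UDP socket                      */

    /* Send queue: preserves ordering and buffers data while reconnecting. */
    struct packet *q_head, *q_tail;
    size_t   queued_bytes;
    size_t   high_watermark;    /* drop the connection past this threshold  */

    uint64_t last_pong_ms;      /* heartbeat bookkeeping                    */
    uint32_t timeout_ms;        /* configurable heartbeat timeout           */

    /* Callbacks exposed to the upper layer. */
    void (*on_connected)(struct connection *c);
    void (*on_disconnected)(struct connection *c);
    void (*on_inner_link_lost)(struct connection *c);  /* underlying socket died */
    void *user_data;
};
```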

We can even implement a dual UDP+TCP connection that switches at any time depending on observed latency. (Online games may not need this much; cross-border data transfer is the more likely use case, since we usually accelerate with UDP, but UDP is sometimes unstable across borders.)

Reconnection

Reconnection in games is a real pit. To a designer the feature sounds trivial: the network dropped, just reconnect and restore the previous state. In practice a reconnection strategy has a pile of details to handle, and any one handled poorly hurts the game experience, because much of the context before and after the disconnect is different. If the player was in a dungeon and the dungeon instance was destroyed while they were offline, they have to be restored to their pre-battle state; if they came from scene A, they have to be sent back to scene A; combat state has to be reset back to normal; and so on.

So before designing each system module we should define its reconnection strategy. Take a chat system: who owns the fact that a player has joined a channel, the chat system or the player? If the chat system owns it, it has to perform a restore action every time the player reconnects; if the player owns it, that state is best kept wherever the player's other recovery data lives, and restored from there on reconnect.

Short connections are inherently friendly to this problem because they never have to think about reconnecting at all: the online state they maintain lives only as long as the cookie. We can borrow the same mechanism in a long-connection service: when the player logs in successfully, generate a cookie for the client and attach all later state to the server-side record behind that cookie. This solves most of the problem.
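A minimal sketch of the idea, using a hypothetical in-memory session table; a real server would persist and expire these entries properly:

```c
#include <stdint.h>
#include <string.h>

#define MAX_SESSIONS 4096

struct session {
    char     cookie[33];    /* random token handed to the client at login       */
    uint64_t player_id;
    uint32_t scene_id;      /* whatever state is needed to restore the player   */
    uint64_t expire_ms;
};

static struct session sessions[MAX_SESSIONS];

/* On reconnect the client presents its cookie; if we still hold the
 * session, restore the player from it instead of a full re-login. */
static struct session *find_session(const char *cookie, uint64_t now_ms)
{
    for (int i = 0; i < MAX_SESSIONS; i++) {
        if (sessions[i].cookie[0] &&
            strcmp(sessions[i].cookie, cookie) == 0 &&
            sessions[i].expire_ms > now_ms)
            return &sessions[i];
    }
    return NULL;
}
```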

When is a reconnect needed? In general, whenever the server-side connection was not closed by the client's own deliberate exit, we should attempt to reconnect. Even while the logical connection appears alive, there may be a lower layer quietly reconnecting inside it; only when that inner reconnect fails does the second, upper-level logical reconnect kick in, where we have to restore the player state by hand.

A player may also have more than one connection to the server during a session. Typically a long connection accompanies the whole online period from login to logout, and an extra UDP connection appears when entering battle (as mentioned above).

We can then say that losing the first connection means the player is offline, while the second connection only affects the battle: if its reconnect fails, the player is simply dropped back to a non-combat state or scene. Some games are designed differently: when a battle starts, the first connection is closed outright so the client only ever holds one connection, and when the battle ends the player goes through a re-login or a quick "reconnect to game" flow before state is restored.

Therefore, reconnection requires the system designer to define a recovery strategy that fits the server architecture early in development, make the reconnection rules explicit, and avoid stepping on the same pits over and over during development.

The following figure briefly shows the above reconnection process:
[Figure: reconnection flow: inner transport reconnect → on failure, upper-level logical reconnect and state restore]

Thoughts on some future directions for network connections

Combining a P2P hole-punching strategy

P2P depends heavily on the gateway environment, so treat it as a direction worth exploring rather than a guaranteed win. UPnP can assist hole punching, and clients on the same LAN can degrade to direct UDP connections. Chat or other multi-player interaction logic can then run over P2P to reduce server load and improve communication efficiency. The hole-punching technique itself is not covered here.

An optimization idea for synchronization in battle-heavy games:
[Figure: conventional model: every client sends its state/operations to the server, which validates and broadcasts]

The usual synchronization model is shown in the figure above: each client sends its state or operations to the server, and the server broadcasts them to all clients after validation (with AOI optimization generally considered in large scenes).

Here is an alternative way of thinking about it:
[Figure: client c1 elected as proxy host; other clients reach it via P2P, falling back to relaying client → server → c1]

The server designates one client, c1, as a proxy that performs validation and feedback, according to some policy. Recall the Connection abstraction above: in this scenario each Connection represents the link between another client and c1, implemented as "try P2P hole punching first, and fall back to the relayed path client → server → c1 if punching fails".

When more than half of the clients punch through successfully, the server grants c1 host status and the match runs through c1; otherwise the server keeps synchronizing in the original way. The benefits: the server's computing load drops, and when all players happen to be on one LAN the match effectively becomes a LAN game, with the server only performing periodic legality checks; server network IO shrinks; and client-to-client latency drops. The server can apply more criteria when deciding whether to delegate authority, such as inter-client latency and the comparison between client-to-client and client-to-server latency.
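A small sketch of such an election rule; the structures, fields and thresholds are made up for illustration:

```c
#include <stdbool.h>

struct peer {
    bool p2p_to_candidate_ok;   /* did hole punching to c1 succeed?   */
    int  rtt_to_candidate_ms;   /* measured latency to c1             */
    int  rtt_to_server_ms;      /* measured latency to the server     */
};

/* Delegate hosting to c1 only if a strict majority reached it via P2P
 * and the candidate path is not slower than going through the server. */
static bool elect_proxy_host(const struct peer *peers, int n)
{
    int ok = 0, faster = 0;
    for (int i = 0; i < n; i++) {
        if (peers[i].p2p_to_candidate_ok) {
            ok++;
            if (peers[i].rtt_to_candidate_ms <= peers[i].rtt_to_server_ms)
                faster++;
        }
    }
    return ok * 2 > n && faster * 2 > n;
}
```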

The drawback of this approach: should we trust that client? If it has been tampered with by its user, the risk of players gaining items or experience illegitimately goes up.

To deal with this we add more constraints before enabling the strategy: for a room-based game, for example, the players must not be friends with each other, their IPs must differ, they must have entered the match through matchmaking, and the current online population must exceed a certain number before the strategy is even attempted.

Each player also carries a persistent report score. When half of the clients in a match file a report, a mandatory server-side check is triggered, and if a record of cheating is found the server punishes directly. The report score follows the player across sessions: it decays when a match ends with no reports and grows in proportion to the number of reporters when there are.

Reports come in two kinds: automatic reports from client-side detection code and manual reports from players (the latter weighted lower, the former given a multiplier). Once the score passes a certain threshold, penalties are applied.

A little-known history: the rise of KCP in the open-source world

In 2013, as BTC took off, I joined the mining army with six graphics cards. Between assembling the rig and actually mining, I burned out a motherboard, a motherboard socket, and three hard drives in a row (a repair shop later confirmed it was the power supply) before the rig finally ran stably. By then electricity was costing 200 a day and BTC had fallen to around 1500 RMB, so I decisively bailed out.

Mining required remote access at all times: if a pool's hash rate dropped or it was about to be drained, or one coin became hard to mine while another became easy, I needed to change the configuration file and restart the program at any moment (I happened to be back in my hometown over New Year while mining), so I urgently needed a tool that could connect to the ssh server at home at any time. At the time I only had one overseas VPS. At first both ends connected through the VPS as a jump host, but it was very unstable: every request bounced through a foreign server, doubling the latency, and sometimes it froze for minutes. I considered a DDNS-mapped port, but that felt like a hassle and exposing the port to the outside seemed risky, so an idea hit me: why not write my own P2P hole-punching tool? I read up on P2P traversal, and after repeated debugging across different network environments the hole punching finally worked.

At first I used a stop-and-wait protocol to maintain the connection: one request, then wait for the response. Simple and quick to implement, but very inefficient. It met my basic remote ssh needs, but as my demands grew I realized it could not keep up and I urgently needed an RUDP protocol to replace it. Then I remembered KCP. KCP was first used for real-time transmission in CC voice, and its author, Wei Yixiao, was still working at NetEase at the time. Although the project was open-sourced on Google Code, hardly anyone seemed to know or care about it. I had seen the project address in his POPO (NetEase's internal IM) signature, so I checked out the code overnight and ported it to Go (my hole-punching logic was written in Go at the time), then wired it into my own program. The result was perfect: over the punched direct connection, remote access felt essentially lag-free (my hometown is more than 1,000 kilometers from Hangzhou). Later I open-sourced the project on GitHub (I won't name it here, to avoid the appearance of self-promotion) and ran a free public P2P port-mapping service, which drew the attention of a group of developers and networking enthusiasts. After that, KCP-based applications sprang up like mushrooms after rain, including excellent projects such as kcptun and frp.

end

Network connectivity is the most fundamental skill in server development. Learn the theory, write demos and study them with packet-capture tools, and take part in real game development and operations; once you have accumulated enough, you will be able to handle it with ease.

This article is long and my ability to express things is limited. I hope it offers some useful reference for your work. Questions and suggestions are welcome.
